Computing Surface Reaction Rates by Adaptive Multilevel Splitting Combined with Machine Learning and Ab Initio Molecular Dynamics

10 Mar 2023 · arXiv:2303.05993 · doi:10.1021/acs.jctc.3c00280

Thomas Pigeon [email protected], Gabriel Stoltz, Manuel Corral-Valero, Ani Anciaux-Sedrakian, Maxime Moreaud, Tony Lelièvre [email protected], Pascal Raybaud [email protected]

†MATHERIALS team-project, Inria Paris, 2 Rue Simone Iff, 75012 Paris, France
‡CERMICS, École des Ponts ParisTech, 6-8 Avenue Blaise Pascal, 77455 Marne-la-Vallée, France
¶IFP Energies Nouvelles, Rond-Point de l'Echangeur de Solaize, BP 3, 69360 Solaize, France
§IFP Energies Nouvelles, 1 et 4 avenue de Bois-Préau, F-92852 Rueil-Malmaison Cedex, France

Computing accurate rate constants for catalytic events occurring at the surface of a given material is a challenging task with multiple potential applications in chemistry. To address this question, we propose an approach combining the rare event sampling method called Adaptive Multilevel Splitting (AMS) with ab initio molecular dynamics (AIMD). The AMS method requires a one-dimensional reaction coordinate to index the progress of the transition. Identifying a good reaction coordinate is difficult, especially for high-dimensional problems such as those encountered in catalysis. We probe various approaches to build reaction coordinates, such as Support Vector Machines and path collective variables. AMS is implemented so as to communicate with a plane-wave DFT code. A relevant case study in catalysis, the change of conformation and the dissociation of a water molecule chemisorbed on the (100) γ-alumina surface, is used to evaluate our approach. The calculated rate constants and transition mechanisms are discussed and compared to those obtained by a conventional static approach based on the Eyring-Polanyi equation with the harmonic approximation. The AMS method may provide rate constants that are smaller than those of the static approach by up to two orders of magnitude, due to entropic effects involved in the chemisorbed water.
Introduction
The determination of chemical reaction rate constants is of tremendous importance to better understand and quantify the kinetics of molecular transformations. This can be a challenging task, especially in catalysis, where multiple elementary steps are involved for one targeted reaction. Since evaluating each of them by experimental methods is often out of reach, an alternative lies in their theoretical modeling. Thanks to the significant increase of computational resources, quantum simulation approaches are nowadays widely used to address numerous catalytic systems involved in petrochemistry, fine chemistry and biomass conversion. [1][2][3][4] However, at the simulation time scale, such chemical transformations are rare events.
The typical time step for the integration of the stochastic dynamics modeling the evolution of the system is of the order of 10⁻¹⁵ s, while the frequency of chemical reactions lies in the range from 10⁰ to 10¹² s⁻¹. Moreover, to accurately simulate the catalytic activation of chemical bond breaking and formation, the simulation must include the explicit treatment of valence electrons and the quantum chemical calculation of the Hellmann-Feynman forces at each step of the dynamics. 5 Such an ab initio molecular dynamics (AIMD) approach becomes so computationally demanding that it is generally impossible to simulate a trajectory long enough to observe multiple reaction events, which would allow the accurate quantification of rate constants.
Theoretical approaches most commonly used to explore chemical transformations are based on transition state theory (TST). 6 Within this formalism, the reactant and product are considered to be separated in phase space by a dynamical bottleneck, 7 which can be characterized as a surface in configuration space. For a reaction with only one reactive path and only one energy barrier to cross, assuming momenta are not relevant for the transition process, this surface should contain the first-order saddle points. The term transition state (TS) is versatile, as it sometimes refers to a first-order saddle point and sometimes to an isocommittor surface, as defined by IUPAC. 8 Considering TSs to be surfaces, the reaction rate can be approximated as the frequency at which this surface is crossed. The most common approach to compute the reaction rate constant is called harmonic TST (hTST), as it reduces the general TST expression to the "generalized" Eyring-Polanyi equation thanks to a harmonic approximation of the potential energy surface: 6,[9][10][11]
$$ k_{\mathrm{hTST}} = \kappa(T)\,\frac{k_B T}{h}\, e^{-\frac{\Delta G^{\ddagger}}{k_B T}}, \qquad (1) $$
where ΔG‡ is the activation free energy, computed as the difference between the free energy of the transition state and that of the metastable basin, k_B the Boltzmann constant, h the Planck constant, T the temperature and κ(T) the transmission coefficient. This last quantity lies between 0 and 1 and accounts for recrossings of the surface, as discussed later on.
The activation free energy is approximated via a harmonic approximation around the saddle point and the minima. Although hTST is one of the most widely used methods to determine activation free energies and the rate constants of chemical events, particularly catalytic ones, it suffers from some weaknesses. Among them, the harmonic approximation of the potential energy surfaces as well as the determination of the prefactor κ in (1) might be questionable. In general, when the entropies of the metastable state and of the transition state differ by a non-negligible amount, the harmonic approximation can lead to significant errors.
This can occur in various systems of interest to catalysis such as solid-liquid interfaces, zeolites, porous solids and supported nano-particles. 12 More general expressions for the TST rate, using a one-dimensional reaction coordinate 7,13 and relying on sampling methods to estimate free energies, [14][15][16] were proposed to overcome some limitations of (1). However, TST reaction rates contain a transmission coefficient κ ∈ (0, 1], accounting for recrossings of the transition state surface, which is rather difficult to evaluate and explains why bare TST overestimates the transition rate. 7,11,13,[17][18][19] There are of course alternative approaches to TST. A first one is based on the evolution of a time correlation function, 13,20 which found applications in Transition Path Sampling (TPS) 21 or other approaches, such as the recent work relying on the Onsager-Machlup path probability distribution of Ref. 22. Another alternative, which we use in the present work, is provided by approaches based on the Hill relation: 23
$$ k_{\mathrm{Hill}} = \Phi_R \; p_{R \to P}(\partial R), \qquad (2) $$
where Φ_R is the flux of trajectories leaving the reactant state R, and p_{R→P}(∂R) is the committor probability at the boundary, i.e. the probability of reaching the product state P before returning to R when starting from the boundary ∂R. In other words, this relation states that the rate constant is the average rate at which the system attempts to leave the initial state, times the probability of success. Relation (2) has been proven correct, assuming that the reactant state R is metastable, for systems evolving according to the overdamped Langevin dynamics 24 or the Langevin dynamics. 25 The Hill relation is used in various approaches corresponding to so-called path sampling methods such as Transition Interface Sampling (TIS), 26 Forward Flux Sampling (FFS), 27 Weighted Ensembles (WE) 28 and Adaptive Multilevel Splitting (AMS). 29 All these methods are designed to compute the probability p_{R→P}(∂R), which is the most difficult quantity to evaluate in (2). As a by-product, these methods sample some reactive trajectories.
All the methods described previously differ in precision and computational efficiency. On the one hand, the hTST approach is, by far, the least expensive methodology in terms of computational resources, but, as mentioned above, it may lead to significant errors. On the other hand, the computational cost of enhanced sampling methods to estimate free energies is not negligible. The Hill relation has the advantage of being exact compared to the TST approach, but the required computational cost can be high, depending on the method used to compute the probability p_{R→P}(∂R). Moreover, numerical methods sampling reactive trajectories offer the possibility of performing a more detailed analysis of reaction mechanisms.
Most methods to compute reaction rate constants require the definition of a Collective Variable (CV), either to define the states of the system or its free energy, or to use it as a one-dimensional Reaction Coordinate (RC) indexing the progress of the transition. In many situations, reactions go through one or a few channels in phase space, and CVs should describe these channels with a minimal number of dimensions. Usually, CVs are defined thanks to chemical intuition or through expert knowledge of the chemical system. They are typically based on key distances or angles associated with atoms central to the reaction mechanism. Nonetheless, this kind of heuristic approach has limitations, especially when the studied mechanism is a priori unknown. Automatic, data-based approaches using various Machine Learning (ML) methods currently offer very appealing perspectives in this context. Recent reviews [30][31][32][33] provide an overview of the current options to propose CVs and discuss their advantages and drawbacks. These methods bear the promise of more systematic and efficient ways to define CVs, albeit at the expense of interpretability compared to intuitive CVs such as angles or distances. Nonetheless, machine-learned CVs are becoming common practice in the field. For example, Support Vector Machine (SVM) models trained on data generated by molecular dynamics were used to explore the configurational transitions of model protein molecules. 34 In materials science, the combination of SVM and AIMD was used for the mechanistic study of the diffusion of Al atoms on the Al(100) surface. 35 To the best of our knowledge, SVM has not been used to explore more complex reactive events, such as chemical bond breaking/formation catalyzed by an oxide material's surface, as proposed in the present work. Considering the challenge of the chemical reactivity of alumina catalysts highlighted before, we aim at determining rate constants for a reaction network involving various water rotation, dissociation and association events on the (100) γ-alumina surface.
This article is organized as follows. The methods section describes the general computational approach, following a flowchart leading to the determination of rate constants. First, we present how the implementation of AMS coupled to a reference plane-wave DFT software enables the determination of rate constants. Then, numerical tools such as SVM and path collective variables (PCV), used to define CVs and RCs, are presented. The results section first describes the catalytic model system of water activated on the γ-alumina surface, used to probe the theoretical approach. The construction of the CVs and RCs corresponding to the water molecule transformation path is then explained. Finally, the numerical values of the reaction rates and the reactive trajectories are analyzed and compared with the standard hTST approach.
Methods
The general flowchart of our approach is given in Figure 1. The first step is the definition of the states, i.e. the ensembles of structures in the vicinity of a local potential energy minimum characterizing either a reactant or a product. In practice, these configurations are sampled by running a short AIMD trajectory starting from minima identified on the Potential Energy Surface (PES). Then, using this trajectory, the function numerically defining the states is obtained by SVM together with well-chosen chemical descriptors. Depending on the reaction rate constant to compute, each state has to be labeled as reactant or product. A reaction coordinate (RC) is then built, for instance by using the decision functions of the classifiers previously used to define the states. Once states and an RC are defined, AMS is run to obtain an estimate of the reaction rate constant of the Langevin dynamics, which is assumed to model the system dynamics accurately.
Reaction rate constant estimation using AMS
Motivation. To compute rate constants of rare events using the Hill relation (2), the flux of trajectories leaving the initial reactant state R (or the frequency at which trajectories leave R) must be evaluated. If the reactant state is properly defined, this quantity can be computed in a reasonably short time by unbiased MD. The difficulty lies in the estimation of the probability that a trajectory leaving R is reactive (i.e. goes to a product state P), since the probability p_{R→P}(∂R) is in most cases exceedingly small. The AMS algorithm is specifically designed to evaluate low-probability events. 29 The key point of AMS is to provide a method with good behavior in terms of variance and computational efficiency for computing the probability p_{R→P}(∂R). This is achieved by first decomposing the rare event of interest into a succession of less unlikely events, the target probability being the product of the conditional probabilities associated with the sub-events (see SI Section 1).
Moreover, the sub-events are built such that the associated conditional probabilities are all equal, which is a desirable feature to reduce the overall variance of the estimator. 51 The mathematical analysis of the variance of the AMS estimator is provided in Refs. 29 and 54. We focus here on the presentation of the algorithm adapted to rare events in MD, and only mention that this algorithm is unbiased. 55 This means that, whatever the choice of the reaction coordinate ξ and the number of replicas of the system (see below), repeating the algorithm sufficiently many times will always provide the same result on average, and this average value coincides with the target probability. On the other hand, the variance of the probability estimator depends on the quality of ξ. This opens the way to an iterative procedure improving the definition of the reaction coordinate, using the sampled reactive trajectories to define better reaction coordinates.
Computing the flux and sampling initial conditions. A separating surface Σ_R close to R is introduced for the estimation of the flux Φ_R, to determine actual exits out of R. 24 This surface has to enclose the reactant state, so that any trajectory going from R to P has to cross Σ_R (see Fig. 2). Indeed, the location of this surface allows one to select the trajectories that make actual excursions out of the state R, in contrast to trajectories that only wander out of R for a few steps and immediately go back inside R. The flux Φ_R is then evaluated by starting a dynamics in the state R, counting the number of times n_loop it goes from R to Σ_R, crosses Σ_R and goes back to R, and dividing this number by the overall time t_tot:

$$ \Phi_R = \frac{n_{\mathrm{loop}}}{t_{\mathrm{tot}}} = \frac{1}{\overline{t}_{\mathrm{loop}}}, \qquad (3) $$
where t̄_loop is the average time a trajectory takes to go out of R, cross Σ_R and come back to R. Now that the calculation of the first term Φ_R has been discussed, let us focus on the second one, p_{R→P}(∂R). The computation of the flux Φ_R simultaneously generates positions on the surface Σ_R, which serve to estimate the probability p_{R→P}(Σ_R). Although the estimated quantity is p_{R→P}(Σ_R) instead of p_{R→P}(∂R), this does not bias the result as long as R and Σ_R are within the same metastable basin. 24 These initial conditions must correspond to the first time a trajectory leaving R reaches the level Σ_R. As the efficient calculation of the flux and the sampling of initial conditions rely on parallelization strategies, a Fleming-Viot particle process is used in our implementation of this initialization procedure. 56 The particles undergo independent molecular dynamics, which means they can be run in parallel without requiring frequent communications.
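As a minimal illustration of this loop-counting procedure, the following sketch estimates Φ_R from a stored unbiased trajectory; the indicator functions in_R and beyond_sigma_R are hypothetical placeholders for the state and surface definitions introduced below.

```python
import numpy as np

def estimate_flux(traj, in_R, beyond_sigma_R, dt):
    """Estimate Phi_R (Eq. (3)) from a stored unbiased trajectory.

    traj: sequence of configurations saved every dt;
    in_R(q), beyond_sigma_R(q): boolean state/surface indicators
    (hypothetical helpers to be built from the state definitions below).
    Also returns the first-crossing points of Sigma_R, which serve as
    initial conditions for the AMS replicas.
    """
    n_loops, outside, initial_conditions = 0, False, []
    for q in traj:
        if not outside and beyond_sigma_R(q):
            outside = True
            initial_conditions.append(q)  # first hitting point of Sigma_R
        elif outside and in_R(q):
            outside = False
            n_loops += 1                  # one R -> Sigma_R -> R loop closed
    return n_loops / (len(traj) * dt), initial_conditions
```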
AMS requirements. To run an AMS estimation, the reactant state R and the product state P have to be defined. The surface Σ_R has to be placed such that each trajectory linking the reactant and the product state goes through Σ_R. Its distance to the boundary of R should be sufficiently small so that the sampling of initial conditions and the determination of the flux Φ_R (see the previous paragraph) are not exceedingly expensive in terms of computational cost. A number of replicas N_rep (or walkers) has to be chosen, as well as a minimum number k_min of replicas to kill at each iteration of AMS. N_rep different initial conditions on the surface Σ_R are selected uniformly among the initial conditions sampled following the procedure described in the previous paragraph (purple points in Figure 2). Finally, a reaction coordinate ξ should be defined to index the progression along the R → P transition. It has to be consistent with the states R and P, which can generally be enforced by setting ξ(q) = −∞ for q ∈ R and ξ(q) = +∞ for q ∈ P.
AMS initialization. First, all the replicas are run from their initial conditions on Σ_R until either the R or the P state is reached (see Fig. 2a, which depicts an initialized set of three replicas). They are then iteratively updated until they all finish in the product state P. An illustration of an iteration is provided in Figure 2, and the process is detailed in the next paragraph. In what follows, q_t^{i,n} denotes the position of the i-th replica at time t and iteration n. In particular, the {q_0^{i,n}}, 1 ≤ i ≤ N_rep, are initial conditions on Σ_R. The method to estimate the probability is also summarized in the pseudo-code presented in SI Section 1.
AMS iteration. Each iteration of the main AMS loop starts by defining the largest value of the RC for each replica at the n-th iteration as z_max^{i,n} = sup_t ξ(q_t^{i,n}). The replicas are then reordered by increasing values of z_max^{i,n} (see Figure 2). According to the value of k_min, the level at which replicas are killed is identified as an empirical quantile: z_kill^{n+1} = z_max^{k_min,n}. This means that all the trajectories for which z_max^{i,n} ≤ z_kill^{n+1} are killed. The number of killed trajectories at this iteration is denoted by η_killed^{n+1}. Note that η_killed^{n+1} ≥ k_min by construction, but it can happen that η_killed^{n+1} ≥ k_min + 1 when several trajectories reach exactly the same z_max^{k_min,n}. To keep the number of replicas constant, η_killed^{n+1} new trajectories have to be created by randomly branching the remaining ones. More precisely, trajectories are duplicated until the first time they reach the level z_kill^{n+1}, and the dynamics is then run from these points until it reaches R or P. At each iteration, the quantity 1 − η_killed^{n+1}/N_rep estimates the probability for a trajectory to reach the surface of level z_kill^{n+1} starting from the surface of level z_kill^{n}. Any AMS iteration can be summarized by the succession of steps illustrated in Figure 3.
AMS termination and probability estimator. The AMS algorithm can terminate in two different manners. First, after a certain number of iterations, all the replicas reach the state P. In such a case, N_rep different reactive trajectories are obtained and the estimated transition probability is computed via:

$$ \hat{p}_{R \to P}(\Sigma_R) = \prod_{n=1}^{n_{\max}} \left( 1 - \frac{\eta^{n}_{\mathrm{killed}}}{N_{\mathrm{rep}}} \right), \qquad (4) $$
where n_max is the final number of iterations of the algorithm. The second option (not explicitly shown in Figure 3, since the RC is typically chosen so that this does not happen) is that at a certain iteration n, η_killed^{n} is equal to the total number of replicas. This can happen if at some point all the copied replicas have the same value of z_max^{i,n}. This termination event is called a "failure", as the algorithm is not able to provide reactive trajectories; the estimated probability is then p_{R→P}(Σ_R) = 0, consistently with expression (4). Such a situation can be encountered if the system is stuck and all the replicas are progressively replaced by copies of a single replica. It is also possible that the replicas reach their maximum of ξ in a zone of phase space where the reaction coordinate ξ remains constant while the trajectories differ.
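To make the above loop concrete, here is a self-contained toy sketch of AMS, with a one-dimensional overdamped Langevin dynamics in the double well V(x) = (x² − 1)² standing in for the AIMD propagation, ξ(x) = x as reaction coordinate, and illustrative choices for R, P, Σ_R and all numerical parameters (none of these values come from the actual study):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dt = 4.0, 5e-3                       # inverse temperature, time step (illustrative)
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)  # V(x) = (x^2 - 1)^2, wells at x = -1 and +1

def run(x0):
    """Overdamped Langevin path from x0, stopped in R = {x <= -0.8} or P = {x >= 0.8}."""
    path = [x0]
    while -0.8 < path[-1] < 0.8:
        path.append(path[-1] - grad_V(path[-1]) * dt
                    + np.sqrt(2.0 * dt / beta) * rng.normal())
    return np.array(path)

def ams(n_rep=50, k_min=1, x_sigma_R=-0.7):
    # replicas started on Sigma_R (in practice: first-crossing points of the flux run)
    paths = [run(x_sigma_R) for _ in range(n_rep)]
    p_est = 1.0
    while True:
        # sup of xi along each path; +inf for replicas that already reached P
        z_max = np.array([np.inf if p[-1] >= 0.8 else p.max() for p in paths])
        if np.all(np.isinf(z_max)):        # all replicas ended in P: done
            return p_est
        z_kill = np.sort(z_max)[k_min - 1]  # empirical quantile (kill level)
        killed = np.flatnonzero(z_max <= z_kill)
        if len(killed) == n_rep:           # extinction: "failure" termination
            return 0.0
        p_est *= 1.0 - len(killed) / n_rep  # factor of the estimator, Eq. (4)
        survivors = np.flatnonzero(z_max > z_kill)
        for i in killed:                   # branch a random survivor at level z_kill
            j = rng.choice(survivors)
            cut = np.argmax(paths[j] > z_kill)  # first index above the kill level
            paths[i] = np.concatenate([paths[j][:cut + 1], run(paths[j][cut])[1:]])

print("estimated p_R->P(Sigma_R):", ams())
```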
It is possible to estimate the statistical error on the estimated probability p_{R→P}(Σ_R) in (4) by repeating the estimation M_real times. These realizations should be independent, and can take advantage of the parallel architecture of current supercomputers. The confidence intervals presented in the results section all correspond to a 90% confidence level. More details can be found in SI Section 2.
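A minimal sketch of this repetition strategy, assuming a Gaussian approximation for the mean of the M_real independent estimates (the precise procedure is the one of SI Section 2):

```python
import numpy as np

def ci90(p_estimates):
    """Mean and 90% half-width for M_real independent AMS estimates
    (Gaussian approximation; the exact procedure is given in SI Section 2)."""
    p = np.asarray(p_estimates, dtype=float)
    return p.mean(), 1.645 * p.std(ddof=1) / np.sqrt(p.size)  # z_0.95 = 1.645

# usage with M_real = 5 repetitions: mean, hw = ci90([ams() for _ in range(5)])
```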
Multiple states case. Defining the states R and P in the multiple-state case requires a specific treatment to compute state-to-state reaction rates. Two main approaches are proposed and detailed in SI Section 3. The first one samples all possible trajectories starting from a given state. The second approach focuses more specifically on the targeted transition. An illustration and a comparison of the two approaches are provided in the results section.
Implementation with a plane-wave DFT code. The AMS algorithm and the sampling of initial conditions were implemented in Python scripts calling the VASP software for the AIMD simulations. 57,58 All DFT simulation parameters are listed in SI Section 4.1, while AIMD parameters are presented in SI Section 4.2. Slight modifications were implemented in the VASP code to allow for different stopping conditions of the VASP MD runs. More details concerning the implementation can be found in SI Section 5. The various repetitions M_real of the AMS estimation can be run independently in parallel. The Fleming-Viot particle scheme also allows for independent runs: communications are required only infrequently, allowing an arbitrary number of particles to be run independently in parallel. The development and testing of the scripts were mostly done on the ENER440 machine at IFPEN. Results presented in the following section come from simulations run on the Joliot-Curie (GENCI) and Topaze (CCRT) supercomputers.
Tools to define states and reaction coordinates
Let us conclude this section on the methods by introducing useful tools that will be used to define the states and the reaction coordinates in the next section.

Representation of chemical structures. Reaction coordinates and state definitions must be invariant under rotations, translations and symmetries of the system, as well as under permutations of identical atoms. Since descriptions relying on Cartesian coordinates do not exhibit these properties, substantial work was conducted to find representations of atomic systems invariant under Galilean transformations and other symmetries, in particular in the field of ML empirical potentials. [59][60][61][62] We chose the smooth overlap of atomic positions (SOAP) 60 descriptor, which captures enough information on atomic environments to reach errors of the order of 1 meV for potential energy surface fitting. 63,64 This descriptor turned out to be sufficient for our needs, as illustrated in the results section. The detailed parameters used to compute the SOAP descriptor with the dscribe Python package 65 can be found in SI Section 4.4.
Support Vector Machine. A linear SVM model is designed to find the highest-margin separating plane between two sets of labeled points. The margin denotes the minimal distance between the plane and the labeled points. The details concerning this optimization problem can be found in ML textbooks 66 or the scikit-learn documentation. 67 The important result for this work is that, once the optimization problem is solved, only a certain subset of the training set is used in the definition of the plane. These are the so-called support vectors, which are the points closest to the separating plane. The vector normal to this plane and the scalar defining its position are thus linear combinations of the support vectors. The classifier decision function is the algebraic distance to the plane multiplied by a scaling factor chosen so that the decision function on support vectors which are not outliers is either 1 or −1. To define multiple states using SVM, the one-versus-all approach was chosen, as made precise later on in the results section dedicated to the definition of states. Linear SVM models were trained using the SVC routine of the scikit-learn package with a linear kernel. 67 The data were normalized using the standard scaler implemented in the same package. The regularization parameter was kept at the default value 1 since, after cross-validation, the classification scores on the test sets were always 100%.
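The following sketch illustrates how such a 1-vs-all state classifier could be assembled with scikit-learn; the input files and the label convention (−1 for frames of the basin of interest) are illustrative assumptions, not the actual scripts of this work:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: SOAP descriptors of the low-temperature trajectory frames
# and labels (-1 for frames of basin A1, +1 for frames of all other basins).
X = np.load("soap_descriptors.npy")  # shape (n_frames, n_features)
y = np.load("state_labels.npy")      # shape (n_frames,)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X, y)

def in_state_A1(soap_q):
    # A1 = {q | f_{A1-vs-all}(SOAP(q)) <= -1}: beyond the A1-side margin
    # (with the -1/+1 label convention chosen above).
    return clf.decision_function(soap_q.reshape(1, -1))[0] <= -1.0
```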
Path Collective Variables (PCV). The principle of PCVs is to first define a reference path for the transition as a sequence of structures {R_i}, 0 ≤ i ≤ L − 1. These structures are represented with a numerical descriptor, here the SOAP descriptor. A reaction coordinate is then constructed as: 53

$$ s(R) = \frac{\sum_{i=0}^{L-1} i \, e^{-\lambda d(R_i, R)}}{\sum_{i=0}^{L-1} e^{-\lambda d(R_i, R)}}, \qquad (5) $$

where d is a distance, here the Euclidean norm. The parameter λ has to be of the order of the inverse of the distance between two consecutive structures along the path. If the structures are not evenly spaced along the path according to the distance d, a sequence of values λ_i can be used instead. In the present case, we chose the λ_i as:

$$ \lambda_i^{-1} = \tfrac{1}{2}\big(d(R_{i-1}, R_i) + d(R_i, R_{i+1})\big), \qquad \lambda_0^{-1} = d(R_0, R_1), \qquad \lambda_{L-1}^{-1} = d(R_{L-2}, R_{L-1}). \qquad (6) $$
PCVs were directly implemented in the Python scripts used for the reaction coordinate evaluation during the dynamics.
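A possible implementation of Eqs. (5)-(6), operating directly on the SOAP vectors of the reference images (a sketch rather than the exact production script):

```python
import numpy as np

def pcv(soap_q, soap_refs):
    """Path collective variable s(R) of Eq. (5), with the per-image
    lambda_i of Eq. (6). soap_refs: (L, d) SOAP descriptors of the reference
    path images; soap_q: (d,) descriptor of the current structure."""
    d = np.linalg.norm(soap_refs - soap_q, axis=1)             # d(R_i, R)
    gaps = np.linalg.norm(np.diff(soap_refs, axis=0), axis=1)  # d(R_i, R_{i+1})
    lam = np.empty(len(soap_refs))
    lam[0], lam[-1] = 1.0 / gaps[0], 1.0 / gaps[-1]            # end images
    lam[1:-1] = 2.0 / (gaps[:-1] + gaps[1:])                   # interior images
    w = np.exp(-lam * d)
    return np.dot(np.arange(len(soap_refs)), w) / w.sum()
```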
Results and discussion

γ-Al2O3 models and definition of states

Model of the catalytic system. The catalytic case study chosen to benchmark the previously presented method is the transformation of a water molecule adsorbed on the (100) γ-alumina surface. 39,40,42 A representation of the γ-Al2O3 surface, on which one water molecule is chemically adsorbed without dissociation on an aluminum Lewis site, is given in Figure 4. More information about the alumina slab used is provided in SI Section 4.1.

The first step is to identify the various potential energy minima corresponding to the metastable states of the water molecule adsorbed on the surface, either in a dissociative or a non-dissociative mode. As described in what follows, the dissociative modes lead to the formation of two hydroxyl (OH) groups: the first one is formed upon the transfer of an H atom of the water molecule to an O site of the surface; the second one results from the native water molecule. This systematic exploration confirms previous DFT studies where the minima were identified by running multiple geometry optimizations starting from various initial conditions. 39,40

Data set generation to learn states. Once local minima are identified, the metastability of the basins surrounding them should be assessed, because these local minima should be sufficiently separated from other local minima. To quantify this, two AIMD trajectories of 1 ps each were run starting from each minimum. The first AIMD was run with a friction parameter of 5 ps⁻¹ to thermalize the system faster, while the second one was run with γ = 0.5 ps⁻¹. If the system ends up in another potential energy well during this second part of the trajectory, then the initial well is not considered relevant to qualify as a metastable state. At 300 K, multiple transitions between all basins were observed, so the potential wells cannot be considered metastable and relevant to mimic realistic chemical reactions. At 200 K, 8 genuine metastable states could be identified, indicating that at this temperature the system better mimics chemical reaction conditions. The various identified states are named A_i or D_i depending on whether the state corresponds to a non-dissociated adsorbed water molecule or to two surface hydroxyls after water dissociation, respectively (see Fig. 5). Some of these states are in fact identical, as there exists a symmetry plane in this structure, and the corresponding metastable potential energy wells should thus be gathered into the same state. For example, the wells D1 and D3 are symmetrically identical.
The numerical definitions of the states A1, A2A3, A4, D1D3 and D2D4 were built using one-versus-all (1-vs-all) linear SVM classifier decision functions f_{X-vs-all}. For instance, the state A1 is defined as {q | f_{A1-vs-all}(SOAP(q)) ≤ −1}.
To train these models, the data used were 1 ps MD trajectories at 50 K starting from each local minimum. The point of running the MD at a lower temperature was to obtain points close to the minimum of the potential energy well. The dynamics was first run with a friction parameter of 5 ps⁻¹ during 1 ps for equilibration, then during 1 ps with γ = 0.5 ps⁻¹. The production runs of these trajectories were used to train the SVM classifiers. Only one SOAP descriptor, centered on the oxygen atom of the adsorbed water molecule, was used as features in the training set. With the parameters mentioned in SI Section 4.4, this leads to an array of size 2100 to describe each structure. Before training the model, each dimension of the SOAP descriptors was scaled to have zero mean and unit variance. The test score of the SVM model was 100% in every case, which indicates that the sets of structures represented with SOAP descriptors are linearly separable. On the other hand, trying to separate the SOAP descriptors of the trajectories starting from two symmetric minima, such as D1 and D3, systematically led to smaller test scores. This indicates that the wells surrounding these two minima are indeed similar in the sense of the SOAP descriptor. As a side remark, in a situation where the symmetries of the states are unknown, this kind of approach can help to identify similarities. In Figure 6, a histogram of the decision function of an A1-vs-D1 SOAP-SVM classifier is plotted. The various colors represent the different labeled states. It is clear that this CV allows one to differentiate the A and D states. Moreover, according to this criterion, the A2 and A3 states, as well as the D1/D3 and D2/D4 groups of points, bear some similarities for reasons of symmetry.
Analysis of AMS rate constants
In this section, we first analyze the sensitivity of the reaction rates to the two key parameters N_rep and M_real at a fixed product N_rep × M_real, which roughly corresponds to a fixed computational cost. Indeed, assuming that every branching during one AMS realization has the same average cost, and that η_killed^{n} is constant and equal to k_min at all steps of the AMS realization, the cost of one AMS realization is given by the product of the number of AMS iterations (n_max) and the number of killed replicas (k_min). Under these assumptions, the AMS estimator (4) writes:
$$ \hat{p} = \left( 1 - \frac{k_{\min}}{N_{\mathrm{rep}}} \right)^{n_{\max}}. \qquad (7) $$
Assuming that k_min/N_rep is small, the computational cost of a single AMS realization thus scales as:

$$ k_{\min} \, n_{\max} \approx -N_{\mathrm{rep}} \ln(\hat{p}). \qquad (8) $$

With too few replicas, the intrinsic variance of the AMS estimator can be so large that the confidence interval of the estimated probability contains 0, leading to uninterpretable results. Table 1 reports the evolution of the water rotation rate constant k_{A1→A2A3} calculated with AMS for various values of N_rep and M_real, using the A1-vs-all SOAP-SVM reaction coordinate and states defined as R = A1 and P = A2A3 ∪ A4 ∪ D1D3 ∪ D2D4.

Table 1: Water rotation rate constants for various values of M_real and N_rep, with R = A1, P = A2A3 ∪ A4 ∪ D1D3 ∪ D2D4 and ξ = A1-vs-all SOAP-SVM RC. Columns: M_real, N_rep, t̄_loop (fs), p_{A1→A2A3}(Σ_A1), k_{A1→A2A3} (s⁻¹); first entry: M_real = 5, N_rep = 400, t̄_loop = 108 ± 5 fs.
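As a rough illustration of the cost scaling (8), with a hypothetical target probability of 10⁻⁹ (each branching corresponding to one partial AIMD trajectory):

```python
import numpy as np

# Branching budget predicted by Eq. (8) for a hypothetical target
# probability of 1e-9, at two replica counts:
for n_rep in (100, 400):
    print(n_rep, "replicas ->", round(-n_rep * np.log(1e-9)), "branchings")
# 100 replicas -> 2072 branchings; 400 replicas -> 8289 branchings
```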
Using the A1-vs-all SOAP-SVM RC to sample trajectories ending in P leads to the results presented in Table 2. This approach samples at once all transitions from A1 towards P = A2A3 ∪ A4 ∪ D1D3 ∪ D2D4. As the results come from the same AMS runs, t̄_loop is constant and equal to 110 ± 5 fs.
For the A1 → D1D3 transition, the confidence interval contains 0. Moreover, the direct transition from A1 to D2D4 is so rare that it has not even been sampled. To quantify the transition A1 → D1D3 more accurately, more specific RCs must be used. The results obtained with two other RCs are compared in Table 3. Changing the reaction coordinate from A1-vs-all SOAP-SVM to A1-vs-D1 SOAP-SVM for AMS does not significantly improve the precision of the rate constant, as the estimated variance is still so large that 0 is contained in the confidence interval. This is due to the fact that, in view of the definition of R and P, AMS still samples trajectories that are of no interest, such as the rotation A1 → A2A3. To observe only A1 → D1D3 reactive trajectories, one possibility would be to set R = A1 and P = D1D3. However, as an AMS iteration stops once the trajectories finish either in R or in P, a trajectory including the A1 → A2A3 rotation would consume too much computational time before reaching R or P, since the state A2A3 is metastable. Hence R and P must be defined differently. Considering transitions starting from A1, with the choice R = A1 ∪ A2A3 ∪ A4 ∪ D2D4, P = D1D3, and initial conditions sampled on Σ_A1, AMS is compelled to sample A1 → D1D3 trajectories.

Table 2: Estimated probabilities p_Transition(Σ_A1) and rate constants k_Transition (s⁻¹) for the transitions sampled out of A1.
The difference with the previous case is that, if a rotation A1 → A2A3 is observed in the course of the algorithm, the corresponding trajectory is stopped once it enters the A2A3 state and is considered as non-reactive. Such trajectories are ultimately discarded and replaced by trajectories having higher values z_max of the chosen reaction coordinate, which is defined so as to enhance the sampling of trajectories between the desired metastable states. Both the quality of the reaction coordinate and the choice of the R and P states are important to obtain precise results for the A1 → D1D3 transition (see Table 3). In our case study, the necessity to change the definition of R and P might be due to the difference in transition probability between the rotation A1 → A2A3 and the water dissociation A1 → D1D3. Indeed, the half-width of the confidence intervals is larger than the target rate when any type of rotation can be sampled, while constraining AMS to sample only A1 → D1D3 trajectories leads to smaller confidence intervals. In the present case, the interpolated SOAP-PCV RC is not significantly better than the A1-vs-D1 SOAP-SVM RC in terms of variance, as the 90% confidence error represents 97% of the target value, while for the A1-vs-D1 SOAP-SVM RC it is 89%.

Comparison of the rate constants calculated with AMS and with hTST. Various rate constants involved in the reaction network of Figure 5 were computed using AMS.
Table 3: Comparison of two RCs for the A1 → D1D3 transition. Columns: RC, t̄_loop (fs), p_{A1→D1D3}(Σ_A1), k_{A1→D1D3} (s⁻¹); first with R = A1, P = A2A3 ∪ A4 ∪ D1D3 ∪ D2D4 (A1-vs-all SOAP-SVM RC), then with R = A1 ∪ A2A3 ∪ A4 ∪ D2D4, P = D1D3 (A1-vs-D1 SOAP-SVM RC, t̄_loop = 105 ± 2 fs).
Various reaction coordinates and various definitions of the states R and P were used to obtain the results presented in Table 4. For the sake of clarity, the choices of R, P, RC and AMS parameters for each transition are listed in SI Table 1. The rates obtained by AMS are directly compared to the reaction rate constants computed from the static hTST approach. Activation free energies calculated with hTST are reported in SI Table 3, and they compare qualitatively with previously published DFT data. 44 Reaction rate constants obtained by the harmonic approximation are consistently higher than those obtained via the Hill relation and AMS for the Langevin dynamics, with a single exception for the A2A3 → A4 rotation. Assuming that the friction parameter is set so that the Langevin dynamics accurately reproduces the system's dynamics, the AMS rate constants should be more precise than the TST ones, due to the intrinsic overestimation of rates by TST mentioned in the introduction.

Table 4: Reaction rate constants (s⁻¹) obtained with AMS and with hTST.

Transition | k_AMS (s⁻¹) | k_hTST (s⁻¹)
A1 → D1D3 | (1.64 ± 1.59) × 10⁹ | 3.37 × 10¹¹
D1D3 → A1 | (2.32 ± 1.59) × 10¹⁰ | 1.13 × 10¹²
A2A3 → D2D4 | (7.86 ± 7.53) × 10⁹ | 5.45 × 10¹³
D2D4 → A2A3 | (1.28 ± 0.54) × 10¹¹ | 1.17 × 10¹³
A2A3 → D1D3 | ∅ | ∅
D1D3 → A2A3 | (2.33 ± 3.14) × 10⁸ | ∅

The harmonic approximation of the potential energy surface for fast approximations of free energies can lead to large errors. In particular, entropic effects are usually mistreated by hTST approaches, as underlined by previous theoretical studies based on transition path sampling and blue moon ensemble simulations 49 or other approaches. 12 In the present case, this might be the reason for the important overestimation of the rates of water formation and dissociation events. In particular, in the case of the A2A3 → D2D4 transition, the approximation of the TS free energy is so poor that the activation free energy is negative (as reported in SI Table 3), which leads to the large overestimation of the rate.
Under the assumption of a correctly parameterized dynamics, the values presented in Table 4 show that most water rotations are at least one order of magnitude faster than dissociation events. Only the direct A1 → A4 rotation seems to occur less frequently. The formation of water happens on the same timescale as the fast water rotations, depending on the hydroxyl conformation. This ordering has to be compared with the one deduced from the hTST rate constants, for which the quickest changes are the water formation and dissociation events, and the slowest formation event occurs as frequently as the fastest water rotation.
Using the presented approach to compute various reaction rate constants, especially those of forward and backward reactions, one can also deduce reaction free energies:

$$ K_{R \to P} = \frac{k_{R \to P}}{k_{P \to R}}, \qquad (9) $$

and

$$ \Delta G_{R \to P}(T) = -N_A k_B T \ln\left( K_{R \to P} \right), \qquad (10) $$
where N_A is the Avogadro number and K_{R→P} is the reaction equilibrium constant. The reaction free energies of Table 5 show that, according to the harmonic approximation, the most stable state should be D1D3, while the most stable state identified with the AMS method at T = 200 K is A1. Previous ab initio thermodynamic studies within the harmonic approximation also identified the dissociated state as the more favored one. 39,40 Here also, one may suspect that entropic contributions are at the origin of the change in the stability order. In particular, within the harmonic approximation, it is assumed that the adsorbed water molecule has similar rotational and translational degrees of freedom in the A1 state and in D1D3. We cannot exclude that this assumption leads to errors, as the AIMD simulations reveal numerous rotational movements of the adsorbed water. This effect influences the entropy change and stabilizes the non-dissociated A1 state with respect to the dissociated one, D1D3. This thermodynamic analysis is also consistent with the previous kinetic observation: the thermodynamic stabilization of the non-dissociated reactant states with AMS implies that water dissociation rate constants are significantly smaller with AMS than with hTST.
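As a worked example of Eqs. (9)-(10), using the central AMS values of Table 4 for the A1 ⇌ D1D3 pair (ignoring the confidence intervals):

```python
import numpy as np

k_B, N_A = 1.380649e-23, 6.02214076e23    # J/K and mol^-1
k_fwd, k_bwd = 1.64e9, 2.32e10            # AMS rates of Table 4 (s^-1)
T = 200.0                                 # temperature (K)
K_eq = k_fwd / k_bwd                      # Eq. (9)
dG = -N_A * k_B * T * np.log(K_eq)        # Eq. (10), in J/mol
print(f"K = {K_eq:.3f}, dG(A1 -> D1D3) = {dG / 1000:.1f} kJ/mol")  # ~ +4.4 kJ/mol
```

The positive sign of this free energy recovers the conclusion drawn above that A1 is the more stable state at 200 K according to AMS.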
Analysis of AMS reactive trajectories. In addition to computing reaction rates, we show in this section how the AMS method allows one to sample reactive trajectories. As AMS typically produces more reactive trajectories than can be inspected by hand, an automatic method is necessary to analyze all of them, and some dimensionality reduction is useful to this end.
Clustering reactive trajectories. In the case of the A4 → A1 rotation, two paths exist, which can be identified by visual inspection of many reactive trajectories. A more systematic way to proceed is to rely on clustering methods, which are specifically designed to identify groups within a dataset. Among the various possible approaches, we used here the K-means algorithm as implemented in scikit-learn. 67 To make the numerical representation of each trajectory independent of its length, each trajectory was represented by its intersections with five isolevels of the A4-vs-all SOAP-SVM RC. The details of the clustering procedure are presented in SI Section 7. It is important to mention that the K-means method requires the number of clusters to be known a priori, so various values should be tested. The two types of paths can be identified by visual inspection of the trajectory closest to each cluster's centroid, even though not all trajectories are perfectly assigned by this approach. Of course, resorting to other clustering methods could be more efficient, but such a systematic study is beyond the scope of the present work.
The "top" path (see Figure 7 and trajectories supplied in electronic supplementary materials) qualitatively looks similar to the path found by the NEB. The fact that this path is less sampled than the "side" path indicates that this transition is rarer. Figure 7: Schematic representation of the two types of paths for the A 4 → A 1 rotation. The first path (blue) is named "side" while the second one (green) is named "top". The purple line represent the RC isolevels used to represent the trajectories.
Stochastic transition state estimation. One possibility is to consider only one structure per trajectory instead of the whole trajectory. The most important structure q along a trajectory can be defined as the one such that the committor probability is p_{R→P}(q) = 0.5, where p_{R→P}(q) is the probability that a molecular dynamics trajectory starting from q reaches the P state before R. According to the IUPAC goldbook, 8 this ensemble of structures defines the transition state. The quality of this analysis depends on the quality of the sampling of the reaction path.
Indeed, considering the reactive trajectories sampled by the AMS run with R = A1 and P = A2A3 ∪ A4 ∪ D1D3 ∪ D2D4, the definition of the stochastic TS is of poor quality. This comes from the fact that this AMS run mostly samples A1 → A2A3 trajectories (and only rarely A1 → D1D3 trajectories). The best approximation of a stochastic TS lies on the most sampled path (Region 1 in Fig. 9). This is in line with the results obtained for the confidence interval of the reaction rate constants (see Table 3). In Figure 9, the green curve represents the 0.5 isolevel of the reaction coordinate, while the red one is the 0.5 isolevel of the committor function. As these levels do not match perfectly over the whole space, the best approximation of the stochastic TS is in the region of space where most of the reactive trajectories concentrate (Region 1). This issue in the analysis of the reactive trajectories of less probable transitions is recurrent when multiple paths are sampled. An alternative approach to automatically identify whether multiple paths leading to a single product are present within a set of sampled trajectories would be desirable.

Figure 9: Schematic representation of the poor match between the 0.5 isolevel of the committor function (red) and that of the reaction coordinate (green). The green isolevel is placed after an AMS sampling of some reactive trajectories from R to P where the majority of the trajectories went via Region 1.
Conclusion
We proposed and implemented a theoretical approach based on the Hill relation to compute exact reaction rate constants using rare event sampling and support vector machines. It is illustrated on various chemical events occurring at an oxide material surface. A key algorithm to this end is the Adaptive Multilevel Splitting, which estimates reaction probabilities and samples reactive trajectories by using ab initio molecular dynamics. For that purpose, SVM was used to define the chemically relevant states and the reaction coordinates indexing the transition from reactant to products. This allows one to compute the exact reaction rates for the dynamics at hand and makes possible a detailed analysis of reaction mechanisms via the inspection of reactive trajectories. The implementation, which communicates with a plane-wave DFT software, allowed us to illustrate the approach by studying the reactivity of a water molecule adsorbed on the γ-alumina (100) surface. The computed reaction rate constants were discussed and compared to those of a static hTST approach. The method's precision is impacted by the choice of the reaction coordinate, the choice of reactants and products in a multiple-state situation, the number of repetitions of the probability estimation, and the number of replicas intrinsic to AMS. The hTST approach does not make assumptions on the system's dynamics, but relies on strong assumptions concerning the shape of the potential energy surface, implying uncontrolled approximations of the entropy. The proposed methodology alleviates these limitations at the expense of an increased computational cost. Assuming that the Langevin dynamics accurately models the system's dynamics (which involves in particular having a relevant value of the friction coefficient), the presented approach should be more precise than TST approaches. In the case considered here, hTST reaction rate constants are always higher than the ones estimated via AMS and the Hill relation. The relative stability of the states is also different. In particular, we show that hTST underestimates the thermodynamic stability of adsorbed water molecules, and simultaneously overestimates rate constants of water dissociation and formation. On top of that, the analysis of reactive trajectories allows one to identify possible reaction paths that are not clearly visible from static approaches.

SI 1: Estimation of the probability with the AMS algorithm

The various approaches using the Hill relation to compute reaction rate constants require a method to estimate the probability p_{R→P}(∂R) to reach P before R, starting from a given distribution on the boundary ∂R of the reactant state R. This probability is often estimated using a splitting estimator such as FFS or AMS. Let us first explain why a naive Monte Carlo estimator is plagued by a large variance, before presenting the AMS estimator and a pseudo-code of the AMS algorithm. We also refer to the main text for a detailed explanation of the main steps of the algorithm.
As observing a reaction is a rare event, the probability p_{R→P}(∂R) is typically very small, which is why resorting to a simple Monte Carlo estimator is in general not efficient. A naive Monte Carlo estimator consists in running n trajectories starting on the boundary ∂R of R and stopping them once they reach either the state R or the state P. Counting the number n_success of trajectories which reach P before R (R → P transitions) yields the Monte Carlo estimator:

$$ \hat{p}_{R \to P}(\partial R) = \frac{n_{\mathrm{success}}}{n}. \qquad (1) $$
The normalized variance associated with this estimator writes:

$$ \frac{\operatorname{Var}\,\hat{p}_{R \to P}(\partial R)}{p_{R \to P}(\partial R)^2} = \frac{\big(1 - p_{R \to P}(\partial R)\big)\, p_{R \to P}(\partial R)}{n\, p_{R \to P}(\partial R)^2} \approx \frac{1}{n\, p_{R \to P}(\partial R)}, \qquad (2) $$

as p_{R→P}(∂R) is negligible compared to 1. From Equation (2), it is clear that the lower the transition probability, the larger the number of trials needed to obtain a sensible relative error.
To alleviate this difficulty, a splitting estimator uses a product of conditional probabilities to reformulate the problem. The idea is to include the event of interest into an increasing sequence of more likely events. The target probability is then written as a product of conditional probabilities. More precisely, by introducing M surfaces (Σ_j), 1 ≤ j ≤ M, between R and P, such that any transition path from R to P has to cross each of these surfaces, the probability p_{R→P}(∂R) for the trajectory to reach P before R, starting from the boundary of R, can be estimated as:

$$ \hat{p}_{R \to P}(\partial R) = \hat{p}_{R \to \Sigma_1}(\partial R) \left( \prod_{j=1}^{M-1} \hat{p}_{R \to \Sigma_{j+1}}(\Sigma_j) \right) \hat{p}_{R \to P}(\Sigma_M), \qquad (3) $$
where p̂_{R→Σ1}(∂R) is an estimator of the probability for the path to reach Σ_1 before going back to R; p̂_{R→Σ_{j+1}}(Σ_j) is an estimator of the probability to reach Σ_{j+1} before going back to R, conditionally on the fact that Σ_j was reached before going back to R; and finally p̂_{R→P}(Σ_M) is an estimator of the probability to reach P before going back to R, conditionally on the fact that Σ_M was reached before going back to R. It can be shown that such an estimator has a smaller variance than the Monte Carlo estimator. Moreover, for a fixed number of surfaces M, it can be shown that the variance is minimal if the surfaces are chosen such that all the conditional probabilities p_{R→Σ_{j+1}}(Σ_j) are equal. This leads to the adaptive multilevel splitting (AMS) algorithm, where the surfaces are placed adaptively during a given simulation, using empirical quantiles, so that the estimators of these conditional probabilities are all equal. 1
As an illustration, imagine that the probability to be estimated is (1/2)^{M+1} (left-hand side of (3)), and that the surfaces are positioned so that all the probabilities in the product on the right-hand side are 1/2 (there is a 50% chance to reach the next surface Σ_{j+1} before R, knowing that the path has reached Σ_j before R). In such a situation, a naive Monte Carlo estimator is plagued by a large variance, since the probability (1/2)^{M+1} to be estimated is very small. On the other hand, estimating (M + 1) times a probability of 1/2 is much easier.
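The gain can be quantified with the relative variances: the back-of-envelope comparison below treats the stage estimators as independent, which is optimistic for the actual algorithm but conveys the scaling:

```python
import numpy as np

M, n = 20, 1000                  # number of surfaces and of trajectories (illustrative)
p = 0.5 ** (M + 1)               # target probability, about 4.8e-7
rel_var_naive = (1.0 - p) / (n * p)                       # Eq. (2)
rel_var_split = (1.0 + 0.5 / (n * 0.5)) ** (M + 1) - 1.0  # M + 1 stages of prob. 1/2
print(f"naive MC: {rel_var_naive:.1e}, idealized splitting: {rel_var_split:.1e}")
# naive MC: 2.1e+03, idealized splitting: 2.1e-02
```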
An illustration of this estimator with 7 surfaces is presented in Figure 1. To use such an estimator in practice, one needs to define the number of surfaces and their positions in phase space, and to devise a way to estimate all the conditional probabilities p_{R→Σ_{j+1}}(Σ_j). The AMS algorithm is designed to solve these problems all at once.
SI 3: Multiple states case

First approach. Starting from a state E_j, one can set R = E_j and P as the union of the other states, and attribute the sampled reactive trajectories to their arrival states: denoting by n_in-Ei the number of replicas ending in state E_i, the state-to-state probability is estimated as

$$ \hat{p}_{E_j \to E_i}(\Sigma_{E_j}) = \frac{n_{\mathrm{in}\text{-}E_i}}{N_{\mathrm{rep}}}\; \hat{p}_{E_j \to P}(\Sigma_{E_j}). \qquad (12) $$

This formula is motivated later on by considering the order of the first hitting times of each state. This approach allows one to observe various types of transitions using a single AMS run. However, if one transition is less likely to occur than another, the sampling of the less probable transition might not be satisfactory, as most trajectories would sample the most probable one.
Second approach. To circumvent this issue, one can change the definition of the reactant and product states. Using initial conditions sampled on Σ_Ej and setting R as the union of the E_i with i ≠ k and P = E_k, AMS is compelled to sample the E_j → E_k trajectories. The state R contains the states E_i with i ≠ k, as this allows one to consider an E_j → E_i trajectory as non-reactive. This matter is discussed in the results section.
Justification of equation (12). Let us consider a three-state case. The reactant state is R = E1 and the product state is P = E2 ∪ E3. We then define the time τ_Ei as the first time at which the dynamics starting on Σ_R reaches the state E_i. As three states are considered, six rankings of these three times are possible.

SI 4.2: Molecular dynamics parameters

All the molecular dynamics runs were generated using the Brünger-Brooks-Karplus integrator of the Langevin dynamics implemented in VASP. A 1.0 fs time step was used for all of them. The length of the dynamics runs and the friction parameter varied with the purpose of the run, as detailed in the results section of the main text.
SI 4.3: NEB and saddle points
Saddle points on the potential energy surface were identified by the nudged elastic band (NEB) method using the VASP TST tools. 7,8 The spring constant was set to 5.0 eV·Å⁻² and nudging was turned on. The number of images was 10, including the reactant and the product. The optimizer used was FIRE, with the default parameters that can be found in the VTST documentation. 8 The initial path was created by an interpolation between the z-matrix representations of the reactant and product structures with the Opt'n Path code. 9 The relevant saddle points on the potential energy surface were identified starting from the NEB results and refined using a quasi-Newton method. The vibrational frequencies of the minima and of the saddle points were evaluated using a finite difference method as implemented in the VASP package, based on displacements of 0.01 Å. Using these frequencies, the free energies of the metastable basins and of the transition states were computed within the harmonic approximation. The rotational components of the entropy were not explicitly computed and were assumed to cancel out between minima and transition states. The detailed expressions used can be found in Ref. 10. Finally, the hTST rates were computed using the Eyring-Polanyi equation, assuming that the transmission coefficient is equal to 1.
SI 4.4: SOAP descriptor parameters
The SOAP descriptors were computed using the dscribe Python package. 11 The cutoff radius was set to 6 Å, as the main structural changes in the example system are contained within a sphere of this radius around a central atom. The atomic density in the neighborhood of an atom is approximated as a sum of Gaussians centered on the nearby atomic nuclei, with a width σ of 0.05 Å; this width is chosen so that there is not too much overlap between two different structures. The parameters n_max and l_max, controlling the size of the basis on which the atomic density is projected, were set to 8 and 6, respectively.
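With these values, the descriptor could be set up as follows; keyword names follow recent dscribe releases, and the structure file and atom index are hypothetical:

```python
from ase.io import read
from dscribe.descriptors import SOAP

# Keyword names follow recent dscribe releases (older ones use
# rcut / nmax / lmax and `positions` instead of `centers`).
soap = SOAP(
    species=["Al", "O", "H"],  # elements of the hydrated alumina slab
    r_cut=6.0,                 # cutoff radius (Angstrom)
    sigma=0.05,                # width of the atom-centered Gaussians (Angstrom)
    n_max=8,                   # radial basis size
    l_max=6,                   # angular basis size
)
slab = read("POSCAR")                 # hypothetical structure file
x = soap.create(slab, centers=[120])  # hypothetical index of the water oxygen
```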
SI 5: Implementation with VASP software
The Fleming-Viot particle process to sample initial conditions and the AMS algorithm were implemented in Python scripts calling the VASP software for the integration of the unbiased Langevin dynamics. Both algorithms require stopping the dynamics when it enters a certain state. This means that, at every time step, one has to evaluate the criterion used to define this state and then decide whether the dynamics is to be continued or not. This kind of stopping condition cannot be enforced with the current implementation of VASP and had to be added. The collective variable defining the states is computed using a Python script, CV.py, that is used as input. The choice of the stopping conditions is controlled by INCAR tags.
SI 6: Detailed numerical results
The most precise results for each transition observed during this study are presented in Table 1. From these results, some reaction heats were computed; they are presented in Table 2.

SI 7: Clustering of reactive trajectories

K-means, like most clustering algorithms, performs better for clustering problems in low dimension. Performing the clustering on whole reactive trajectories is therefore expected to be inefficient. Moreover, the sampled reactive trajectories do not necessarily have the same length and cannot be compared directly. To alleviate these problems, a first preprocessing step is to summarize each trajectory by a small number of structures. These structures correspond to the first points of the trajectory crossing certain reaction coordinate isolevels. In the example presented in the results section of the main text, five levels were used. These levels are equally spaced between the largest of the minimal values of the RC along the trajectories and the smallest of the maximal values of the RC along the trajectories. The next preprocessing step is to numerically represent these few structures per trajectory via the SOAP descriptor centered on the oxygen atom of the water molecule. As these SOAP descriptors are also high-dimensional, a principal component analysis is performed on the normalized SOAP descriptors of the structures corresponding to the same level of the RC. The first four principal components are then used as descriptors of the structure; this choice is motivated by the fact that these four components capture at least 90% of the variance of the SOAP descriptors for a given level. Finally, a full reactive trajectory is represented as a vector of size four times the number of levels. As the K-means algorithm strongly depends on its (random) initialization, the algorithm was repeated 20 times and the best set of clusters was kept. Setting the number of clusters to 3 allows one to find the two types of pathways for the A4 → A1 rotation discussed previously. The trajectories are attached as videos in the electronic SI. Three clusters are necessary because the two paths are not equally sampled by the AMS simulation: one path, being less probable, is less sampled. This behavior is typical of K-means, which prefers to split one large cluster in two rather than identifying a large cluster and a much smaller one. Of course, such issues could be alleviated by resorting to other clustering methods, but such a study is beyond the scope of the present work. However, the results obtained at this level pave the way to future, more detailed investigations.
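A compact sketch of this preprocessing and clustering pipeline (the per-level SOAP arrays are assumed to have been extracted beforehand):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cluster_trajectories(soap_per_level, n_clusters=3):
    """soap_per_level: list of (n_traj, d) arrays, one per RC isolevel,
    holding the SOAP descriptor of the first structure of each trajectory
    crossing that level (assumed extracted beforehand)."""
    blocks = []
    for X in soap_per_level:   # one normalization + PCA per isolevel
        Xs = StandardScaler().fit_transform(X)
        blocks.append(PCA(n_components=4).fit_transform(Xs))
    F = np.hstack(blocks)      # (n_traj, 4 * n_levels) trajectory fingerprints
    km = KMeans(n_clusters=n_clusters, n_init=20, random_state=0).fit(F)
    return km.labels_
```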
Figure 1: Global workflow to compute reaction rate constants with the Hill relation using Adaptive Multilevel Splitting and Machine Learning.
Figure 2: First iteration of the AMS algorithm with k_min = 1 and N_rep = 3. Purple points represent the initial conditions on Σ_R. a) Identify the kill level z^1_kill = z^{k_min,0}_max and kill the replicas such that z^{i,0}_max ≤ z^1_kill, i.e. the orange replica. b) Replace the killed replicas by the trajectory of one of the remaining replicas (the green one in this example) until the level z^1_kill and continue the trajectory of the replica until it reaches either the state R or the state P.
Figure 3: Flowchart of one iteration of AMS. n is an iteration index and i a replica index.

AMS termination and probability estimator. The AMS algorithm can terminate in two different manners. First, after a certain number of iterations, all the replicas reach the state P. In such a case, N_rep different reactive trajectories are obtained and the estimated transition probability is computed as the product over iterations of the survival fractions:

p̂ = ∏_n (1 − η^n_killed / N_rep),

where η^n_killed is the number of replicas killed at iteration n.
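In code, this estimator is a simple running product over the AMS iterations; the sketch below is illustrative only.

# Sketch of the AMS probability estimator: the product over iterations of
# the survival fractions (1 - eta_n / N_rep), where eta_n is the number of
# replicas killed at iteration n.
def ams_probability(n_killed_per_iter, n_rep):
    p_hat = 1.0
    for eta in n_killed_per_iter:
        p_hat *= 1.0 - eta / n_rep
    return p_hat

# Example: three iterations killing 1, 2 and 1 replicas out of 100.
print(ams_probability([1, 2, 1], 100))  # 0.960498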
Representation of chemical structures. Reaction coordinates and state definitions must be invariant under rotations, translations and symmetries of the system, as well as under permutations of identical atoms. Since descriptions relying on Cartesian coordinates do not exhibit these properties, substantial work was conducted to find representations of atomic systems invariant by Galilean transformations and other symmetries, in particular in the field of ML empirical potentials. [59][60][61][62] We chose the smooth overlap of atomic positions (SOAP) 60
3.1 γ-Al 2 O 3 models and definition of states

Model of the catalytic system. The catalytic case study chosen to benchmark the previously presented method is the transformation of a water molecule adsorbed on the (100) γ-alumina surface. 39,40,42 A representation of the γ-Al 2 O 3 surface on which one water molecule is chemically adsorbed without dissociation on an aluminum Lewis site is given in
Figure 4. More information about the alumina slab used is provided in SI Section 4.1.
Figure 4: Representation of one water molecule adsorbed on an aluminum site of the (100) γ-alumina surface model. Surface atoms are represented as balls and sticks while subsurface ones are represented as lines. Colors: red: oxygen, grey: aluminum, white: hydrogen, black: limit of the periodic cell. a) Top view; b) Side view.

The first step is to identify the various potential energy minima corresponding to the metastable states of the water molecule adsorbed on the surface, either in a dissociative or a non-dissociative mode. As described in what follows, the dissociative modes lead to the formation of two hydroxyl (OH) groups: the first one is formed upon the transfer of a H atom of the water molecule to an O site of the surface; the second one results from the native water molecule. This systematic exploration confirms previous DFT studies where the minima were identified by running multiple geometry optimizations starting from various initial conditions. 39,40
Figure 5: Representation of the main minimum energy structures corresponding to metastable states of the water molecule adsorbed on the (100) γ-Al 2 O 3 surface. Arrows represent transitions that might occur. Color legend: gray: aluminum, red: oxygen, white: hydrogen.

One SOAP descriptor centered on the oxygen atom of the adsorbed water molecule was used as features in the training set. With the parameters mentioned in SI Section 4.4, this leads to an array of size 2100 to describe each structure. Before training the model, each dimension of the SOAP descriptors was scaled to have zero mean and unit variance. The test score of the SVM model was 100% in every case, which indicates that the sets of structures represented with SOAP descriptors are linearly separable. On the other hand, trying to separate the SOAP descriptors of the trajectories starting from two symmetric minima such as D 1 and D 3 systematically led to smaller test scores. This indicates that the wells surrounding these two minima are indeed similar in the sense of the SOAP descriptor.
Figure 6: Histogram of the A 1 -vs-D 1 SOAP-SVM CV on the whole labelled dataset.

Definition of reaction coordinates (RCs). The first RCs used to perform AMS simulations are the various 1-vs-all SVM decision functions. These RCs are therefore named "1-vs-all SOAP-SVM RC" in the following sections. More specific RCs are built using the same approach while targeting a specific transition from one state to another. In this case, the decision function is obtained by separating only the two targeted states; the corresponding RCs are termed "1-vs-1 SOAP-SVM RC". Finally, a Path Collective Variable (PCV), termed "SOAP-PCV", is also used as a reaction coordinate to index the progression of the AMS replicas. The SOAP-PCV RCs differ depending on the reference path. We consider here paths built by an interpolation of the z-matrix representations of the minima of two metastable basins. 68 The associated RCs are termed "interpolated SOAP-PCV".
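As an illustration, a 1-vs-1 SOAP-SVM RC can be obtained with scikit-learn as sketched below; the random X and y are stand-ins for the SOAP vectors and state labels of the real training structures (a 1-vs-all RC is built the same way, with one state labelled against all others).

# Sketch of a 1-vs-1 SOAP-SVM RC: a linear SVM separating SOAP vectors
# labelled A1 (0) from those labelled D1 (1); the signed distance to the
# separating hyperplane is then used as the reaction coordinate.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

X = np.random.randn(200, 2100)        # stand-in for the SOAP descriptors
y = np.random.randint(0, 2, 200)      # stand-in for the state labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # was 1.0 on the real, separable dataset

def xi(soap_vector):
    """Reaction coordinate = SVM decision function."""
    return clf.decision_function(soap_vector.reshape(1, -1))[0]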
number of replicas (N_rep) and the number of repetitions of the probability estimation (M_real). These parameters also govern the computational cost and how this cost can be distributed over multiple CPUs, taking advantage of the parallel architecture of current supercomputers. The other choices impacting the precision of the reaction rate constant are the RC and the definitions of the states, which are also investigated in what follows. The reaction rate constants obtained for each observed transition are finally compared to values computed from hTST.

Parallel calculations versus precision. The effect of the number of replicas (N_rep) and of the number of AMS repetitions (M_real) is evaluated for a fixed number of initial conditions (M_real N_rep). Taking into account the number of repetitions of the algorithm M_real, the final cost of a reaction rate constant estimation is −M_real N_rep ln(p̂). With the current implementation of AMS, the M_real realisations can be run in parallel. The objective is to find the minimal value of N_rep to better distribute the computational cost over multiple parallel realisations.
M_real | N_rep | t_loop−RΣ_R R (fs) | p̂_Σ_R→P | k_R→P (s −1 )
— | — | — | (3.73 ± 3.03) 10 −3 | (3.67 ± 2.99) 10 10
10 | 200 | 110 ± 5 | (3.38 ± 1.56) 10 −3 | (3.08 ± 1.43) 10 10
20 | 100 | 101 ± 5 | (3.47 ± 1.96) 10 −3 | (3.21 ± 1.82) 10 10

By definition, t_loop is not impacted by N_rep or M_real. The target values of the probability and of the rate are little impacted in the present case, which is not the case for the variance. The choice of N_rep = 200 and M_real = 10 is sufficient to obtain an A 1 to A 2 A 3 water rotation rate of 3.1 10 10 s −1 with the 90% confidence interval [1.65 10 10 s −1 , 4.51 10 10 s −1 ]. A similar precision can be obtained with N_rep = 100 and M_real = 20. It is therefore important to perform the AMS simulations a certain number of times (M_real) in order to have a proper variance estimation. Hence, for a similar computational cost in CPU time, a satisfying accuracy can be obtained using M_real ≥ 10.

Impact of the definition of reaction coordinates and states. The definitions of the states R and P determine the type of trajectories that can be sampled by the algorithm. The choice of the reaction coordinate impacts the quality of this sampling. For instance, exploring all types of trajectories from A 1 to any other state requires sampling initial conditions on
in the part of the TS definition referring to a surface, all the structures satisfying the p_R→P(q) = 0.5 condition are part of the transition state. This definition of the transition state as a "set of states (each characterized by its own geometry and energy)" is indeed not consistent with the following part of the definition, "The transition state is characterized by one and only one imaginary frequency", which presents it as a first order saddle point on the potential energy surface. The various structures q such that p_R→P(q) = 0.5 are not necessarily identical to the saddle points identified via the NEB method and harmonic frequency calculations, although some resemblance is expected. We propose to investigate this point in what follows for one water dissociation on the alumina surface. As mentioned in the methods section, the estimated probability for a trajectory to reach the surface Σ_{z^{n+1}_kill} starting on the surface Σ_{z^n_kill} is 1 − η^{n+1}_killed/N_rep. By identifying the iteration n such that the estimated probability of reaching Σ_{z^n_kill} equals 0.5, one can define the iso-level Σ_0.5 of the reaction coordinate. The configurations of reactive trajectories crossing this surface are such that p_R→P(q) = 0.5, and there is at least one structure satisfying this condition per reactive trajectory. Considering only the first structure crossing the iso-level Σ_0.5, the mean structure is computed, in the sense of the SOAP descriptor. This analysis was applied to the various realizations of AMS that were run.

Stochastic transition state of the water dissociation. For the dissociation event A 1 → D 1 D 3, the interpolated SOAP-PCV reaction coordinate, with the reactant and product states defined as R = A 1 ∪ A 2 A 3 ∪ A 4 ∪ D 2 D 4 and P = D 1 D 3, leads to a mean structure of configurations such that p_R→P(q) = 0.5 qualitatively similar to the saddle point of the PES determined with the NEB method, as represented in Figure 8 (the corresponding trajectory is provided in the electronic supplementary information). From a quantitative viewpoint, some slight structural differences can be noted regarding the O-H distances involving the transferred H atom: for the saddle point, the broken O-H bond is 0.14 Å shorter than for AMS, whereas the newly formed O-H bond is 0.14 Å larger. This difference might come from the fact that the momenta can bear a certain importance in the committor. Indeed, the estimated committor values bear dynamical information, while the saddle point is defined from the positions only.
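A possible implementation of this mean-structure analysis is sketched below. The function is hypothetical, and selecting the sampled frame whose SOAP vector is closest to the mean is one possible convention to obtain a representative structure.

# Sketch of the "mean structure at Sigma_0.5" analysis: for each reactive
# trajectory, take the first frame whose RC value crosses the 0.5 isolevel,
# then average the corresponding SOAP vectors.
import numpy as np

def mean_crossing_structure(trajectories, rc_values, soap_of, z_half):
    """trajectories: list of frame lists; rc_values: matching RC values;
    soap_of: frame -> SOAP vector; z_half: RC value of the Sigma_0.5 level."""
    crossing_soaps, crossing_frames = [], []
    for frames, zs in zip(trajectories, rc_values):
        # Reactive trajectories cross Sigma_0.5 by construction.
        idx = next(i for i, z in enumerate(zs) if z >= z_half)
        crossing_frames.append(frames[idx])
        crossing_soaps.append(soap_of(frames[idx]))
    crossing_soaps = np.asarray(crossing_soaps)
    mean_soap = crossing_soaps.mean(axis=0)
    # Representative frame: the one closest to the mean SOAP vector.
    best = np.argmin(np.linalg.norm(crossing_soaps - mean_soap, axis=1))
    return crossing_frames[best], mean_soap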
Figure 8: Ball-and-sticks representation of a) the saddle point on the PES and b) the mean structure such that p_R→P(q) = 0.5 from AMS using the interpolated SOAP-PCV RC. Color legend, red: oxygen, gray: aluminum, white: hydrogen.
identified via the NEB approach. Finally, this method used in combination with ab initio molecular dynamics can be computationally expensive. This issue might be alleviated by the use of Machine Learning Force Fields (MLFF), which can approach the accuracy of DFT force calculations at a much smaller computational cost. It also provides the opportunity to accurately describe nuclear quantum effects using path integral molecular dynamics, as in Ref. 69. Some active learning schemes to train MLFFs have been proposed recently and could articulate well with the present method. 70,71 In particular, in contrast to standard MD, the presented approach favors the sampling of transition regions, which are crucial to the description of chemical events. The study of a specific system could be done by first using jointly AMS and active learning to generate an accurate MLFF, which could then be used to evaluate reaction rate constants accurately and to sample reactive trajectories.

(66) Murphy, K. P. Probabilistic Machine Learning: An Introduction; MIT Press, 2022.
(67) Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; Duchesnay, É. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825-2830.
(68) Fleurat-Lessard, P. http://pfleurat.free.fr/ReactionPath.php.
(69) Bocus, M.; Goeminne, R.; Lamaire, A.; Cools-Ceuppens, M.; Verstraelen, T.; Speybroeck, V. V. Nuclear quantum effects on zeolite proton hopping kinetics explored with machine learning potentials and path integral molecular dynamics. Nat. Commun. 2023, 14.
(70) Vandermause, J.; Torrisi, S. B.; Batzner, S.; Xie, Y.; Sun, L.; Kolpak, A. M.; Kozinsky, B. On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events. Npj Comput. Mater. 2020, 6.
(71) Jinnouchi, R.; Miwa, K.; Karsai, F.; Kresse, G.; Asahi, R. On-the-fly active learning of interatomic potentials for large-scale atomistic simulations. J. Phys. Chem. Lett. 2020, 11, 6946-6955.
Figure 1: Schematic representation of a splitting estimator. The complete pseudo-code of the AMS algorithm is presented in Algorithm 1; see the main text for the explanation of each step.
(…05 ± 2.77) 10 −5 | (2.33 ± 3.14) 10 8
a Rate sampled using N rep = 200, M real = 10, R = A 1, P = A 2 A 3 ∪ A 4 ∪ D 1 D 3 ∪ D 2 D 4 and ξ = A 1 -vs-all SOAP-SVM RC.
b Rate sampled using N rep = 100, M real = 20, R = A 2 A 3, P = A 1 ∪ A 4 ∪ D 1 D 3 ∪ D 2 D 4 and ξ = A 2 A 3 -vs-all SOAP-SVM RC.
c Rate sampled using N rep = 200, M real = 10, R = A 4, P = A 1 ∪ A 2 A 3 ∪ D 1 D 3 ∪ D 2 D 4 and ξ = A 4 -vs-all SOAP-SVM RC.
d Rate sampled using N rep = 200, M real = 10, R = D 1 D 3, P = A 1 ∪ A 2 A 3 ∪ A 4 ∪ D 2 D 4 and ξ = D 1 D 3 -vs-all SOAP-SVM RC.
e Rate sampled using N rep = 200, M real = 10, R = D 2 D 4, P = A 1 ∪ A 2 A 3 ∪ A 4 ∪ D 1 D 3 and ξ = D 2 D 4 -vs-all SOAP-SVM RC.
f Rate sampled using N rep = 200, M real = 10, R = A 1 ∪ A 2 A 3 ∪ A 4 ∪ D 2 D 4, P = D 1 D 3, Σ R = Σ A 1 and ξ = interpolated SOAP-PCV.
Similar results can be obtained via the hTST approach; they are presented in Tables 3 and 4.
To benchmark an innovative methodology based on the Hill relation for exploring reaction mechanisms occurring on catalytic materials, we chose in this work a relevant case study: the reactivity of water on the (100) orientation of γ-alumina, a widely used support in heterogeneous catalysis applied to biomass conversion. 36,37 Comprehensive DFT based studies have revealed the versatile nature of active sites (Lewis Al and Bronsted Al-OH), their thermodynamic properties [38][39][40][41][42] and their kinetic ones (TS and activation barriers) by using predominantly hTST calculations. 36,37,43,44 As for the study of many chemical reactions, especially in catalysis, most of the reaction rate constants are computed within the TST framework. 4 Unbiased AIMD simulations have also been applied to decipher gamma-alumina's reactivity, its local structure and spectroscopic features in the presence of liquid water, in order to obtain a better understanding of phenomena occurring during catalyst preparation or catalytic reaction. 45,46 TPS was used in particular for studying the catalytic reactivity of other oxide materials, 47,48 also in combination with the blue-moon ensemble formalism. 49 Methods based on the Hill relation and rare event simulation methods are rarely used for studying chemical reactions 50 and, to the best of our knowledge, they have never been used to describe reactions in heterogeneous catalysis. In particular, the AMS method has only been used for molecular dynamics applications to study the isomerization of small biomolecules 51 or a protein-ligand dissociation, 52 up to now. Hence, the aim of the present work is to highlight how AMS applied to AIMD rare event sampling, combined with a ML approach, is able to compute reaction rate constants via the Hill relation in a relevant case study for heterogeneous catalysis. The CVs and RCs are built using SVM or Path Collective Variables and well-chosen chemical descriptors. 53
Table 1: Estimation of the probability, the rate and the corresponding accuracy at 90% confidence for the water rotation. The number of initial conditions M_real N_rep was kept fixed while varying M_real and N_rep.
Table 2: Transition rates leaving A 1 estimated using the A 1 -vs-all SOAP-SVM RC, N rep = 200, M real = 10, R = A 1 and P = A 2 A 3 ∪ A 4 ∪ D 1 D 3 ∪ D 2 D 4.

Transition | p̂ | k (s −1 )
A 1 → A 2 A 3 | (…56) 10 −3 | (3.17 ± 1.43) 10 10
A 1 → D 1 D 3 | (1.79 ± 1.86) 10 −3 | (1.63 ± 1.70) 10 10
A 1 → A 4 | (3.66 ± 6.02) 10 −7 | (3.44 ± 5.50) 10 6

allows to sample the transition A 1 → A 2 A 3 with a reasonable accuracy according to the estimate of the rate constant's variance. However, the less probable transitions (A 1 → D 1 D 3 and A 1 → A 4) are under-sampled and the rate estimations are not precise enough, as the 90% confidence intervals are as large as the estimated values themselves.
Table 3: Variation of the RC and of the reactant/product states R and P to sample the A 1 → D 1 D 3 transition, with N rep = 200, M real = 10 and initial conditions sampled on Σ A 1.
Table 4: Transition rate constants for all the transitions observed in this study, with 90% confidence intervals for the AMS results.

Transition | k Transition−AMS (s −1 ) | k Transition−hTST (s −1 )
Water rotations
A 1 → A 2 A 3 | (3.08 ± 1.43) 10 10 | 7.55 10 10
A 2 A 3 → A 1 | (1.49 ± 0.46) 10 11 | 2.06 10 12
A 2 A 3 → A 4 | (4.33 ± 2.20) 10 10 | 3.64 10 10
A 4 → A 2 A 3 | (2.35 ± 0.87) 10 11 | 5.66 10 11
A 1 → A 4 | (3.34 ± 6.56) 10 6 | 2.04 10 8
A 4 → A 1 | (1.34 ± 0.68) 10 10 | 8.65 10 10
Hydroxyl rotation
D 1 D 3 → D 2 D 4 | ∅ | 2.38 10 9
D 2 D 4 → D 1 D 3 | (2.86 ± 4.71) 10 8 | 4.15 10 9
Formation and dissociation of water
Table 5: Reaction heats at 200 K computed from Table 4 and from hTST.

| AMS Value (kJ.mol −1 ) | hTST Value (kJ.mol −1 )
Water rotations
∆G A 1 →A 2 A 3 | 2.62 ± 2.66 | 5.50
∆G A 2 A 3 →A 4 | 2.81 ± 2.83 | 4.56
∆G A 1 →A 4 | 13.8 ± 4.43 | 10.1
Hydroxyl rotations
∆G D 1 D 3 →D 2 D 4 | ∅ | 0.93
Water dissociations
∆G A 1 →D 1 D 3 | 4.41 ± 3.88 | 2.01
∆G A 2 A 3 →D 2 D 4 | 4.64 ± 3.54 | −2.56
The overall AMS trajectory lengths are of the order of 200 ps. Qualitatively speaking, some chemically relevant trends can be identified. We identify two pathways for the rotation A 4 → A 1. The first, and less likely, one is similar to the path identified by the NEB static approach. The second one is more similar to an A 4 → A 2 A 3 → A 1 rotation, where the trajectory does not actually enter the A 2 A 3 state but approaches it for a few femtoseconds before continuing toward the A 1 state. The same type of paths is observed in the few trajectories where a transition D 1 D 3 → A 2 A 3 occurs. However, such a systematic analysis of each reactive trajectory rapidly becomes tedious and is not reliable enough to capture the overall chemical trends, since more than 2000 A 1 → D 1 D 3 trajectories are sampled by the AMS algorithm. An automated
SI 3: State to state probability estimation in a multi-state case

Only two states R and P are necessary for AMS, while multiple states are generally present in catalysis and the reaction rate constants between all the states {E 1 , .., E i , .., E N } are of interest. To address this issue with AMS, two approaches can be proposed. First approach. We first define R = E j and P = ∪ i≠j E i . The initial conditions are sampled on the surface Σ E j surrounding the state E j , then the transition probability Σ E j → P can be estimated. Finally, the probabilities Σ E j → E i can be estimated by counting the number n E i of trajectories that indeed finished in the state E i .
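A minimal sketch of this counting step is given below; the probability value and the list of final states are purely illustrative.

# Sketch of the first approach: p(Sigma_Ej -> E_i) is obtained from the AMS
# estimate of p(Sigma_Ej -> P) by multiplying with the fraction of the N_rep
# final reactive trajectories that ended in E_i.
from collections import Counter

def state_to_state_probabilities(p_sigma_to_P, final_states):
    """p_sigma_to_P: AMS estimate of p(Sigma_Ej -> P);
    final_states: end state of each of the N_rep reactive trajectories."""
    counts = Counter(final_states)
    n = len(final_states)
    return {state: p_sigma_to_P * c / n for state, c in counts.items()}

# Illustrative example with N_rep = 200 replicas:
p_split = state_to_state_probabilities(
    3.4e-3, ["A2A3"] * 183 + ["D1D3"] * 16 + ["A4"] * 1)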
SI 4: Calculation parameters

SI 4.1: DFT parameters

The DFT functional was PBE 4 with the D3 dispersion correction. 5 A Gaussian smearing was used with σ = 0.05 eV. The γ-alumina bulk structure was taken from Ref. 6. The K-point grid was set to 2 × 2 × 4 and centered at the Γ point. The bulk structure was first fully relaxed (allowing the box volume to change) with a 800 eV kinetic energy cutoff to ensure a low Pulay stress. All other DFT calculations, on slabs representing the γ-alumina (100) surface, were performed at constant cell volume with a kinetic energy cutoff of 450 eV. The (100) surface model is composed of a four-layer slab structure with 15 Å of vacuum inserted in the direction perpendicular to the surface plane (that is, the x direction in this case). Calculations on this system were carried out with a K-point grid set to 1 × 2 × 4. Geometry optimizations were done using the conjugate gradient algorithm as implemented in VASP with a convergence criterion of 0.01 eV/Å.
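For illustration, these settings translate into the following hedged ASE/VASP keywords; the equivalent INCAR tags are recalled in comments, the number of ionic steps is an illustrative value, and the actual inputs of this work were set directly in the VASP input files.

# Hedged sketch of the slab geometry-optimization settings via ASE.
from ase.calculators.vasp import Vasp

calc = Vasp(
    xc="pbe",        # GGA = PE
    ivdw=11,         # Grimme DFT-D3 dispersion correction
    encut=450,       # ENCUT: plane-wave kinetic energy cutoff (eV)
    ismear=0,        # Gaussian smearing
    sigma=0.05,      # SIGMA (eV)
    ibrion=2,        # conjugate-gradient ionic relaxation
    nsw=200,         # maximum number of ionic steps (illustrative value)
    ediffg=-0.01,    # force convergence criterion: 0.01 eV/Angstrom
    kpts=(1, 2, 4),  # Gamma-centered K-point grid for the (100) slab
    gamma=True,
)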
Table 1: Transition rate constants computed with AMS for all the transitions observed in this study with 90% precision.

Transition | t loop−RΣ R R (fs) | p Transition | k Transition (s −1 )
Water rotations
A 1 → A 2 A 3 a | 110 ± 5 | (3.38 ± 1.56) 10 −3 | (3.08 ± 1.43) 10 10
A 2 A 3 → A 1 b | 85 ± 9 | (1.34 ± 0.41) 10 −2 | (1.49 ± 0.46) 10 11
A 2 A 3 → A 4 b | 85 ± 9 | (3.90 ± 1.97) 10 −3 | (4.33 ± 2.20) 10 10
A 4 → A 2 A 3 c | 100 ± 5 | (2.24 ± 0.83) 10 −2 | (2.35 ± 0.87) 10 11
A 1 → A 4 a | 110 ± 3 | (3.66 ± 7.18) 10 −7 | (3.34 ± 6.56) 10 6
A 4 → A 1
Table 2: Reaction heats at 200 K computed from the AMS reaction rates of Table 1.

Value (kJ.mol −1 )
Water rotations
∆G A 1 →A 2 A 3 | 2.62 ± 2.66
∆G A 2 A 3 →A 4 | 2.81 ± 2.83
∆G A 1 →A 4 | 13.8 ± 4.45
Water dissociations
∆G A 2 A 3 →D 2 D 4 | 4.64 ± 3.54
∆G A 1 →D 1 D 3 | 4.41 ± 3.88
Table 3: Activation energies and rate constants computed within the harmonic approximation at 200 K.

Transition | ∆G ‡ Transition (kJ.mol −1 ) | k Transition (s −1 )
Water rotations
A 1 → A 2 A 3 | 6.67 | 7.55 10 10
A 2 A 3 → A 1 | 1.17 | 2.06 10 12
A 2 A 3 → A 4 | 7.88 | 3.64 10 10
A 4 → A 2 A 3 | 3.31 | 5.66 10 11
A 1 → A 4 | 16.5 | 2.04 10 8
A 4 → A 1 | 6.44 | 8.65 10 10
Hydroxyl rotation
D 1 D 3 → D 2 D 4 | 12.4 | 2.38 10 9
D 2 D 4 → D 1 D 3 | 11.5 | 4.15 10 9
Formation and dissociation of water
A 1 → D 1 D 3 | 4.18 | 3.37 10 11
D 1 D 3 → A 1 | 2.17 | 1.13 10 12
A 2 A 3 → D 2 D 4 | −4.28 | 5.45 10 13
D 2 D 4 → A 2 A 3 | −1.71 | 1.17 10 13
Table 4: Reaction heats computed from the harmonic approximation of the free energy at 200 K.

Value (kJ.mol −1 )
Water rotations
∆G A 1 →A 2 A 3 | 5.50
∆G A 2 A 3 →A 4 | 4.56
∆G A 1 →A 4 | 10.1
Hydroxyl rotation
∆G D 1 D 3 →D 2 D 4 | 0.93
Water dissociations
∆G A 2 A 3 →D 2 D 4 | −2.56
∆G A 1 →D 1 D 3 | 2.01

SI 7: Clustering reactive trajectories
Acknowledgement

This project was realized in the framework of the joint laboratory IFPEN-Inria Convergence.

Algorithm 1: Simplified AMS pseudo-algorithm
Requires: N_rep, k_min, numerical definitions of the states R and P, reaction coordinate ξ, N_rep initial conditions {q_ini^j}_{1≤j≤N_rep} on Σ, a molecular dynamics engine MD_step, an argsort function returning the permutation of indices that sorts an array of scalars, a randpick function that randomly picks an element of an array.
Output: p̂, the estimated probability of reaching P before R starting from Σ.
p̂ ← 1
for j = 1, .., N_rep: integrate replica j from q_ini^j with MD_step until it reaches R or P, and store z_max^j = max_t (ξ(q_t^j))
while at least one replica has ended in R:
    σ ← argsort({z_max^j}_j); z_kill ← z_max^{σ(k_min)}
    kill the η replicas such that z_max^j ≤ z_kill and update p̂ ← p̂ × (1 − η/N_rep)
    for each killed replica: pick a surviving replica with randpick, copy its trajectory up to the first time ξ(q_t) > z_kill, then continue it with MD_step until it reaches R or P, and update z_max^j = max_t (ξ(q_t^j))
return p̂

SI 2: Rate constant error estimation

We provide in this section an expression for the confidence interval of the reaction rate using the delta method, which is a standard technique in statistics. 2 The Hill relation writes:

k_{R→P} = Φ_R p_{Σ_R→P},   (4)

where Φ_R is the flux of trajectories leaving the state R and p_{Σ_R→P} is the probability of reaching P before R starting on the boundary of R. The flux in (4) is estimated via the inverse of the mean duration of the loops going from R to Σ_R and back to R:

Φ̂_R = 1 / t̄_{loop−RΣ_R R}.   (5)

To obtain uncorrelated loop times t_{loop−RΣ_R R}, the results presented were computed considering one loop every five loops. From (4) and (5), the reaction rate writes:

k̂_{R→P} = p̂_{Σ_R→P} / t̄_{loop−RΣ_R R}.   (6)

Assuming M_real realizations of AMS were done, let us consider the two estimators of the terms of the quotient:

p̂ = (1/M_real) Σ_{i=1}^{M_real} p̂_i ,   t̄ = (1/n_{loop−RΣ_R R}) Σ_i t_i ,   (7)

where the p̂_i are independent results of AMS with the same N_rep and k_min, and the times t_i are the durations of different loops going from R to Σ and then back to R.

The AMS estimator satisfies a central limit theorem in the limit of an infinitely large number of replicas, see Ref. 3. Concerning the estimator of the flux, the central limit theorem can be invoked only if the times t_i are not correlated. Two successive times might be correlated but, since the Langevin dynamics is stochastic, it is possible to assume that t_i and t_{i+n} are not correlated if n is large enough. The number of times n one should skip to ensure this depends on the friction parameter of the dynamics and was taken equal to 5 in this work. Assuming that the central limit theorems hold, one can write

p̂ ≈ p (1 + σ_p G_p / (p √M_real)),   t̄ ≈ t̄_loop (1 + σ_t G_t / (t̄_loop √n_{loop−RΣ_R R})),   (8)

where G_t and G_p are two real-valued random variables distributed according to a standard Gaussian distribution. By truncation of the Taylor expansion at first order in 1/√M_real and 1/√n_{loop−RΣ_R R}, one gets:

k̂ = p̂ / t̄ ≈ k (1 + σ_p G_p / (p √M_real) − σ_t G_t / (t̄_loop √n_{loop−RΣ_R R})).   (9)

As the sum of two zero-mean Gaussian random variables is also a zero-mean Gaussian random variable, it therefore holds:

k̂ ≈ k (1 + √( σ_p² / (p² M_real) + σ_t² / (t̄_loop² n_{loop−RΣ_R R}) ) G),   (10)

with G a standard Gaussian random variable. Using the unbiased variance estimators

σ̂_p² = (1/(M_real − 1)) Σ_i (p̂_i − p̂)² ,   σ̂_t² = (1/(n_{loop−RΣ_R R} − 1)) Σ_i (t_i − t̄)² ,   (11)

and replacing t̄_loop and p by their estimators in (9), the following confidence interval is finally deduced:

k ∈ [ k̂ (1 − θ_{α/2} √( σ̂_p²/(p̂² M_real) + σ̂_t²/(t̄² n_{loop−RΣ_R R}) )) , k̂ (1 + θ_{α/2} √( σ̂_p²/(p̂² M_real) + σ̂_t²/(t̄² n_{loop−RΣ_R R}) )) ],   (12)

where θ_{α/2} stands for the quantile α/2 of the Gaussian law, yielding a confidence interval of level 1 − α.
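A minimal sketch of this confidence-interval computation, assuming the independent AMS results p̂_i and the decorrelated loop times are available as arrays, reads:

# Sketch of the delta-method confidence interval of SI 2: k = p / t_loop,
# with independent estimators p_hat (M_real AMS runs) and t_bar (n loop
# times, decorrelated by keeping one loop out of five).
import numpy as np
from scipy.stats import norm

def rate_confidence_interval(p_samples, t_samples, alpha=0.10):
    p_hat, t_bar = np.mean(p_samples), np.mean(t_samples)
    var_p = np.var(p_samples, ddof=1)  # unbiased variance estimators
    var_t = np.var(t_samples, ddof=1)
    k_hat = p_hat / t_bar
    # First-order (delta method) relative standard error of k_hat.
    rel_se = np.sqrt(var_p / (len(p_samples) * p_hat**2)
                     + var_t / (len(t_samples) * t_bar**2))
    theta = norm.ppf(1.0 - alpha / 2.0)  # Gaussian quantile
    return k_hat, k_hat * (1.0 - theta * rel_se), k_hat * (1.0 + theta * rel_se)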
(1) Broadbelt, L. J.; Snurr, R. Q. Applications of molecular modeling in heterogeneous catalysis research. Appl. Catal. A: Gen. 2000, 200, 23-46.
(2) Chizallet, C.; Raybaud, P. Density functional theory simulations of complex catalytic materials in reactive environments: beyond the ideal surface at low coverage. Catal. Sci. Technol. 2014, 4, 2797-2813.
(3) Chen, B. W. J.; Xu, L.; Mavrikakis, M. Computational methods in heterogeneous catalysis. Chem. Rev. 2020, 121, 1007-1048.
(4) Piccini, G.; Lee, M.-S.; Yuk, S. F.; Zhang, D.; Collinge, G.; Kollias, L.; Nguyen, M.-T.; Glezakou, V.-A.; Rousseau, R. Ab initio molecular dynamics with enhanced sampling in heterogeneous catalysis. Catal. Sci. Technol. 2022, 12, 12-37.
(5) Feynman, R. P. Forces in molecules. Phys. Rev. 1939, 56, 340-343.
(6) Eyring, H. The activated complex in chemical reactions. J. Chem. Phys. 1935, 3, 107-115.
(7) Bennett, C. H. Algorithms for Chemical Computations;
(8) The IUPAC Compendium of Chemical Terminology; International Union of Pure and Applied Chemistry (IUPAC), 2014; DOI 10.1351/goldbook.t06468.
(9) Hänggi, P.; Talkner, P.; Borkovec, M. Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys. 1990, 62, 251-341.
(10) Evans, M. G.; Polanyi, M. Some applications of the transition state method to the calculation of reaction velocities, especially in solution. Trans. Faraday Soc. 1935, 31, 875.
(11) Wigner, E. The transition state method. Trans. Faraday Soc. 1938, 34, 29.
(12) Collinge, G.; Yuk, S. F.; Nguyen, M.-T.; Lee, M.-S.; Glezakou, V.-A.; Rousseau, R. Effect of collective dynamics and anharmonicity on entropy in heterogenous catalysis: Building the case for advanced molecular simulations. ACS Catal. 2020, 10, 9236-9260.
(13) Chandler, D. Statistical mechanics of isomerization dynamics in liquids and the transition state approximation. J. Chem. Phys. 1978, 68, 2959.
(14) Chipot, C., Pohorille, A., Eds. Free Energy Calculations;
(15) Rousset, M.; Stoltz, G.; Lelièvre, T. Free Energy Computations; Imperial College Press, 2010.
(16) Yang, Y. I.; Shao, Q.; Zhang, J.; Yang, L.; Gao, Y. Q. Enhanced sampling in molecular dynamics. J. Chem. Phys. 2019, 151, 070902.
(17) Horiuti, J. On the statistical mechanical treatment of the absolute rate of chemical reaction. Bull. Chem. Soc. Jpn. 1938, 13, 210-216.
(18) Keck, J. Statistical investigation of dissociation cross-sections for diatoms. Faraday Discuss. 1962, 33, 173.
(19) Vanden-Eijnden, E.; Tal, F. A. Transition state theory: Variational formulation, dynamical corrections, and error estimates. J. Chem. Phys. 2005, 123, 184103.
(20) Miller, W. H.; Schwartz, S. D.; Tromp, J. W. Quantum mechanical rate constants for bimolecular reactions. J. Chem. Phys. 1983, 79, 4889-4898.
(21) Dellago, C.; Bolhuis, P. G.; Geissler, P. L. Advances in Chemical Physics;
(22) Mandelli, D.; Hirshberg, B.; Parrinello, M. Metadynamics of paths. Phys. Rev. Lett. 2020, 125, 026001.
(23) Hill, T. Free Energy Transduction in Biology: The Steady-State Kinetic and Thermodynamic Formalism;
(24) Baudel, M.; Guyader, A.; Lelièvre, T. On the Hill relation and the mean reaction time for metastable processes. Stoch. Process. Their Appl. 2023, 155, 393-436.
(25) Lelièvre, T.; Ramil, M.; Reygner, J. Estimation of statistics of transitions and Hill relation for Langevin dynamics. arXiv:2206.13264 2022, to appear in Annales de l'Institut Henri Poincaré.
(26) van Erp, T. S.; Moroni, D.; Bolhuis, P. G. A novel path sampling method for the calculation of rate constants. J. Chem. Phys. 2003, 118, 7762-7774.
(27) Allen, R. J.; Warren, P. B.; ten Wolde, P. R. Sampling rare switching events in biochemical networks. Phys. Rev. Lett. 2005, 94, 018104.
(28) Huber, G.; Kim, S. Weighted-ensemble Brownian dynamics simulations for protein association reactions. Biophys. J. 1996, 70, 97-110.
(29) Cérou, F.; Guyader, A. Adaptive multilevel splitting for rare event analysis. Stoch. Anal. Appl. 2007, 25, 417-443.
(30) Glielmo, A.; Husic, B. E.; Rodriguez, A.; Clementi, C.; Noé, F.; Laio, A. Unsupervised learning methods for molecular simulation data. Chem. Rev. 2021, 121, 9722-9758.
(31) Chen, M. Collective variable-based enhanced sampling and machine learning. Eur. Phys. J. B 2021, 94, 211.
(32) Gkeka, P.; Stoltz, G.; Farimani, A. B.; Belkacemi, Z.; Ceriotti, M.; Chodera, J. D.; Dinner, A. R.; Ferguson, A. L.; Maillet, J.-B.; Minoux, H.; Peter, C.; Pietrucci, F.; Silveira, A.; Tkatchenko, A.; Trstanova, Z.; Wiewiora, R.; Lelièvre, T. Machine learning force fields and coarse-grained variables in molecular dynamics: Application to materials and biological systems. J. Chem. Theory Comput. 2020, 16, 4757-4775.
(33) Ferguson, A. L. Machine learning and data science in soft materials engineering. J. Condens. Matter Phys. 2017, 30, 043002.
(34) Sultan, M. M.; Pande, V. S. Automated design of collective variables using supervised machine learning. J. Chem. Phys. 2018, 149, 094106.
(35) Pozun, Z. D.; Hansen, K.; Sheppard, D.; Rupp, M.; Müller, K.-R.; Henkelman, G. Optimizing transition states via kernel-based machine learning. J. Chem. Phys. 2012, 136, 174101.
(36) Christiansen, M. A.; Mpourmpakis, G.; Vlachos, D. G. Density functional theory-computed mechanisms of ethylene and diethyl ether formation from ethanol on γ-Al 2 O 3 (100). ACS Catal. 2013, 3, 1965-1975.
(37) Larmier, K.; Nicolle, A.; Chizallet, C.; Cadran, N.; Maury, S.; Lamic-Humblot, A.-F.; Marceau, E.; Lauron-Pernot, H. Influence of coadsorbed water and alcohol molecules on isopropyl alcohol dehydration on γ-alumina: Multiscale modeling of experimental kinetic profiles. ACS Catal. 2016, 6, 1905-1920.
(38) Hass, K. C.; Schneider, W. F.; Curioni, A.; Andreoni, W. The chemistry of water on alumina surfaces: Reaction dynamics from first principles. Science 1998, 282, 265-268.
(39) Digne, M.; Sautet, P.; Raybaud, P.; Euzen, P.; Toulhoat, H. Hydroxyl groups on γ-alumina surfaces: A DFT study. J. Catal. 2002, 211, 1-5.
(40) Digne, M.; Sautet, P.; Raybaud, P.; Euzen, P. Use of DFT to achieve a rational understanding of acido-basic properties of γ-alumina surfaces. J. Catal. 2004, 226, 54-68.
(41) Wischert, R.; Laurent, P.; Copéret, C.; Delbecq, F.; Sautet, P. γ-Alumina: The essential and unexpected role of water for the structure, stability, and reactivity of "defect" sites. J. Am. Chem. Soc. 2012, 134, 14430-14449.
(42) Pigeon, T.; Chizallet, C.; Raybaud, P. Revisiting γ-alumina surface models through the topotactic transformation of boehmite surfaces. J. Catal. 2022, 405, 140-151.
(43) Lu, Y.-H.; Wu, S.-Y.; Chen, H.-T. H 2 O adsorption/dissociation and H 2 generation by the reaction of H 2 O with Al 2 O 3 materials: A first-principles investigation. J. Phys. Chem. C 2016, 120, 21561-21570.
(44) Pan, Y.; Liu, C.-J.; Ge, Q. Adsorption and protonation of CO 2 on partially hydroxylated γ-Al 2 O 3 surfaces: A density functional theory study. Langmuir 2008, 24, 12410-12419.
(45) Ngouana-Wakou, B. F.; Cornette, P.; Valero, M. C.; Costa, D.; Raybaud, P. An atomistic description of the γ-alumina/water interface revealed by ab initio molecular dynamics. J. Phys. Chem. C 2017, 121, 10351-10363.
(46) Réocreux, R.; Jiang, T.; Iannuzzi, M.; Michel, C.; Sautet, P. Structuration and dynamics of interfacial liquid water at hydrated γ-alumina determined by ab initio molecular simulations: Implications for nanoparticle stability. ACS Appl. Nano Mater. 2017, 1, 191-199.
(47) Lo, C. S.; Radhakrishnan, R.; Trout, B. L. Application of transition path sampling methods in catalysis: A new mechanism for C-C bond formation in the methanol coupling reaction in Chabazite. Catal. Today 2005, 105, 93-105.
(48) Bucko, T.; Benco, L.; Dubay, O.; Dellago, C.; Hafner, J. Mechanism of alkane dehydrogenation catalyzed by acidic zeolites: Ab initio transition path sampling. J. Chem. Phys. 2009, 131, 214508.
(49) Rey, J.; Bignaud, C.; Raybaud, P.; Bucko, T.; Chizallet, C. Dynamic features of transition states for beta-scission reactions of alkenes over acid zeolites revealed by AIMD simulations. Angew. Chem., Int. Ed. Engl. 2020, 59, 18938-18942.
(50) Roet, S.; Daub, C. D.; Riccardi, E. Chemistrees: Data-driven identification of reaction pathways via machine learning. J. Chem. Theory Comput. 2021, 17, 6193-6202.
(51) Lopes, L. J. S.; Lelièvre, T. Analysis of the adaptive multilevel splitting method on the isomerization of alanine dipeptide. J. Comput. Chem. 2019, 40, 1198-1208.
(52) Teo, I.; Mayne, C. G.; Schulten, K.; Lelièvre, T. Adaptive multilevel splitting method for molecular dynamics calculation of benzamidine-trypsin dissociation time. J. Chem. Theory Comput. 2016, 12, 2983-2989.
(53) Branduardi, D.; Gervasio, F. L.; Parrinello, M. From A to B in free energy space. J. Chem. Phys. 2007, 126, 054103.
(54) Cérou, F.; Delyon, B.; Guyader, A.; Rousset, M. On the asymptotic normality of adaptive multilevel splitting. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 1-30.
(55) Bréhier, C.-E.; Gazeau, M.; Goudenège, L.; Lelièvre, T.; Rousset, M. Unbiasedness of some generalized adaptive multilevel splitting algorithms. Ann. Appl. Probab. 2016, 26, 3559-3601.
(56) Binder, A.; Lelièvre, T.; Simpson, G. A generalized parallel replica dynamics. J. Comput. Phys. 2015, 284, 595-616.
(57) Kresse, G.; Hafner, J. Ab-initio molecular dynamics for liquid metals. Phys. Rev. B 1993, 47, 558-561.
(58) Kresse, G.; Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 1999, 59, 1758-1775.
(59) Behler, J.; Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 2007, 98, 146401.
(60) Bartók, A. P.; Kondor, R.; Csányi, G. On representing chemical environments. Phys. Rev. B 2013, 87, 184115.
(61) Drautz, R. Atomic cluster expansion for accurate and transferable interatomic potentials. Phys. Rev. B 2019, 99, 014104.
(62) Chen, C.; Zuo, Y.; Ye, W.; Li, X.; Deng, Z.; Ong, S. P. A critical review of machine learning of energy materials. Adv. Energy Mater. 2020, 10, 1903242.
(63) Bartók-Pártay, A. The Gaussian Approximation Potential;
(64) Bartók, A. P.; Kermode, J.; Bernstein, N.; Csányi, G. Machine learning a general-purpose interatomic potential for silicon. Phys. Rev. X 2018, 8, 041048.
(65) Himanen, L.; Jäger, M. O.; Morooka, E. V.; Canova, F. F.; Ranawat, Y. S.; Gao, D. Z.; Rinke, P.; Foster, A. S. DScribe: Library of descriptors for machine learning in materials science. Comput. Phys. Commun. 2020, 247, 106949.
Hence, the sum of the probabilities of these 6 events equals 1 and, given that the states do not overlap, these events are independent. One can identify that the event of reaching P before R corresponds to the events 3 to 6, while the events 1 and 2 correspond to reaching R first. Reaching E 2 before R corresponds to the events 3 and 4, and reaching E 3 before R corresponds to the events 5 and 6. Then, to estimate the probability of the last two events, one has just to identify the fraction of events 3 and 4 (or 5 and 6) that occurred among the realizations of the events 3 to 6. This is exactly what the
(1) Bréhier, C.-E.; Lelièvre, T.; Rousset, M. Analysis of adaptive multilevel splitting algorithms in an idealized case. ESAIM: Probab. Stat. 2015, 19, 361-394.
(2) Hogg, R. V.; McKean, J. W.; Craig, A. T. Introduction to Mathematical Statistics; Pearson; p 768.
(3) Cérou, F.; Delyon, B.; Guyader, A.; Rousset, M. On the asymptotic normality of adaptive multilevel splitting. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 1-30.
(4) Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 1996, 77, 3865-3868.
(5) Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 2010, 132, 154104.
(6) Krokidis, X.; Raybaud, P.; Gobichon, A.-E.; Rebours, B.; Euzen, P.; Toulhoat, H. Theoretical study of the dehydration process of boehmite to γ-alumina. J. Phys. Chem. B 2001, 105, 5121-5130.
(7) Jónsson, H.; Mills, G.; Jacobsen, K. W. Nudged elastic band method for finding minimum energy paths of transitions. In Classical and Quantum Dynamics in Condensed Phase Simulations; 1998; pp 385-404.
(8) Henkelman, G. https://theory.cm.utexas.edu/vtsttools/index.html.
(9) Fleurat-Lessard, P. http://pfleurat.free.fr/ReactionPath.php.
(10) McDouall, J. J. W. Computational Quantum Chemistry; Royal Society of Chemistry, 2013.
(11) Himanen, L.; Jäger, M. O.; Morooka, E. V.; Canova, F. F.; Ranawat, Y. S.; Gao, D. Z.; Rinke, P.; Foster, A. S. DScribe: Library of descriptors for machine learning in materials science. Comput. Phys. Commun. 2020, 247, 106949.
| [] |
[
"A Unified Model of Particle Physics and Cosmology: Origin of Inflation, Dark Energy, Dark Matter, Baryon Asymmetry and Neutrino Mass",
"A Unified Model of Particle Physics and Cosmology: Origin of Inflation, Dark Energy, Dark Matter, Baryon Asymmetry and Neutrino Mass"
] | [
"Wei-Min Yang [email protected] \nDepartment of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiPeople's Republic of China\n"
] | [
"Department of Modern Physics\nUniversity of Science and Technology of China\n230026HefeiPeople's Republic of China"
] | [] | I propose a unified model of particle physic and cosmology based on both a new extension of the standard particle model and the fundamental principle of the standard cosmology. It can fully and coherently describe the universe evolution from the primordial inflation to the followed reheating, to the baryogenesis, to the early hot expansion, to the later CDM condensation into the current dark energy, namely it can simultaneously account for the common origin of inflation, reheating, baryon asymmetry, dark matter, dark energy, and neutrino mass, moreover, it establishes the internal relations between these processes and particle physics. For the evolution of each phase, I give its complete dynamical system of equations and solve them by some special techniques, the numerical results clearly show how each process is successfully implemented, in particular, the dark energy genesis is essentially a reverse process of the slowroll inflation. By use of fewer input parameters, the unified model not only perfectly reproduces the measured inflationary data and the current energy density budget, but also finely predicts many important quantities such as the tensor-to-scalar ratio r 0.05 ≈ 1.86 × 10 −7 , the inflaton mass M Φ ≈ 8.88 × 10 10 GeV, the reheating temperature T re ≈ 2.43 × 10 11 GeV, the CDM mass M S ≈ 600 GeV, η B ≈ 6.13 × 10 −10 , h ≈ 0.674, and so on. Finally, we expect the ongoing and future experiments to test the model. | null | [
"https://export.arxiv.org/pdf/2104.11073v8.pdf"
] | 257,496,322 | 2104.11073 | fd384091a628d362c7739c671e5f4b40542e2b29 |
A Unified Model of Particle Physics and Cosmology: Origin of Inflation, Dark Energy, Dark Matter, Baryon Asymmetry and Neutrino Mass
3 Jun 2023
Wei-Min Yang [email protected]
Department of Modern Physics
University of Science and Technology of China
230026HefeiPeople's Republic of China
A Unified Model of Particle Physics and Cosmology: Origin of Inflation, Dark Energy, Dark Matter, Baryon Asymmetry and Neutrino Mass
3 Jun 2023beyond standard modelinflationdark energydark matterbaryogenesisneutrino mass 1
I propose a unified model of particle physic and cosmology based on both a new extension of the standard particle model and the fundamental principle of the standard cosmology. It can fully and coherently describe the universe evolution from the primordial inflation to the followed reheating, to the baryogenesis, to the early hot expansion, to the later CDM condensation into the current dark energy, namely it can simultaneously account for the common origin of inflation, reheating, baryon asymmetry, dark matter, dark energy, and neutrino mass, moreover, it establishes the internal relations between these processes and particle physics. For the evolution of each phase, I give its complete dynamical system of equations and solve them by some special techniques, the numerical results clearly show how each process is successfully implemented, in particular, the dark energy genesis is essentially a reverse process of the slowroll inflation. By use of fewer input parameters, the unified model not only perfectly reproduces the measured inflationary data and the current energy density budget, but also finely predicts many important quantities such as the tensor-to-scalar ratio r 0.05 ≈ 1.86 × 10 −7 , the inflaton mass M Φ ≈ 8.88 × 10 10 GeV, the reheating temperature T re ≈ 2.43 × 10 11 GeV, the CDM mass M S ≈ 600 GeV, η B ≈ 6.13 × 10 −10 , h ≈ 0.674, and so on. Finally, we expect the ongoing and future experiments to test the model.
I. Introduction
The standard model of particle physics (SM) and the ΛCDM model of cosmology together have successfully accounted for a great deal of the cosmic observations from the BBN era to the present day [1], but they can not address the origin of the hot big bang of the universe [2], namely what happened before the standard hot expansion, and also can not answer the origins of the current dark energy [3], cold dark matter (CDM) [4], and baryon asymmetry [5], in addition, the generation of the sub-eV neutrino mass is yet a puzzle [6]. At present the theoretical and experimental investigations have clearly indicated that the very early universe certainly underwent the inflation phase and the followed reheating one [7]. These two processes not only provide the initial conditions of the hot expansion, but also are related to the universe matter genesis [8], nevertheless the relations between them and particle physics are unknown, so their evolution dynamics have been unestablished as yet. To solve all of the above problems, we have to seek an underlying theory beyond the SM and ΛCDM, therefore this becomes the most challenging research for particle physics and cosmology. This aspect is currently attracting more and more attentions of theoretical and experimental physicists [9].
In fact, there have been numerous theories about the explanations of the inflation, dark energy, dark matter, baryon asymmetry and neutrino mass, which include some unified particle models [10], some paradigms of the inflation and reheating [11], some special dark energy models [12], even some models based on the non-standard gravity [13], many CDM candidates [14], the modified Newtonian dynamics [15], many mechanisms of leptogenesis and baryogenesis [16], and many models of neutrino mass [17]. However, a wide variety of these proposals have a common shortcoming, namely they are only aiming at one or two specific aspects of the abovementioned universe phenomena rather than considering internal connections among them, in other words, these phenomena are dealt in isolation without regard to their integration in the universe evolution, this is obviously unnatural and inadvisable because the uniqueness of the universe origin and evolution destines that there are surely some intrinsic relations among these universe ingredients. Today, the vast majority of these models have been ruled out by the recent data and analyses [18].
At the present day, by means of the analyses for the power spectra of the anisotropic and polarized temperature of the cosmic microwave background (CMB) [19], we have obtained the following inflationary data, the tensor-to-scalar ratio, the scalar spectral index, the running of the spectral index, and the scalar power spectra. On the other hand, from the global analyses of cosmology which includes CMB, BBN, structure formation, gravitational lenses, particle physic experiments, etc. [20], we have extracted the following universe data, the dark energy density, the CDM density, the ratio of the baryon number density to the photon one, and the neutrino mass sum. The present optimum values of these cosmological data are given as follows [1],
r
These data undoubtedly contain the key information of the universe origin and evolution, any one successful theory of particle physics and cosmology has to confront them unavoidably, therefore Eq. (1) severely constrains new model builds [21]. Based on the universe concordance and the nature unification, I attempt to build a unified model of particle physics and cosmology, it can naturally relate the above-mentioned universe SM (visible sector) Table 1: The particle contents and symmetries of the unified model. The notation explanations are as follows, H = (H + , H 0 ) T , l α = (ν 0 αL , e − αL ) T , (α, β = 1, 2, 3) are the fermion family indices, F L/R = (F 0 L/R , F − L/R ) T is the fourth generation lepton with a heavy mass, Φ = (Φ + , Φ 0 ) T is the super-heavy inflation field. The third row is the quantum numbers of SU L (2) ⊗ U Y (1), the fourth row is the dark hypercharges under U X (1), the last row is the dark electric charges under U Q D (1), their corresponding dark gauge fieldsB µ andà µ are not listed. The two complex scalars of φ 1 and φ 2 develop φ 1 and φ 2 to break U X (1) and U Q D (1), respectively, the unbroken and stable scalar S will become the cold dark matter and eventually condense into the dark energy. Note that the global B − L number is incidentally conserved in the model. ingredients together and really establish connections among them in the universe evolution, of course, this is also fitting to Occam's Razor. Firstly, I put forward to a new extension of the SM, which covers the SM particles and the dark particles beyond the SM. Secondly, on the basis of the new particle model as well as the fundamental principle of the standard cosmology, I in detail research the dynamical evolutions of the inflation, the reheating and the current era, in particular, I introduce some new ideas and techniques to solve all of the above-mentioned issues elegantly and completely. The idea framework of the unified model will be described in the next Section and shown by Fig. 3. Lastly, the model numerical results will clearly show the evolution of each phase, they not only perfectly fit all of the observed data in Eq. (1), but also give many interesting predictions. In a word, this model can successfully account for the origin and evolution of the universe in a unified and integrated way.
BSM (dark sector) H l α e − βR ν 0 βR F L F R Φ φ 1 φ 2 S SU L (2) ⊗ U Y (1) (2, 1) (2, −1) (1, −2) (1, 0) (2, −1) (2, −1) (2, 1) (1, 0) (1, 0) (1, 0) U X (1) 0 0 0 1 1 − 1 2 1 − 3 2 − 1 2 1 2 U Q D (1) 0 0 0 1 1 1 1 0 1 1
The remainder of this paper is organized as follows. In Section II, I outline the new extension of the SM and discuss the neutrino mass, leptogenesis mechanism and dark matter annihilation. In Section III, I discuss a complete solution of the slow-roll inflation. The reheating evolution and the baryogenesis are discussed in Section IV. The current CDM condensation and dark energy genesis are discussed in Section V. Section VI is a summary of the numerical results of the unified model. Section VII is devoted to conclusions.
II. Particle Model
The unified theory is based on the following particle model. I assume that below the GUT scale of ∼ 10^16 GeV, the particle contents and symmetries in the universe are those shown in Table 1 (where the irrelevant quarks and gauge bosons are all omitted); all of the notations are explained in the caption. The SM particles are all in the visible sector, while the particles beyond the SM (BSM) all belong to the dark sector. The dark particles have the dark gauge symmetries U_X(1) ⊗ U_{Q_D}(1), where X and Q_D are respectively the dark hypercharge and the dark electric charge; B̃_µ and Ã_µ (which are not listed in Table 1 for lack of space) are the dark gauge fields associated with them. Note that the X hypercharge assignment guarantees that U_X(1) is anomaly-free. The doublet scalar Φ is the inflaton field, whose mass is ∼ 10^11 GeV; its primordial evolution leads to the universe inflation and the hot big bang. The two complex scalars φ_1 and φ_2 develop ⟨φ_1⟩ ∼ 10^9 GeV and ⟨φ_2⟩ ∼ 500 GeV to break U_X(1) and U_{Q_D}(1), respectively, while the electroweak breaking is implemented by ⟨H⟩ ≈ 174 GeV. After the model symmetries are broken, the doublet fermion F obtains a mass of ∼ 10^7 GeV and can be regarded as a fourth-generation lepton, while ν_L and ν_R are combined into a sub-eV Dirac neutrino. Finally, the unbroken S is a stable complex scalar with a mass of several hundred GeV; the remnant of S gradually cools into the cold dark matter after it annihilates and decouples, and at a much later stage the CDM becomes supercool so that it eventually condenses into the current dark energy. In a word, the model fully accommodates all of the ingredients required by the universe evolution.
Based on the particle contents and symmetries in Table 1, the full invariant Lagrangian of the model is
$$
\begin{aligned}
\mathcal{L} ={}& \bar\nu_R i\gamma^\mu D_\mu \nu_R + \bar F_L i\gamma^\mu D_\mu F_L + \bar F_R i\gamma^\mu D_\mu F_R + (D^\mu\Phi)^\dagger D_\mu\Phi + (D^\mu\phi_1)^\dagger D_\mu\phi_1 + (D^\mu\phi_2)^\dagger D_\mu\phi_2 \\
& + (D^\mu S)^\dagger D_\mu S + \big[\, Y^e_{\alpha\beta}\,\bar l_\alpha e_{\beta R} H + Y^\nu_{\alpha\beta}\,\bar l_\alpha \nu_{\beta R} (i\tau_2\Phi^*) + y^l_\alpha\,\bar l_\alpha F_R \phi_2^* + y^e_\beta\,\bar F_L e_{\beta R}\Phi \\
& + y^\nu_\beta\,\bar F_L \nu_{\beta R}(i\tau_2 H^*) + y^F \bar F_L F_R \phi_1^* + \mathrm{h.c.}\,\big] - V_H - V_\Phi - V_{\phi_1} - V_{\phi_2} - V_S + 2\lambda_0\big[H^\dagger\Phi\,\phi_1\phi_2^* + \mathrm{h.c.}\big] \\
& - 2\lambda_1\,\phi_1^*\phi_1\phi_2^*\phi_2 - 2\big[\lambda_2\,\phi_1^*\phi_1 + \lambda_3\,\phi_2^*\phi_2\big]H^\dagger H - 2\big[\lambda_4\,\phi_1^*\phi_1 + \lambda_5\,\phi_2^*\phi_2 + \lambda_6\,H^\dagger H\big]S^* S \\
& - 2\big[\lambda_7\,\phi_1^*\phi_1 + \lambda_8\,\phi_2^*\phi_2 + \lambda_9\,H^\dagger H + \lambda_{10}\,S^*S\big]\Phi^\dagger\Phi\,,
\end{aligned} \tag{2}
$$

$$
\begin{aligned}
& V_H = -\mu_H^2\,H^\dagger H + \lambda_H (H^\dagger H)^2\,, \qquad V_{\phi_1} = -\mu_{\phi_1}^2\,\phi_1^*\phi_1 + \lambda_{\phi_1}(\phi_1^*\phi_1)^2\,, \qquad V_{\phi_2} = -\mu_{\phi_2}^2\,\phi_2^*\phi_2 + \lambda_{\phi_2}(\phi_2^*\phi_2)^2\,, \\
& V_S = \mu_S^2\,S^*S + \lambda_S (S^*S)^2 + \cdots\,, \qquad V_\Phi = \mu_\Phi^2\,\Phi^\dagger\Phi + \lambda_\Phi(\Phi^\dagger\Phi)^2 + \cdots\,, \qquad D_\mu = \partial_\mu + i g_X \tilde B_\mu X + i e_D \tilde A_\mu Q_D + \cdots\,,
\end{aligned}
$$
where the irrelevant parts of the SM Lagrangian are omitted, D_µ is the gauge covariant derivative, g_X and e_D are the two dark gauge coupling constants, and τ_2 is the second Pauli matrix. Note that the global B − L number is incidentally conserved in Eq. (2), so any Majorana-type mass or coupling is automatically prohibited. [Y^e, Y^ν, y^l, ...] are all Yukawa coupling parameters, and the repeated family indices are summed by default. We can individually rotate the flavor spaces of l, e_R, ν_R so as to make Y^e real and diagonal (namely the basis of the charged lepton mass eigenstates) and y^ν real; then the irremovable complex phases in Y^ν, y^l, y^e become the CP-violating sources in the lepton sector. V_H, V_{φ_1}, V_{φ_2} are the usual self-interacting scalar potentials, but V_S and V_Φ have unusual potential forms; the detailed inflationary potential will be given later by Eq. (29). Here I only write the quadratic and quartic terms of their series expansions, since the S and Φ masses are only related to these terms; the higher-order terms of dimension ⩾ 6 are all suppressed by powers of the squared Planck mass. Note that the special V_S and V_Φ will individually lead to the distinctive dynamical evolutions of the S and Φ fields. The quartic scalar coupling with λ_0 ∼ 0.1 is very important; it is a key knot linking all kinds of the following vacua. Finally, I assume [λ_1, λ_2, λ_3, ...] ≪ 1, namely the couplings between two different scalars are all very weak. In short, Eq. (2) completely describes all the interactions among the model particles from the primordial inflation to the present universe.
The model symmetries are spontaneously broken by the following vacuum structures of the scalar fields.

Figure 1: The tree and one-loop diagrams of Φ → l^c + ν_R. The CP asymmetry of this decay equally generates the asymmetric anti-lepton and the asymmetric ν_R although the net lepton number is conserved as zero; the latter is forever frozen out in the dark sector, whereas the former is partly converted into the baryon asymmetry through the SM sphaleron transition.
where M_ν is diagonalized by the two unitary matrices U^ν_L and U^ν_R, which respectively rotate ν_{αL} and ν_{βR}. Eq. (6) indicates that the main difference between M_e and M_ν is essentially an interchange of v_H and v_Φ; obviously, this neutrino mass mechanism is a Dirac-type seesaw, which is different from the usual Majorana-type seesaw [22]. All kinds of the Yukawa parameters are chosen as [Y^e, y^F] ∼ 10^-2, Y^ν ∼ 10^-3, [y^l, y^e, y^ν] ∼ 10^-4; thus there are naturally M_F ∼ 10^7 GeV, M_e ∼ 1 GeV, M_ν ∼ 0.05 eV. Obviously, the second term in M_e is too small and can be ignored. By contrast, the second term in M_ν is ∼ 0.05 eV and its first term is ∼ 10^-3 eV; but the second term has only one eigenvalue, while the first term has three eigenvalues, so this can lead to a mass spectrum such as m_ν3 ≈ 0.05 > m_ν2 ≈ 0.01 > m_ν1 ≈ 0.005 (in eV). Thereby we naturally explain why Δm²_32 ≈ 2.4 × 10^-3 eV² is much larger than Δm²_21 ≈ 7.5 × 10^-5 eV², exactly as required by the neutrino experimental data. Under the flavor basis of real diagonal Y^e and real y^ν, U^ν_L is namely the lepton mixing matrix U_PMNS; the CP-violating phase in U_PMNS purely arises from the complex Y^ν and y^l. Furthermore, we can correctly fit the neutrino mixing angles by choosing a suitable texture of M_ν, but here we do not go into it in depth. Based on both the neutrino oscillation experiments and the astrophysical investigations [1], I will take a suitable m_ν ≈ 0.06 eV as an input parameter of the unified model, see the following Table 2.
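As a quick arithmetic check of this hierarchy, the quoted illustrative spectrum indeed reproduces both mass-squared splittings (a minimal sketch, not a fitted output of the model):

```python
# Check that the quoted neutrino mass spectrum reproduces the two splittings.
m1, m2, m3 = 0.005, 0.01, 0.05               # eV, the spectrum quoted above

print(f"dm2_32 = {m3**2 - m2**2:.1e} eV^2")  # -> 2.4e-03 eV^2, as required
print(f"dm2_21 = {m2**2 - m1**2:.1e} eV^2")  # -> 7.5e-05 eV^2, as required
```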
Figure 2: The annihilation S + S^* → ν_R + ν_R^c via the massive dark photon mediation. This process leads the remnants of the stable dark particle S to become the cold dark matter, which eventually condenses into the dark energy.

The dark sector of this model has very important implications and phenomena for cosmology. Firstly, the Φ field slow-roll causes the primordial inflation, see Section III. Secondly, after the inflation the Φ decay brings about the universe reheating and the hot big bang, see Section IV; in particular, the leptogenesis directly arises from the out-of-equilibrium decay of Φ in the reheating process. In light of Eq. (2), the Φ decay modes are Φ → l^c + ν_R, Φ → e^c + F and Φ → H + φ_1^* + φ_2; the first two decays are relatively slow and weak, while the last one is actually the predominant decay. However, the two decays Φ → l^c + ν_R and Φ^* → l + ν_R^c have CP-asymmetric widths through the interference between the tree-diagram amplitude and the one-loop one, as shown in Fig. 1; their CP asymmetry and the relevant decay width are calculated as follows,
$$
\begin{aligned}
& \Gamma(\Phi \to l^c + \nu_R) = \frac{M_\Phi}{16\pi}\,\mathrm{Tr}\big[Y^\nu Y^{\nu\dagger}\big] \;\ll\; \Gamma(\Phi \to H + \phi_1^* + \phi_2) = \frac{M_\Phi\,\lambda_0^2}{128\pi^3}\,, \\
& A_{CP} = \frac{\Gamma(\Phi \to l^c + \nu_R) - \Gamma(\Phi^* \to l + \nu_R^c)}{\Gamma_\Phi} = \frac{2\pi\,\mathrm{Im}\,\mathrm{Tr}\big[Y^e_{i\alpha}\,y^{e*}_\alpha\,y^\nu_\beta\,Y^{\nu\dagger}_{\beta i}\big]}{\lambda_0^2}\,\Big(1 - \frac{M_F^2}{M_\Phi^2}\Big)\,,
\end{aligned} \tag{7}
$$
where Γ_Φ is the total width, and obviously Γ_Φ ≈ Γ(Φ → H + φ_1^* + φ_2). Note that M_F < M_Φ guarantees that the imaginary part of the loop integral factor is non-vanishing. In Eq. (7), the CP-violating sources purely come from the irremovable complex phases in y^e and/or Y^ν; the latter also appears in the neutrino mass matrix in Eq. (6), so the CP violation in the leptogenesis may be related to the CP violation in the neutrino experiments [23]. Since λ_0 ∼ 0.1, Y^e ∼ 10^-2, Y^ν ∼ 10^-3 and [y^e, y^ν] ∼ 10^-4 as before, we can naturally obtain A_CP ∼ 10^-10, which is vital for the matter-antimatter asymmetry, see the following Eq. (37). In addition, a simple calculation can prove that Γ(Φ → l^c + ν_R) is smaller than the universe expansion rate at the temperature T = M_Φ ≈ 8.88 × 10^10 GeV, so the decay in Fig. 1 is indeed an out-of-equilibrium process. Note that the dilution process l^c + ν_R → φ_2 + H via the t-channel F mediation is invalid at any temperature because |y^l y^ν|²/M_F ≪ 1/M_Pl guarantees that its reaction rate is always severely out of equilibrium. Consequently, the CP asymmetry in Eq. (7) can equally generate the asymmetric anti-lepton and the asymmetric ν_R although the net lepton number is conserved as zero; the latter will be forever frozen out in the dark sector, whereas the former will be partly converted into the baryon asymmetry through the SM sphaleron transition [24], see Section IV.
After the dark gauge symmetries are broken, the discrete transformation S → −S becomes a residual Z_2 symmetry since S cannot develop a non-vanishing vacuum expectation value; this guarantees that S is a stable dark particle without any decay. In the hot evolution of the dark sector, the vast majority of the S particles are depleted by the annihilation S + S^* → ν_R + ν_R^c via the dark Ã_µ mediation, as shown in Fig. 2; the remnants of S are then decoupled from the dark radiation consisting of ν_R, thus they become the current CDM as the universe temperature cools, and eventually the supercool CDM will condense into the current dark energy.

Figure 3: The sketch of the universe origin and evolution described by the unified model. The universe energy is step by step released and reduced from the primordial dark energy to the current energy budget; the whole evolution process is analogous to a cascade of hydropower stations, so there is no so-called "cosmological constant problem" in the model.

The thermally averaged annihilation cross-section and the freeze-out temperature are simply calculated by the following relations,
$$
\begin{aligned}
& \langle\sigma v_r\rangle_{T_f} = a + b\,\langle v_r^2\rangle_{T_f} + c\,\langle v_r^4\rangle_{T_f} \approx a + b\,\frac{6T_f}{M_S}\,, \qquad a = 0\,, \quad b = \frac{\pi\alpha_D^2}{4M_S^2}\,, \\
& \langle\sigma v_r\rangle_{T_f}\, n_S(T_f) = H(T_f) = 1.66\,\sqrt{g_*(T_f)}\,\frac{T_f^2}{M_{Pl}}\,, \qquad n_S(T_f) = g_S\, T_f^3 \Big[\frac{M_S}{2\pi T_f}\Big]^{\frac32} e^{-\frac{M_S}{T_f}}\,, \\
& \Longrightarrow\; \frac{M_S}{T_f} \approx 20 + \frac12 \ln\frac{g_S^2\, M_S}{g_*(T_f)\,T_f} + \ln\frac{M_S\, \langle\sigma v_r\rangle_{T_f}}{10^{-9}\ \mathrm{GeV}^{-1}}\,,
\end{aligned} \tag{8}
$$
where v_r = 2√(1 − 4M_S²/s) is the relative velocity (s is the squared center-of-mass energy, much larger than M²_{Ã_µ}), M_Pl is the Planck mass, g_*(T_f) = 91.5 is the effective number of relativistic degrees of freedom at T_f ≈ 22.5 GeV, and g_S = 2 counts the degrees of freedom of S. The dark fine-structure constant α_D = e_D²/(4π) is the analogue of the electromagnetic α_e. Provided α_D ≈ 0.05 (namely e_D ≈ 0.8) and M_S ≈ 600 GeV, we can figure out ⟨σv_r⟩_{T_f} ≈ 1.27 × 10^-9 GeV^-2 and M_S/T_f ≈ 26.7; these values are vital for the current density budget of the CDM and dark energy, which will be discussed in Section V.
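For illustration, the transcendental freeze-out relation in Eq. (8) can be solved by a simple fixed-point iteration. The sketch below assumes only the parameter values quoted above (α_D = 0.05, M_S = 600 GeV, g_S = 2, g_*(T_f) = 91.5) and is not the author's actual code:

```python
import math

alpha_D, M_S, g_S, g_star = 0.05, 600.0, 2.0, 91.5   # quoted inputs, GeV units
b = math.pi * alpha_D**2 / (4.0 * M_S**2)            # p-wave coefficient (GeV^-2)

x = 20.0                                             # initial guess for M_S/T_f
for _ in range(50):
    T_f = M_S / x
    sigv = b * 6.0 * T_f / M_S                       # <sigma v_r>_{T_f}, with a = 0
    x = (20.0 + 0.5 * math.log(g_S**2 * M_S / (g_star * T_f))
              + math.log(M_S * sigv / 1.0e-9))

print(f"M_S/T_f = {x:.1f}, T_f = {M_S/x:.1f} GeV, <sigma v_r> = {sigv:.2e} GeV^-2")
# -> M_S/T_f ~ 26.7, T_f ~ 22.5 GeV, <sigma v_r> ~ 1.2e-9 GeV^-2,
#    in line with the values quoted in the text.
```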
On the basis of the above-mentioned particle model, we can describe the idea framework of the universe origin and evolution by the sketch shown in Fig. 3. In sequence, the universe went through the primordial inflation, the subsequent reheating, the early hot expansion, the transformation from radiation-dominated to matter-dominated, the supercool CDM condensation into the dark energy, and the present DE-dominated universe. The primordial inflation is implemented by the Φ field slowly rolling. Φ has two physical states or energy forms, Φ_DE and Φ_DM, due to its special nature. Φ_DE is an inert condensed state with a negative pressure; it has no kinetic energy and cannot take part in couplings to the other fields. Φ_DM is an excited massive particle state with a vanishing pressure; it has kinetic energy and can interact with the other particles, see Eq. (10). Loosely speaking, the relationship between Φ_DE and Φ_DM is analogous to that between ice and vapour, which are merely two different physical states of the same material. The same picture also applies to the S field, namely it also has the two physical states or energy forms S_DE and S_DM, see Eq. (39). In brief, these are my explanations of the unknown physical nature of the dark energy and the dark matter. In the following Sections, we will see that the physical essence of the slow-roll inflation is that the superheavy dark matter Φ_DM slowly grows from the primordial dark energy Φ_DE, namely a process of Φ_DE gradually converting into Φ_DM. After the inflation is terminated, the Φ_DM decay is responsible for the reheating and the leptogenesis. When the hot bath is formed and the radiation begins to dominate the universe, the asymmetric lepton and the asymmetric right-handed neutrino have equally been generated, but they are isolated in the visible sector and the dark sector, respectively. In the hot expansion stage, the asymmetric right-handed neutrino in the dark sector is forever frozen out, whereas the asymmetric lepton in the visible sector can be partly converted into the baryon asymmetry through the SM sphaleron transition. The subsequent evolution in the visible sector is well known. In the dark sector, the stable S is decoupled from ν_R below the temperature T_f; as the universe temperature declines, S gradually cools into the CDM denoted by S_DM. In the very late stage, the temperature approaches absolute zero more and more, the kinetic energy of S_DM is completely exhausted, and the supercool S_DM eventually condenses into S_DE, which is namely the current dark energy; this condensation is in essence S_DE slowly growing from S_DM, or S_DM gradually converting into S_DE. Therefore, the current condensation is essentially a reverse process of the primordial inflation. Although there is a huge difference of about 106 orders of magnitude between the primordial dark energy Φ_DE and the current dark energy S_DE, the universe energy is step by step released and reduced through the cascade of the above-mentioned evolutions; this is analogous to a cascade of hydropower stations on the Changjiang River, by which a huge drop of water potential is converted into electrical energy. Therefore, there is naturally no so-called "cosmological constant problem" in the model.
Finally, I emphasize that all of these assumptions of the unified model are moderate, reasonable and consistent; with them we can successfully and completely account for the universe origin and evolution.
III. Primordial Inflation
The dynamical evolution of the primordial inflation is described as follows. According to the standard paradigm [25], the inflaton field Φ is considered as spatially uniform, but with very small fluctuations, which will become the seeds of the structure formation. Under the flat FLRW metric, namely g_{µν} = Diag(1, −a², −a², −a²) where a(t) is the scale factor of the universe expansion, the energy density and pressure of Φ are given by its energy-momentum tensor as follows,
$$
\begin{aligned}
& \mathcal{L}_\Phi = g^{\mu\nu}\partial_\mu\Phi^\dagger\partial_\nu\Phi - V_\Phi\,, \qquad T^\mu_{\ \nu}(\Phi) = 2g^{\mu\beta}\partial_\beta\Phi^\dagger\partial_\nu\Phi - \delta^\mu_\nu\,\mathcal{L}_\Phi\,, \\
& \Longrightarrow\; T^0_{\ 0} = \rho_\Phi = |\dot\Phi|^2 + V_\Phi\,, \qquad -\frac13\,\delta^i_j\, T^j_{\ i} = P_\Phi = |\dot\Phi|^2 - V_\Phi\,,
\end{aligned} \tag{9}
$$
where 𝓛_Φ is the Lagrangian of the pure Φ field, V_Φ = V(Φ^†Φ) = V(|Φ|²) is its self-interacting potential energy, Φ̇ = dΦ/dt, and |Φ̇|² = Φ̇^†Φ̇ = Φ̇^+Φ̇^- + Φ̇^{0*}Φ̇^0 is the kinetic energy of Φ. Obviously, the potential energy and the kinetic energy together determine ρ_Φ and P_Φ, and vice versa. Both ρ_Φ and P_Φ take super-high values in the inflation period.
I now introduce the dark energy Φ_DE and the dark matter Φ_DM; they are merely two energy forms or physical states of the same Φ field, and each of them has its own density and pressure, determined by the following relations,
$$
\begin{aligned}
& P_{\Phi_{DE}} = -\rho_{\Phi_{DE}}\,, \qquad P_{\Phi_{DM}} = 0\,, \qquad \rho_{\Phi_{DE}} + \rho_{\Phi_{DM}} = \rho_\Phi\,, \qquad P_{\Phi_{DE}} + P_{\Phi_{DM}} = P_\Phi = w_\Phi\,\rho_\Phi\,, \\
& \Longrightarrow\; \rho_{\Phi_{DE}} = -w_\Phi\,\rho_\Phi = \frac{-2w_\Phi}{1-w_\Phi}\,V_\Phi\,, \qquad \rho_{\Phi_{DM}} = (1+w_\Phi)\,\rho_\Phi = |\dot\Phi|^2 + \frac{1+w_\Phi}{1-w_\Phi}\,V_\Phi = 2|\dot\Phi|^2\,,
\end{aligned} \tag{10}
$$
where I employ Eq. (9). w_Φ is a parameter-of-state varying with the time, which relates the total pressure to the total energy density; generally −1 ⩽ w_Φ ⩽ 0, Φ is purely Φ_DE when w_Φ = −1, while Φ entirely becomes Φ_DM when w_Φ = 0. Φ_DE is an inert condensed state with a negative pressure; it has only potential energy without kinetic energy, so ρ_{Φ_DE} contributes a part of the total V_Φ. In contrast, Φ_DM is an excited massive particle state with a vanishing pressure, so it carries both kinetic energy and potential energy (the rest of the total V_Φ), and the two are always equal to each other. In short, Eq. (10) explicitly shows the inherent relations among all kinds of the energy forms of the Φ field; the physical implications of Φ_DE and Φ_DM will become clearer in the following context. At the beginning of the inflation, the Φ field is purely in the Φ_DE form (or state); then Φ_DM slowly grows from Φ_DE, Φ_DM is more and more generated and Φ_DE is more and more depleted, thus Φ_DE gradually converts into Φ_DM. This process is the so-called slow-roll inflation. The dynamics of the inflationary evolution are collectively determined by the Friedmann equation, the Φ continuity equation and the Φ_DM growth equation, which are respectively
$$
\begin{aligned}
& \rho_\Phi = \rho_{\Phi_{DE}} + \rho_{\Phi_{DM}} = 3M_p^2 H^2\,, \\
& \dot\rho_\Phi + 3H\rho_\Phi(1+w_\Phi) = 0 \;\Longrightarrow\; -\dot\rho_{\Phi_{DE}} = \dot\rho_{\Phi_{DM}} + 3H\rho_{\Phi_{DM}}\,, \\
& \dot\rho_{\Phi_{DM}} = -2\eta(t)\,H\,\rho_{\Phi_{DM}}\,,
\end{aligned} \tag{11}
$$
where I employ Eq. (10). M_p = 1/√(8πG) ≈ 2.43 × 10^18 GeV is the reduced Planck mass, H(t) = ȧ(t)/a(t) is the universe expansion rate, and the proportionality parameter −η(t) > 0 controls the Φ_DM growth rate; in fact η is one of the slow-roll parameters defined below. Once the evolution of η(t) is specified, Eq. (11) becomes a closed system of equations, from which we can solve all the evolutions of ρ_{Φ_DM}, ρ_{Φ_DE}, ρ_Φ and H. The above continuity equation indicates that the ρ_{Φ_DM} growth in the comoving volume is entirely from the ρ_{Φ_DE} reduction; therefore the primordial inflation is exactly the process of Φ_DM growing from Φ_DE. The Φ field will entirely become the pure Φ_DM form (or state) at the end of the inflation. From Eqs. (10) and (11), we can easily derive
$$
\eta(t) = -\frac{d\ln\rho_{\Phi_{DM}}}{2H\,dt} = -\frac{d\ln|\dot\Phi|}{H\,dt}\,, \qquad \epsilon(t) = -\frac{d\ln\rho_\Phi}{2H\,dt} = -\frac{\dot H}{H^2} = \frac{3(1+w_\Phi)}{2}\,, \tag{12}
$$

$$
-1 = w_\Phi(0) \leqslant w_\Phi(t) \leqslant w_\Phi(t_{inf}) = 0\,, \qquad 0 = \epsilon(0) \leqslant \epsilon(t) \leqslant \epsilon(t_{inf}) = \tfrac32\,, \tag{13}
$$
where η and ǫ are the two slow-roll parameters defined as usual; they respectively characterize the varying rates of ρ_{Φ_DM} and ρ_Φ. Note that η and ǫ themselves also vary with the inflationary time, or else the inflation would continue without termination. Eq. (13) gives the inflationary boundary conditions; hereinafter we take t = 0 as the time of the inflation beginning and use the "inf" subscript to indicate the time of the inflation finish. In addition, we can obtain the expansion acceleration equation,

$$
\frac{\ddot a}{a} = (1-\epsilon)H^2 = -\frac{1+3w_\Phi}{2}\,H^2\,. \tag{14}
$$
Eq. (14) shows that the accelerating or decelerating expansion only depends on the value of ǫ or w_Φ: ä(t) ⩾ 0 when 0 ⩽ ǫ ⩽ 1 and ä(t) < 0 when 1 < ǫ ⩽ 3/2; the former corresponds to the Φ_DE-dominated universe, whereas the latter corresponds to the Φ_DM-dominated one. Ḣ ⩽ 0 also indicates that the expansion rate and the total energy density always decrease in the inflation period.
Putting Eqs. (10) and (12) together, we can use ρ_Φ and ǫ to express all kinds of the energy forms as follows,
$$
\rho_{\Phi_{DE}} = \Big(1-\frac{2\epsilon}{3}\Big)\rho_\Phi\,, \qquad \rho_{\Phi_{DM}} = \frac{2\epsilon}{3}\,\rho_\Phi\,, \qquad |\dot\Phi|^2 = \frac{\epsilon}{3}\,\rho_\Phi\,, \qquad V_\Phi = \Big(1-\frac{\epsilon}{3}\Big)\rho_\Phi\,. \tag{15}
$$
Eqs. (11), (12) and (15) together make up the fundamental equations of the inflationary evolution, while Eq. (13) is the boundary condition. If we can provide the evolution of any one of the nine inflationary quantities H, ρ_Φ, ρ_{Φ_DE}, ρ_{Φ_DM}, |Φ̇|², V_Φ, ǫ, η, w_Φ, then in principle the solutions of all the other inflationary quantities will be completely determined by this system of equations. In what follows, we will find the solution of the inflation problem by a special technique.
One of the inflationary features is that the universe size, namely the scale factor, expands about 10^25 times in an extremely short duration; therefore we use the e-fold number instead of the scale factor to characterize the inflationary time span. It is defined as follows,
$$
\begin{aligned}
& N(t) = \ln\frac{a(t_{inf})}{a(t)} = \int_t^{t_{inf}} H(t')\,dt' \;\Longrightarrow\; \dot N(t) = -H(t)\,, \\
& 0 = a(0) \leqslant a(t) \leqslant a(t_{inf})\,, \qquad +\infty = N(0) \geqslant N(t) \geqslant N(t_{inf}) = 0\,,
\end{aligned} \tag{16}
$$
where the starting point of the inflation is set as a(0) = 0 and N(0) = +∞. Eq. (16) now plays the role of ȧ/a = H since N(t) replaces a(t) as the time scale; it will frequently be used in the following derivations. By use of Eqs. (11), (12), (15) and (16), we can order by order express the slow-roll parameters through the total energy density ρ_Φ and its derivatives as follows,
$$
\begin{aligned}
& \rho'_\Phi = \frac{d\rho_\Phi}{dN} = 3\rho_{\Phi_{DM}}\,, \qquad \rho''_\Phi = \frac{d^2\rho_\Phi}{dN^2} = 3\rho'_{\Phi_{DM}}\,, \qquad \rho'''_\Phi = \frac{d^3\rho_\Phi}{dN^3} = 3\rho''_{\Phi_{DM}}\,, \;\ldots \\
& \epsilon = \frac{d\ln\rho_\Phi}{2dN}\,, \qquad \eta = \frac{d\ln\rho'_\Phi}{2dN} = \frac{d\ln\rho_{\Phi_{DM}}}{2dN}\,, \qquad \theta = \frac{d\ln(-\rho''_\Phi)}{2dN} = \frac{d\ln(-\rho'_{\Phi_{DM}})}{2dN}\,, \qquad \delta = \frac{d\ln|\rho'''_\Phi|}{2dN}\,, \;\ldots
\end{aligned} \tag{17}
$$

$$
\Longrightarrow\; \frac{d\ln\epsilon}{2dN} = \eta - \epsilon\,, \qquad \frac{d\ln(-\eta)}{2dN} = \theta - \eta\,, \qquad \frac{d\ln|\theta|}{2dN} = \delta - \theta\,, \;\ldots \tag{18}
$$
where hereinafter the " ′ " superscript denotes a derivative with respect to N. The slow-roll parameters in Eq. (17) are closely related to the observable quantities of the inflation: ǫ and η are relevant to the tensor-to-scalar ratio and the scalar spectral index, while θ is involved in the running of the spectral index, see the following Eq. (27); therefore finding the correct solutions of these slow-roll parameters is the key to solving the inflation problem. From the previous fundamental equations and Eq. (16), we can also derive the equation of motion of Φ and its formal solution as follows,
$$
\ddot\Phi + 3H\dot\Phi + \Phi\,\frac{dV_\Phi}{d|\Phi|^2} = 0 \;\Longrightarrow\; \Phi'' - 3\Phi' + (3-\epsilon)\,\Phi\, M_p^2\,\frac{d\ln|V_\Phi|}{d|\Phi|^2} = 0\,, \tag{19}
$$

$$
\begin{aligned}
& \Phi_1(N) = \Phi_2(N) = \Phi_3(N) = \Phi_4(N) \;\Longrightarrow\; \frac{d|\Phi|}{dN} = \sqrt{2}\,|\Phi'_i| = |\Phi'| = M_p\sqrt{\epsilon}\,, \\
& d\varphi = \sqrt{\textstyle\sum_i (d\Phi_i)^2} = \sqrt{2\,d\Phi^\dagger d\Phi} \;\Longrightarrow\; \frac{d\varphi}{dN} = \sqrt{2}\,|\Phi'|\,, \\
& \Longrightarrow\; \frac{\varphi(N)}{\sqrt{2}\,M_p} = \frac{|\Phi(N)| - |\Phi(0)|}{M_p} = \int_0^N \sqrt{\epsilon(N')}\,dN'\,,
\end{aligned} \tag{20}
$$

where Φ′ = dΦ/dN and |Φ|² = ½Σ_i Φ_i²; Φ_i (i = 1, 2, 3, 4) are the four real degrees of freedom of Φ, and obviously the solutions of the Φ_i are degenerate since V(|Φ|²) is fully symmetric with respect to them. In Eq. (20), I introduce an auxiliary field ϕ(N) in order to get rid of the multi-component difficulty, and we can freely fix ϕ(0) = 0. We can immediately calculate the ϕ and |Φ| evolutions once ǫ(N) is provided. ϕ′ > 0 (namely ϕ̇ < 0) indicates that ϕ and |Φ| gradually decrease with the time.
By convention, if the inflationary potential V Φ is provided, then the conventional slow-roll parameters are given by V Φ as follows,
$$
\begin{aligned}
& \epsilon_V = \frac{M_p^2}{2}\Big[\frac{dV_\Phi}{V_\Phi\, d\varphi}\Big]^2 = \epsilon\Big[\frac{3-\eta}{3-\epsilon}\Big]^2 \;\xrightarrow{\epsilon,\,\eta\,\ll\,1}\; \epsilon\,, \\
& \eta_V = M_p^2\Big[\frac{d^2V_\Phi}{V_\Phi\, d\varphi^2}\Big] = \frac{(\epsilon+\eta)(3-\eta) - \eta'}{3-\epsilon} \;\xrightarrow{\epsilon,\,\eta,\,\theta\,\ll\,1}\; \epsilon + \eta - \frac{\eta'}{3}\,, \\
& \xi^2_V = M_p^4\Big[\frac{dV_\Phi}{V_\Phi\, d\varphi}\Big]\Big[\frac{d^3V_\Phi}{V_\Phi\, d\varphi^3}\Big] = \Big[\frac{3-\eta}{3-\epsilon}\Big]^2\Big[4\epsilon\eta + \frac{\eta'(3-3\epsilon+2\eta-2\theta) - 2\eta\theta'}{3-\eta}\Big] \;\xrightarrow{\epsilon,\,\eta,\,\theta,\,\delta\,\ll\,1}\; 4\epsilon\eta + \eta' - \frac{2\eta\theta'}{3}\,,
\end{aligned} \tag{21}
$$
where η′ = dη/dN and θ′ = dθ/dN, and I employ the foregoing equations to derive the relations in Eq. (21), which relate these two sets of slow-roll parameters to each other. However, it should be stressed that the above approximations hold only when ǫ, η, θ, δ ≪ 1, which is the case only in the early and middle phases of the inflation; when the inflation is close to its end, some slow-roll parameters actually become ∼ 1, and these approximations are invalid.
When the inflationary potential is characterized by V_Φ(ϕ) with argument ϕ, we can make a Taylor expansion of V_Φ(ϕ) around ϕ = 0 and obtain the following results,
$$
\begin{aligned}
& V_\Phi(\varphi) = V_\Phi(\varphi)\big|_{\varphi=0} + \frac{dV_\Phi(\varphi)}{d\varphi}\Big|_{\varphi=0}\,\varphi + \frac{d^2V_\Phi(\varphi)}{d\varphi^2}\Big|_{\varphi=0}\,\frac{\varphi^2}{2} + \cdots \\
& \Longrightarrow\; M_\Phi^2 = \frac{d^2V_\Phi(\varphi)}{d\varphi^2}\Big|_{\varphi=0} = \big[(3-\epsilon)\,\eta_V\, H^2\big]_{N=0} \;\Longrightarrow\; M_\Phi = \big[\sqrt{(3-\epsilon)\,\eta_V}\; H\big]_{t_{inf}}\,,
\end{aligned} \tag{22}
$$

$$
V_{\Phi\,min} = M_\Phi^2\,|\Phi(t_{inf})|^2 = V_\Phi(t_{inf}) = \Big[\Big(1-\frac{\epsilon}{3}\Big)\rho_\Phi\Big]_{t_{inf}} \;\Longrightarrow\; \frac{|\Phi(t_{inf})|}{M_p} = \frac{1}{\sqrt{\eta_V(t_{inf})}}\,, \tag{23}
$$
where I employ Eqs. (21) and (15). M_Φ is about the same size as H_inf, but M_Φ can be identified with the mass of Φ only when η_V becomes positive; in the following Fig. 5, we will see how M_Φ is gradually generated from nothing as η_V evolves from negative to positive, namely M_Φ is generated by the special mechanism of the inflationary evolution. Later we will work out η_V(t_inf) ≈ 2.61, so that |Φ(t_inf)|/M_p ≈ 0.62, which is a very reasonable value. A traditional and usual technique of solving the inflation problem proceeds as follows. Firstly, one has to design or guess a function form of V(|Φ|²) or V(ϕ). Secondly, one puts V_Φ into Eq. (19) and ignores the ǫ factor, since ǫ ≪ 1 during most of the inflation; then one can solve the Φ differential equation to find a solution Φ(N). Thirdly, one obtains ǫ(N) by taking a derivative of Φ(N), and further calculates η(N) and θ(N) by Eq. (18). Lastly, one obtains ρ_Φ, ρ_{Φ_DE}, ρ_{Φ_DM} and |Φ̇|² by Eq. (15), since V_Φ(N) and ǫ(N) are given; at this point, the inflationary evolutions are completely solved. Nevertheless, this procedure has two serious shortcomings. i) In the later phase of the inflation, ǫ is actually ∼ 1 rather than ≪ 1, so neglecting ǫ in Eq. (19) leads to a non-rigorous and incomplete inflationary solution; in particular, this has a great effect on the inflation termination and the subsequent reheating. ii) It is very difficult to fit precisely all of the inflationary data in Eq. (1) by this technique; in fact, a desirable inflationary potential has not been found as yet, although countless endeavours have been made. Therefore, to solve the inflation problem reliably and completely in the standard gravity framework, we have to find a new approach.
In the system of equations of the inflationary evolution, all of the unknown inflationary quantities have an equal status, at least in the mathematical sense; thereby we can flexibly choose the η parameter as the starting point instead of the V_Φ potential. In principle, we can employ the following procedure. Firstly, we design or guess an evolution function η(N), as in the following Eq. (24); this amounts to directly specifying the law of the Φ_DM growth in Eq. (11). Secondly, Eq. (11) has now become a closed system of equations, so we can solve it to obtain ρ_{Φ_DM}, ρ_{Φ_DE}, ρ_Φ and H. Lastly, we can figure out |Φ̇|², V_Φ, ǫ and w_Φ by Eqs. (15) and (12); at this point, all of the inflationary evolutions are completely solved. By means of this procedure, the inflationary potential is worked out in reverse rather than provided. Obviously, this technique is both simple and reliable, and it can overcome the shortcomings of the traditional technique. Whatever technical means is employed, the only criterion is that it should be able to fit all of the inflationary data correctly and completely.
After careful analysis and calculation, I find a suitable function form of η(N ) as follows,
$$
\eta(N) = \eta(0)\,e^{-\alpha\left(\frac{N}{N_*}\right)^2} = -\frac{e^{\alpha\left(1-\frac{N^2}{N_*^2}\right)}}{N_* + 6}\,, \qquad \Longrightarrow\; \eta' = -\eta\,\frac{2\alpha N}{N_*^2}\,, \qquad \theta = \eta - \frac{\alpha N}{N_*^2}\,, \tag{24}
$$
where there are two independent parameters N_* and α, and η(0) is parameterized by them. In fact, N_* is exactly the inflationary e-fold number at which the pivot scale k_* = 0.05 Mpc^-1 exits from the horizon; the model will calculate N_* ≈ 51.15 by the following Eq. (28). α is one of the two input parameters in the inflation sector (the other one is H_inf in Eq. (27)); we can determine α ≈ 2.92 by fitting the inflationary data. All of the input parameters of the unified model are later summarized in Table 2 in Section VI. From Eq. (24) and the system of equations (11), we can successively work out

$$
\frac{\rho_{\Phi_{DM}}(N)}{\rho_\Phi(0)} = e^{2\int_0^N \eta(N')\,dN'}\,, \qquad \frac{\rho_\Phi(N)}{\rho_\Phi(0)} = 1 + 3\int_0^N \frac{\rho_{\Phi_{DM}}(N')}{\rho_\Phi(0)}\,dN'\,, \qquad \epsilon(N) = \frac{3}{2}\,\frac{\rho_{\Phi_{DM}}(N)}{\rho_\Phi(N)}\,, \tag{25}
$$
where ρ_{Φ_DM}(0) = ρ_Φ(0) = 3M_p²H_inf² due to ǫ(0) = 3/2 at the end of the inflation. Lastly, by use of the relevant relations we can also calculate |Φ̇|², V_Φ, η_V, w_Φ, etc. Note that all kinds of the energy forms are normalized to ρ_Φ(0).

Now we show the numerical results of this inflation model. Fig. 4 shows the inflationary evolutions of the relevant energy forms of the Φ field with N as time scale. In the early and middle phases of the inflation process, the three curves of ρ_Φ, ρ_{Φ_DE} and V_Φ almost coincide with each other; moreover, they nearly keep a constant value. The reason is that the growths of Φ_DM and T_Φ = |Φ̇|² are very slow at these stages, so that Φ̇ ≈ 0; this is the so-called slow-roll inflation. In the last phase of the inflation, the growths of ρ_{Φ_DM} and T_Φ become faster and faster, so ρ_Φ, ρ_{Φ_DE} and V_Φ turn to fall sharply and their curves separate significantly from each other. Eventually, ρ_{Φ_DM} exceeds ρ_{Φ_DE}, the Φ_DE-dominated universe is transformed into the Φ_DM-dominated one, and at the same time the accelerating expansion is changed into the decelerating one; thus the inflation is naturally over. At the time of the inflation finish, namely when N = 0, there are ρ_{Φ_DE} = 0 and ρ_{Φ_DM} = ρ_Φ = 2T_Φ = 2V_Φ. Note that the green curve indicates ρ_Φ(∞)/ρ_Φ(0) ≈ 5.65; this means that the ρ_Φ amount only varies about 5.65 times from the inflation beginning to its end. In short, Fig. 4 clearly shows the full evolution of all kinds of the energy forms in the inflation process, which contains the slow-roll features, the transformations among these energy forms, and the inflation termination. Fig. 5 shows the corresponding evolutions of the slow-roll parameters and the parameter-of-state; its features are as follows. i) In the early and middle phases of the inflation, ǫ, η_V and w_Φ stay almost flat; only when N(t) → N(t_inf) = 0 do these three parameters sharply rise, which leads to the inflation termination. ii) In the early and middle phases of the inflation, η coincides with η_V due to η ≈ η_V, whereas in the last phase of the inflation, η coincides with θ due to η ≈ θ. iii) ǫ is always positive, while η and θ are always negative, but η_V can change from negative to positive when N → 0. In view of Eq. (22), M_Φ² is proportional to η_V; this means that M_Φ is gradually generated from nothing as η_V evolves, which is certainly closely related to the Φ_DM growth, and therefore M_Φ purely results from the inflationary dynamical evolution. This mass generation mechanism of the inflaton field is very different from that of the SM particles, which simply arises from the vacuum spontaneous breaking. In short, the numerical results of Fig. 4 and Fig. 5 excellently explain the physical pictures of the primordial inflation.
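The whole inflationary background can be reproduced from Eqs. (24)-(25) by elementary numerical integration. The following sketch assumes only the two quoted inputs α = 2.92 and N_* = 51.15; it is illustrative, not the author's actual code:

```python
import numpy as np

alpha, N_star = 2.92, 51.15
N = np.linspace(0.0, 70.0, 70001)        # e-folds, counted back from the inflation end
eta = -np.exp(alpha * (1.0 - N**2 / N_star**2)) / (N_star + 6.0)       # Eq. (24)

# Cumulative trapezoidal integral from N = 0.
cum = lambda f: np.concatenate(([0.0], np.cumsum(0.5*(f[1:]+f[:-1])*np.diff(N))))

rho_DM = np.exp(2.0 * cum(eta))          # rho_PhiDM(N)/rho_Phi(0), Eq. (25)
rho_Phi = 1.0 + 3.0 * cum(rho_DM)        # rho_Phi(N)/rho_Phi(0),   Eq. (25)
eps = 1.5 * rho_DM / rho_Phi

i = np.searchsorted(N, N_star)
print(f"rho_Phi(N*)/rho_Phi(0) = {rho_Phi[i]:.2f}")   # -> ~ 5.65
print(f"eps(N*) = {eps[i]:.2e}")                      # -> ~ 1.2e-8
print(f"n_s ~ {1.0 - 4.0*eps[i] + 2.0*eta[i]:.3f}")   # -> ~ 0.965
```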
Figure 5: The inflationary evolutions of the slow-roll parameters and the parameter-of-state with N as time scale: ǫ(t) (orange dashed line), η(t) (green solid line), θ(t) (pink dashed line), η_V(t) (blue dotted line) and w_Φ(t).
Now we set about addressing the observable data of the inflation. In Fig. 5, at N * ≈ 51.15 the slow-roll parameters and the parameter-of-state are evaluated as follows,
$$
\begin{aligned}
& \epsilon_{V*} \approx \epsilon_* \approx 1.16\times10^{-8}\,, \qquad \eta_* = \frac{-1}{N_*+6} \approx -0.0175\,, \qquad \theta_* = \eta_* - \frac{\alpha}{N_*} \approx -0.0746\,, \\
& \eta_{V*} \approx \epsilon_* + \eta_*\Big(1 + \frac{2\alpha}{3N_*}\Big) \approx -0.0183\,, \qquad w_{\Phi*} = \frac{2\epsilon_*}{3} - 1 \approx -1\,,
\end{aligned} \tag{26}
$$
hereinafter the " * " subscript specially indicates the time of N * ≈ 51.15. From the cosmological perturbation theory of the structure formation [26], we know that the above slow-roll parameters are directly related to the following inflationary observable quantities,
$$
\begin{aligned}
& \Delta^2_{\mathcal R}(k_*) = \frac{H^2}{8\pi^2 M_p^2\,\epsilon}\Big|_{k_*} = \frac{1}{8\pi^2\epsilon_*}\Big[\frac{\rho_\Phi(N_*)}{\rho_\Phi(0)}\Big]\Big[\frac{H_{inf}}{M_p}\Big]^2\,, \qquad r_* = 16\,\epsilon_*\,, \\
& n_s(k_*) - 1 = \frac{d\ln\Delta^2_{\mathcal R}}{d\ln k}\Big|_{k_*} = \frac{d\ln H^2 - d\ln\epsilon}{(\epsilon-1)\,dN}\Big|_{k_*} = -4\epsilon_* + 2\eta_* \approx 6\Big(\frac{\eta'_*}{9} - \epsilon_{V*}\Big) + 2\eta_{V*}\,, \\
& \frac{dn_s}{d\ln k}\Big|_{k_*} = \frac{dn_s}{(\epsilon-1)\,dN}\Big|_{k_*} = 4\epsilon'_* - 2\eta'_* = 8\epsilon_*(\eta_*-\epsilon_*) - 4\eta_*(\theta_*-\eta_*) \approx 24\,\epsilon_{V*}\Big(\frac{2\eta'_*}{9} - \epsilon_{V*}\Big) + 16\,\epsilon_{V*}\eta_{V*} - 2\xi^2_{V*}\,,
\end{aligned} \tag{27}
$$
where I employ Eq. (21). k_* = 0.05 Mpc^-1 is the pivot scale exiting from the horizon at N_* ≈ 51.15; the relation between k_* and N_* will be given by Eq. (28). H_inf is the expansion rate at the time of the inflation finish; it is the second input parameter in the inflation sector, and we can determine H_inf ≈ 4.49 × 10^10 GeV by fitting the inflationary data. ρ_Φ(N_*)/ρ_Φ(0) ≈ 5.65 has been calculated by Eq. (25), see also Fig. 4. Note that the η′_*/9 factor in Eq. (27) is of the same order of magnitude as ǫ_{V*}, so its contribution to the results is negligible. We can employ either of the two sets of slow-roll parameters to calculate the inflationary data; putting Eq. (26) into Eq. (27), we can perfectly reproduce all of the measured data in Eq. (1), and the detailed results are summarized in Table 2 in Section VI.
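A direct numerical evaluation of Eq. (27) from the values in Eq. (26) is a few lines of arithmetic; the sketch below uses only the numbers quoted in the text:

```python
import math

eps, eta, theta = 1.16e-8, -0.0175, -0.0746      # slow-roll values of Eq. (26)
H_inf, M_p, rho_ratio = 4.49e10, 2.43e18, 5.65   # GeV, GeV, rho_Phi(N*)/rho_Phi(0)

A_s = rho_ratio * (H_inf / M_p)**2 / (8.0 * math.pi**2 * eps)
r = 16.0 * eps
n_s = 1.0 - 4.0*eps + 2.0*eta
running = 8.0*eps*(eta - eps) - 4.0*eta*(theta - eta)

print(f"Delta_R^2 = {A_s:.2e}, r = {r:.2e}, n_s = {n_s:.3f}, dn_s/dlnk = {running:.4f}")
# -> ~ 2.1e-9, ~ 1.9e-7, ~ 0.965, ~ -0.004, matching the data in Eq. (1)
```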
k_* itself is defined and calculated as follows,

$$
k_* = \frac{a_* H_*}{c} = \frac{H_0}{c}\,\frac{H_{inf}}{H_0}\,\frac{H_*}{H_{inf}}\,\frac{a_*}{a_{inf}}\,\frac{a_{inf}}{a_{req}}\,\frac{a_{req}}{a_{ref}}\,\frac{a_{ref}}{a_0} = \Big[\frac{H_0}{c\,h}\Big]\Big[\frac{g_*(T_0)}{2}\Big]^{\frac13}\big[\Omega_\gamma(T_0)h^2\big]^{\frac13}\Big[\frac{H_{inf}}{H_0/h}\Big]^{\frac13}\Big[\frac{T_{re}}{T_0}\Big]^{\frac13}\Big[\frac{\rho_\Phi(N_*)}{\rho_\Phi(0)}\Big]^{\frac12} e^{-N_*}\,, \tag{28}
$$
a detailed derivation of Eq. (28) is given in Appendix I. c is the speed of light and h is the scaling factor of the Hubble expansion rate; note that H_0/(ch) = 100/(3 × 10^5) Mpc^-1 and H_0/h ≈ 2.13 × 10^-42 GeV are two fixed constants. At the present day, the CMB temperature is T_0 ≈ 2.7255 K ≈ 2.35 × 10^-4 eV [1], the effective number of relativistic degrees of freedom is g_*(T_0) ≈ 4.1 (which includes the ν_R contribution, see the following Eq. (38)), the photon energy density parameter Ω_γ(T_0)h² ≈ 2.47 × 10^-5 will be calculated by Eq. (49) in Section V, and the reheating temperature T_re ≈ 2.43 × 10^11 GeV will be worked out by Eq. (36) in Section IV. Once these quantities are input into Eq. (28), we can immediately solve N_* ≈ 51.15 corresponding to k_* = 0.05 Mpc^-1. It should be emphasized that Eq. (28) relates the fundamental quantities of the inflation, reheating and current universe together; this is rightly a characteristic of the unified model.
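For instance, N_* follows from Eq. (28) by one line of arithmetic. The sketch below assumes the quoted values and treats ρ_Φ(N_*)/ρ_Φ(0) ≈ 5.65 as given (strictly it depends weakly on N_*, so this is one step of a fixed point):

```python
import math

k_star = 0.05                                   # pivot scale, Mpc^-1
H0_over_ch = 100.0 / 3.0e5                      # H_0/(c h), Mpc^-1
H0_over_h = 2.13e-42                            # H_0/h, GeV
H_inf, T_re, T_0 = 4.49e10, 2.43e11, 2.35e-13   # GeV

pref = (H0_over_ch * (4.1/2.0)**(1/3) * (2.47e-5)**(1/3)
        * (H_inf/H0_over_h)**(1/3) * (T_re/T_0)**(1/3) * math.sqrt(5.65))
print(f"N_* = {math.log(pref/k_star):.2f}")     # -> ~ 51.15
```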
Finally, we can find an analytical function form of the inflationary potential V(|Φ|²) by fitting its numerical solution. Since the ǫ(N) solution has been given in Fig. 5, substituting it into Eq. (20) and making a numerical integration, we obtain a numerical solution of |Φ(N)|/M_p; on the other hand, the V_Φ(N) solution has been given in Fig. 4. Putting these two solutions together, we can translate V_Φ(N), with N as variable, into V(|Φ|²/M_p²), with [|Φ|/M_p]² as variable; this is easily achieved by a computer, and the calculated result is shown by the black dotted curve in Fig. 6. The evolution of V(|Φ|²) in Fig. 6 is smoother and steadier in comparison with the evolution of V_Φ(N) in Fig. 4; moreover, in the inflation duration the variation of [|Φ|/M_p]² is much smaller than that of N. Note that when N ⩾ N_* (namely t ⩽ t_*), there is actually Φ̇/M_p ≈ 0, namely [|Φ|/M_p]² is approaching the limit value [|Φ|/M_p]² → 7.04, and accordingly V_Φ/ρ_Φ(t_inf) → 5.65, namely the potential is approaching a constant value. The reason is of course that the universe is entirely filled with the primordial dark energy in the very early phase of the inflation.
After making a great effort, I eventually find the V(|Φ|²) function form as follows,
$$
\begin{aligned}
& \frac{V(|\Phi|^2)}{\rho_\Phi(t_{inf})} = 5.75\,\Big[1 - e^{-0.078\,(x-0.384)^2}\Big]^{0.45} + e^{\frac{4.5}{x-7.04}}\,, \\
& x = \Big[\frac{|\Phi(t)|}{M_p}\Big]^2\,, \qquad \Big[\frac{|\Phi(t_{inf})|}{M_p}\Big]^2 = \frac{1}{\eta_V(t_{inf})} \approx 0.384\,, \qquad \Big[\frac{|\Phi(t_*)|}{M_p}\Big]^2 \approx 7.04\,,
\end{aligned} \tag{29}
$$
where ρ_Φ(t_inf) = 3M_p²H_inf², and the adjustable parameters are 5.75, 0.078, 0.45 and 4.5. The potential form of Eq. (29) obviously satisfies the model requirement in Eq. (2); note that it is very different from the Starobinsky-type inflationary potential [27]. In Fig. 6, we use the green solid curve to show the analytical form of Eq. (29); it perfectly fits the model numerical solution shown by the black dotted curve. When x → 0.384, the first term of Eq. (29) → 0 and its second term → 1/2; this corresponds to the potential valley bottom, namely where the inflation ends. When x → 7.04, its first term → 5.65 and its second term → 0; this corresponds to the potential plateau, namely the early phase of the inflation. In particular, note that the second term of Eq. (29) constrains x → 7.04 with x < 7.04; once x → 7.04 with x > 7.04, the potential would become infinite, which is certainly unacceptable, therefore Eq. (29) naturally explains the limited values of both the inflationary field and the inflationary potential. If we now start from Eq. (29) and deal with the inflation problem by the usual procedure, in principle we can also derive all kinds of the foregoing results obtained by my ansatz, but this potential form of Eq. (29) could not at all be guessed in advance. In conclusion, Fig. 6 clearly shows the inflationary slow-roll evolution and Eq. (29) explicitly gives us deep insight into the inflationary potential. Up to now, all of the inflation problems have completely been solved within the unified model.
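The two limits of Eq. (29) can be verified numerically (an illustrative check only):

```python
import math

def V(x):
    """V(|Phi|^2)/rho_Phi(t_inf) of Eq. (29), with x = (|Phi|/M_p)^2 < 7.04."""
    return (5.75 * (1.0 - math.exp(-0.078*(x - 0.384)**2))**0.45
            + math.exp(4.5 / (x - 7.04)))

print(f"V(0.384) = {V(0.384):.2f}")   # valley bottom -> ~ 0.5, i.e. V = rho_Phi/2
print(f"V(7.00)  = {V(7.00):.2f}")    # plateau -> ~ 5.65 = rho_Phi(N*)/rho_Phi(0)
```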
IV. Reheating and Baryogenesis
At the end of the inflation, Φ_DE is vanishing and the Φ field entirely becomes Φ_DM; thus the Φ_DM-dominated universe comes into the decelerating expansion era. Since Φ_DM is an excited particle state with kinetic energy, it can interact with the other particles via the couplings in Eq. (2); by virtue of its superheavy mass, Φ_DM is quite unstable and shortly decays into one SM particle and one dark particle. The phenomenology of the Φ_DM decay has been discussed in Section II. The Φ_DM decay has, however, important cosmological implications: it not only directly produces the hot bath of the universe, namely the reheating universe, but also simultaneously generates the matter-antimatter asymmetry by the foregoing leptogenesis mechanism. In what follows, we will discuss the reheating evolution and the baryon asymmetry genesis.

From now on, we drop the "DM" subscript of Φ_DM since Φ is now purely Φ_DM. The Φ decay and its subsequent processes directly produce the earliest radiation of the universe, which is a hot plasma consisting of the SM and dark particles. The total energy of the universe now includes the two components ρ_Φ and ρ_R. The dynamics of the reheating evolution are collectively controlled by the Friedmann equation and the continuity equations, namely
$$
\rho_\Phi + \rho_R = 3M_p^2 H^2\,, \qquad \dot\rho_\Phi + 3H\rho_\Phi = -\Gamma_\Phi\,\rho_\Phi\,, \qquad \dot\rho_R + 4H\rho_R = \Gamma_\Phi\,\rho_\Phi\,, \tag{30}
$$
where Γ_Φ is the Φ decay width given in Eq. (7). The physical implications of Eq. (30) are very clear: it is a closed system of equations, and the evolutions of ρ_Φ and ρ_R are completely determined by the Γ_Φ value and the initial conditions. For the reheating calculation, I therefore suitably take λ_0 ≈ 0.1 in Eq. (7) as an input parameter of the unified model, see Table 2.
In order to solve the system of equations of Eq. (30), I define the dimensionless energy densities and time variable as follows,
$$
\tilde\rho_i(\tilde t) = \frac{\rho_i}{3M_p^2\Gamma_\Phi^2}\,, \qquad \tilde t = (t - t_{inf})\,\Gamma_\Phi = \frac{t - t_{inf}}{\tau_\Phi}\,, \qquad 0 < \tilde t \leqslant \tilde t_{ref} = \frac{t_{ref} - t_{inf}}{\tau_\Phi}\,, \tag{31}
$$

$$
\Longrightarrow\; \tilde\rho_\Phi(0) = \Big(\frac{H_{inf}}{\Gamma_\Phi}\Big)^2\,, \qquad \tilde\rho_R(0) = 0\,, \tag{32}
$$
where i = (Φ, R) and τ_Φ is the Φ lifetime. The time of the reheating beginning is the time of the inflation finish, while the time of the reheating finish is specially indicated by the "ref" subscript. Eq. (32) gives the initial values of the reheating evolution. By use of the numerical values in Table 2, the model gives H_inf/Γ_Φ ≈ 2 × 10^5, so the reheating is indeed a severely out-of-equilibrium process in its early phase. By use of Eq. (31), we can recast Eq. (30) as follows,

$$
\tilde\rho_\Phi + \tilde\rho_R = \Big(\frac{H}{\Gamma_\Phi}\Big)^2\,, \qquad \frac{d\tilde\rho_\Phi}{d\tilde t} + \Big[3\Big(\frac{H}{\Gamma_\Phi}\Big) + 1\Big]\tilde\rho_\Phi = 0\,, \qquad \frac{d\tilde\rho_R}{d\tilde t} + 4\Big(\frac{H}{\Gamma_\Phi}\Big)\tilde\rho_R - \tilde\rho_\Phi = 0\,. \tag{33}
$$

In fact, we are much more interested in the energy density parameters and the total parameter-of-state, which are defined by

$$
\Omega_i(\tilde t) = \frac{\rho_i}{\rho_\Phi + \rho_R}\,, \qquad w_T(\tilde t) = \frac{P_\Phi + P_R}{\rho_\Phi + \rho_R} = \frac{\Omega_R}{3}\,, \tag{34}
$$

Figure 7: The reheating evolutions of Ω_Φ(t̃) and Ω_R(t̃) with t̃ as time scale; the Φ−R equality occurs at t̃_req ≈ 1.054.
where P_Φ = 0 and P_R = ρ_R/3. Fig. 7 numerically shows the reheating evolutions of Ω_Φ and Ω_R with t̃ as time scale. As the Φ decay proceeds, the Φ density parameter continuously decreases from the initial Ω_Φ = 1 to the final Ω_Φ ≈ 0, so Φ is eventually exhausted; meanwhile, Ω_R gradually increases from the initial Ω_R = 0 to the final Ω_R ≈ 1, so the universe is entirely filled by radiation. The w_T(t̃) evolution is also in agreement with this: it gradually increases from the initial w_T = 0 to the final w_T = 1/3, and as a result the initial Φ-dominated universe is transformed into the final R-dominated one. In the reheating period and after it, the universe is obviously in decelerating expansion and the expansion rate continuously declines.
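These Ω_Φ and Ω_R curves can be reproduced by directly integrating Eqs. (32)-(33); a minimal sketch, assuming only the quoted ratio H_inf/Γ_Φ ≈ 2 × 10^5, could read:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                                   # Eq. (33) in dimensionless form
    rho_Phi, rho_R = y
    H = np.sqrt(rho_Phi + rho_R)                 # H/Gamma_Phi from the Friedmann eq.
    return [-(3.0*H + 1.0)*rho_Phi, -4.0*H*rho_R + rho_Phi]

sol = solve_ivp(rhs, [0.0, 10.0], [(2.0e5)**2, 0.0],   # Eq. (32) initial values
                method="LSODA", dense_output=True, rtol=1e-9, atol=1e-20)

t = np.linspace(0.5, 2.0, 20001)
rp, rr = sol.sol(t)
print(f"t_req ~ {t[np.argmin(np.abs(rp - rr))]:.3f}")  # -> ~ 1.05, the Phi-R equality
```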
In Fig. 7, the time of the Φ−R equality, t̃_req ≈ 1.054, is a key time point in the reheating process: the universe energy is Φ-dominated for 0 < t̃ ⩽ t̃_req (namely t_inf < t ⩽ t_req), once t̃ > t̃_req (t > t_req) the universe enters the radiation-dominated era, and at t̃ ≈ 10 the reheating process is essentially over, after which the universe starts an evolution of hot expansion driven by the radiation. In addition, t̃_req ≈ 1.054 means t_req − t_inf ≈ τ_Φ, namely the Φ decay mostly happens around t_req; thus ρ_R(t_req) essentially reaches the highest radiation energy density, and accordingly the radiation temperature T_re(t_req) is the highest temperature in the reheating process. To sum up, t̃_req and T_re are determined by the following relations,
$$
\Omega_\Phi(\tilde t_{req}) = \Omega_R(\tilde t_{req}) = \frac12 \;\Longrightarrow\; \tilde t_{req} = \frac{t_{req} - t_{inf}}{\tau_\Phi} \approx 1.054\,, \tag{35}
$$

$$
H(t_{req}) = \Gamma_\Phi\,\sqrt{2\tilde\rho_R(\tilde t_{req})}\,, \qquad T_{re} = \big[M_p\,\Gamma_\Phi\big]^{\frac12}\Big[\frac{90\,\tilde\rho_R(\tilde t_{req})}{\pi^2 g_*(T_{re})}\Big]^{\frac14} = \big[M_p\,H(t_{req})\big]^{\frac12}\Big[\frac{45}{\pi^2 g_*(T_{re})}\Big]^{\frac14}\,, \tag{36}
$$
where g_*(T_re) = 129 is the effective number of relativistic degrees of freedom, which includes all of the model particles except Φ. The relevant results of the reheating are all listed in Table 2 in Section VI. The numerical calculation gives H(t_req)/Γ_Φ ≈ 0.58 and T_re ≈ 2.43 × 10^11 GeV; this demonstrates that the Φ decay is indeed out of equilibrium in the period t_inf < t < t_req, but thermal equilibrium is roughly established after t > t_req. When t = t_ref, there is T(t_ref) ≈ 8.65 × 10^10 GeV < M_Φ ≈ 8.88 × 10^10 GeV, so the Φ particle can no longer be produced from the hot bath; thus the reheating is naturally over. Finally, we stress that the reheating is closely related to the inflation and particle physics; this is another characteristic of the unified model.

Now we discuss the baryogenesis through the foregoing leptogenesis mechanism. In light of the discussions in Section II, Φ → l^c + ν_R is a slow decay mode of Φ, but it has three remarkable features. i) Its decay rate has a CP asymmetry of about A_CP ∼ 10^-10, which is given by Eq. (7). ii) This decay is always out of equilibrium in the reheating process due to Γ(Φ → l^c + ν_R) < H(t) for t_inf < t ⩽ t_ref. iii) Although the net lepton number is conserved as zero, the CP-asymmetric decay can equally generate the anti-lepton asymmetry and the ν_R one; after the reheating these two asymmetries are respectively isolated in the SM sector and the dark sector, and they cannot erase each other. Eventually, the ν_R asymmetry is forever frozen out in the dark sector, whereas the anti-lepton asymmetry is partly converted into the baryon asymmetry through the SM sphaleron process [24]. In short, although this baryogenesis mechanism does not fully fulfil Sakharov's three conditions [28], it is indeed put into effect in the unified model.
After the reheating is completed, the hot bath has fully been formed and the universe enters the hot expansion era. The hot evolution in the dark sector will be discussed in Section V, while the hot evolution in the SM sector is exactly the well-known hot big bang paradigm. Because the B − L number is always conserved, above the electroweak scale the sphaleron transition can partly convert the above-mentioned anti-lepton asymmetry into the baryon asymmetry [29]. The relevant relations are given in detail as follows,
$$
\begin{aligned}
& \Big[\frac{n_{\bar l} - n_l}{s}\Big]_{T_{re}} = \Big[\frac{n_{\nu_R} - n_{\bar\nu_R}}{s}\Big]_{T_{re}} = \Big[\frac{n_\Phi\, A_{CP}}{s}\Big]_{T_{re}} = \Big[\frac{\rho_\Phi\, A_{CP}}{M_\Phi\, s}\Big]_{T_{re}} = \Big[\frac{\rho_R}{s}\Big]_{T_{re}}\frac{A_{CP}}{M_\Phi} = \frac{3\,T_{re}\,A_{CP}}{4\,M_\Phi}\,, \\
& \eta_B = \Big[\frac{s}{n_\gamma}\Big]_{T_0} Y_B(T_0) = \Big[\frac{s}{n_\gamma}\Big]_{T_0} Y_B(T_{ew}) = \Big[\frac{s}{n_\gamma}\Big]_{T_0} c_s\, Y_{B-L}(T_{ew}) = \Big[\frac{s}{n_\gamma}\Big]_{T_0} c_s\, Y_{B-L}(T_{re}) = c_s \Big[\frac{s}{n_\gamma}\Big]_{T_0}\Big[\frac{n_{\bar l} - n_l}{s}\Big]_{T_{re}}\,,
\end{aligned} \tag{37}
$$
where s is the entropy density and Y_B(T) = [(n_B − n_{B̄})/s]_T, c_s = 28/79 is the SM sphaleron coefficient, and [s/n_γ]_{T_0} ≈ 7.38 includes the ν_R contribution to g_*(T_0) ≈ 4.1 (see Eq. (38)). Eq. (37) clearly shows that η_B is collectively determined by the inflaton mass, the reheating temperature and the CP asymmetry; this again manifests that the unified model closely relates the inflation, reheating and particle physics together. In Table 2, I take A_CP ≈ 1.14 × 10^-10 as an input parameter from the particle physics; then the unified model naturally gives η_B ≈ 6.13 × 10^-10.
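Numerically, Eq. (37) is a one-line estimate (a sketch with the quoted inputs):

```python
c_s, s_over_ngamma = 28.0/79.0, 7.38            # sphaleron coefficient, [s/n_gamma]_T0
T_re, M_Phi, A_CP = 2.43e11, 8.88e10, 1.14e-10  # GeV, GeV, CP asymmetry

eta_B = c_s * s_over_ngamma * 3.0 * T_re * A_CP / (4.0 * M_Phi)
print(f"eta_B = {eta_B:.2e}")                   # -> ~ 6.1e-10
```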
V. Current Dark Matter and Dark Energy
The universe expansion red-shifts the radiation energy, therefore the universe temperature, which is denoted by the photon temperature, is continuously declining. The hot evolution in the dark sector is, however, different from that in the SM sector. In light of the discussion in Section II, the dark particles F, φ^0_1, φ^0_2, B̃_µ, Ã_µ are all depleted by their decays in the early universe; only the stable S and ν_R survive in the dark sector. In fact, the dark sector is essentially separated from the SM sector below the temperature T_F ≈ v_2 ≈ 500 GeV; later the annihilation S + S^* → ν_R + ν_R^c is frozen out below T_f ≈ M_S/26.7 ≈ 22.5 GeV, and S and ν_R are eventually decoupled from each other. As a result, the relativistic ν_R becomes the dark background radiation, while the non-relativistic S becomes the cold dark matter. Because the entropy in the dark sector and that in the SM sector are conserved separately, we can derive the effective temperature of ν_R as follows,
$$
\frac{a^3(T_F)}{a^3(T)} = \frac{g^D_*(T_{\nu_R})}{g^D_*(T_F)}\Big(\frac{T_{\nu_R}}{T_F}\Big)^3 = \frac{g^{SM}_*(T)}{g^{SM}_*(T_F)}\Big(\frac{T}{T_F}\Big)^3\,, \qquad \Longrightarrow\; \Big(\frac{T_{\nu_R}}{T}\Big)^3 = \frac{2 + \frac78\times 6\,\big(\frac{T_{\nu_L}}{T}\big)^3}{106.75} \approx 0.0366\,, \tag{38}
$$
where T < m_e ≈ 0.5 MeV is required (namely after the electron-positron annihilation), and (T_{ν_L}/T)³ = 4/11 is the well-known effective temperature ratio of ν_L in the SM sector. Eq. (38) gives T_{ν_R} = 0.0366^{1/3} T_0 ≈ 0.9 K at the present day; in addition, we can calculate the effective number of neutrinos at the recombination as N_eff = 3[1 + (T_{ν_R}/T_{ν_L})^4] ≈ 3.14, which is safely within the current limit from the CMB analysis.
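Both numbers follow from Eq. (38) by direct arithmetic (an illustrative check):

```python
ratio3 = (2.0 + (7.0/8.0)*6.0*(4.0/11.0)) / 106.75    # (T_nuR/T)^3 of Eq. (38)
T_nuR = ratio3**(1.0/3.0) * 2.7255                    # K, at the present day
N_eff = 3.0 * (1.0 + (ratio3/(4.0/11.0))**(4.0/3.0))  # nu_R adds (T_nuR/T_nuL)^4

print(f"(T_nuR/T)^3 = {ratio3:.4f}, T_nuR = {T_nuR:.2f} K, N_eff = {N_eff:.2f}")
# -> 0.0366, ~ 0.90 K, ~ 3.14
```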
The dark particle S has, however, a special nature. The S effective temperature scales as T_S ∝ E_S ∝ p_S² ∝ a^-2, where E_S and p_S are respectively the S kinetic energy and momentum, while T_{ν_R} ∝ a^-1; therefore T_S drops much faster than T_{ν_R}, and at present T_S is essentially approaching absolute zero. In other words, the S kinetic energy is actually exhausted so that S becomes supercool matter; therefore the supercool S_DM gradually condenses into the dark energy S_DE through its special self-interacting potential. This phenomenon is exactly a cosmological effect of the Bose-Einstein condensation, which generally occurs at extremely low temperature.
When the universe temperature cools to T_eq ≈ 1 eV, namely when the universe age is about 5 × 10^4 years, the total matter density exceeds the radiation one; thus the universe is transformed from radiation-dominated to matter-dominated. Note that T_eq is below T_BBN ≈ 0.1 MeV but above T_Recom ≈ 0.3 eV. We can regard the time of the matter-radiation equality as the starting point at which the CDM begins to condense into the dark energy. As the universe expands and cools, the S_DM effective temperature drops rapidly and approaches absolute zero; accordingly the S kinetic energy is depleted very fast, so more and more S_DM becomes supercool matter, from which more and more S_DE is formed. To some extent, the S condensation is essentially a reverse process of the Φ inflation discussed in Section III, namely it is actually a process of S_DE slowly growing from S_DM, or S_DM gradually converting into S_DE. It should be stressed that the S condensation is very different from baryons and electrons condensing into the usual material (namely the visible world): the former is a pure boson system, whereas the latter is a pure fermion system. In a word, the characteristic evolution of S eventually leads to the dark matter and dark energy in the current universe.
From now on, we directly use the abbreviations "DM" and "DE" to denote S_DM and S_DE, respectively; their energy density and pressure are given as follows,
$$
\begin{aligned}
& P_{DM} = 0\,, \qquad P_{DE} = -\rho_{DE}\,, \qquad \rho_{DM} + \rho_{DE} = \rho_S\,, \qquad P_{DM} + P_{DE} = P_S = w_S\,\rho_S\,, \\
& \Longrightarrow\; \rho_{DM} = (1+w_S)\,\rho_S\,, \qquad \rho_{DE} = -w_S\,\rho_S\,,
\end{aligned} \tag{39}
$$
where w_S is a parameter-of-state varying with the time, which lies in the range −1 ⩽ w_S ⩽ 0. The physical implications of Eq. (39) are the same as those of the Φ field in Eq. (10); see the explanations below Eq. (10). After T < T_eq ≈ 1 eV, the universe energy includes the four components of the photon ρ_γ, the neutrino ρ_ν (which contains the ν_L and ν_R energy), the baryon ρ_B, and the dark ρ_S (which contains ρ_DM and ρ_DE); their dynamical evolutions are determined by the following system of equations,
$$
\rho_\gamma + \rho_\nu + \rho_B + \rho_S = 3M_p^2 H^2\,, \tag{40}
$$

$$
\begin{aligned}
& \dot\rho_\gamma + 4H\rho_\gamma = 0\,, \qquad \dot\rho_\nu + 3H\rho_\nu(1+w_\nu) = 0\,, \qquad \dot\rho_B + 3H\rho_B = 0\,, \\
& \dot\rho_S + 3H\rho_S(1+w_S) = 0 \;\Longrightarrow\; \dot\rho_{DE} = -(\dot\rho_{DM} + 3H\rho_{DM})\,,
\end{aligned} \tag{41}
$$

$$
\dot\rho_{DE} = \kappa(T)\,H\,\rho_{DM}\,. \tag{42}
$$
Eq. (40) is the Friedmann equation; it is in charge of the expansion rate and it relates the SM sector and the dark sector together. Eqs. (41) are the continuity equations of the four energy components. The neutrino is a relativistic state in the early phase, but it later turns into a non-relativistic state since it has a sub-eV mass, so w_ν = 1/3 for the relativistic neutrino and w_ν = 0 for the non-relativistic one. The last equality in Eqs. (41) indicates that the S_DE growth is entirely from the S_DM reduction in the comoving volume. Eq. (42) is the growth equation of S_DE; the parameter κ(T) characterizes the growth rate (one can also call it the condensing rate), it depends on the temperature, and κ rapidly increases as T becomes lower and lower. Once the κ(T) function is provided, the above system of equations is closed, and we can solve the evolution of each energy component.
Similarly to the inflation process, we can introduce the e-fold number of the condensation process as follows,
$$
\begin{aligned}
& N(T) = \ln\frac{a(T)}{a(T_{eq})} = \ln\frac{T_{eq}}{T} \;\Longrightarrow\; \dot N(t) = H(t)\,, \\
& a(T_{eq}) \leqslant a(T) \leqslant a(T_0) = a_0\,, \qquad 0 = N(T_{eq}) \leqslant N(T) \leqslant N(T_0) = N_0\,,
\end{aligned} \tag{43}
$$
where T_eq is the starting temperature of the S condensation and T_0 is the present universe temperature. Note that Ṅ(t) is positive in Eq. (43), namely N is increasing with the time; this is different from the negative Ṅ(t) defined in Eq. (16), and one should not confuse them. Now we use N as the time variable of the energy evolution and normalize all kinds of the energy densities to the initial ρ_γ(N(T_eq)) = ρ_γ(0); then we can easily derive the following initial relations,

$$
\begin{aligned}
& \rho_\gamma(0) + \rho_\nu(0) = \rho_B(0) + \rho_S(0)\,, \qquad \rho_{DM}(0) = \rho_S(0)\,, \qquad \rho_{DE}(0) = 0\,, \qquad w_S(0) = 0\,, \\
& \frac{\rho_\nu(0)}{\rho_\gamma(0)} = \frac{21}{8}\Big[\Big(\frac{T_{\nu_L}}{T_{eq}}\Big)^4 + \Big(\frac{T_{\nu_R}}{T_{eq}}\Big)^4\Big]\,, \qquad \frac{\rho_B(0)}{\rho_\gamma(0)} = \frac{n_\gamma(0)}{\rho_\gamma(0)}\,M_B\,\frac{n_B(0)}{n_\gamma(0)} = \frac{36\,M_B\,\eta_B}{\pi^4\,T_{eq}}\,, \\
& \frac{\rho_S(0)}{\rho_\gamma(0)} = \frac{M_S\,n_S(0)}{\rho_\gamma(0)} = \frac{M_S}{T_f\sqrt{g_*(T_f)}}\;\frac{8.5\times10^{-10}\ \mathrm{GeV}^{-2}}{\langle\sigma v_r\rangle_{T_f}}\;\Big[\frac{\mathrm{eV}}{T_{eq}}\Big]\,,
\end{aligned} \tag{44}
$$

where (T_{ν_L}/T_{eq}) and (T_{ν_R}/T_{eq}) denote the neutrino-to-photon temperature ratios evaluated at T_eq. By use of Eq. (44), we can now determine T_eq ≈ 1.21 eV and ρ_S(0)/ρ_B(0) ≈ 8.78. Note that ρ_DM(N_0)/ρ_B(N_0) ≈ 5.38 < ρ_S(0)/ρ_B(0) < ρ_S(N_0)/ρ_B(N_0) ≈ 19.3; this means that a part of the S particles are surely condensed into the dark energy after they become supercool.
In order to solve Eqs. (40)-(42), we need to provide the κ(T) evolution, so I assume that κ(T) is given by the following F(N) function,
$$
\kappa(N(T)) = \frac{dF(N)}{dN} \;\Longleftrightarrow\; \int_0^N \kappa(N')\,dN' = F(N) = b\,e^{a\left(1-\frac{N_0}{N}\right)}\,, \qquad \Longrightarrow\; F(0) = 0\,, \quad F(N_0) = b\,, \quad F(\infty) = b\,e^a\,, \tag{45}
$$
where a ≈ 30.15 and b ≈ 0.49 are the two input parameters of the S condensation process, which are determined by fitting the current density budget of the dark matter and dark energy, see Table 2. With this κ(N), the solutions of the above system of equations are
$$
\begin{aligned}
& \frac{d\ln\rho_\gamma}{dN} = -4 \;\Longrightarrow\; \ln\frac{\rho_\gamma(N)}{\rho_\gamma(0)} = -4N\,, \\
& \frac{d\ln\rho_\nu}{dN} = -3(1+w_\nu) \;\Longrightarrow\; \ln\frac{\rho_\nu(N)}{\rho_\gamma(0)} = \begin{cases} -4N + \ln\dfrac{\rho_\nu(0)}{\rho_\gamma(0)} & (0 \leqslant N \leqslant N_\nu)\,, \\[2mm] -3N - N_\nu + \ln\dfrac{\rho_\nu(0)}{\rho_\gamma(0)} & (N > N_\nu)\,, \end{cases} \\
& \frac{d\ln\rho_B}{dN} = -3 \;\Longrightarrow\; \ln\frac{\rho_B(N)}{\rho_\gamma(0)} = -3N + \ln\frac{\rho_B(0)}{\rho_\gamma(0)}\,, \\
& \frac{d\ln\rho_{DM}}{dN} = -3 - \kappa \;\Longrightarrow\; \ln\frac{\rho_{DM}(N)}{\rho_\gamma(0)} = -3N - F(N) + \ln\frac{\rho_S(0)}{\rho_\gamma(0)}\,, \\
& \frac{d\rho_{DE}}{dN} = \kappa\,\rho_{DM} \;\Longrightarrow\; \ln\frac{\rho_{DE}(N)}{\rho_\gamma(0)} = \ln\!\int_0^N \kappa(N')\,e^{-3N'-F(N')}\,dN' + \ln\frac{\rho_S(0)}{\rho_\gamma(0)}\,,
\end{aligned} \tag{46}
$$
where N_ν is the time point when the neutrino is transformed from a relativistic state to a non-relativistic one, see the following Eq. (47); there is w_ν = 1/3 when 0 ⩽ N ⩽ N_ν and w_ν = 0 when N > N_ν. Since the initial conditions have been given by Eq. (44), from Eq. (46) we can immediately calculate the relevant energy evolution once we input the model parameters.
First of all, we can calculate out the three key time points in the universe evolution,
$$
N_\nu = \ln\frac{T_{eq}}{0.1555\,\Sigma m_\nu} \approx 4.87\,, \qquad N_D = \ln\frac{T_{eq}}{2.753\times10^{-4}\ \mathrm{eV}} \approx 8.39\,, \qquad N_0 = \ln\frac{T_{eq}}{T_0} \approx 8.55\,, \tag{47}
$$
where T_eq ≈ 1.21 eV, whose corresponding universe age is about 4 × 10^4 years. N_ν is the time when the neutrino turns into a non-relativistic state; its corresponding universe temperature is T = 0.1555Σm_ν, see Eq. (47). N_D is the time when the dark energy begins to dominate, and N_0 corresponds to the present day.

Fig. 8 numerically shows the evolutions of the relevant energy densities since the matter-radiation equality, which is set as the starting point of the S condensation. ρ_γ (the yellow curve), ρ_ν (the blue dotted curve) and ρ_B (the brown curve) evidently carry out the normal hot evolutions in the SM sector. ρ_B always keeps the pure matter evolution because the baryon number is conserved. At N_ν ≈ 4.87, the neutrino turns into the non-relativistic state, so its evolution is transformed from radiation to matter; eventually the radiation is only left with the photon. By contrast, ρ_S (the green curve), ρ_DM (the pink curve) and ρ_DE (the black curve) implement the special hot evolutions in the dark sector. As the universe temperature becomes cooler and cooler, more and more DM becomes supercool and condenses into DE; therefore ρ_DM deviates more and more from the pure matter evolution, while ρ_DE slowly grows and rises, and as a result the total ρ_S is gradually transformed from the initial pure DM to the final pure DE. These three curves clearly show this condensing evolution. At N_D ≈ 8.39, the dark energy exceeds the total matter energy and begins to dominate the universe; thus the universe is newly transformed from the decelerating expansion to the accelerating one. The present-day value of each energy density is evaluated at N_0 ≈ 8.55. In the future, once all of the CDM is fully condensed into the dark energy, the dark energy density will eventually become a constant, and the universe will newly become a de Sitter one, which is the same state as the primordial universe of the early inflation. When the evolutions of ρ_DM and ρ_DE are compared with those of ρ_{Φ_DE} and ρ_{Φ_DM} in Fig. 4, we can see that the current S condensation is essentially a reverse process of the primordial Φ inflation; the reason is of course that the two fields S and Φ have the same nature and similar dynamics. In a similar way to finding the V_Φ inflationary potential, we can also find the V_S condensation potential by fitting the ρ_S curve in Fig. 8, but we leave it out in order not to lengthen the paper.

Figure 9: The evolutions of the relevant energy density parameters with N(T) as time scale since T_eq ≈ 1.21 eV. N(T_eq) = 0 is set as the starting point of the dark energy growing, and several key time points are shown. The present energy density budget is evaluated at N_0, correctly fitting the present data.
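All of these curves can be reproduced by directly integrating Eq. (46). The sketch below assumes the quoted inputs (a = 30.15, b = 0.49, T_eq = 1.21 eV, T_0 = 2.35 × 10^-4 eV) and the initial ratios ρ_B(0)/ρ_γ(0) ≈ 0.176 and ρ_ν(0)/ρ_γ(0) ≈ 0.8045, which follow from Eq. (44); it is illustrative rather than the author's actual code:

```python
import numpy as np

a, b = 30.15, 0.49
N0, Nnu = np.log(1.21/2.35e-4), 4.87                # ~ 8.55 and N_nu of Eq. (47)
N = np.linspace(1e-4, N0, 200001)
F = b*np.exp(a*(1.0 - N0/N))                        # Eq. (45)
kappa = F*a*N0/N**2                                 # kappa = dF/dN

rB0, rS0 = 0.176, 0.176*8.78                        # rho_B(0), rho_S(0) in rho_gamma(0) units
g = kappa*np.exp(-3.0*N - F)                        # integrand of the rho_DE solution
I = np.concatenate(([0.0], np.cumsum(0.5*(g[1:]+g[:-1])*np.diff(N))))

rho_g  = np.exp(-4.0*N)
rho_nu = 0.8045*np.where(N <= Nnu, np.exp(-4.0*N), np.exp(-3.0*N - Nnu))
rho_B  = rB0*np.exp(-3.0*N)
rho_DM = rS0*np.exp(-3.0*N - F)
rho_DE = rS0*I
tot = rho_g + rho_nu + rho_B + rho_DM + rho_DE

for name, r in [("DE", rho_DE), ("DM", rho_DM), ("B", rho_B)]:
    print(f"Omega_{name}(N0) = {r[-1]/tot[-1]:.3f}")      # -> ~ 0.68, 0.27, 0.05
print(f"h = {np.sqrt(tot[-1]/rho_g[-1]*2.47e-5):.3f}")    # -> ~ 0.674
```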
Making use of Eq. (46), we can also calculate the density parameter of each energy component, and further obtain the total parameter-of-state and the h value; they are given by the following relations,
$$
\Omega_i(N) = \frac{\rho_i(N)}{\sum_i \rho_i(N)}\,, \qquad w_T(N) = \frac{\sum_i P_i(N)}{\sum_i \rho_i(N)} = \frac{\Omega_R(N)}{3} - \Omega_{DE}(N)\,, \qquad \sum_i \rho_i(N_0) = 3M_p^2 H_0^2 \;\Longrightarrow\; h \approx 0.674\,, \tag{48}
$$
where the present critical energy density is 3M 2 p H 2 0 with H 0 ≈ 2.13 × 10 −42 h GeV, note that h ≈ 0.674 is an output value of the unified model rather than an input parameter. It is worth pointing out that this mechanism of cold dark matter condensing into dark energy can also provide a solution for the "Hubble tension" about the current h data [30], we will specially discuss it by another paper. Fig. 9 clearly shows the evolutions of the relevant density parameters since the matterradiation equality. Both the standard hot evolution in the visible sector and the DM condensing into the DE (or the DE growing from the DM) in the dark sector are explicitly illustrated by the corresponding curves, these results further confirm our previous discussions. When N < 0, the universe is R-dominated. When 0 < N < N D , the universe is M-dominated. When N > N D , the universe is DE-dominated. The present energy density budget is evaluated at N 0 , which is namely N ≈ 8.24, the universe is newly transformed from the decelerating expansion to the accelerating one. The current CDM condensation into the DE is essentially a reverse process of the primordial slow-roll inflation. The future fate of the universe will newly become de Sitter universe which is the same state as the primordial universe, but their energy densities differ by about 106 orders of magnitude.
In the future, the universe will be entirely filled with the dark energy. In fact, from Eq. (46) and Eq. (48) we can analytically derive the following ratio relations among the present-day density parameters,
Ω_γ(N_0) = π² T_0⁴ / (45 M_p² H_0²),   Ω_ν(N_0)/Ω_γ(N_0) = (27 Σm_ν / (π⁴ T_0)) [ (T_νL/T_0)³ + (T_νR/T_0)³ ],   Ω_B(N_0)/Ω_γ(N_0) = 36 M_B η_B / (π⁴ T_0),   Ω_DM(N_0)/Ω_B(N_0) = (ρ_S(0)/ρ_B(0)) e^{−F(N_0)},   (49)
where (T_νL/T_0)³ = 4/11 and (T_νR/T_0)³ = 0.0366, ρ_S(0)/ρ_B(0) ≈ 8.78 is obtained by Eq. (44), and F(N_0) = b ≈ 0.49 is the model parameter. Eq. (49) shows that the present energy density budget is collectively determined by the fundamental quantities T_0, Σm_ν, M_B, η_B, M_S, e_D, a and b (where a replaces h as an input parameter); the detailed results are all listed in Table 2.

Fig. 10 shows the evolutions of the total parameter-of-state and the S parameter-of-state since the matter-radiation equality. When the evolution of w_S is compared with that of w_Φ in Fig. 5, the current CDM condensation into the dark energy is indeed a reverse process of the primordial slow-roll inflation. When w_T < −1/3 (namely N > 8.24), the universe is newly transformed from the decelerating expansion to the accelerating one. When w_T < −1/2 (namely N > N_D), the universe is transformed from M-dominated to DE-dominated. At the present day, ninety-five percent of the total universe energy is dark and the universe is in accelerating expansion driven by the dark energy; all of this has been verified by the observations. The future fate of the universe is to newly become a de Sitter universe filled with pure S_DE, which is exactly the same state as the primordial universe filled with pure Φ_DE, but these two dark energy densities differ by about 106 orders of magnitude.
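As a quick numerical sanity check of the first three relations in Eq. (49), one can plug in the constants of Eq. (50) together with the model outputs η_B ≈ 6.13 × 10^−10, Σm_ν ≈ 0.06 eV and h ≈ 0.674. The script below (a consistency check only, not a re-derivation of the model) reproduces the Ω_γ, Ω_ν and Ω_B entries of Table 2 to within rounding.

```python
import math

# Constants from Eq. (50) and model outputs, all in GeV units.
Mp      = 2.43e18            # reduced Planck mass
MB      = 0.9383             # baryon mass
T0      = 2.35e-13           # 2.35e-4 eV
h       = 0.674
H0      = 2.13e-42 * h       # present Hubble rate
eta_B   = 6.13e-10           # baryon asymmetry, Eq. (37)
sum_mnu = 0.06e-9            # Sigma m_nu = 0.06 eV

# First three relations of Eq. (49):
Omega_gamma = math.pi**2 * T0**4 / (45 * Mp**2 * H0**2)
Omega_nu = Omega_gamma * 27 * sum_mnu / (math.pi**4 * T0) * (4/11 + 0.0366)
Omega_B  = Omega_gamma * 36 * MB * eta_B / (math.pi**4 * T0)

print(f"Omega_gamma = {Omega_gamma:.2e}")  # ~5.4e-5, cf. Table 2
print(f"Omega_nu    = {Omega_nu:.2e}")     # ~1.5e-3
print(f"Omega_B     = {Omega_B:.4f}")      # ~0.049
```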
VI. Summary for Numerical Results
Now we summarize all of the important numerical results of the unified model, and then compare them with the measured experimental data. The physical constants used are only

M_p = 2.43 × 10^18 GeV, M_B = 0.9383 GeV, T_0 = 2.7255 K = 2.35 × 10^−4 eV, H_0 = 2.13 × 10^−42 h GeV.   (50)

Table 2 lists in detail the relevant input parameters of the unified model and all of the important output quantities. In the Φ inflation there are the two input parameters H_inf and α, but the inflation output results excellently fit all of the inflationary data; in particular, the model predicts that r_0.05 ≈ 1.86 × 10^−7 is too small to be detected currently, and the inflaton mass M_Φ ≈ 8.88 × 10^10 GeV purely arises from the inflationary dynamical evolution. The S condensation is controlled by the two input parameters a and b, whose values are determined by fitting the current densities of the CDM and dark energy. The remaining input parameters are provided by the particle model: λ_0 and A_CP are respectively responsible for the reheating outputs and the baryon asymmetry; e_D and M_S are jointly in charge of the annihilation cross-section and freeze-out temperature of the S particle, by which the S relic density is determined; m_ν_i dominates the energy density of the present cosmic neutrinos. Although there are no observational data of the universe reheating as yet, the reheating output results are very reasonable and believable; the reheating process brings about both the primordial hot bang and the matter-antimatter asymmetry. In the last panel, the current energy density budget is perfectly reproduced; moreover, h ≈ 0.674 and η_B ≈ 6.13 × 10^−10 are finely predicted. In short, all of the numerical results are consistent, reasonable and free of fine-tuning, and they agree very well with the present measured data in Eq. (1).
All of the energy scales in Table 2 are below the GUT scale; the primordial inflation really takes place after the GUT phase transition, so the magnetic monopole problem is naturally eliminated. In the unified model, the super-high dark energy drives the primordial inflation, while the current dark energy grows from the CDM condensation. Although there is a difference of about 106 orders of magnitude between their energy densities, the universe energy is step by step released and reduced over the evolution history of 13.8 billion years; the whole evolution process is analogous to a cascade of hydropower stations, see Fig. 3, so the so-called cosmological constant problem does not arise at all. All of these results fully demonstrate that the model is highly self-consistent, concordant and unified. In conclusion, the unified model is very successful; it is indeed able to account for the universe origin and evolution elegantly and excellently, and therefore we expect it to be tested by future experiments.
VII. Conclusions
I put forward a unified model of particle physics and cosmology based on both a new extension of the SM and the fundamental principles of standard cosmology. This new theory covers both the SM physics (visible sector) and the BSM physics (dark sector); it can successfully account for the full process of the universe origin and evolution in a unified and integrated way, and it also elegantly explains the origins of the neutrino mass and the matter-antimatter asymmetry. The universe starts from the primordial super-high dark energy Φ_DE, which drives the inflation and is slowly converted into the super-heavy dark matter Φ_DM; at the end of the inflation the Φ_DM decay leads to both the reheating and the leptogenesis, and then the universe enters the R-dominated era and gradually cools by means of the hot expansion. After the matter begins to dominate the universe, the supercool particles of the CDM S_DM slowly condense into the current dark energy S_DE; the future universe will newly become a de Sitter one filled with the dark energy S_DE. Φ and S are two unusual scalar fields due to their special self-interacting potentials, but they have the same nature and similar dynamics. The primordial Φ inflation is implemented by Φ_DM slowly growing from Φ_DE, whereas the current S condensation is implemented by S_DE slowly growing from S_DM; the latter is essentially a reverse process of the former.
For each evolution process, I give its complete dynamical system of equations and solve it by some special techniques; in particular, I establish the internal relations between these evolution processes and particle physics. The numerical results of the model are shown in detail by the various figures and Table 2. The evolution figures clearly show how the slow-roll inflation is implemented, how the inflaton mass arises from the inflationary evolution, what the inflationary potential shape and its functional form really are, the details of the reheating process, the mechanism of matter genesis, the CDM formation and condensation, and the current dark energy genesis. From the primordial super-high dark energy to the current super-low dark energy, the universe energy is step by step released and reduced over the evolution history of 13.8 billion years, which is analogous to a cascade of hydropower stations, so there is no cosmological constant problem.
The unified model can excellently fit all kinds of observational data using only a few input parameters. It not only perfectly reproduces the measured inflationary data and the current energy density budget, but also finely predicts some important cosmological quantities such as the tensor-to-scalar ratio r_0.05 ≈ 1.86 × 10^−7, the inflaton mass M_Φ ≈ 8.88 × 10^10 GeV, the reheating temperature T_re ≈ 2.43 × 10^11 GeV, the CDM mass M_S ≈ 600 GeV, the baryon asymmetry η_B ≈ 6.13 × 10^−10, the scaling factor of the expansion rate h ≈ 0.674, etc.; all of these results have important implications for particle physics and cosmology. In short, the model really achieves a unification of particle physics and cosmology; it is very successful and believable, and therefore we expect it to be tested in the near future.
Eq. (1) (the present measured data referenced in the text): r_0.05 < 0.036, n_s ≈ 0.965, dn_s/dlnk ≈ −0.004, ln(10^10 Δ²_R) ≈ 3.04, Ω_DE ≈ 0.685, Ω_CDM ≈ 0.265, η_B ≈ 6.14 × 10^−10, Σ_i m_ν_i ∼ 0.1 eV.
…we can directly obtain η′ and θ (employing Eq. (18)); obviously, η(0) = θ(0) = −e^{αN_*+6} and η′(0) = 0 at the end of the inflation.

Figure 4: The inflationary evolutions of the relevant energy forms of the Φ field with the e-fold number as time scale (ρ_Φ: green solid line; ρ_ΦDE: pink dashed line; ρ_ΦDM: brown dashed line; T_Φ: blue dotted line; V_Φ: black dotted line); N_* ≈ 51.15 corresponds to the time of k_* = 0.05 Mpc^−1 exiting from the horizon.

Starting from Eq. (24), we can easily solve the inflationary evolutions as follows. Firstly, by use of the definition formula of η in Eq. (17) we can solve out ρ_ΦDM(N) by integrating η(N). Secondly, we can further integrate ρ_ΦDM(N) to obtain ρ_Φ(N) by the first formula in Eq. (17). Thirdly, ε(N) is obtained by the second equality in Eq. (15).

Figure 5: The inflationary evolutions of the slow-roll parameters and the parameter-of-state with the e-fold number as time scale; N_* ≈ 51.15 corresponds to the time of k_* = 0.05 Mpc^−1 exiting from the horizon.

Fig. 5 shows the inflationary evolutions of the slow-roll parameters and the parameter-of-state with N as time scale. One can see three remarkable features. i) In most of the inflation duration, ε ≪ 1 and w_Φ ≈ −1, while the other curves are slowly varying.

…, where [|Φ(t_inf)|/M_p]² ≈ 0.384 and [|Φ(t_*)|/M_p]² ≈ 7.04 are the two field values corresponding to N = 0 and N_* ≈ 51.15, respectively.

Figure 6: The inflationary potential evolution with [|Φ(t)|/M_p]² as variable; t_inf and t_* correspond to N = 0 and N_* ≈ 51.15, respectively. The black dotted curve is the numerical solution of the model, while the green solid curve is the analytical solution of Eq. (29); the latter fits the former perfectly.

Figure 7: The reheating evolutions of the Φ and radiation energy density parameters with the dimensionless t̃ as time scale; t̃ ≈ 1.054 is the time of ρ_Φ = ρ_R and t̃ ≈ 10 is the time of the reheating finish.

Now the evolutions of ρ̃_Φ(t̃) and ρ̃_R(t̃) only depend on their initial values in Eq. (32), and we can easily obtain their numerical solutions.

…has been given by Eq. (38); M_B ≈ 0.9383 GeV is the baryon mass; I employ n_B(0)/n_γ(0) = n_B(N_0)/n_γ(N_0) = η_B, and η_B ≈ 6.13 × 10^−10 has been obtained by Eq. (37). A detailed derivation of the ρ_S(0)/ρ_γ(0) equality is given in Appendix II; provided the two inputs e_D ≈ 0.8 and M_S ≈ 600 GeV, we have obtained ⟨σv_r⟩_{T_f} ≈ 1.27 × 10^−9 GeV^−2, M_S/T_f ≈ 26.7 and g_*(T_f) = 91.5 by the foregoing Eq. (8). From the system of equations of Eq. …

Figure 8: The evolutions of the relevant energy densities with N(T) as time scale since T_eq ≈ 1.21 eV (ρ_γ: yellow line; ρ_ν: blue dotted line; ρ_B: brown solid line; ρ_DM: pink dashed line; ρ_DE: black dashed line; ρ_S = ρ_DM + ρ_DE: green solid line). N(T_eq) = 0 is set as the starting point of the dark energy growing, and several key time points are shown. The SM sector carries out the normal hot evolution, while the dark sector implements the S condensation, which is essentially a reverse process of the primordial Φ inflation.

Figure 10: The evolutions of the relevant parameters-of-state with N(T) as time scale since T_eq ≈ 1.21 eV. N(T_eq) = 0 is set as the starting point of the dark energy growing. At w_T = −1/3 (namely N ≈ 8.24) the universe turns from the decelerating expansion to the accelerating one.

The role of Eq. (45) is very similar to that of Eq. (24) in the Φ inflation: the former controls S_DE growing from S_DM, whereas the latter is in charge of Φ_DM growing from Φ_DE. Making use of Eq. (43) and Eq. (45), the relevant energy densities in Eqs. (40)-(42) are analytically solved as follows.

Table 2: A summary of the numerical results of the unified model.

Relevant input parameters
  Φ inflation:    H_inf (GeV) = 4.49 × 10^10,  α = 2.92
  S condensation: a = 30.15,  b = 0.49
  Particle model: λ_0 = 0.1,  A_CP = 1.14 × 10^−10,  e_D = 0.8,  M_S (GeV) = 600,  m_ν_i (eV) = 0.06

Inflation output quantities
  k_* (Mpc^−1) = 0.05,  N_* = 51.15,  r_0.05 = 1.86 × 10^−7,  n_s = 0.965,  dn_s/dlnk = −0.0040,  ln[10^10 Δ²_R] = 3.044,  ρ_Φ(N_*)/ρ_Φ(t_inf) = 5.65,  M_Φ (GeV) = 8.88 × 10^10

Reheating output quantities
                   Ω_Φ            Ω_R    T (GeV)        H (GeV)       Γ_Φ (GeV)
  t̃_req = 1.054    0.5            0.5    2.43 × 10^11   1.3 × 10^5    2.24 × 10^5
  t̃_ref = 10       1.87 × 10^−4   ≈ 1    8.65 × 10^10   1.16 × 10^4

Current output quantities
            Ω_γ            Ω_ν       Ω_B       Ω_DM     Ω_DE    h       η_B
            5.44 × 10^−5   0.00154   0.0492    0.265    0.685   0.674   6.13 × 10^−10
  Ω_i h²    2.47 × 10^−5   0.0007    0.02236   0.1202   0.311
Appendix I

A derivation of Eq. (28) is as follows, where I use a ∝ ρ^{−1/3} for the Φ-dominated phase in t_inf < t < t_req and a ∝ ρ^{−1/4} for the R-dominated phase in t_req < t < t_ref; in addition, I employ ρ_Φ(t_req) = ρ_R(t_req) and g_*(T_req) = g_*(T_ref).

Appendix II

A derivation of the ρ_S(0)/ρ_γ(0) equality in Eq. (44) is as follows, where g_*(T_eq) = 4.1 and g_*(T_f) = 91.5.
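The two scaling laws invoked in Appendix I are the standard inversions of the continuity-equation dilutions (this is textbook cosmology, not specific to the present model); spelled out:

```latex
\rho_\Phi \propto a^{-3}
  \;\Longrightarrow\; a \propto \rho_\Phi^{-1/3}
  \quad (t_{\mathrm{inf}} < t < t_{\mathrm{req}}),
\qquad
\rho_R \propto a^{-4}
  \;\Longrightarrow\; a \propto \rho_R^{-1/4}
  \quad (t_{\mathrm{req}} < t < t_{\mathrm{ref}}).
```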
[1] R. L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022); refer to the relevant reviews in PDG.
[2] E. W. Kolb and M. S. Turner, The Early Universe, Front. Phys. 69, 1 (1990); D. S. Gorbunov and V. A. Rubakov, Introduction to The Theory of The Early Universe: Hot Big Bang Theory (World Scientific Publishing Co. Pte. Ltd, 2018).
[3] M. Bartelmann, Rev. Mod. Phys. 82, 331 (2010).
[4] A. Arbey and F. Mahmoudi, Progress in Particle and Nuclear Physics 119, 103865 (2021); J. S. Bullock and M. Boylan-Kolchin, Annu. Rev. Astron. Astrophys. 55:343-387 (2017).
[5] M. Dine, Rev. Mod. Phys. 76, 1 (2004).
[6] G. Senjanovic, Int. J. Mod. Phys. A 36, 2130003 (2021).
[7] A. H. Guth, Phys. Rev. D 23, 347 (1981); A. D. Linde, Phys. Lett. B108, 389 (1982); J. Ellis and D. Wands, the review of inflation in PDG, R. L. Workman et al., Prog. Theor. Exp. Phys. 2022, 083C01 (2022); D. S. Gorbunov and V. A. Rubakov, Introduction to The Theory of The Early Universe: Cosmological Perturbations and Inflationary Theory (World Scientific Publishing Co. Pte. Ltd, 2011).
[8] P. Di Bari, Progress in Particle and Nuclear Physics 122, 103913 (2022); J. M. Cline, arXiv:1807.08749.
[9] P. Nath, Int. J. Mod. Phys. A 33, 1830017 (2018); D. H. Lyth and A. Riotto, Phys. Reps. 314, 1-146 (1999); A. Mazumdar and G. White, Rep. Prog. Phys. 82, 076901 (2019).
[10] R. N. Mohapatra and N. Okada, Phys. Rev. D 105, 035024 (2022); W. M. Yang, J. High Energy Phys. 01, 148 (2020); W. M. Yang, Nucl. Phys. B944, 114643 (2019); W. M. Yang, J. High Energy Phys. 03, 144 (2018).
[11] K. Dimopoulos, Introduction to Cosmic Inflation and Dark Energy (CRC Press, 2021); K. Lozanov, Reheating After Inflation (SpringerBriefs in Physics, 2020).
[12] B. A. Bassett, S. Tsujikawa, and D. Wands, Rev. Mod. Phys. 78, 537 (2006); R. Allahverdi, R. Brandenberger, F.-Y. Cyr-Racine, and A. Mazumdar, Annu. Rev. Nucl. Part. Sci. 60:27-51 (2010).
[13] T. P. Sotiriou and V. Faraoni, Rev. Mod. Phys. 82, 451 (2010).
[14] L. Roszkowski, E. M. Sessolo, and S. Trojanowski, Rep. Prog. Phys. 81, 066201 (2018); P. H. Frampton, Int. J. Mod. Phys. A 33, 1830030 (2018); H. Baer, K.-Y. Choi, J. E. Kim, and L. Roszkowski, Phys. Reps. 555, 1-60 (2015).
[15] E. Oks, New Astron. Rev. 93, 101632 (2021).
[16] D. Bodeker and W. Buchmuller, Rev. Mod. Phys. 93, 035004 (2021); N. D. Barrie, C. Han, and H. Murayama, Phys. Rev. Lett. 128, 141801 (2022).
[17] A. de Gouvea, Annu. Rev. Nucl. Part. Sci. 66:197-217 (2016).
[18] D. Chowdhury, J. Martin, C. Ringeval, and V. Vennin, Phys. Rev. D 100, 083537 (2019); J. Martin, C. Ringeval, R. Trotta, and V. Vennin, J. Cosmology and Astroparticle Phys. 1403, 039 (2014).
[19] D. Scott and G. F. Smoot, the review of Cosmic Microwave Background in PDG, R. L. Workman et al., Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
[20] Y. Akrami et al. [Planck Collaboration], Planck 2018 results. X. Constraints on inflation, arXiv:1807.06211.
[21] A. Joyce, B. Jain, J. Khoury, and M. Trodden, Phys. Reps. 568, 1-98 (2015).
[22] M. Gell-Mann, P. Ramond, and R. Slansky, in Supergravity, eds. P. Van Niewenhuizen and D. Z. Freeman (North-Holland, Amsterdam, 1979); T. Yanagida, in Proc. of the Workshop on Unified Theory and Baryon Number in the Universe, eds. O. Sawada and A. Sugamoto (Tsukuba, Japan, 1979); R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980).
[23] G. C. Branco, R. G. Felipe, and F. R. Joaquim, Rev. Mod. Phys. 84, 515 (2012).
[24] V. A. Kuzmin, V. A. Rubakov, and M. A. Shaposhnikov, Phys. Lett. B 155, 36 (1985).
[25] S. Dodelson and F. Schmidt, Modern Cosmology (Academic Press, London, 2020); O. Piattella, Lecture Notes in Cosmology (Springer, Berlin, 2018).
[26] V. Mukhanov, Principles of Physical Cosmology (Cambridge University Press, 2005); K. A. Malik and D. Wands, Phys. Reps. 475, 1-51 (2009).
[27] A. A. Starobinsky, Phys. Lett. B 91, 99 (1980).
[28] A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967) [JETP Lett. 5, 24 (1967)]; Sov. Phys. Usp. 34, 392 (1991); Usp. Fiz. Nauk 161, 61 (1991).
[29] W. Buchmuller, R. D. Peccei, and T. Yanagida, Annu. Rev. Nucl. Part. Sci. 55:311-355 (2005).
[30] L. Knox and M. Millea, Phys. Rev. D 101, 043533 (2020); N. Schöneberg, G. Franco Abellán, A. Pérez Sánchez, S. J. Witte, V. Poulin, and J. Lesgourgues, Phys. Reps. 984, 1-55 (2022).
| [] |
[
"Generic Decoding of Restricted Errors",
"Generic Decoding of Restricted Errors"
] | [
"Marco Baldi \nMarche Polytechnic University\nItaly\n",
"Sebastian Bitzer \nTechnical University of Munich\nGermany\n",
"Alessio Pavoni \nMarche Polytechnic University\nItaly\n",
"Paolo Santini \nMarche Polytechnic University\nItaly\n",
"Antonia Wachter-Zeh \nTechnical University of Munich\nGermany\n",
"Violetta Weger \nTechnical University of Munich\nGermany\n"
] | [
"Marche Polytechnic University\nItaly",
"Technical University of Munich\nGermany",
"Marche Polytechnic University\nItaly",
"Marche Polytechnic University\nItaly",
"Technical University of Munich\nGermany",
"Technical University of Munich\nGermany"
] | [] | Several recently proposed code-based cryptosystems base their security on a slightly generalized version of the classical (syndrome) decoding problem. Namely, in the so-called restricted (syndrome) decoding problem, the error values stem from a restricted set. In this paper, we propose new generic decoders, that are inspired by subset sum solvers and tailored to the new setting. The introduced algorithms take the restricted structure of the error set into account in order to utilize the representation technique efficiently. This leads to a considerable decrease in the security levels of recently published code-based cryptosystems. | 10.48550/arxiv.2303.08882 | [
"https://export.arxiv.org/pdf/2303.08882v2.pdf"
] | 257,557,498 | 2303.08882 | 5fe91eae5f654d854cb6c7efc62448c652b4e078 |
Generic Decoding of Restricted Errors
8 Jun 2023
Marco Baldi
Marche Polytechnic University
Italy
Sebastian Bitzer
Technical University of Munich
Germany
Alessio Pavoni
Marche Polytechnic University
Italy
Paolo Santini
Marche Polytechnic University
Italy
Antonia Wachter-Zeh
Technical University of Munich
Germany
Violetta Weger
Technical University of Munich
Germany
Generic Decoding of Restricted Errors
8 Jun 2023
Several recently proposed code-based cryptosystems base their security on a slightly generalized version of the classical (syndrome) decoding problem. Namely, in the so-called restricted (syndrome) decoding problem, the error values stem from a restricted set. In this paper, we propose new generic decoders, that are inspired by subset sum solvers and tailored to the new setting. The introduced algorithms take the restricted structure of the error set into account in order to utilize the representation technique efficiently. This leads to a considerable decrease in the security levels of recently published code-based cryptosystems.
I. INTRODUCTION
With the recent advances in quantum technology, the search for quantum-secure cryptographic systems has become one of the most pressing challenges. In the NIST selection process of post-quantum cryptosystems launched in 2016, which has now reached the 4th round, some of the most promising candidates are based on algebraic coding theory, more precisely on the hardness of decoding a random linear code. A new trend in code-based cryptography is to base the security on the hardness of a slightly different problem, e.g. [7]. One such new decoding problem is a generalized decoding problem, where one restricts the error set. Some recent systems base their security on this decoding problem [1], [18].
In this paper, we provide new solvers for such settings, which are inspired by subset sum solvers [3]. The idea is to use the additive structure that can be found in the error set and to add only few elements to the search set, such that one achieves more representations of the elements in the error set and does not increase the search sizes too much. The connection between subset sum solvers and generic decoders has often been exploited (see, e.g., [4], [11]). We show the impact of these new attacks on systems to which they apply and how significantly their security levels decrease. As the restricted decoding problem is still very promising for cryptographic applications, the presented attacks should be considered in future proposals. For this purpose, we also shortly explain how the specific solvers can be generalized to any setting.
The paper is structured as follows: in Section II, we recall some basic notions of algebraic coding theory and introduce the required notation. In Section III, we introduce the restricted decoding problem and discuss some of its properties. We then present the new attacks in Section IV, comparing the approaches in different cases and computing new security levels for cryptosystems that are using the restricted decoding problem. Finally, Section V concludes the paper.
II. PRELIMINARIES
Throughout this paper we denote by q a prime power and by F_q a finite field of order q. We denote the identity matrix of size n by Id_n. For a set J, we denote J_0 = J ∪ {0}. For x ∈ [0,1], we denote by h(x) the binary entropy function. For n ≥ k_1 + k_2 we denote by \binom{n}{k_1, k_2} = \binom{n}{k_1+k_2} · \binom{k_1+k_2}{k_1} the trinomial coefficient. Recall that

lim_{n→∞} (1/n) log_2 \binom{f_1(n)}{f_2(n)} = F_1 h(F_2/F_1),   lim_{n→∞} (1/n) log_2 \binom{f_1(n)}{f_2(n), f_3(n)} = F_1 g(F_2/F_1, F_3/F_1),

with F_i = lim_{n→∞} f_i(n)/n, h(x) the binary entropy function and

g(x, y) = −x log_2(x) − y log_2(y) − (1 − x − y) log_2(1 − x − y).
A linear code C is a k-dimensional subspace of F_q^n. A linear code can be compactly represented either through a generator matrix G ∈ F_q^{k×n} or through a parity-check matrix H ∈ F_q^{(n−k)×n}, which have the code as image or as kernel, respectively. We say that a linear code has rate R = k/n. We define C_J as C_J = {c_J | c ∈ C}, where c_J is the projection of c onto the coordinates indexed by J. For any x ∈ F_q^n, we call s = xH^⊤ ∈ F_q^{n−k} a syndrome. A set I ⊆ {1, ..., n} of size k is called an information set for C if |C| = |C_I|. We say that a generator matrix, respectively a parity-check matrix, is in systematic form (with respect to the information set I) if the columns of G indexed by I form Id_k, respectively if the columns of H not indexed by I form Id_{n−k}. We endow the vector space F_q^n with the Hamming metric: the Hamming weight of a vector x ∈ F_q^n is given by the number of its non-zero entries, i.e., wt_H(x) = |{i ∈ {1, ..., n} | x_i ≠ 0}|, which then induces a distance via d_H(x, y) = wt_H(x − y) for x, y ∈ F_q^n.
III. RESTRICTED SYNDROME DECODING PROBLEM

Throughout this paper, we consider the computational version of the following decisional problem.

Problem 1 (Restricted Syndrome Decoding Problem (RSDP)). Let g ∈ F_q be an element of order z and define the error set E = {g^i | i ∈ {1, ..., z}}. Let H ∈ F_q^{(n−k)×n}, w ∈ N and s ∈ F_q^{n−k}. Does there exist a vector e ∈ E_0^n = (E ∪ {0})^n with wt_H(e) = w, such that s = eH^⊤?

If we choose g to be a primitive element of F_q we indeed recover the classical syndrome decoding problem (SDP) in the Hamming metric. Thus, the problem directly inherits its NP-completeness. Even more is true: for a fixed g, the RSDP is still NP-complete, which follows directly from [20, Proposition 2].
Corollary 1. The RSDP with a fixed E is NP-complete.
In addition to generalizing the classical SDP, the RSDP also covers the case E = {±1} considered in [1], by setting g = −1, respectively z = 2.
Also [18] considers a particular case of the RSDP. In their paper, they consider F_q, where q = 6m + 1 is a prime, that is, the field allows for an element of order 6. They then consider the errors to live in E = {±1, ±g, ±(g−1)}. This corresponds to the RSDP with z = 6. In fact, since g is a root of x^6 − 1 = (x^3 − 1)(x + 1)(x^2 − x + 1) and g is not a root of (x^3 − 1) or (x + 1), we must have g^2 = g − 1.
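This relation is easy to verify numerically; the snippet below (illustrative only, with q = 157 as used later in the paper's plots) finds an element of order 6 by brute force and checks g² = g − 1 in F_q.

```python
q = 157  # prime with q = 6m + 1, so F_q contains elements of order 6

def mult_order(x, q):
    # multiplicative order of x in F_q^* (q prime, x != 0)
    y, k = x % q, 1
    while y != 1:
        y, k = (y * x) % q, k + 1
    return k

g = next(x for x in range(2, q) if mult_order(x, q) == 6)
assert (g * g) % q == (g - 1) % q   # g^2 = g - 1, as derived above
print("g =", g, "so E = {+-1, +-g, +-(g-1)} mod", q)
```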
From the uniqueness condition \binom{n}{w} z^w q^{k−n} ≤ 1, we can easily observe that the restriction on the entries of the error vector e allows us to increase the Hamming weight of e while still having a single solution to the RSDP with high probability. This is the main motivation for introducing the RSDP in cryptographic applications.
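The effect of the restriction on the uniquely decodable weight regime can be explored directly; the helper below computes, in logarithmic form, the largest w with \binom{n}{w} z^w q^{k−n} ≤ 1 (the parameters are example values only).

```python
from math import comb, log2

def max_unique_weight(n, k, q, z):
    """Largest w such that C(n, w) * z^w * q^(k - n) <= 1."""
    best = 0
    for w in range(n + 1):
        if log2(comb(n, w)) + w * log2(z) + (k - n) * log2(q) <= 0:
            best = w
    return best

# Restricted errors (z = 6) versus (almost) full Hamming spheres:
print(max_unique_weight(312, 156, 157, 6))    # every weight is unique here
print(max_unique_weight(312, 156, 157, 156))  # much smaller for z = q - 1
```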
IV. SOLVERS FOR THE RSDP
There exist several algorithms that could potentially be applied to the RSDP, for example statistical decoders, such as [6], or attacks from lattice-based cryptography, such as [12]. In this paper, we focus on Information Set Decoding (ISD) algorithms combined with ideas from subset sum solvers, such as [3].
The history of ISD dates back to the algorithm of Prange [15] in 1962 and has resulted in several improvements (for an overview in the binary case see [14], [19]). Variants of Prange's ISD with smaller computational complexity are Stern/Dumer [8], [17], MMT [13] and BJMM [4], [10]. We only briefly recall their ideas in the following, before we adapt them to our setting. Given H ∈ F_q^{(n−k)×n}, w ∈ N and s ∈ F_q^{n−k}, one starts by choosing a set J ⊆ {1, ..., n} of size k + ℓ, for some positive integer ℓ ≤ n − k, which contains an information set. One then brings the parity-check matrix into systematic form by performing Partial Gaussian Elimination (PGE); the resulting matrix is denoted by H′, and the same operations are performed on the syndrome. For simplicity, assume that the set J corresponds to the last k + ℓ positions, so that e = (e_1, e_2) with e_2 supported on J. Thus, we get the syndrome equations

eH′^⊤ = (e_1, e_2) ( Id_{n−k−ℓ}  0 ; A_1  A_2 ) = (s_1, s_2),

where A_1 ∈ F_q^{(k+ℓ)×(n−k−ℓ)}, A_2 ∈ F_q^{(k+ℓ)×ℓ}, e_1 ∈ E_0^{n−k−ℓ}, e_2 ∈ E_0^{k+ℓ}, s_1 ∈ F_q^{n−k−ℓ} and s_2 ∈ F_q^ℓ. One first solves for e_2, assuming a weight v, i.e., e_2 A_2 = s_2, and then checks whether e_1 = s_1 − e_2 A_1 has the remaining weight w − v and entries in E_0. To solve the smaller instance given by (A_2, v, s_2) one can use different approaches.
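The outer loop just described can be summarized in code. The sketch below is a simplified, unoptimized illustration; solve_small_instance is a stand-in for any of the enumeration strategies discussed next, and all function and variable names are ours, not from the paper.

```python
import random

def pge_isd(H, s, q, ell, error_set, w, solve_small_instance, max_iters=100):
    """Sketch of the PGE-based outer loop for the restricted SDP.

    solve_small_instance(A2, s2, v) should yield vectors e2 over E_0 of
    length k + ell and weight v with e2 * A2 = s2 (mod q); it stands in
    for Stern/Dumer, BJMM, etc.  error_set holds residues mod q.
    """
    r, n = len(H), len(H[0])                      # r = n - k
    E0 = set(error_set) | {0}
    for _ in range(max_iters):
        perm = random.sample(range(n), n)         # choose the set J
        M = [[H[i][perm[j]] for j in range(n)] + [s[i]] for i in range(r)]
        ok = True                                 # PGE: Id on first r - ell cols
        for col in range(r - ell):
            piv = next((i for i in range(col, r) if M[i][col] % q), None)
            if piv is None:
                ok = False
                break
            M[col], M[piv] = M[piv], M[col]
            inv = pow(M[col][col], -1, q)
            M[col] = [(x * inv) % q for x in M[col]]
            for i in range(r):
                if i != col and M[i][col]:
                    f = M[i][col]
                    M[i] = [(a - f * b) % q for a, b in zip(M[i], M[col])]
        if not ok:
            continue
        A1 = [row[r - ell:n] for row in M[:r - ell]]
        A2 = [row[r - ell:n] for row in M[r - ell:]]
        s1 = [row[n] for row in M[:r - ell]]
        s2 = [row[n] for row in M[r - ell:]]
        for v in range(w + 1):
            for e2 in solve_small_instance(A2, s2, v):
                e1 = [(t - sum(a * x for a, x in zip(row, e2))) % q
                      for t, row in zip(s1, A1)]
                if all(x in E0 for x in e1) and sum(x != 0 for x in e1) == w - v:
                    full = [0] * n                # undo the permutation
                    for j, val in zip(perm, list(e1) + list(e2)):
                        full[j] = val
                    return full
    return None
```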
Before we start describing these approaches, let us fix some notation. To compare algorithms for fixed Q = log_2(q) and Z = log_2(z), we are interested in the asymptotic cost; that is, we write the cost as 2^{F(R,W)n} for some function F(R,W), where R = lim_{n→∞} k(n)/n and W = lim_{n→∞} w(n)/n. Since we have seen that it is enough to solve the smaller instance of weight v and length k + ℓ, we also write L = lim_{n→∞} ℓ(n)/n ≤ 1 − R and V = lim_{n→∞} v(n)/n ≤ min{W, R + L}, which are internal parameters and can hence be optimized. Then, the complexity of a decoder using the PGE setup is given by the following theorem, see, e.g., [14].

Theorem 2. A generic decoder using PGE has time complexity 2^{F(R,W)n}, with F(R,W) = N(R,W,L,V) + C(R,L,V), where N(R,W,L,V) denotes the number of iterations, i.e.,

N(R,W,L,V) = h(W) − (R+L) h(V/(R+L)) − (1−R−L) h((W−V)/(1−R−L)),

and C(R,L,V) denotes the time complexity of solving the smaller instance, i.e., the time to enumerate all solutions of the smaller instance under the assumed weight distribution.

We compare our algorithms to different approaches, namely Stern/Dumer [8], [17] and BJMM [4], [10], which encompasses MMT as well [13], adapted to the new setting. We proceed by explaining how this adaptation works.
A classical approach to enumerating all solutions of the small instance is to perform a collision search. This technique was applied to hard knapsacks by Schroeppel and Shamir [16] and adopted for the syndrome decoding problem by Stern and Dumer [8], [17]. In this approach one uses a set partition of e_2 into e_2 = (x_1, x_2), where both x_i ∈ E_0^{(k+ℓ)/2} have weight v/2. One constructs lists containing such x_i and finds candidates for e_2 by a collision search. This merging process is called a concatenation merge. For more details on the classical algorithm we refer to [8], [17], and for the adaptation to [2].
Lemma 3. The enumeration cost for the smaller instance of the restricted Stern/Dumer algorithm is given by

C(R,L,V) = max{ Σ/2, Σ − Q·L },  where  Σ = (R+L) h(V/(R+L)) + Z·V

is the asymptotic size of the search space, i.e., of the set of all vectors that are well-formed, meaning they satisfy the constraint under which the solutions of the small instance are enumerated.
As this is a well-known algorithm with the only change that the lists are taken in E 0 , rather than in F q , we omit the proof.
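Lemma 3 together with Theorem 2 makes the restricted Stern/Dumer exponent easy to evaluate numerically. The coarse grid search below is an illustrative sketch only; the authors' published script at github.com/sebastianbitzer/rest-dec is the reference implementation.

```python
from math import log2

def h(x):
    return 0.0 if x <= 0 or x >= 1 else -x * log2(x) - (1 - x) * log2(1 - x)

def xh(t, x):
    # t * h(x / t), with the empty case read as 0
    return t * h(x / t) if t > 1e-12 else 0.0

def stern_dumer_exponent(R, W, Q, Z, steps=200):
    """min over L, V of N(R,W,L,V) + C(R,L,V), cf. Theorem 2 / Lemma 3."""
    best = float("inf")
    for i in range(steps + 1):
        L = (1 - R) * i / steps
        for j in range(steps + 1):
            V = min(W, R + L) * j / steps
            if W - V > 1 - R - L:      # remaining weight must fit outside J
                continue
            Sigma = xh(R + L, V) + Z * V
            C = max(Sigma / 2, Sigma - Q * L)
            N = h(W) - xh(R + L, V) - xh(1 - R - L, W - V)
            best = min(best, N + C)
    return best

print(stern_dumer_exponent(R=0.5, W=0.34, Q=log2(157), Z=log2(6)))
```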
More generally, one can perform a concatenation merge of two lists L_1, L_2 of asymptotic size Λ = lim_{n→∞} (1/n) log_2 |L_i| by checking for collisions on u syndrome coordinates; such a merge costs asymptotically 2^{n max{Λ, 2Λ−U}} with U = Q · lim_{n→∞} u(n)/n. An alternative to this collision search is the representation technique, which has proven efficient in solving hard knapsacks [3], [11] and the classical SDP [4], [13].
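In code, a concatenation merge is a hash-bucket collision search on the partial syndromes; a minimal mod-q sketch (the list entries and the column split of the matrix are toy inputs, not tied to specific parameters):

```python
from collections import defaultdict

def concatenation_merge(L1, L2, A_left, A_right, target, q):
    """Set-partition merge: return all e2 = (x1 | x2) with
    syn(A_left, x1) + syn(A_right, x2) = target (mod q); A_left and
    A_right are given row-wise, one row per syndrome coordinate."""
    def syn(A, x):
        return tuple(sum(a * v for a, v in zip(row, x)) % q for row in A)
    buckets = defaultdict(list)
    for x1 in L1:
        buckets[syn(A_left, x1)].append(x1)
    merged = []
    for x2 in L2:
        need = tuple((t - c) % q for t, c in zip(target, syn(A_right, x2)))
        for x1 in buckets.get(need, []):
            merged.append(tuple(x1) + tuple(x2))
    return merged
```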
Instead of a set partition, a sum partition is used: e_2 = x_1 + x_2, where in the classical case the x_i ∈ F_q^{k+ℓ} have weight v/2 + ε. This is chosen such that ε positions of their supports overlap and cancel out. Let us first introduce the number of ways we can write e_2 = x_1 + x_2, i.e., the number of representations. For this purpose, one considers a fixed e_2 of weight v and computes the number of x_1 of weight v′ such that e_2 − x_1 is also of weight v′. Let us denote this number of representations by r and set u = log_q(r).

Let us now recall how one performs a representation merge: given two lists L_1, L_2 containing vectors x_i of weight v′, we add x = x_1 + x_2 to the resulting list L whenever x attains a target weight v and x A_2 = t on the first u positions, for either t = s_2, the target syndrome, or t = 0, the zero vector. Since for any x ∈ L there are r representations (x_1, x_2) which all lead to the same x, checking on u positions guarantees that one representation of each possible x survives the merge with high probability. In general, a representation merge of two lists L_1, L_2 of asymptotic size Λ on u positions costs asymptotically 2^{n max{Λ, 2Λ−U}}.

After the representation merge, one performs a filtering step, which removes vectors that are not well-formed, e.g., that do not satisfy a given weight constraint. Further steps can then utilize this smaller list.
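For comparison with the concatenation merge above, a representation merge operates on full-length vectors and matches only on u coordinates, with the filtering step applied afterwards; again a minimal mod-q sketch with a caller-supplied well-formedness test:

```python
from collections import defaultdict

def representation_merge(L1, L2, A, t, u, q, is_well_formed):
    """Sum-partition merge: return well-formed x = x1 + x2 whose partial
    syndrome agrees with t on the first u coordinates (mod q); A is given
    row-wise, one row per syndrome coordinate."""
    def syn_u(x):
        return tuple(sum(a * v for a, v in zip(row, x)) % q for row in A[:u])
    buckets = defaultdict(list)
    for x1 in L1:
        buckets[syn_u(x1)].append(x1)
    merged = []
    for x2 in L2:
        need = tuple((ti - c) % q for ti, c in zip(t[:u], syn_u(x2)))
        for x1 in buckets.get(need, []):
            x = tuple((a + b) % q for a, b in zip(x1, x2))
            if is_well_formed(x):          # filtering step
                merged.append(x)
    return merged
```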
In the following, we denote by BJMM(a) an algorithm that starts with a concatenation merge followed by a representation merges, since the optimal number of levels a might change depending on the parameters.
For a BJMM algorithm with a levels, we denote by Σ_i the size of the search space and by V_i = lim_{n→∞} v_i(n)/n the weight of the vectors on level i, with V_0 = V; by E_i = lim_{n→∞} ε_i(n)/n we denote the number of overlaps on level i ∈ {0, ..., a}, starting from a. With U_i = Q · lim_{n→∞} u_i(n)/n we denote the number of positions on which we merge.
Theorem 4. The enumeration cost of the BJMM(2) algorithm is given by

max{ Σ_2/2, Σ_2 − U_1, 2Σ_2 − U_1 − U_0, 2Σ_1 − U_0 − QL },   (1)

where we can optimize the V_i, E_i under the constraints V_i = V_{i−1}/2 + E_i, and

Σ_i = (R+L) h(V_i/(R+L)) + Z V_i,
U_i = V_i + (R+L−V_i) h(E_{i+1}/(R+L−V_i)) + Z E_{i+1}.
This algorithm can be used with any number of levels a, and the cost of a restricted BJMM(a) algorithm follows straightforwardly. Furthermore, for the subsequent modifications, we always refer to the cost (1), which can be computed using only the sizes of the search spaces and the number of representations.
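For concreteness, the cost (1) of the plain BJMM(2) adaptation can be evaluated as follows; this is a direct transcription of Theorem 4 (the surrounding parameter search is omitted for brevity).

```python
from math import log2

def h(x):
    return 0.0 if x <= 0 or x >= 1 else -x * log2(x) - (1 - x) * log2(1 - x)

def xh(t, x):
    return t * h(x / t) if t > 1e-12 else 0.0

def bjmm2_cost(R, L, V, E1, E2, Q, Z):
    """Evaluate (1) under the constraints V_i = V_{i-1}/2 + E_i."""
    V0, V1 = V, V / 2 + E1
    V2 = V1 / 2 + E2
    Sigma = lambda Vi: xh(R + L, Vi) + Z * Vi
    U = lambda Vi, En: Vi + xh(R + L - Vi, En) + Z * En
    U0, U1 = U(V0, E1), U(V1, E2)
    return max(Sigma(V2) / 2,
               Sigma(V2) - U1,
               2 * Sigma(V2) - U1 - U0,
               2 * Sigma(V1) - U0 - Q * L)
```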
In the following, we present some new algorithms derived from BJMM to solve the RSDP. For small choices of z, we can take advantage of the structure of E. Following the idea of [3], we add a few elements, denoted by E^+, from {α + β | α, β ∈ E} to the restricted set E in order to obtain more representations. Since we added new elements to the search space, in the final representation merge these elements of E^+ need to add up to elements of E_0. We call these algorithms BJMM(a)^+, denoting also the number of levels a.
[Fig. 1 schematic: on level i, a vector e^{(i)} with v_i entries in E and m_i entries in E^+ is split as e^{(i)} = e_1^{(i+1)} + e_2^{(i+1)}, with splitting parameters v_i/2, b_{i+1}, ε_{i+1} and c_{i+1}.]
A. Case z = 2
We generalize the classical BJMM-like approach by allowing 1 + 1 = 2 (and −1 + (−1) = −2) in intermediate lists. We call this the BJMM(2)^+ algorithm. Thus, in this case we have E = {±1} and E^+ = {±2}. This changes the number of representations. In order to construct the intermediate lists using a representation merge, we have the usual v_i entries in {±1} and we also require m_i to denote the number of ±2's on level i. Then, the number of well-formed vectors on level i is given by \binom{k+ℓ}{v_i, m_i} 2^{v_i+m_i}. The number of representations of e^{(i)} = e_1^{(i+1)} + e_2^{(i+1)} is counted as per Figure 1; for this it is enough to count the number of possible e_1^{(i+1)}. There are \binom{v_i}{v_i/2} ways of splitting the support of the elements in E, without choosing the entries. Out of the chosen v_i/2 positions we choose b_{i+1} positions that overlap with ±2's in e_2^{(i+1)}, and also among the v_i/2 non-chosen positions we choose b_{i+1} positions to be ±2; for this we have \binom{v_i/2}{b_{i+1}}^2 possibilities. Out of the m_i many ±2's on level i, c_{i+1} are constructed as ±(1 + 1); the remaining m_i − c_{i+1} many ±2's are obtained by support splitting, which results in \binom{m_i}{(m_i−c_{i+1})/2, c_{i+1}} possibilities. Finally, one can choose ε_{i+1} of the k + ℓ − m_i − v_i zero positions. While this enlarges the number of well-formed vectors on the intermediate levels, the number of representations is also increased. Let M_i = lim_{n→∞} m_i(n)/n. Then, the following corollary holds.

Corollary 5. The exponent of the enumeration cost of the BJMM(2)^+ algorithm is calculated according to (1). Using V_i = V_{i−1}/2 + E_i + C_i and M_i = (M_{i−1} − C_i)/2 + B_i, this results in

Σ_i = (R+L) g(V_i/(R+L), M_i/(R+L)) + Z(V_i + M_i),
U_i = V_i (1 + h(2B_{i+1}/V_i)) + M_i g(C_{i+1}/M_i, (M_i − C_{i+1})/(2M_i)) + (R+L−V_i−M_i) h(E_{i+1}/(R+L−V_i−M_i)) + Z E_{i+1}.

Figure 2 shows the curve of the complexity coefficient F(R,W) for R = 0.5, q = 157 and z = 2. Algorithms are usually compared by going through all rates and fixing the weight as large as possible under the uniqueness condition, as this results in the hardest instances. However, for the new RSDP, this is not true in general. Thus, we chose to fix R = 0.5 and go through all weights W ∈ [0,1], as they all allow for unique decoding. It can be observed that the adapted BJMM algorithm improves significantly over restricted Stern/Dumer for medium error weights. While for the classical SDP two representation levels give the best performance [4], here three representation layers were found to be optimal. The generalization given in Corollary 5 provides a further improvement for increasing error weights. It was observed that the number of elements from E^+ is optimized to approximately 0 in the base lists. Hence, one can start with restricted base lists and not lose a noticeable amount of performance.
In [1], the case z = 2 is considered with the particular choice W = 1. As can be seen from Figure 2, in this weight regime the approach of Corollary 5 does not offer any improvement over Stern/Dumer. For such instances, it is advantageous to shift the error vector e to ẽ = (e + (1, ..., 1))/2 ∈ {0,1}^n, which is done by computing s̃ = (s + (1, ..., 1)H^⊤)/2, see e.g. [5]. The resulting error weight is approximately w̃ = n/2. In order to solve the transformed instance, we follow the BCJ approach [3], i.e., we increase the number of representations by allowing −1's in intermediate lists, which have to be added to the base lists. On level i, for e^{(i)} ∈ {0, ±1}^{k+ℓ} with v_i 1's and m_i −1's we write e^{(i)} = e_1^{(i+1)} + e_2^{(i+1)} as in Figure 1, with b_{i+1} = c_{i+1} = 0, and outside of the support we choose ε_{i+1} 1's that cancel with −1's and ε_{i+1} −1's that cancel with 1's.
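The shift itself is a one-liner on the syndrome side; a minimal sketch over F_q (with 2^{-1} taken in the field):

```python
def shift_syndrome(H, s, q):
    """For z = 2 and w = n: with e~ = (e + (1,...,1)) / 2 in {0,1}^n,
    the matching syndrome is s~ = (s + (1,...,1) H^T) / 2  (mod q)."""
    inv2 = pow(2, -1, q)
    row_sums = [sum(row) % q for row in H]        # (1,...,1) H^T
    return [((si + ri) * inv2) % q for si, ri in zip(s, row_sums)]
```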
Corollary 6. The exponent of the shifted BCJ(2) algorithm is calculated according to (1) with V_i = V_{i−1}/2 + E_i and M_i = M_{i−1}/2 + E_i,

Σ_i = (R+L) g(V_i/(R+L), M_i/(R+L)),
U_i = M_i + V_i + (R+L−V_i−M_i) h(2E_{i+1}/(R+L−V_i−M_i)) + 2E_{i+1}.

B. Case z = 4
In this case, we have E = {±1, ±g} and define E^+ = {±(g+1), ±(g−1)}. Following the approach for z = 2, we obtain the same number of possibilities for choosing the supports. There are, however, more possibilities for picking the values in the chosen positions: there are two possibilities for obtaining any e ∈ E as the sum of α ∈ E and β ∈ E^+, and two possibilities for obtaining e ∈ E^+ as the sum of two elements of E. This increases the number of representations on level i by an overall factor of 2^{2(b_{i+1}+c_{i+1})} compared to the number computed for z = 2. Hence, we obtain the same V_i, M_i, Σ_i as in Corollary 5, and the new number of representations is given by

U_i = V_i (1 + h(2B_{i+1}/V_i)) + M_i g(C_{i+1}/M_i, (M_i − C_{i+1})/(2M_i)) + (R+L−V_i−M_i) h(E_{i+1}/(R+L−V_i−M_i)) + Z E_{i+1} + 2(B_{i+1} + C_{i+1}).

Figure 3 shows the curve of the complexity coefficient F(R,W) for R = 0.5, q = 157 and z = 4. Again, BJMM improves over Stern for medium error weights, and the generalization using E^+ gives a further speedup for increased weights.
C. Case z = 6
In this case, we have E = {±1, ±g, ±(g−1)}. Note that E already possesses additive structure: any element e ∈ E can be obtained as e = e_1 + e_2 with e_1, e_2 ∈ E, e_1 ≠ e_2. This allows setting E^+ = E. Thus, using again Figure 1, the 2b_{i+1} overlap positions are now chosen among the v_i positions of e^{(i)} in E; the remaining factors are calculated as before, setting m_i = c_i = 0.

Corollary 7. The exponent of the enumeration cost of the BJMM(2)^+ algorithm is calculated according to (1) using V_i = V_{i−1}/2 + E_i + B_i,

Σ_i = (R+L) h(V_i/(R+L)) + Z V_i,
U_i = V_i g(2B_{i+1}/V_i, (V_i/2 − B_{i+1})/V_i) + 2B_{i+1} + (R+L−V_i) h(E_{i+1}/(R+L−V_i)) + Z E_{i+1}.

Figure 4 shows the curve of the complexity coefficient F(R,W) for R = 0.5, q = 157 and z = 6. Using the additive structure of E as proposed in Corollary 7 enables a remarkable speedup over Stern/Dumer and the basic BJMM adaptation. Experiments, for which we allowed elements from {±(g+1), ±(2g−1), ±(g−2)} in intermediate lists, did not yield further performance improvements.
D. Security Level Update for Instances from Literature
In the following, we apply the presented algorithms to the parameters proposed in [1], [2], [9], [18]. We did not perform a rigorous finite regime analysis, since already approximating the security level as 2 F (R,W )·n shows a considerable reduction, compared to the original analysis. The obtained results are summarized in Table I. Python code for reproducing the work factors and the parameters of the decoders is publicly available at github.com/sebastianbitzer/rest-dec.
E. Arbitrary z
While the presented attacks focused on error sets of size z ∈ {2, 4, 6}, the proposed solvers can also be generalized to larger values of z. Such a generalization can utilize any additive structure of E 0 . The concrete structure of E 0 depends on the factorization of x z − 1 in F q . As the factorization cannot be given in general, any proposed z should be checked independently. As we have seen, it can be beneficial to allow elements from E + to increase the number of representations. Finally, the possibility of transforming the problem by shifting the error vector has to be taken into account.
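As a starting point for such a check, the brute-force helper below lists E = {x : x^z = 1} for a candidate (q, z) and counts which sums of two restricted values fall back into E_0; this is purely exploratory code, and the parameter choices are examples.

```python
def additive_structure(q, z):
    """Return E = {x in F_q^* : x^z = 1} and all pairs (a, b) in E x E
    with a + b again in E_0 = E u {0}, i.e. the structure that feeds
    the representation technique (brute force, q prime)."""
    E = [x for x in range(1, q) if pow(x, z, q) == 1]
    assert len(E) == z, "needs z | q - 1"
    E0 = set(E) | {0}
    pairs = [(a, b) for a in E for b in E if (a + b) % q in E0]
    return E, pairs

E, pairs = additive_structure(157, 6)
print(sorted(E), len(pairs))
```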
V. CONCLUSION
In this paper, we studied the complexity of the restricted syndrome decoding problem, which has recently gained attention in code-based cryptography. To this end, we adapted the representation technique, which is utilized in the fastest known solvers for syndrome decoding problems, to the new setting.
In particular, small choices of the restriction cardinality z were considered, for which we provided novel tailored solvers which are inspired by [3]. This leads to a drastic decrease of the respective security levels. Nevertheless, we believe that the restricted syndrome decoding problem is a promising underlying problem for cryptographic applications. In contrast to previous proposals, we would like to advocate the use of larger values of z.
Fig. 1. Counting the number of representations for level i.

Fig. 2. Comparison of the complexity coefficients F(R,W) for restricted Stern/Dumer, the general adaptation of BJMM and the generalization given in Corollary 5, using q = 157, z = 2 and R = 0.5.

Fig. 3. Comparison of the complexity coefficients F(R,W) for restricted Stern/Dumer, the general adaptation of BJMM and the proposed generalization, using q = 157, z = 4 and R = 0.5.

Fig. 4. Comparison of the complexity coefficients F(R,W) for restricted Stern/Dumer, the general adaptation of BJMM and the generalization given in Corollary 7, using q = 157, z = 6 and R = 0.5.
TABLE I: Estimates of the work factors of the proposed algorithms compared to claimed security levels for instances from literature.

       z   q      n    R     W     claim (bit)   this work        (bit)
[2]    2   16381  400  0.75  0.16  128           BJMM(2)+         69
[2]    2   16381  500  0.75  0.13  128           BJMM(2)+         76
[2]    2   16381  400  0.80  0.14  128           BJMM(2)+         69
[2]    2   32749  500  0.75  0.13  128           BJMM(2)+         76
[2]    2   32749  600  0.66  0.14  128           BJMM(2)+         82
[1]    2   29     167  0.79  1.00  87            shifted BCJ(3)   48
[1]    2   31     256  0.80  1.00  128           shifted BCJ(3)   74
[18]   4   109    270  0.50  0.34  125           BJMM(2)+         75
[18]   4   157    312  0.50  0.34  144           BJMM(2)+         85
[18]   4   197    384  0.50  0.34  177           BJMM(2)+         103
[9]    4   137    272  0.20  0.60  88            BJMM(2)+         45
[9]    4   157    312  0.20  0.60  101           BJMM(2)+         51
[9]    4   173    344  0.20  0.60  111           BJMM(2)+         56
[9]    4   193    384  0.20  0.60  124           BJMM(2)+         62
[18]   6   139    276  0.02  0.60  89            BJMM(3)+         50
[18]   6   157    312  0.20  0.60  101           BJMM(3)+         55
[18]   6   193    384  0.20  0.60  124           BJMM(3)+         67
[1] M. Baldi, M. Battaglioni, F. Chiaraluce, A.-L. Horlemann-Trautmann, E. Persichetti, P. Santini, and V. Weger. A new path to code-based signatures via identification schemes with restricted errors. arXiv preprint arXiv:2008.06403, 2020.
[2] M. Baldi, F. Chiaraluce, and P. Santini. Code-based signatures without trapdoors through restricted vectors. Cryptology ePrint Archive, 2021.
[3] A. Becker, J.-S. Coron, and A. Joux. Improved generic algorithms for hard knapsacks. In Advances in Cryptology - EUROCRYPT 2011, pages 364-385. Springer, 2011.
[4] A. Becker, A. Joux, A. May, and A. Meurer. Decoding random binary linear codes in 2^{n/20}: How 1+1 = 0 improves information set decoding. In Advances in Cryptology - EUROCRYPT 2012, pages 520-536. Springer, 2012.
[5] R. Bricout, A. Chailloux, T. Debris-Alazard, and M. Lequesne. Ternary syndrome decoding with large weight. In Selected Areas in Cryptography - SAC 2019, pages 437-466. Springer, 2020.
[6] K. Carrier, T. Debris-Alazard, C. Meyer-Hilfiger, and J.-P. Tillich. Statistical decoding 2.0: Reducing decoding to LPN. arXiv preprint arXiv:2208.02201, 2022.
[7] T. Debris-Alazard, N. Sendrier, and J.-P. Tillich. Wave: A new family of trapdoor one-way preimage sampleable functions based on codes. In Advances in Cryptology - ASIACRYPT 2019, pages 21-51. Springer, 2019.
[8] I. I. Dumer. Two decoding algorithms for linear codes. Problemy Peredachi Informatsii, 25(1):24-32, 1989.
[9] J. Freudenberger and J.-P. Thiers. A new class of q-ary codes for the McEliece cryptosystem. Cryptography, 5(1):11, 2021.
[10] C. T. Gueye, J. B. Klamti, and S. Hirose. Generalization of BJMM-ISD using May-Ozerov nearest neighbor algorithm over an arbitrary finite field F_q. In International Conference on Codes, Cryptology, and Information Security, pages 96-109. Springer, 2017.
[11] N. Howgrave-Graham and A. Joux. New generic algorithms for hard knapsacks. In Advances in Cryptology - EUROCRYPT 2010, pages 235-256. Springer, 2010.
[12] A. K. Lenstra, H. W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261(4):515-534, 1982.
[13] A. May, A. Meurer, and E. Thomae. Decoding random linear codes in O(2^{0.054n}). In Advances in Cryptology - ASIACRYPT 2011, pages 107-124. Springer, 2011.
[14] A. Meurer. A coding-theoretic approach to cryptanalysis. PhD thesis, Ruhr-Universität Bochum, 2013.
[15] E. Prange. The use of information sets in decoding cyclic codes. IRE Transactions on Information Theory, 8(5):5-9, 1962.
[16] R. Schroeppel and A. Shamir. A T = O(2^{n/2}), S = O(2^{n/4}) algorithm for certain NP-complete problems. SIAM Journal on Computing, 10(3):456-464, 1981.
[17] J. Stern. A method for finding codewords of small weight. In International Colloquium on Coding Theory and Applications, pages 106-113. Springer, 1988.
[18] J.-P. Thiers and J. Freudenberger. Generalized concatenated codes over Gaussian and Eisenstein integers for code-based cryptography. Cryptography, 5(4):33, 2021.
[19] V. Weger, N. Gassner, and J. Rosenthal. A survey on code-based cryptography. arXiv preprint arXiv:2201.07119, 2022.
[20] V. Weger, K. Khathuria, A.-L. Horlemann, M. Battaglioni, P. Santini, and E. Persichetti. On the hardness of the Lee syndrome decoding problem. Advances in Mathematics of Communications, 2022.
| [] |
[
"Partial Group Representations on Semialgebras",
"Partial Group Representations on Semialgebras"
] | [
"R P Meenakshi ",
"Sharma "
] | [] | [
"Indian Journal of Advanced Mathematics (IJAM)"
] | Let A bean additively cancellative semialgebra over an additively cancellative semifield K as defined in [9]. For a given partial action α of a group G on an algebra, the associativity of partial skew group ring together with the existence and uniqueness of enveloping (global) action were studied by M. Dokuchaev and R. Exel [2] which were extended for semialgebras with some restriction by Sharmaet. al. using the ring of differences. In a similar way, we extend the results of [2,3]for semialgebras regarding partial representations. | 10.54105/ijam.a1147.043123 | [
"https://export.arxiv.org/pdf/2306.04749v1.pdf"
] | 257,625,921 | 2306.04749 | e93ecad7f9d1c19737c9d5a709d4e32788eb6927 |
Partial Group Representations on Semialgebras
April 2023
R P Meenakshi
Sharma
Partial Group Representations on Semialgebras
Indian Journal of Advanced Mathematics (IJAM)
April 2023. DOI: 10.54105/ijam.A1147.043123. Keywords: semialgebras, partial group actions, partial representations. AMS Subject Classification: 22E46, 53C35, 57S20.
Let A be an additively cancellative semialgebra over an additively cancellative semifield K as defined in [9]. For a given partial action α of a group G on an algebra, the associativity of the partial skew group ring, together with the existence and uniqueness of an enveloping (global) action, were studied by M. Dokuchaev and R. Exel [2] and were extended to semialgebras, with some restriction, by Sharma et al. using the ring of differences. In a similar way, we extend the results of [2, 3] to semialgebras regarding partial representations.
I. INTRODUCTION
This paper is in continuation of Sharma et al. [11], wherein the authors gave some applications of their theorems to partial (G, X)-sets and to distinguishing the labellings of partial actions. For a given strong partial action of a finite group on an additively cancellative yoked semialgebra, a Morita equivalence between the skew group semirings arising from the partial action and from its global action has been constructed in [10]. Now, exploring a different analysis of the generalized form of partial actions, Sharma et al. [9] defined the tensor product of semimodules over semirings and showed that, with respect to the defined tensor product, the category (K-Mod, ⊗_K, K) becomes a monoidal category, where K is a commutative additively cancellative semiring with identity. The monoids of this monoidal category are K-semialgebras. Defining semialgebras in their paper, the authors studied the global actions of partial actions on semialgebras and the conditions for the associativity of partial skew group semirings. In this paper, we study partial representations of partial actions on semialgebras. We recall from [2] that a partial action of a group G on a semialgebra A is a pair α = ({D_g}_{g∈G}, {α_g}_{g∈G}), where for each g ∈ G, D_g is a subset of A and α_g : D_{g^{-1}} → D_g is a bijective map, satisfying the following three properties for each g, h ∈ G:

(i) D_1 = A and α_1 = id_A, the identity map on A;
(ii) α_g(D_{g^{-1}} ∩ D_h) = D_g ∩ D_{gh};
(iii) α_g(α_h(x)) = α_{gh}(x) for x ∈ D_{h^{-1}} ∩ D_{h^{-1}g^{-1}}.
Following [1], for convenience we write ∃ g·x to mean that g·x is defined; more precisely, ∃ g·x means x ∈ D_{g^{-1}} and g·x ∈ D_g. Thus from [1], we have an equivalent definition of a partial action of a group on a set as follows:

Definition 1.2. A partial action of a group G on a set X is a partial function from G × X to X together with the following conditions:

(PA1) ∃ e·x for all x ∈ X and e·x = x, where e is the identity of G;
(PA2) ∃ g·x implies that ∃ g^{-1}·(g·x) and g^{-1}·(g·x) = x, for g ∈ G and x ∈ X;
(PA3) ∃ g·(h·x) implies that ∃ gh·x and g·(h·x) = (gh)·x.
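The axioms (PA1)-(PA3) are easy to machine-check on small examples. The sketch below verifies them for a hypothetical partial action obtained by restricting the global action of G = Z/2 (written additively) swapping 0 and 1 on {0, 1, 2} to the subset Y = {0, 2}; the example and all names are illustrative, not from the paper.

```python
def check_partial_action(G, X, act, e, op):
    """Verify (PA1)-(PA3) for a partial action given as a dict
    act[(g, x)] -> g.x (an absent key means undefined); op is the group law."""
    defined = lambda g, x: (g, x) in act
    # (PA1): e.x is defined and equals x for every x in X
    pa1 = all(defined(e, x) and act[(e, x)] == x for x in X)
    # (PA2): if g.x is defined, then g^{-1}.(g.x) is defined and equals x
    inv = {g: next(k for k in G if op(k, g) == e) for g in G}
    pa2 = all(defined(inv[g], act[(g, x)]) and act[(inv[g], act[(g, x)])] == x
              for (g, x) in act)
    # (PA3): if g.(h.x) is defined, then (gh).x is defined with the same value
    pa3 = all(not (defined(h, x) and defined(g, act[(h, x)]))
              or (defined(op(g, h), x)
                  and act[(g, act[(h, x)])] == act[(op(g, h), x)])
              for g in G for h in G for x in X)
    return pa1, pa2, pa3

# Hypothetical example: restrict the Z/2-action swapping 0 and 1 on {0,1,2}
# to the subset Y = {0, 2}; only points staying inside Y remain defined.
G, e = [0, 1], 0
op = lambda a, b: (a + b) % 2
sigma = {0: 1, 1: 0, 2: 2}
Y = {0, 2}
glob = lambda g, x: x if g == 0 else sigma[x]
act = {(g, x): glob(g, x) for g in G for x in Y if glob(g, x) in Y}
print(check_partial_action(G, Y, act, e, op))  # (True, True, True)
```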
In case X is a semialgebra, the partial maps are semialgebra homomorphisms. Throughout this paper, G is a finite group acting partially on an additively cancellative K-semialgebra A, where K is an additively cancellative semifield. The identities of both K and A are denoted by 1, and the identity of G by e.
If G is a finite group and K is a field, then the representation theory of G on K-algebras is identical to the representation theory of the associative group algebra KG. The associative partial group algebra of G, denoted by K_par(G), plays a similar role for the partial representations of G. In particular, for a finite group G, there is a one-one correspondence between the partial representations of G and the representations of K_par(G), which is isomorphic to the groupoid semialgebra KΓ(G). These results are proved in Section 3.

In Section 4, we show that K_par(G) is a direct sum of matrix semialgebras over the semirings KH, where H is a subgroup of G.
II. PRELIMINARIES
First we recall the definition of a variety from Mac Lane [7], [8].
The set E of identities for algebraic systems of type τ is a set of ordered pairs ⟨λ, µ⟩ of derived operators, where λ and µ have the same arity n. If X is any set, an action of Ω on X is a function which assigns to each operator ω of arity n an n-ary operation ω_X : Xⁿ → X.
An action of Ω on X satisfies the identity ⟨λ, µ⟩ if λ_X = µ_X : Xⁿ → X.
An ⟨Ω, E⟩-algebra is a set X together with an action of Ω on X which satisfies all the identities of E.
(i) For any n-ary operator, the function µ : Xⁿ → X defined by µ(x_1, x_2, …, x_n) = (x_1)(x_2)…(x_n) for x_i ∈ X is a morphism from Xⁿ to X.
(ii) For any nullary operator ω, the function c : X → X defined by c(x) = 0_X, x ∈ X, where 0_X is the element in X distinguished by ω, is a morphism of algebras.
A variety all of whose algebras are abelian is called an abelian variety. Example 2.1. Let M̃ be the category of all commutative monoids. Then M̃ is a variety. Here Ω contains two operators, the product and the assignment of the identity element e, of arities 2 and 0 respectively, and E contains (i) the axiom for the identity (xe = x = ex); (ii) the associative and commutative laws.
Let M, N ∈ M̃ and let F(M, N) denote the free monoid generated by M × N, with ρ the congruence on F(M, N) generated by the pairs ⟨(m_1 m_2, n), (m_1, n)(m_2, n)⟩ and ⟨(m, n_1 n_2), (m, n_1)(m, n_2)⟩.
Take M ⊗ N as F(M, N)/ρ. Let M_1, N_1 ∈ M̃, f ∈ M̃(M, M_1) and g ∈ M̃(N, N_1); the assignments (M, N) ↦ M ⊗ N and (f, g) ↦ f ⊗ g determine the bifunctor _⊗_ : M̃ × M̃ → M̃. Since M̃ is an abelian variety, the bifunctor _⊗_ : M̃ × M̃ → M̃ is an internal tensor product by Katsov [6]. Now we recall some results and definitions from [9].
Let R−M and M−R denote the categories of left and right R-semimodules, respectively, over a semiring R. The following three results can be derived easily in the category M̃ of commutative monoids, in place of commutative inverse monoids and without assuming an additively regular semiring as in Katsov [6]. The assignments (M, N) ↦ M̃(M, N) and (f, g) ↦ M̃(f, g) then determine a bifunctor from (R−M) × M̃ into R−M.
Definition 2.3. (Tensor product of semimodules)
Let M ∈ M−R and N ∈ R−M; then both M and N are commutative monoids, so they lie in M̃ and therefore have a tensor product M ⊗ N (considered as commutative monoids). The tensor product M ⊗_R N is defined as the factor monoid (M ⊗ N)/σ, where σ is the congruence on M ⊗ N generated by the pairs ⟨mr ⊗ n, m ⊗ rn⟩ for all m ∈ M, n ∈ N and r ∈ R.
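As a sanity check of the defining congruence (our addition, not a statement from the paper), the pairs ⟨mr ⊗ n, m ⊗ rn⟩ immediately give the expected unit isomorphism:

```latex
% For any right R-semimodule M, factoring by \sigma collapses M \otimes_R R onto M:
\[
  M \otimes_R R \;\xrightarrow{\ \cong\ }\; M, \qquad m \otimes r \longmapsto mr,
\]
% with inverse m \mapsto m \otimes 1; this is the unit of the monoidal
% structure on semimodules recalled in the introduction.
```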
Given M, N ∈ M̃, let Bih(M, N) denote the category having bihomomorphisms f : M × N → P as objects; morphisms between bihomomorphisms f : M × N → P and f_1 : M × N → P_1 are homomorphisms h : P → P_1 of M̃ such that h ∘ f = f_1.
Lemma 2.1. If M ∈ M−R and N ∈ R−M, then for any balanced product (P, f) of M and N there exists a unique morphism of monoids g : M ⊗_R N → P such that f = g ∘ τ, where τ is the composition of the morphisms M × N → M ⊗ N → M ⊗_R N; here M × N → M ⊗ N is the initial object in the category Bih(M, N) and M ⊗ N → M ⊗_R N is the canonical epimorphism.
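For later use in Lemma 4.1 at the end of the paper, here is a typical balanced product, written out as a hedged illustration of the lemma:

```latex
% The map f(E_{ij}, h) = h\,E_{ij} from M_m(K) \times KH to M_m(KH) is
% K-balanced, f(E_{ij}k, h) = f(E_{ij}, kh), so Lemma 2.1 yields a unique
% morphism of monoids
\[
  g : M_m(K) \otimes_K KH \longrightarrow M_m(KH), \qquad
  g(E_{ij} \otimes h) = h\,E_{ij}.
\]
```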
The usual tensor product [5] is not valid for semimodules over semirings, and the category R−M is not a monoidal category with respect to the tensor product defined in [4]. However, the tensor product defined in [10] gives R−M the structure of a monoidal category.
III. PARTIAL GROUPOID SEMIALGEBRAS
Let G be a finite group with identity e and K an additively cancellative semifield. The partial groupoid semialgebra KΓ(G) is defined similarly to the partial groupoid algebra. In this section we consider the groupoid Γ(G) constructed for G in [3], which gives K_par(G) ≅ KΓ(G), and there is a one-one correspondence between the partial representations of G and the representations of KΓ(G). First, we define partial representations (Definition 3.1). Remark 3.1. The map g ↦ [g] is a partial representation of G on K_par(G). If A is an additively cancellative K-semialgebra and π : G → A is a partial representation of G on A, then π extends uniquely by linearity to a representation ϕ : K_par(G) → A (ϕ([g]) = π(g)). Conversely, if ϕ : K_par(G) → A is a unital homomorphism of K-semialgebras, then π(t) = ϕ([t]) is a partial representation of G on A.
We show that for the groupoid Γ(G), there is a one-one correspondence between the partial representations of G and the representations of the groupoid semialgebra KΓ(G). Note that if K is additively cancellative, then KΓ(G) is additively cancellative. Hence, both K^Δ and (KΓ(G))^Δ exist, and in this case we have the following.
Lemma 3.1. (KΓ(G))^Δ ≅ K^Δ Γ(G).
Proof. Since KΓ(G) is a K-semialgebra, by [9] (KΓ(G))^Δ is a K^Δ-algebra. Hence, it suffices to show that K^Δ Γ(G) ≅ (KΓ(G))^Δ.
Define a map f : K^Δ Γ(G) → (KΓ(G))^Δ by
f(Σ_γ (a_γ − b_γ)γ) = Σ_γ a_γ γ − Σ_γ b_γ γ. (3.1)
Since {γ} is a K^Δ-basis for K^Δ Γ(G) and a K-basis for KΓ(G), Σ_γ (a_γ − b_γ)γ = Σ_γ (a′_γ − b′_γ)γ implies (a_γ − b_γ) = (a′_γ − b′_γ) for all γ, so that (a_γ + b′_γ) = (a′_γ + b_γ). Therefore Σ_γ a_γ γ + Σ_γ b′_γ γ = Σ_γ a′_γ γ + Σ_γ b_γ γ, and hence Σ_γ a_γ γ − Σ_γ b_γ γ = Σ_γ a′_γ γ − Σ_γ b′_γ γ, proving that the map is well-defined. By reversing the steps, it follows that f is one-one. For onto, let Σ_γ a_γ γ − Σ_γ b_γ γ ∈ (KΓ(G))^Δ. Then Σ_γ (a_γ − b_γ)γ ∈ K^Δ Γ(G) and
f(Σ_γ (a_γ − b_γ)γ) = Σ_γ a_γ γ − Σ_γ b_γ γ. Now we show that f is a homomorphism. Let γ_1, γ_2 ∈ Γ(G), where γ_1 = (hT, g) and γ_2 = (T, h); then
f((a_1 − b_1)γ_1 · (a_2 − b_2)γ_2)
= f((a_1 − b_1)(hT, g) · (a_2 − b_2)(T, h))
= f((a_1 − b_1)(a_2 − b_2)(hT, g)(T, h))
= f((a_1 − b_1)(a_2 − b_2)(T, gh))
= f(((a_1 a_2 + b_1 b_2) − (a_1 b_2 + b_1 a_2))(T, gh))
= (a_1 a_2 + b_1 b_2)(T, gh) − (a_1 b_2 + b_1 a_2)(T, gh)
= (a_1 − b_1)(hT, g) · (a_2 − b_2)(T, h)
= f((a_1 − b_1)γ_1) · f((a_2 − b_2)γ_2).
We note from the above lemma that KΓ(G) embeds in K^Δ Γ(G) as K embeds in K^Δ, and the elements (S, e) ∈ Γ(G) are mutually orthogonal and their sum is the identity in both KΓ(G) and K^Δ Γ(G).
Lemma 3.2. The map λ_p : G → KΓ(G) ⊆ K^Δ Γ(G) defined by λ_p(g) = Σ_{(S,g)∈Γ(G)} (S, g) is a partial representation of G.
Proof. This follows by using the multiplication in Γ(G) and the fact that Σ_{I ∋ e} (I, e) = 1.
Theorem 3.1.
There is a one-to-one correspondence between the partial representations of G and the representations of KΓ(G). More precisely, if A is any additively cancellative unital K-semialgebra, then π : G → A is a partial representation of G if and only if there exists a unital semialgebra homomorphism π̃ : KΓ(G) → A such that π = π̃ ∘ λ_p. Moreover, such a homomorphism π̃ is unique.
Proof.
If π̃ : KΓ(G) → A is a unital homomorphism of K-semialgebras, then we have to show that the composite map π = π̃ ∘ λ_p : G → A is a partial representation of G on A. Since π̃ is a unital semialgebra homomorphism, π(e) = π̃(λ_p(e)) = 1. For g, h ∈ G, using the multiplication rule (hT, g) · (T, h) = (T, gh) of Γ(G), we have
π(g⁻¹)π(g)π(h) = π̃(Σ_{(S,g⁻¹)∈Γ(G)} (S, g⁻¹)) · π̃(Σ_{(T,g)∈Γ(G)} (T, g)) · π̃(Σ_{(U,h)∈Γ(G)} (U, h)) = π̃(Σ_{h⁻¹∈S, h⁻¹g⁻¹∈S} (S, gh)).
On the other hand,
π(g⁻¹)π(gh) = π̃(Σ_{(S,g⁻¹)∈Γ(G)} (S, g⁻¹)) · π̃(Σ_{(T,gh)∈Γ(G)} (T, gh)) = π̃(Σ_{h⁻¹∈S, h⁻¹g⁻¹∈S} (S, gh)),
so that π(g⁻¹)π(g)π(h) = π(g⁻¹)π(gh). Similarly, we can verify that π(g)π(h)π(h⁻¹) = π(gh)π(h⁻¹).
Conversely, suppose π : G → A ⊆ A^Δ is a partial representation of G on A; then by [3, Theorem 2.6] there exists a unital homomorphism π̃^Δ : K^Δ Γ(G) → A^Δ such that π = π̃^Δ ∘ λ_p, where ε_g = π(g)π(g⁻¹), g ∈ G. Hence π̃^Δ(S, g) ∈ A, since π is initially a map from G to A. As K^Δ Γ(G) ≅ (KΓ(G))^Δ, let π̃ be the restriction of π̃^Δ to KΓ(G). Then π̃(Σ (S, g)) = Σ π̃^Δ(S, g) ∈ A, as π̃^Δ(S, g) ∈ A for all (S, g) ∈ Γ(G).
Hence, we have the map π̃ : KΓ(G) → A, a homomorphism which obviously satisfies π̃(λ_p(g)) = π(g). Suppose there exists another homomorphism τ : KΓ(G) → A such that π = τ ∘ λ_p with τ ≠ π̃. Then τ can be extended to a homomorphism τ^Δ with τ^Δ((a − b)γ) = τ(aγ) − τ(bγ), a, b ∈ K, γ ∈ Γ(G). Obviously, τ^Δ also satisfies π = τ^Δ ∘ λ_p. This contradicts the uniqueness of π̃^Δ, as τ ≠ π̃ implies τ^Δ ≠ π̃^Δ.
IV. STRUCTURE OF THE PARTIAL GROUP SEMIALGEBRAS
In this section, we study the structure of the groupoid semialgebra KΓ(G), which is isomorphic to K_par(G); we prove that K_par(G) is the direct sum of matrix semialgebras over the semirings KH, where H is any subgroup of G. We follow the notation of [3].
Given a finite group H and a positive integer m, let Γ_m^H denote the groupoid whose elements are triplets (h, i, j), where h ∈ H and i, j ∈ {1, 2, …, m}. The source and range maps on Γ_m^H are defined by s(h, i, j) = j and r(h, i, j) = i. The product is defined by (g, i, j) · (h, j, k) = (gh, i, k). The units of Γ_m^H are the elements of the form (e, i, i), i = 1, 2, …, m.
A groupoid Γ is represented by an oriented graph whose vertices are the units of the groupoid, and each element γ ∈ Γ gives an oriented edge from the vertex s(γ) to the vertex r(γ).
A connected component of the graph gives a subgroupoid of Γ.
In the case of the groupoid Γ_m^H, the corresponding graph has precisely m vertices, and between any two vertices there are precisely |H| oriented edges (in each direction), labeled by the elements of H.
Proposition 4.1.
Let Γ be a groupoid such that its graph is connected and m = |Γ^(0)| is finite. Let x_1 be any vertex of the graph, and H be the isotropy group of x_1, which is defined by H = {γ ∈ Γ : s(γ) = r(γ) = x_1}. Then
(1) Γ ≅ Γ_m^H; (2) KΓ ≅ M_m(KH).
The proof of (1) is the same as in [3].
Before proving (2), we prove a result which will be used to define an isomorphism between the groupoid semialgebra KΓ and M_m(KH). The rest of the proof follows from equation (14) of [3].
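A minimal instance (our illustration) showing what Proposition 4.1(2) yields:

```latex
% Take H = \mathbb{Z}_2 = \{e,h\} and m = 2. The groupoid \Gamma_2^{H} has
% 2 units (e,1,1), (e,2,2) and 8 elements (x,i,j) with x \in \{e,h\},
% i,j \in \{1,2\}; e.g. (h,1,2)\cdot(h,2,1) = (e,1,1). Then
\[
  K\Gamma_2^{\mathbb{Z}_2} \;\cong\; M_2\!\left(K\mathbb{Z}_2\right),
  \qquad (x,i,j) \longmapsto E_{ij} \otimes x .
\]
```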
V. CONCLUSION
For a finite group G, there is a one-one correspondence between the partial representations of G and the representations of K_par(G), which is isomorphic to the groupoid semialgebra KΓ(G). We have seen that K_par(G) is the direct sum of matrix semialgebras over the semirings KH, where H is a subgroup of G.
Proposition 2.1. Let G be a left K-semimodule, M, N ∈ M̃ and f ∈ M̃(M, N). Let M̃(1_G, f) : M̃(G, M) → M̃(G, N) be the map φ ↦ f ∘ φ ∘ 1_G for φ ∈ M̃(G, M). Then the assignments M ↦ M̃(G, M) and f ↦ M̃(1_G, f) determine the additive functor M̃(G, _) from M̃ to K−M.
Proposition 2.2. Let N be a commutative monoid; for G, G_1 as above and g ∈ M̃(G_1, G), let M̃(g, 1_N) : M̃(G, N) → M̃(G_1, N) be the map φ ↦ 1_N ∘ φ ∘ g.
Definition 3.1. A partial representation of a group G into a unital K-semialgebra A is a map π : G → A such that for all g, h ∈ G, we have π(g)π(h)π(h⁻¹) = π(gh)π(h⁻¹), π(g⁻¹)π(g)π(h) = π(g⁻¹)π(gh), and π(e) = 1.
First we recall the definition of Γ(G). For a finite group G, Γ(G) = {(S, g) | g ∈ G and e, g⁻¹ ∈ S ⊆ G}, so for (S, g) ∈ Γ(G) we have e, g⁻¹ ∈ S. The multiplication of pairs (S, g), (T, h) in Γ(G) is defined if S = hT, and in this case (hT, g) · (T, h) = (T, gh). The groupoid K-semialgebra KΓ(G) is given by {Σ a_{(S,g)} (S, g) : a_{(S,g)} ∈ K, (S, g) ∈ Γ(G)}, in which the sum is componentwise and the multiplication is the linear extension of the groupoid product above.
We note from [3, Theorem 2.6] that for all elements (S, g) in Γ(G), we have π̃^Δ(S, g) ∈ A.
Corollary 3.1. The groupoid semialgebra KΓ(G) is isomorphic to the partial group semialgebra K_par(G).
Proof. The maps [·] : G → K_par(G), given by g ↦ [g], and λ_p : G → KΓ(G) are partial representations of G. By the universal property of the two semialgebras K_par(G) and KΓ(G), there exist unital K-semialgebra homomorphisms π̃ : KΓ(G) → K_par(G) and ϕ : K_par(G) → KΓ(G) such that π̃(λ_p(g)) = [g] and ϕ([g]) = λ_p(g). Now, for the composite map π̃ ∘ ϕ : K_par(G) → K_par(G), we have π̃ ∘ ϕ([g]) = π̃(λ_p(g)) = [g], and for the composite map ϕ ∘ π̃ : KΓ(G) → KΓ(G), we have ϕ ∘ π̃(λ_p(g)) = λ_p(g). That is, these maps are the identities on their domains, since {[g] : g ∈ G} is the set of generators for K_par(G) and {(S, g) ∈ Γ(G)} is the set of generators for KΓ(G).
Lemma 4.1. Let K be an additively cancellative semifield and H be any group. Then M_m(KH) ≅ M_m(K) ⊗_K KH, where ⊗ is defined as in [9].
Proof. Since M_m(K) is a right K-semimodule and KH is a left K-semimodule, we define a map f : M_m(K) × KH → M_m(KH), where M_m(KH) is an additive commutative monoid, such that f(E_{ij}, h) = h E_{ij}. Then (M_m(KH), f) is a balanced product of M_m(K) and KH. Therefore, by Lemma 2.1, there exists a unique homomorphism of semimodules g : M_m(K) ⊗_K KH → M_m(KH) defined by g(E_{ij} ⊗ h) = h E_{ij} such that f = g ∘ τ, where τ is the map from M_m(K) × KH to M_m(K) ⊗_K KH defined by (E_{ij}, h) ↦ E_{ij} ⊗ h. Now we have to prove that g is one-one and onto. Here {E_{ij} : i, j = 1, …, m} denotes the set of matrix units of M_m(K); i.e., E_{ij} is the matrix with null entries, except for the entry in position (i, j), which is equal to the identity of K. Define the map h E_{ij} ↦ E_{ij} ⊗ h; clearly it is a homomorphism of K-semimodules, and the two composite maps g ∘ (·) and (·) ∘ g are the identity maps on their respective domains. Therefore g is invertible; hence g is one-one and onto, from which we have M_m(KH) ≅ M_m(K) ⊗_K KH.
Theorem 4.1. Let {E_{ij} : i, j = 1, …, m} be as in the above lemma. For any group H, the map (h, i, j) ↦ E_{ij} ⊗ h extends by linearity to a K-semialgebra isomorphism between KΓ_m^H and M_m(K) ⊗_K KH ≅ M_m(KH), which gives the required result.
Let S be any subset of G containing the identity e of G. Let H = St(S), the stabilizer of S in G, given by St(S) = {g ∈ G : gS = S}. In the graph of Γ(G), St(S) is identified with the set of edges departing from and terminating at the vertex (S, e). Since e ∈ S, we have H ⊆ S. Since H acts on the left on S, S is the union of right cosets of H, say S = ⋃ H a_i, a_1 = e, where m = |S|/|H|. As observed in [3, Theorem 3.2], the sub-groupoid of Γ(G) corresponding to the connected component of the vertex (S, e) of the graph is isomorphic to the groupoid Γ_m^H. Observe that the stabilizer of S, which is H, coincides with the isotropy group of the unit (S, e) ∈ Γ(G)^(0).
DECLARATION
Funding/ Grants/ Financial Support: No, I did not receive.
Conflicts of Interest/ Competing Interests: No conflicts of interest to the best of our knowledge.
Ethical Approval and Consent to Participate: The article does not require ethical approval and consent to participate with evidence.
Availability of Data and Material/ Data Access Statement: Not relevant.
Authors Contributions: All authors have equal participation in this article.
[1] Keunbae Choi and Yongdo Lim, Transitive partial actions of groups, Periodica Mathematica Hungarica, Vol. 56(2), 2008, pp. 169-181. DOI: 10.1007/s10998-008-6169-8.
[2] M. Dokuchaev and R. Exel, Associativity of crossed products by partial actions, enveloping actions and partial representations, Trans. Amer. Math. Soc. 357(5) (2005), 1931-1952. DOI: 10.1090/S0002-9947-04-03519-6.
[3] M. Dokuchaev, R. Exel and P. Piccione, Partial representations and partial group algebras, Journal of Algebra 226 (2000), 505-532. DOI: 10.1006/jabr.1999.8204.
[4] J. S. Golan, Semirings and their applications, Kluwer Academic Publishers, 1999. DOI: 10.1007/978-94-015-9333-5.
[5] N. Jacobson, Basic Algebra II, Hindustan Publishing Corporation (India).
[6] Y. Katsov, Tensor products and injective envelopes of semimodules over additively regular semirings, Algebra Colloquium 4:2 (1997), 121-131.
[7] S. Mac Lane, Categories for the Working Mathematician, Springer-Verlag, New York, 1971.
[8] R. P. Sharma and Anu, Semialgebras and their algebras of differences with partial group actions on them, Asian-Eur. J. Math., Vol. 6, No. 3 (2013), 1350038 (20 pages). DOI: 10.1142/S1793557113500381.
[9] R. P. Sharma, Anu and N. Singh, Partial group actions on semialgebras, Asian-Eur. J. Math. 5(4) (2012), Article ID 1250060, 20 pp. DOI: 10.1142/S179355711250060X.
[10] R. P. Sharma and Meenakshi, Morita equivalence between partial actions and global actions for semialgebras, J. of Combinatorics, Information & System Sciences, 41(1-2), 2016, 65-77.
[11] R. P. Sharma, Rajni Parmar and Meenakshi, Distinguishing labelling of partial actions, International Journal of Algebra, 9(8), 2015, 371-377. DOI: 10.12988/ija.2015.5851.
[
"PanoFlow: Learning 360 • Optical Flow for Surrounding Temporal Understanding",
"PanoFlow: Learning 360 • Optical Flow for Surrounding Temporal Understanding"
] | [
"Hao Shi [email protected] ",
"Yifan Zhou ",
"Kailun Yang [email protected] ",
"Xiaoting Yin [email protected] ",
"Ze Wang ",
"Yaozu Ye [email protected] ",
"Zhe Yin ",
"Shi Meng [email protected] ",
"Peng Li [email protected] ",
"Kaiwei Wang [email protected]. ",
"Kaiwei Wang ",
"; Kailun ",
"Yang ",
"H Shi ",
"X Yin ",
"Z Wang ",
"Y Ye ",
"P Li ",
"K Wang ",
"\nwith College of Computer Science and Technology\nare with State Key Laboratory of Modern Optical Instrumentation\nZhejiang University\n310027, 76131Hangzhou, KarlsruheChina, Germany\n",
"\nData Intelligence Lab, Luokung Technology Corp\nZhejiang University\n310027, 100020Hangzhou, BeijingChina, China\n"
] | [
"with College of Computer Science and Technology\nare with State Key Laboratory of Modern Optical Instrumentation\nZhejiang University\n310027, 76131Hangzhou, KarlsruheChina, Germany",
"Data Intelligence Lab, Luokung Technology Corp\nZhejiang University\n310027, 100020Hangzhou, BeijingChina, China"
] | [
"IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS"
] | Optical flow estimation is a basic task in self-driving and robotics systems, which enables to temporally interpret traffic scenes. Autonomous vehicles clearly benefit from the ultrawide Field of View (FoV) offered by 360 • panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not directly generalize satisfactorily to 360 • panoramic images. In this paper, we put forward a novel network framework--PANOFLOW, to learn optical flow for panoramic images. To overcome the distortions introduced by equirectangular projection in panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) or equirectangular flow distortion (FDA-E). We further look into the definition and properties of cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow Estimation (CFE) method by leveraging the cyclicity of spherical images to infer 360 • optical flow and converting large displacement to relatively small displacement. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset FlowScape based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet and the fresh established FlowScape benchmarks. Our proposed approach reduces the End-Point-Error (EPE) on FlowScape by 27.3%. On OmniFlowNet, PanoFlow achieves an EPE of 3.17 pixels, a 55.5% error reduction from the best published result (7.12 pixels). We also qualitatively validate our method via an outdoor collection vehicle and a public real-world OmniPhotos dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at PanoFlow. | 10.1109/tits.2023.3241212 | [
"https://export.arxiv.org/pdf/2202.13388v3.pdf"
] | 250,607,675 | 2202.13388 | 51eefbf1b284c93df3bde5b0bc247465b5a33065 |
PanoFlow: Learning 360° Optical Flow for Surrounding Temporal Understanding
Hao Shi [email protected]
Yifan Zhou
Kailun Yang [email protected]
Xiaoting Yin [email protected]
Ze Wang
Yaozu Ye [email protected]
Zhe Yin
Shi Meng [email protected]
Peng Li [email protected]
Kaiwei Wang [email protected].
Corresponding authors: Kaiwei Wang; Kailun Yang.
H. Shi, X. Yin, Z. Wang, Y. Ye, and K. Wang are with the State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China.
P. Li is with the College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
K. Yang is with the Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany.
Z. Yin and S. Meng are with the Data Intelligence Lab, Luokung Technology Corp., Beijing 100020, China.
PanoFlow: Learning 360° Optical Flow for Surrounding Temporal Understanding
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
2022.
Y. Zhou is with the Multimedia Laboratory, Nanyang Technological University, 61 Nanyang Dr, Singapore 637335, Singapore (email: [email protected]).
K. Yang is with the Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany.
Index Terms: Intelligent Vehicles, Scene Parsing, Optical Flow, Panorama, Scene Understanding, Synthetic Dataset
Optical flow estimation is a basic task in self-driving and robotics systems, which enables temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide Field of View (FoV) offered by 360° panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not generalize satisfactorily to 360° panoramic images. In this paper, we put forward a novel network framework, PANOFLOW, to learn optical flow for panoramic images. To overcome the distortions introduced by equirectangular projection in panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) and equirectangular flow distortion (FDA-E). We further look into the definition and properties of cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow Estimation (CFE) method that leverages the cyclicity of spherical images to infer 360° optical flow and convert large displacements into relatively small ones. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset, FlowScape, based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet and the newly established FlowScape benchmarks. Our proposed approach reduces the End-Point-Error (EPE) on FlowScape by 27.3%. On OmniFlowNet, PanoFlow achieves an EPE of 3.17 pixels, a 55.5% error reduction from the best published result (7.12 pixels). We also qualitatively validate our method via an outdoor collection vehicle and the public real-world OmniPhotos dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at PanoFlow.
I. INTRODUCTION
OPTICAL flow estimation is one of the fundamental challenges for autonomous driving [1]-[5]. Flow estimation provides information about the environment and the sensor's motion, leading to a temporal understanding of the world, which is vital for many robotics and vehicular applications, including scene parsing, image-based navigation, visual odometry, and SLAM [6]-[12]. With the development of spherical cameras [13], panoramic images are now more easily captured for 360° scene perception [14]-[16], and can be better integrated with LiDARs due to the similar projection model [17]. However, learning-based methods have mostly focused on traditional 2D images produced by cameras with a pinhole projection model [18]-[20]. Models designed for a camera with a narrow Field-of-View (FoV) are usually suboptimal for a comprehensive understanding. Coupling them with 360° LiDARs would also directly lead to inherent domain adaptation problems [21]. Thus, the ability to infer optical flow of a camera's complete surrounding has motivated the study of 360° flow estimation.
Unlike classical linear images, panoramic contents often suffer from severe distortions due to the equirectangular projection (ERP) of spherical cameras [22]. An object will deform to varying degrees at different latitudes in panoramic images, making flow estimation more difficult between the target image and the attended image. Another critical issue lies in the cyclicity of spherical boundaries, which means there is more than one path from the source point to the target point, usually one shorter and one longer [23]. The two routes together form a great circle on the sphere. In other words, the geometric meanings of the two routes are equivalent. However, traditional learning-based models cannot track pixels moving outside the image boundary, and therefore have no choice but to infer the harder long-distance motion vector, leading to less satisfactory estimation.
To tackle these issues, we introduce a new panoramic flow estimation framework, PANOFLOW, to directly estimate dense flow fields from panoramic images. We implement PanoFlow on two different state-of-the-art optical flow networks [19], [20] to verify the generality of the proposed framework. We present, to the authors' best knowledge, the first Flow Distortion Augmentation (FDA) method, built on insight into the distortion induced by ERP, to enhance robustness against deformations in panoramic images. While distortion augmentation is used in panoramic scene parsing [24], [25], it has not been investigated in optical flow estimation, as optical flow is a 2D vector, which incurs further challenges. Unlike traditional geometric augmentation methods that deal with constant properties, distortion augmentation of optical flow is non-trivial: it has to consider the variation of the optical flow itself, because the initial and terminal points of the flow are distorted to different extents. By projecting the participating images (attended and target images) and flow ground truth onto the distortion field, we improve the model's ability to generalize to deformed regions.
We put forward two variants of flow distortion augmentation: radial flow distortion (FDA-R) and equirectangular flow distortion (FDA-E). Although FDA-E is consistent with the distortion introduced by general ERP, given the smaller FoV of the pinhole dataset, the number of pixels that actually participate in supervision is reduced. We therefore also explore the role of FDA-R in overcoming ERP distortion. We find that although their deformation models are not exactly identical, FDA-R also improves the network's ability to handle distorted regions. From another, distortion-adaptive perspective, we further propose to address the distortion by replacing the first layer of the encoder with a deformable convolution layer [26]. The proposed FDA and the deformable convolution empower the model to handle characteristic panoramic image distortions and robustify flow estimation. As a novel data augmentation method, FDA is a plug-and-play module for any learning-based optical flow network.
Furthermore, we give a standard definition of cyclic optical flow suitable for panoramic video streams, analyze its properties, and compare it with classical optical flow. We then design a Cyclic Flow Estimation (CFE) method based on this insight to leverage the cyclicity of panoramic images and convert long-distance estimation into relatively short-distance estimation. CFE relieves the stress of the model in large displacement estimation, enabling the model to focus on local fine-grained optical flow estimation. CFE is a general optical flow estimation method and thus can benefit from the advances of narrow-FoV flow estimation methods. Interestingly, both quantitative and qualitative results show that, compared to the previous best method [23], which estimates on the cubemap plane and the icosahedral tangent plane iteratively, the CFE method is simple, yet very effective. We also calculate the distribution of the accuracy change with the horizontal FoV before and after using the CFE method, and discover that CFE can significantly improve the optical flow estimation accuracy near the panorama's vertical boundary, which is a unique difficulty of panoramic flow estimation.
In addition, to overcome the lack of available panoramic training data and to foster research on 360° understanding, we establish and release a new synthetic panoramic flow estimation benchmark of street scenes, FlowScape. We generate the dataset via the CARLA simulator [27]. FlowScape consists of 6,400 color images, optical flow, and pixel-level semantic ground truth, providing an environment similar to the real world, thanks to dynamic weather, diverse city street scenes, and different types of vehicles. We use this dataset for learning to infer flow from panoramic content. We also analyze the ground-truth quality of existing optical flow datasets [22], [23], [28] when only forward optical flow is given, and determine our evaluation datasets according to these observations.
We conduct extensive quantitative experiments on the established FlowScape benchmark. Compared with the previous best model, the End-Point-Error (EPE) of PanoFlow on this dataset is reduced by 27.3%. Further, the EPE of our approach on the public OmniFlowNet dataset [22] is reduced by 55.5% compared with the best published results (3.34 pixels vs. 7.12 pixels). Moreover, a comprehensive set of ablation experiments demonstrates the effectiveness of the proposed FDA and CFE methods. We additionally conduct a qualitative analysis on the public real-world OmniPhotos dataset [29] to validate our approach in real-world surrounding perception. To further demonstrate the generalization ability of PanoFlow, we also assemble an outdoor data collection vehicle equipped with a Panoramic Annular Lens (PAL) system. As shown in Fig. 1, PanoFlow gives sharp and clean omnidirectional optical flow estimation for real-world surrounding scenes.
In summary, our main contributions are as follows:
• We present a rigorous theoretical definition of 360° optical flow.
• We introduce flow distortion augmentation, a new data augmentation method for optical flow networks, which helps models learn to capture motion cues even in deformed regions.
• We propose a generic cyclic flow estimation method, which can transform large-displacement estimation into relatively short-displacement estimation based on the geometric nature of consecutive panoramas.
• We generate FlowScape, a new publicly available panoramic dataset that consists of diverse synthetic street scenes, providing both pixel-level flow and semantic ground truth. We also assess the ground-truth quality of existing panoramic flow datasets.
• Our entire framework, PANOFLOW, achieves state-of-the-art performance on the established FlowScape benchmark and the public OmniFlowNet dataset.
• PanoFlow demonstrates strong generalization ability both on the public real-world OmniPhotos dataset and on our captured outdoor panoramic video streams.
II. RELATED WORK
A. Learning-based Optical Flow Estimation
The classical optical flow estimation approaches [30], [31] use variational formulations to minimize an energy based on brightness constancy and spatial smoothness. Since the advent of FlowNet [32], many other works based on Convolutional Neural Networks (CNNs) [33]-[44] have appeared. Besides, there are also self-supervised approaches [45], [46] to learn optical flow with occlusions. Most of these methods are designed to work with pinhole cameras capturing a limited imaging angle.
FlowNet [32] first treated optical flow estimation as a learning problem. In order to further improve accuracy, FlowNet2.0 [47] introduces image warping between multiple cascaded FlowNets. Due to the large model size of FlowNet2.0 [47], many methods have been proposed to simultaneously improve optical flow accuracy and reduce model size. Among them, PWC-Net [18] combines classical optical flow estimation principles, including pyramid processing, image warping, and cost volumes, with learning. LiteFlowNet2 [48] draws on the idea of data fidelity and regularization in classical variational optical flow methods. RAFT [19] iteratively updates optical flow fields using multi-scale 4D correlation volumes. To better apply optical flow estimation to autonomous driving systems, CSFlow [20] proposes a new optical flow architecture composed of a Cross Strip Correlation module (CSC) and a Correlation Regression Initialization module (CRI). Moreover, FlowFormer [49] replaces the CNN-based backbone of the RAFT architecture with a transformer-based one, further improving accuracy while tripling the number of parameters. In contrast, PanoFlow is a panoramic optical flow framework that can be adapted to any optical flow network with an encoder-decoder architecture.
B. Optical Flow Estimation beyond the FoV
With the arrival on the market of increasingly affordable, portable, and accurate panoramic cameras, 360° flow estimation, which can provide a wide-FoV temporal understanding, is in urgent need, and several deep learning based methods have been developed. LiteFlowNet360 [50] is designed as a domain adaptation framework to cope with the inherent distortion in 360° videos caused by the sphere-to-plane projection. It employs an incremental transformation of convolutional layers in feature pyramid networks to reduce network growth and computational costs, combining data augmentation and self-supervised learning with target-domain 360° videos. OmniFlowNet [22] is built on a CNN model that specializes in perspective images and is then applied to omnidirectional ones without training on new datasets; its convolution operation is unified with the equirectangular projection, outperforming the original network. The projection from the 360° image to the ERP image is a nonlinear mapping, and the distortion it causes affects 360° optical flow estimation; thus Yuan et al. [23] propose a 360° optical flow estimation method based on tangent images, including dozens of estimations and refinements on both icosahedron and cubemap panoramas. Overall, the existing learning-based panoramic flow methods adopt a fixed projection paradigm at the model level to deal with ERP distortions. Considering the local bias behavior of CNNs, this reduces the model's ability to capture potential visual cues and results in unsatisfactory performance. On the other hand, estimation and refinement on tangent planes introduce additional computational costs, leading to limited inference speed. Recently, a concurrent work [51] also explores 360° optical flow via a siamese representation learning scheme with carefully designed losses and rotational augmentations to adapt existing flow networks. Differing from these works, we tackle image distortions and object deformations that appear across entire 360° scenes and leverage the cyclicity of consecutive omnidirectional data to enhance panoramic optical flow estimation.
C. Optical Flow and Panoramic Perception Datasets
Panoramic datasets are needed in a wide variety of application areas, including depth estimation [52]-[54], scene segmentation [55]-[57], and optical flow estimation [22], [23], [28]. Stanford2D3D [58] is a large-scale indoor spaces dataset that consists of both regular and panoramic data with instance-level semantic annotations. The 360D dataset [21] reuses released large-scale 3D datasets and re-purposes them to 360° via rendering for dense depth estimation. PASS [24] presents a panoramic annular semantic segmentation framework with an associated dataset for credible evaluation. DensePASS [59] introduces a dataset with both labeled and unlabeled 360° images for benchmarking panoramic semantic segmentation from the perspective of unsupervised domain adaptation. KITTI-360 [60] is collected with perspective stereo cameras, a pair of fisheye cameras, and a laser scanning unit to enable 360° perception. WoodScape [61] comprises multiple surround-view fisheye cameras and multiple tasks like segmentation and soiling detection. The OmniScape dataset [62] includes semantic segmentation, depth maps, intrinsic parameters of the cameras, and the dynamic parameters of a motorcycle. The Waymo Open dataset [63] is a labeled panoramic video dataset for panoptic image segmentation.
Aiming at improving the accuracy of optical flow estimation, OmniFlow [64] is a synthetic omnidirectional human optical flow dataset with images of household activities and a FoV of 180°. OmniFlowNet [22] renders a test set of panoramic optical flow only for validation, using simple geometric models based on Blender. Replica360 [23] implements an ERP camera model for the Replica rendering pipeline [65] and contains ground-truth optical flow in the equirectangular format for validation. SynWoodScape [28] is a synthetic fisheye surround-view dataset with ground truth for pixel-wise optical flow and depth estimation. OmniPhotos [29] is a fast 360° panoramic VR photography method with a released outdoor dataset, but it cannot provide ground-truth optical flow. We note that, until now, there has been neither a dataset of omnidirectional images targeting complex outdoor street scenes, nor one covering 360° that can be used for training and evaluation. The present paper seeks to fill this gap by proposing a virtual environment in which a car with a panoramic camera drives, while pedestrians and vehicles move according to traffic rules. Tab. I summarizes current panoramic datasets that contain ground-truth optical flow. A detailed analysis of the ground-truth quality of optical flow is unfolded in Sec. IV.
Fig. 2. Two routes between points A and B on the sphere, with arc vertices C1 and C2: (a) classical optical flow vs. (b) cyclic flow.
III. PANOFLOW: PROPOSED FRAMEWORK
A. Definition of 360° Optical Flow
The spherical image does not contain any boundaries, and the coordinates are continuous in any direction on the image [22]. However, a boundary parallel to the meridian is naturally introduced in the process of unfolding the spherical image into an equirectangular image, as shown in Fig. 2. Consider any point A on the sphere, which moves to another point B after time t. Due to the cyclic nature of the sphere itself, there are actually infinitely many arc trajectories between these two points. Here we only consider the two arcs AC₁B and AC₂B whose lengths are less than the circumference of the great circle, where C₁ and C₂ are the vertices of the two arcs, respectively. It is easy to see that these two arcs together form a great circle on the sphere. In the process of spherical unfolding, we can map these two points to A and B on the equirectangular image plane I_e ∈ R^{H×W}, respectively, according to the forward ERP:
x = L(φ − φ₀)cos θ₁,  y = L(θ − θ₀), (1)
where x and y are Cartesian coordinates of the image plane, θ ∈ (−π/2, π/2) and φ ∈ (−π, π) are the unit-sphere pitch and yaw, φ₀, θ₀ are the central meridian and central parallel, respectively, θ₁ is the standard parallel, and L is the scaling factor. Therefore, the cyclicity of a great circle on the sphere is reflected in the cyclicity of the vertical boundary of the equirectangular image:
δx = L cos θ₁ · (δφ mod 2π), (2)
where δx and δφ denote the variation of Cartesian abscissa and yaw, respectively. (a mod b) indicates a modulo b. Considering a pair of an attended image and a target image, pixels moving out the image boundary on one side will locate to the other side of the image. Thus, there are two 2D motion vectors that connect the source and target points: one is connected along the interior of the equirectangular image, whereas the other points outside the image boundaries. These two flow vectors together form a great circle on the spherical image, one shorter and one longer. For the classical definition of optical flow, given two frames of sequence equirectangular RGB images I 1 and I 2 , we estimate the dense motion vector (u, v) from each pixel (x, y) of I 1 to each pixel (x , y ) of I 2 , that is, the optical flow field V, which gives the per-pixel mapping relationship between the source and target. However, classical optical flow cannot track pixels that move outside the image boundaries, and cannot reflect the boundary circulation of panoramic optical flow. Thus, we define 360 • optical flow V 360 as the shortest path from source to target along the great circle between them, which naturally limits the scalar value of lateral optical flow to u ≤ 180 • . For ground-truth flow field V GT (x) = (u, v) at pixel index x of equirectangular images, we can easily convert the optical flow to 360 • flow:
V₃₆₀(x) = (u − W, v) if W/2 < u ≤ W; (u + W, v) if −W ≤ u < −W/2; (u, v) otherwise. (3)
Given a dense cyclic optical flow field V₃₆₀, we can always find the mapping point (x′, y′) on I₂ from every pixel (x, y) on I₁; i.e., cyclic optical flow maintains the temporal continuity of classical optical flow, which can be used to align temporal features when considering boundary cyclicity.
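As a concrete reference for Eq. (3), the conversion from classical ground-truth flow to 360° flow is a per-pixel wrap of the horizontal component. Below is a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def flow_to_360(flow: np.ndarray, width: int) -> np.ndarray:
    """Wrap the horizontal flow component into (-width/2, width/2], Eq. (3)."""
    out = flow.copy()
    u = out[..., 0]
    u[u > width / 2] -= width    # long rightward arc -> short leftward arc
    u[u < -width / 2] += width   # long leftward arc -> short rightward arc
    return out
```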
B. Data Augmentation with Flow Distortion
In optics, distortion is a mapping under which straight lines become curved. Relative to perspective images, the equirectangular transformation can be regarded as a kind of distortion, and models trained on perspective images suffer from this distortion on equirectangular images. To adapt models to this distortion, we propose to perform Flow Distortion Augmentation (FDA) on the training samples as a novel data augmentation method.
Distorting flow is a non-trivial task compared to general image distortion (Fig. 4). For properties that adhere to pixels (e.g., RGB or depth), the values are not modified during distortion. However, the initial and terminal points of optical flow are both displaced. To obtain the exact optical flow of a distorted frame, we should calibrate the optical flow at its grid points before interpolation. Given an undistorted initial point x_u = (x_u, y_u), the flow field V_u, and a coordinate distortion function F that maps an undistorted coordinate to its distorted position, the calibrated flow field V_c can be obtained by:
V_c(x_d) = F(x_u + V_u(x_u)) − F(x_u), with x_d = F(x_u), (4)
where x_d = (x_d, y_d) is the distorted coordinate and F̄ denotes the inverse of F. There are multiple choices for the coordinate distortion function F. In this work, we consider radial distortion and equirectangular distortion, both resulting in a remarkable enhancement, which will be discussed in Sec. V-C. We use the following mapping function
F_r : x_u ↦ x_d to model the radial distortion:
x_d = x_c + P(r)(x_u − x_c),  y_d = y_c + P(r)(y_u − y_c), (5)
where (x_c, y_c) is the distortion center (the center of the image by default), P(x) = 1 + k₂x² + k₄x⁴ is a polynomial, and r is the Euclidean distance from (x_u, y_u) to (x_c, y_c). In practice, we set k₂ ∼ U(−10⁻⁶, 10⁻⁶) and k₄ ∼ U(−10⁻¹⁴, 10⁻¹⁴), which are empirically chosen and achieve reasonable augmentation effects for images of different resolutions.
, which are empirically set and achieve reasonable augmentation effects for images of different resolutions. For equirectangular distortion F e : F (x u ) → x d , we transform the coordinates via spherical coordinate system. We first map x u on equirectangular image to x s = (x s , y s , z s ) on unit sphere by:
x s = sin πyu H cos 2πxu W , y s = sin πyu H sin 2πxu W , z s = cos πyu H .(6)
We then apply a random 3D rotation to x_s by x_sᵀ ← R_z(θ_z) R_y(θ_y) x_sᵀ, where R_y(θ_y) and R_z(θ_z) are the standard 3D rotation matrices about the y-axis and z-axis, with θ_y ∼ U(0, π) and θ_z ∼ U(0, 2π):
R_y(θ_y) = [cos θ_y, 0, sin θ_y; 0, 1, 0; −sin θ_y, 0, cos θ_y],  R_z(θ_z) = [cos θ_z, −sin θ_z, 0; sin θ_z, cos θ_z, 0; 0, 0, 1]. (7)
Finally, x_s is transformed to the perspective coordinate via:
x_d = W/(2 tan(θ_h/2)) · (y_s/x_s) + W/2,  y_d = H/(2 tan(θ_v/2)) · (z_s/x_s) + H/2, (8)
where θ_h, θ_v are the horizontal and vertical FoV of the perspective image, drawn from U(π/3, 2π/3), respectively. Visualizations of FDA-R and FDA-E are shown in Fig. 3. Notice that the color of the optical flow changes with the distortion, since distorting the vectors affects the modulus of the optical flow. The deformation of FDA-E is homogeneous with that introduced by ERP, while the deformation introduced by FDA-R varies radially. Taking FDA-E as an example, the position of the color image and optical flow on the spherical surface changes randomly at each iteration instead of being fixed, which improves the model's ability to learn representations that are robust to distortions. The model is thus able to gradually learn, from the source pinhole data, how to handle features at arbitrary latitude and longitude on the entire spherical image. However, due to the limited FoV of pinhole images, the number of available supervision pixels in FDA-E is actually reduced compared to FDA-R, so it is necessary to explore the effects of the two flow distortion techniques on the distortion robustness of the model. It is verified that FDA improves the adaptation of the model by introducing distorted images into the training data. The ablation experiments on FDA are comprehensively discussed in Sec. V-C.
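For reference, the sphere lifting of Eq. (6) can be expressed compactly as below; this is a small NumPy sketch of ours, and the rotation of Eq. (7) and the reprojection of Eq. (8) follow the same vectorized pattern:

```python
import numpy as np

def erp_to_sphere(x_u: np.ndarray, y_u: np.ndarray, H: int, W: int) -> np.ndarray:
    """Eq. (6): lift equirectangular pixel coordinates to unit-sphere points."""
    theta = np.pi * y_u / H        # polar angle
    phi = 2.0 * np.pi * x_u / W    # azimuth
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
```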
C. Training with Deformable Receptive Field Encoder
Unlike pinhole images, equirectangular images suffer from severe geometric distortions in panoramic dense prediction [21], [24]. While our flow distortion augmentation addresses the deformations from the perspective of the training data, classical CNN-based encoders are still limited by the fixed geometry of the convolution kernels and have insufficient learning ability for deformable features. Therefore, we propose to replace the first convolutional layer of the encoder with a deformable convolution [26] when dealing with 360° contents, endowing the model with a more flexible receptive field. Given a deformable convolution kernel, we extract features at K sampling locations; the weight and the grid-specified offset at the k-th location are denoted by w_k and g_k, respectively. In our practice, we replace the first layers of the feature encoder and the context encoder with two deformable convolution layers with a kernel size of 7×7; thus the kernel is defined with K = 49 and g_k ∈ {(−3,−3), (−3,−2), …, (0,0), …, (3,2), (3,3)}.
The distortion-aware features F_d at each position g₀ can be obtained via:
F_d(g₀) = Σ_{k=1}^{K} w_k · I(g₀ + g_k + Δg_k) · Δo_k, (9)
where I ∈ R^{H×W} is the panoramic input, and Δg_k and Δo_k are the learnable offset and modulation scalar, respectively, which are inferred via another convolutional layer:
{Δg_k}_{k=1}^{K} = tanh(C_off(I)[0 : 2K]),  {Δo_k}_{k=1}^{K} = σ(C_off(I)[2K : 3K]), (10)
where C_off is a set of convolutional layers, [a : b] denotes the channel slice from index a to index b, and tanh and σ represent the Tanh and Sigmoid activation functions, respectively. In Sec. V-C, we will show that the deformable receptive field encoder further enhances the robustness of the model to distorted images.
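A minimal sketch of such a stem, assuming torchvision ≥ 0.10 for modulated deformable convolution; the module and parameter names are ours, not the released implementation:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d  # modulated ("v2") deformable convolution

class DeformStem(nn.Module):
    """First encoder layer with a deformable receptive field, Eqs. (9)-(10)."""
    def __init__(self, in_ch: int = 3, out_ch: int = 64, k: int = 7):
        super().__init__()
        # one conv predicts 2*k*k offsets and k*k modulation scalars, Eq. (10)
        self.offset_conv = nn.Conv2d(in_ch, 3 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        o = self.offset_conv(x)
        n = 2 * self.k * self.k
        offset = torch.tanh(o[:, :n])      # learnable offsets, bounded by tanh
        mask = torch.sigmoid(o[:, n:])     # modulation scalars in (0, 1)
        return self.deform(x, offset, mask=mask)
```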
D. Inference with Cyclic Flow Estimation
In order to directly infer 360° cyclic flow from equirectangular contents, and to relieve the stress of the model in long-distance displacement estimation, we introduce a Cyclic Flow Estimation (CFE) method based on the geometric nature of panoramas. The structure of CFE is shown in Fig. 5. CFE exploits the cyclicity of the left and right boundaries of equirectangular images, and it is compatible with any optical flow network based on an encoder-decoder structure, e.g., RAFT [19] or CSFlow [20].
Specifically, we first use a convolutional network as the encoder e(·) to extract features F₁, F₂ ∈ R^{C×H×W} from the two input equirectangular frames I₁, I₂ ∈ R^{3×h×w}. Then, the features are split along the horizontal centerline into F_{a1}, F_{b1} ∈ R^{C×H×(W/2)} and F_{a2}, F_{b2} ∈ R^{C×H×(W/2)}, respectively. We regard the process of feature encoding as rigid; that is, we should obtain exactly the same features for the same image input. Therefore, when swapping the left and right regions of the input image, the resulting feature maps should also be approximately left-right swapped. Based on the above observations, we can regroup the feature maps as two feature pairs P₁, P₂ ∈ R^{2×C×H×W}:
P₁ = {F_{a1} ⊕ F_{b1}, F_{a2} ⊕ F_{b2}},  P₂ = {F_{b1} ⊕ F_{a1}, F_{b2} ⊕ F_{a2}}, (11)
where ⊕ denotes concatenation. Since the RAFT structure [19] contains an additional context encoder c(·), the context feature maps C_{a1}, C_{b1} ∈ R^{C×H×(W/2)} extracted from I₁ should also be regrouped into P_{c1}, P_{c2} ∈ R^{C×H×W}:
P_{c1} = {C_{a1} ⊕ C_{b1}},  P_{c2} = {C_{b1} ⊕ C_{a1}}. (12)
We then stack the feature pairs with their contexts, which are further sent to the decoder d(·). Subsequently, the decoder estimates two flow fields V, V′ ∈ R^{2×h×w}.
Fig. 6 (panels: Sunny, Fog, Cloud, Rain). From top to bottom: color images, optical flow, and semantics. The proposed FlowScape dataset consists of 8 various city maps in four weathers: sunny, fog, cloud, and rain. We collect 100 consecutive panoramic images at each random position, resulting in a total of 6,400 frames with a resolution of 512×1024, each with optical flow ground truth and semantic labels, which can be used for training and evaluation. Since the flow field of panoramic images usually contains large displacements that interfere with visualization and fade colors, we modified the visualization method of optical flow based on [67] and lowered the color saturation of optical flow greater than the threshold. Please refer to our open-source documentation for details.
The flow estimates are split along the horizontal centerline into V_{a1}, V_{b1} ∈ R^{2×h×(w/2)} and V′_{a1}, V′_{b1} ∈ R^{2×h×(w/2)}. Assuming that the estimation is unbiased, for any pixel (x, y) in area a, we consider that V_{a1}(x, y) and V′_{a1}(x, y) form a pair of complementary optical flows end to end, and these two 2D motion vectors together form a great circle on the sphere. The same holds for area b. According to our definition of 360° optical flow, the final 360° flow field V̂ is obtained as:
V̂ = min(V_{a1}, V′_{a1}) ⊕ min(V_{b1}, V′_{b1}), (13)
where min selects, per pixel, the motion vector of smaller magnitude.
We emphasize again that CFE is a generic flow estimation method based on the assumption that the encoding process is rigid, which replaces large-displacement estimation with small-displacement estimation when dealing with panoramic contents. According to our analysis of the geometric nature of consecutive panoramic frames in Sec. III-A, CFE is able to cope with the intrinsically most difficult part of long-range cyclic estimation in panoramic optical flow, without having to estimate dozens of times on the tangent planes of a regular polyhedron like the previous method [23]. Considering that large displacement estimation is much more challenging for the model, CFE can significantly enhance prediction reliability. With the proposed CFE method, we eliminate redundant encoding computation and ensure computational efficiency while accurately estimating 360° optical flow. Another naive idea is to use circular convolutions to replace classical convolutional layers. However, we will show in the ablation studies (Sec. V-C) that this method offers only limited circularity for cyclic flow and is thus not suitable for panoramic flow estimation.
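The selection logic of Eqs. (11)-(13) can be condensed as below; rolling by half the width is equivalent to swapping the two halves, decode stands for any encoder-decoder flow head operating at a common resolution, and all names are our own shorthand rather than the released code:

```python
import torch

def cyclic_flow_estimation(decode, feat1, feat2, ctx):
    """Fuse a plain and a half-rolled flow estimate into 360-degree flow."""
    half = feat1.shape[-1] // 2
    roll = lambda t: torch.roll(t, shifts=half, dims=-1)   # swap halves, Eqs. (11)-(12)
    v = decode(feat1, feat2, ctx)                          # (B, 2, H, W)
    v_p = roll(decode(roll(feat1), roll(feat2), roll(ctx)))
    # Eq. (13): per pixel, keep the shorter of the two complementary vectors
    shorter = v.norm(dim=1, keepdim=True) <= v_p.norm(dim=1, keepdim=True)
    return torch.where(shorter, v, v_p)
```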
IV. FLOWSCAPE: ESTABLISHED SYNTHETIC DATASET
End-to-end learning of deep neural networks requires a large amount of annotated ground-truth data. Although for pinhole cameras this can be partly resolved by using scanning LiDARs and multiple sensors [68], [69], such an approach is impractical for 360° images, considering that a panoramic camera and a LiDAR would block each other and diverge strongly in resolution. In addition, the point cloud data given by LiDARs is sparse, so it is difficult to obtain dense ground-truth optical flow in the real world. Even when these flaws are patched algorithmically during acquisition, additional errors are introduced. On the other hand, synthetic datasets are popular for learning flow estimation due to the lack of real-world training data [32], [66], [70]. Extensive investigations have demonstrated that generalization from synthetic to real scenes is feasible for optical flow tasks [19], [20], [71]. FlowScape dataset: We notice that there is a lack of an open panoramic optical flow dataset that can be used for training and credible numerical evaluation. Therefore, we advocate generating a dataset with ground-truth flow by synthesizing both the color images and the flow via the CARLA simulator [27]. Specifically, we use eight open-source maps given by CARLA. Our virtual collection vehicle contains 6 pinhole color cameras, 6 pinhole optical flow cameras, and 6 pinhole semantic cameras, all of which have a FoV of 90°×90°, sit at the same spatial viewpoint, and keep synchronized timestamps. Taking color images as an example, six orthogonal viewing angles are obtained to form a cubemap panorama {I_f, I_r, I_b, I_l, I_u, I_d} ∈ R^{h×w}, comprising the front, right, back, left, top, and bottom views. We can then acquire the equirectangular image I_e ∈ R^{H×W} with a FoV of 180°×360° by using a cubemap-to-equirectangular algorithm as a post-processing step. Given the four horizontal views {I_f, I_r, I_b, I_l} of the cubemap format, we can calculate their corresponding coordinates (x, y) on the equirectangular image plane:
$$x = \frac{W}{2}\cdot\tan\!\left(\varphi - m\frac{\pi}{2}\right), \qquad y = -\frac{H}{2}\cdot\frac{\tan\theta}{\cos\!\left(\varphi - m\frac{\pi}{2}\right)}, \tag{14}$$
where the view index m ∈ {1, 2, 3, 4}, and θ ∈ (−π/2, π/2), φ ∈ (−π, π) are the angular coordinates. For the upper and lower views {I_u, I_d}:

$$x = \frac{W}{2}\cdot\tan\!\left(\frac{\pi}{2} - \theta\right)\sin\varphi, \qquad y = \frac{H}{2}\cdot\tan\!\left(\frac{\pi}{2} - \theta\right)\cos(\varphi + n\pi), \tag{15}$$

where the view index n ∈ {0, 1}. Given the 6 views of a cubemap-format panorama with a resolution of 1024×1024, the reprojection latency is 0.37s for the panoramic image and 1.29s for the panoramic optical flow, both tested on an Intel i5-12600K CPU with a Python 3.9 implementation. We set 100∼120 initial collection points on each of the 8 open-source maps, all of them on the road. During collection, a tracing renderer is used to render our dataset by placing the pinhole cameras at a starting position P ∈ R^3 in the scene, randomly sampled from the initial collection points of the map. For each map, we augment the dataset by changing the weather, including sunny (62.5%), cloud (12.5%), fog (12.5%), and rain (12.5%), to form FlowScape and assess the robustness of optical flow estimation in various conditions. During the collection process, the rendering overhead increases slightly as the number of vehicles and pedestrians grows. To control the stability of the data collection while maintaining the richness of the foreground, we set the upper limit of generated vehicles and pedestrians to 200 for all maps. To ensure good diversity of the synthetic data, we only gather 100 frames at a frame rate of 30Hz for each position. Considering the unreliable optical flow at infinity, such as on points in the sky, we additionally provide ground-truth pixel-wise semantic segmentation for selection, which could also be beneficial for panoramic semantic understanding tasks [24], [25]. The semantic labels follow CARLA's setting (Fig. 6). Overall, FlowScape provides 6,400 panoramic images of diverse street scenes, each with ground truth for both optical flow and semantic labels.
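As an illustration of Eqs. (14)-(15), a minimal NumPy sketch of the per-pixel cubemap-face coordinate computation is given below. Face selection and interpolation are omitted; the function names and the convention that face m spans |φ − mπ/2| < π/4 are our own illustrative assumptions, not the released conversion code.

```python
import numpy as np

def horizontal_face_coords(theta, phi, m, W, H):
    """Eq. (14): coordinates on horizontal face m in {1, 2, 3, 4}.

    theta in (-pi/2, pi/2), phi in (-pi, pi); only meaningful while
    |phi - m*pi/2| < pi/4, i.e., the angular span covered by face m
    (an assumed face-selection convention).
    """
    x = (W / 2.0) * np.tan(phi - m * np.pi / 2.0)
    y = -(H / 2.0) * np.tan(theta) / np.cos(phi - m * np.pi / 2.0)
    return x, y

def polar_face_coords(theta, phi, n, W, H):
    """Eq. (15): coordinates on the top (n = 0) or bottom (n = 1) face."""
    r = np.tan(np.pi / 2.0 - theta)   # radial distance from the face center
    x = (W / 2.0) * r * np.sin(phi)
    y = (H / 2.0) * r * np.cos(phi + n * np.pi)
    return x, y

# Usage sketch: sample the (theta, phi) grid of every ERP pixel, pick the
# face whose angular range contains it, then bilinearly sample that face.
```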
Photoconsistency analysis: As shown in Tab. I, we compare the existing panoramic optical flow datasets. SynWoodScape [28], OmniFlowNet [22], and Replica360 [23] are three small datasets for evaluating panoramic optical flow. Due to their small size, they are not suitable for training neural-network-based methods. We further explore photoconsistency [67] by introducing the photometric error (PE) and the warped photometric error (WPE) to evaluate the ground-truth optical flow quality of a dataset when only the forward flow is given:

$$PE = \frac{1}{HW}\sum_{x} \left|I_1(x) - I_2(x)\right|, \tag{16}$$

$$WPE = \frac{1}{HW}\sum_{x} \left|I_1\!\big(x + V_{GT}(x)\big) - I_2(x)\right|, \tag{17}$$
where x is the pixel index and V_{GT} is the ground-truth flow field. Obviously, the quality of the ground-truth flow is high when WPE is significantly lower than PE, i.e., a high-quality optical flow field can convert one image into the next as closely as possible [72]. We consider forward optical flow that yields a reduction in interpolation error of less than 10% to be medium/low-quality ground truth. We perform ground-truth quality analysis on the popular perspective optical flow datasets [32], [66], [70] and these three panorama datasets separately, and the results are shown in Tab. II. Compared to the public OmniFlowNet and our FlowScape datasets, the ground-truth flow of the Replica360 dataset appears unreliable. Consequently, our quantitative evaluations are performed on the first two datasets.
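A minimal NumPy sketch of Eqs. (16)-(17) is shown below, assuming single- or three-channel images and nearest-neighbour warping with horizontal wrap-around; the paper's exact interpolation scheme is not specified here, so this is illustrative rather than the authors' evaluation code.

```python
import numpy as np

def photometric_errors(I1, I2, V_gt):
    """Eqs. (16)-(17): PE and WPE for a forward ground-truth flow.

    I1, I2: (H, W) or (H, W, 3) float arrays; V_gt: (H, W, 2) flow from
    frame 1 to frame 2 in (dx, dy) order. For color images the mean also
    runs over channels. Nearest-neighbour warping is an assumption here.
    """
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    pe = np.abs(I1 - I2).mean()                       # Eq. (16)

    H, W = I1.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.rint(xs + V_gt[..., 0]).astype(int) % W   # wrap: panorama is cyclic
    yw = np.clip(np.rint(ys + V_gt[..., 1]).astype(int), 0, H - 1)
    wpe = np.abs(I1[yw, xw] - I2).mean()              # Eq. (17)
    return pe, wpe
```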
V. EXPERIMENTS
We conduct experiments using two typical learning-based flow methods [19], [20] to verify the proposed PanoFlow framework, and confirm the role of its key components through ablation experiments. For OmniFlowNet [22] and Yuan et al. [23], we use their officially released codes for testing. Unfortunately, neither the code nor the dataset of LiteFlowNet360 [50] is publicly available, so we cannot make a fair comparison with it. Since OmniFlowNet is an adaptive method designed for learning-based optical flow networks, we additionally upgrade its backbone from LiteFlowNet2 [48] to RAFT [19] and CSFlow [20] for quantitative experiments, to demonstrate that our PanoFlow framework is more generic and effective. We further conduct qualitative comparisons on the public outdoor panoramic dataset OmniPhotos [29] and our PAL-collected panoramic videos.
A. Training Details
Following previous works, we pretrain our model using the FlyingChairs [32] → FlyingThings [70] schedule, followed by finetuning on our FlowScape dataset. We divide FlowScape into 5,000/1,400 image pairs for the train/test subsets. Considering that sunny days are the most common weather condition, the test set of FlowScape covers sunny (57.1%), cloud (14.3%), fog (14.3%), and rain (14.3%). We train our model on an RTX 3090 GPU, implemented in PyTorch. We pretrain on FlyingChairs for 100k iterations with a batch size of 10, then train for 100k iterations on FlyingThings3D with a batch size of 6. Finally, we finetune on FlowScape with a batch size of 6 for another 100k iterations, using the weights from the pretrained model. The ablation experiments are performed with 100k training iterations on Chairs, also with a batch size of 10. We time our method using an RTX 3090 GPU. The GRU iteration number is set to 12 during both training and inference. We follow RAFT [19] for data augmentation. All experiments use the same augmentations, including occlusion augmentation [73], random rescaling, brightness perturbation, as well as contrast, saturation, and hue augmentation.
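For concreteness, the schedule above can be summarized as a small Python configuration; the names below are hypothetical and do not mirror the actual PanoFlow training scripts.

```python
# Hypothetical summary of the training recipe described above.
TRAIN_SCHEDULE = [
    # (stage, dataset, iterations, batch_size)
    ("pretrain", "FlyingChairs", 100_000, 10),
    ("pretrain", "FlyingThings3D", 100_000, 6),
    ("finetune", "FlowScape", 100_000, 6),  # 5,000 train / 1,400 test pairs
]

# Augmentations shared by all experiments (following RAFT [19]).
AUGMENTATIONS = [
    "occlusion",       # occlusion augmentation [73]
    "random_rescale",
    "brightness", "contrast", "saturation", "hue",
]

GRU_ITERATIONS = 12    # used during both training and inference
```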
B. PanoFlow on FlowScape
We evaluate PanoFlow on the FlowScape dataset using the test split. Results are shown in Tab. III, where we split the results based on the weather conditions. The best results are bolded, the second best are underlined. We term a method using the PanoFlow framework PanoFlow (·), and use * and ** to distinguish models trained with the FDA-R and FDA-E methods, respectively. C+T means that the models are trained on FlyingChairs (C) and FlyingThings (T). F indicates methods using only the FlowScape (F) train split for finetuning. When using C+T for training, our method achieves an 11.7% error reduction for RAFT, and a 12.4% error reduction for CSFlow. The results of CSFlow are slightly better than those of RAFT, which demonstrates its better cross-dataset generalizability. After finetuning on FlowScape, estimating flow under the PanoFlow framework further improves the accuracy: our PanoFlow (CSFlow) improves EPE from 4.47 to 3.25 (↑27.3%). Interestingly, FDA-R also makes it easier for the model to cope with ERP deformations when trained on perspective datasets. When we turn off FDA and train on FlowScape, the FDA-E models have a slightly better overall accuracy than the FDA-R ones, but this advantage does not hold in all weather conditions. We believe this is because, by introducing deformed optical flow fields into the training process, changed randomly in each iteration instead of being fixed, our model is able to extract robust features for computing visual similarity across different distortion modalities.
C. Ablation Studies
To demonstrate the role of each core module in the proposed PanoFlow framework, we perform ablation studies on FlowScape using the well-known RAFT structure [19]. End-Point-Error (EPE) is used as the evaluation metric. We now describe the findings of each study. Flow Distortion Augmentation: We explore the effect of the two flow distortion variants on the model's ability to adapt from the pinhole to the panoramic domain. The results are shown in Tab. IV. Both FDA-R and FDA-E help models overcome ERP deformation, which indicates that distorted optical flow is beneficial for learning robust features. Although the total number of effectively supervised pixels is reduced in FDA-E, its modality is closer to ERP, so the model using FDA-E gains an advantage. In the following ablation experiments, we use the FDA-E model by default. Core Components: Tab. V shows how performance varies as each core component (FDA-E: flow distortion augmentation in ERP format; CFE: cyclic flow estimation; DCN: deformable receptive field encoder) of our model is removed. We can see that every component contributes to the overall performance, and that CFE has the greatest impact on accuracy. This result is surprising, considering that CFE can be used without any retraining. It also reveals that the PanoFlow framework can easily benefit from advances in general optical flow networks. When all the key components are in place, the model performs optimally in all weather conditions. In the following experiments, we use the "full" version of our method (last row of Tab. V). Cyclic Flow Estimation: We additionally conduct an ablation study based on PanoFlow (RAFT) finetuned on FlowScape to further investigate how the CFE setting affects accuracy and efficiency. Tab. VI shows that CFE improves the performance the most in the default setting. Circular Convolution: To explore the ability of circular convolution to capture large-displacement cyclic visual similarity, we replace the convolutional layers in the model with circular convolutions; the experimental results show that they do not help performance. We believe this is because circular convolution uses a simple padding operation to warp the image, and the introduced cyclicity is insufficient, considering that the end point of the 360° optical flow may fall within the area [0, W/2] outside the left and right boundaries of the panoramic image. Most of the flow vectors are still given in the direction of traditional optical flow, which puts this variant at a disadvantage in 360° flow estimation (↓19.5%). Double Estimation: A naive idea is to swap the left and right regions of the images directly, estimate twice, and take the respective minimum values. This does improve the accuracy, but the time complexity is also doubled, and the false image boundary introduced by the swap operation confuses the model during encoding. Half Zero Padding: Based on the above observations, we naturally ask whether the other region's features interfere with the results of the region of interest during decoding. Thus, we try replacing half of the feature maps with empty tensors, resulting in one encoding and four decodings. We find that it has no advantage over the default setting. Half Same Padding: We further replace the zero features with the same features as the region of interest.
The same features make the model face two confusing scene cues at the same time when computing visual similarity, which leads to a severe performance regression. Default: The performance improvement brought by CFE is the most significant in the default setting, and its time complexity is only modest. We further explore the horizontal distribution of the gain introduced by CFE. As shown in Fig. 8, CFE improves the model's ability to cope with cross-boundary optical flow, which is an essentially difficult part of panoramic flow estimation. On FlowScape, CFE seems to cause a slight degradation near the right boundary, which we attribute to the fact that the road features of FlowScape are highly similar, causing the model to confuse the roads on both sides of the boundary during cyclic inference; moreover, our virtual collection vehicle drives straight ahead, resulting in less cross-boundary optical flow. On the OmniFlowNet dataset, the accuracy is significantly improved in the 270°∼360° range of the FoV. This is reasonable because the dataset only contains forward and rightward movements, so boundary crossings generally occur on the right side of the image. Considering real-world cases, a vehicle cannot move completely straight along the lane line, and traffic accidents might occur when the vehicle turns; CFE is therefore well suited to real driving scenarios.
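To make the contrast between the ablated variants concrete, the sketch below implements the Double Estimation baseline next to the default CFE; `model` is a placeholder for a full flow network, and the magnitude-based selection mirrors the sketch after Eq. (13). This is our illustrative reading of the ablation, not the released code.

```python
import torch

def swap_halves(t):
    """Horizontally roll a panorama (or its features) by half the width."""
    a, b = torch.chunk(t, 2, dim=-1)
    return torch.cat([b, a], dim=-1)

def double_estimation(I1, I2, model):
    """Ablation baseline: run the *full* network twice on swapped images.

    This doubles both encoder and decoder cost (0.18s vs 0.13s in Tab. VI)
    and introduces a false seam at the swapped image boundary.
    """
    V = model(I1, I2)
    Vp = swap_halves(model(swap_halves(I1), swap_halves(I2)))
    keep = V.norm(dim=1, keepdim=True) <= Vp.norm(dim=1, keepdim=True)
    return torch.where(keep, V, Vp)

# The default CFE instead swaps *feature* halves after a single encoding,
# so only the decoder runs twice and no false image seam is ever encoded.
```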
D. Comparison with the State-of-the-Art
Quantitative Comparison on Synthetic Data: OmniFlowNet [22] is a state-of-the-art CNN adaptation model for optical flow estimation in omnidirectional images, which can be built on general CNN architectures for perspective images. We reproduce OmniFlowNet using MMFlow [74] and compare it with our model. Since OmniFlowNet is built on LiteFlowNet2 [48], which is inconsistent with our baseline, we also apply the architecture of OmniFlowNet to RAFT and CSFlow. As shown in Tab. VII, we evaluate the models on the OmniFlowNet dataset [22] with all three scenarios (i.e., CartoonTree (Cart.), Forest, LowPolyModels (Poly.)) and on our FlowScape dataset. All models are trained on FlyingChairs (C) + FlyingThings (T), with "-ft" indicating that the model was additionally fine-tuned on the FlowScape data. We also report the accuracy of the icosahedron tangent-plane panoramic flow estimation method [23] on both datasets. When the method of OmniFlowNet is applied to RAFT (OmniFlowNet (RAFT)), the results improve to some extent (↑5.1% on the OmniFlowNet dataset). However, PanoFlow (RAFT)** improves the performance much more significantly (↑43.1% on the OmniFlowNet dataset). After finetuning on FlowScape, the accuracy of both networks is further improved, indicating that the FlowScape dataset is effective for panoramic optical flow tasks. When training with FDA-R, PanoFlow (CSFlow)*-ft achieves better performance than PanoFlow (RAFT)*-ft on both datasets, which shows that our CSFlow structure is better at learning robust features across different distortion modalities. PanoFlow (CSFlow)**-ft ranks 1st on our FlowScape dataset (3.25 pixels), while PanoFlow (RAFT)**-ft gives better results on the OmniFlowNet dataset (3.17 pixels), which we attribute to the domain gap between the two datasets. On the OmniFlowNet dataset, our approach achieves a 55.5% error reduction compared to the current state-of-the-art panoramic flow method. On FlowScape, PanoFlow also outperforms OmniFlowNet by a large margin, dramatically decreasing EPE from 22.16 to 3.25. We also perform a speed test of the existing algorithms on the FlowScape dataset, in which the average flow computation time over 100 frames is reported. As shown in the last column of Tab. VII, PanoFlow yields state-of-the-art accuracy while maintaining high efficiency (0.14s), indicating that the proposed method achieves a better trade-off between accuracy and latency than existing methods.
We present the error heatmap analysis on both datasets in Fig. 9. Although its convolution kernels adopt a fixed ERP deformation, OmniFlowNet [22] still has difficulty dealing with high-latitude distortion. This defect is especially pronounced on FlowScape, since our dataset considers both foreground and background panoramic optical flow, while the OmniFlowNet dataset only gives foreground ground-truth flow. On the other hand, when the pre-trained RAFT is converted to OmniFlowNet (RAFT), we observe mosaic-like estimation errors in the high-latitude regions of the ERP, indicating the insufficient generality of the OmniFlowNet method. Since Yuan et al. estimate the panoramic flow on both the icosahedron and the cubemap multiple times, their results tend to show many tangent-plane boundaries. Thanks to our FDA and deformable receptive field encoder, PanoFlow can easily handle the high-latitude deformation of ERP. Noticeably, applying CFE produces a large improvement near the boundary, which is an essentially difficult part of 360° flow estimation (see the third row of Fig. 9). Qualitative Comparisons in the Real World: Considering that the ground truth of dense flow fields in the real world is almost impossible to obtain [66], [70], we qualitatively compare the flow on both the public OmniPhotos dataset [29], which has no ground truth, and the PAL video streams collected by ourselves, to verify the synthetic-to-real generalization ability of PanoFlow. As shown in Fig. 10, PanoFlow gives high-quality dense optical flow in real scenes. OmniFlowNet and Yuan et al., by contrast, suffer from limited distortion-aware capacity and are thus not accurate enough in the real-world domain. Although we additionally upgrade the backbone of OmniFlowNet to RAFT, its ability to capture large displacements is still insufficient compared with our method, and OmniFlowNet (RAFT) again exhibits mosaic-like errors in real scenes, confirming the lack of generality of the method. Moreover, PanoFlow is able to capture adequate details of real-world images, which is evidently better than previous works.
To further investigate the practical performance of the proposed PanoFlow solution on real data, we install a panoramic annular lens (PAL) system with an FoV of 60°×360° on top of a mobile robot (see Fig. 12), which navigates around the campus under remote control. As shown in Fig. 11, we collect panoramic videos of campus street scenes and compare our approach with the results given by OmniFlowNet [22] and the method of Yuan et al. [23]. Although the robot's perspective and FoV differ significantly from those of the virtual camera used to collect FlowScape for training, PanoFlow still gives clear and sharp optical flow estimates. For the other methods, estimating directly on PAL images leads to severe failures. Therefore, we convert the PAL video stream to the standard ERP format with an aspect ratio of 2:1 before estimation for each method except PanoFlow. This also reveals that these methods face additional computational overhead when used on real panoramic footage, as they are only designed for complete ERP data.
Specifically, for pedestrians and fast-moving vehicles in the foreground of the panoramic images, PanoFlow does not confuse them with the motion of the background, even when they are deformed to varying degrees. Edges are blurred and indistinguishable in OmniFlowNet's background flow estimation, whereas the outlines of street scenes remain sharp and recognizable in PanoFlow's output. Compared with the method proposed by Yuan et al., PanoFlow gives optical flow with better continuity, and the detailed features are also well preserved. We conclude that our method outperforms the previous state-of-the-art work for both foreground and background motion estimation, showing excellent synthetic-to-real generalizability.
E. Efficiency Analysis
We report the parameter counts, memory requirements during inference, inference time, and accuracy, as shown in Tab. VIII. Accuracy is determined by the performance on FlowScape (test) after training on C+T+F. The image size is 512×1024. RAFT takes 2.67GB of memory, while our approach takes 2.78GB. Due to the additional global context introduced by the decoder in CSFlow, the memory consumption of PanoFlow (CSFlow) is larger than the former. Overall, the results demonstrate that the computational overhead of PanoFlow is low, in contrast to the significant performance improvement, and the framework is therefore suitable for intelligent vehicles to perceive surrounding temporal cues.

Fig. 11. Qualitative comparison of existing methods on outdoor-campus 360° image sequences captured by our PAL camera. PanoFlow gives optical flow with clear and sharp boundaries for both foreground and background, which indicates stronger generalization ability in the real world.
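As a rough sketch of how the quantities in Tab. VIII can be measured, the following PyTorch snippet counts parameters, peak GPU memory, and averaged latency; it is not the authors' benchmarking code, and `model(I1, I2)` is an assumed call signature.

```python
import time
import torch

def profile(model, H=512, W=1024, device="cuda", iters=20):
    """Measure parameter count, peak GPU memory (GB), and latency (s)."""
    params = sum(p.numel() for p in model.parameters())
    I1 = torch.randn(1, 3, H, W, device=device)
    I2 = torch.randn(1, 3, H, W, device=device)
    torch.cuda.reset_peak_memory_stats(device)
    with torch.no_grad():
        model(I1, I2)                     # warm-up pass
        torch.cuda.synchronize(device)
        t0 = time.perf_counter()
        for _ in range(iters):
            model(I1, I2)
        torch.cuda.synchronize(device)
    latency = (time.perf_counter() - t0) / iters
    memory_gb = torch.cuda.max_memory_allocated(device) / 1024**3
    return params, memory_gb, latency
```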
F. Failure Case Analysis
As shown in Fig. 13, when there is overexposure in the middle area of the image, the optical flow continuity on both sides is reduced during cyclic estimation, which is reasonable because the features on both sides become difficult to distinguish and confuse the decoder of the optical flow network. Overcoming this limitation requires some form of supervision or better backbones, e.g., reasoning about panoramic semantics, reasoning about spatio-temporal features in video, or reasoning about fusion with high-dynamic-range sensors such as event cameras. Future work can be dedicated to transferring our method to these approaches.
Fig. 13. A failure case of PanoFlow. When there is overexposure and a lack of texture near the middle of the panorama, CFE may have difficulty distinguishing the features on both sides, resulting in reduced optical flow continuity near the middle of the estimation results.
VI. CONCLUSION
In this paper, we proposed PanoFlow, a flexible framework for estimating 360° optical flow using flow distortion augmentation, cyclic flow estimation, and a deformable receptive field encoder. We also proposed FlowScape, a publicly available synthetic panoramic optical flow dataset, which can be used for training and evaluation. We have demonstrated through extensive quantitative experiments that PanoFlow is compatible with any optical flow method of an encoder-decoder structure, and that it significantly improves the accuracy of panoramic flow estimation while ensuring computational efficiency. PanoFlow achieves state-of-the-art performance on both the public OmniFlowNet dataset and our FlowScape. PanoFlow also demonstrates strong synthetic-to-real generalizability in the real world, giving high-quality panoramic flow fields for both foreground and background. We look forward to further exploring the adaptability of the PanoFlow framework for other downstream panoramic tasks.
In the future, we aim to explore other panoramic scene understanding tasks, such as the fusion of panoramic cameras and LiDAR sensors for complete semantic and temporal surrounding perception. Furthermore, we plan to exploit synthetic data to study robust scene perception under corner cases such as risky driving and sensor failures, to alleviate the long-tail problem in autonomous driving. We also intend to look into 3D scene flow estimation based on panoramic cameras. In addition to panoramic cameras with an ultra-wide FoV, we are also interested in exploring optical flow estimation for event cameras with an ultra-high dynamic range.
Fig. 1. (a) Raw panoramic annular image captured by our mobile perception system; (b)-(c) the proposed panoramic optical flow estimation on real-world surrounding views for 360° seamless temporal scene understanding.
Fig. 2. Schematic diagram of cyclic optical flow. (a) The optical flow and the cyclic optical flow form a complementary great circle on the spherical coordinate system; (b) the cyclic optical flow on an equirectangular image has a relatively small displacement and spans the horizontal boundaries of the panorama.
Fig. 3. Visualizations of FDA-R and FDA-E on the Sintel dataset [66]. Notice that the distortion of the image does not affect its intensity, while the color of the optical flow changes with the modulus. (a) Barrel distortion in FDA-R; (b) pillow distortion in FDA-R; (c) low-latitude distortion in FDA-E; (d) high-latitude distortion in FDA-E.
Fig. 4. The comparison between RGB image distortion and optical flow distortion. Since the optical flow of grid points has also been modified during distortion, it should be calibrated before interpolation.
Fig. 5. Cyclic Flow Estimation. Partitioned feature maps are extracted from the encoder for the attended frame and the target frame. According to the cyclicity of the left and right boundaries of a panoramic image, the features extracted via the encoder are regrouped into two feature pairs and sent to the decoder to obtain the complementary optical flow fields. The 360° flow can finally be obtained via min(·) operations.
Fig. 7. Schematic diagram of the virtual pinhole camera placement. The six virtual cameras are located at the same spatial viewpoint, and their orientations are orthogonal to each other. The collected cubemap panoramas are reprojected by ERP to obtain full field-of-view equirectangular frames with dense flow fields and semantics.
Fig. 8. The distribution of the optical flow EPE variation with the horizontal FoV introduced by the CFE method. (a) Statistical results on the FlowScape test split; (b) statistical results on the OmniFlowNet dataset.
Fig. 10. Qualitative results on the OmniPhotos [29] dataset. PanoFlow successfully generalizes from the synthetic dataset to real scenes, and the panoramic flow field visualizations are clean and discriminative while preserving the details of the image.
Fig. 12. (a) Our outdoor mobile robot equipped with a Panoramic Annular Lens (PAL) camera and a laptop; (b) the PAL camera for capturing outdoor panoramic video streams.
TABLE I: Comparison of existing panoramic datasets for optical flow estimation.

| Dataset | Groundtruth Quality | Resolution | Frames |
|---|---|---|---|
| OmniFlowNet Dataset [22] | high | 384×768 | 1500 |
| Replica360 [23] | medium/low | 640×1280 | 954 |
| SynWoodScape [28] | high | 966×1280 | 500 |
| Ours (FlowScape) | high | 512×1024 | 6400 |
TABLE II: Ground-truth quality analysis of optical flow datasets.

| Modal | Dataset | PE | WPE | Diff. | GT Quality |
|---|---|---|---|---|---|
| Perspective | FlyingChairs [32] | 23.35 | 12.43 | ↑46.8% | high |
| Perspective | FlyingThings [70] | 25.57 | 19.26 | ↑24.7% | high |
| Perspective | MPI-Sintel [66] | 13.93 | 7.12 | ↑48.9% | high |
| Panorama | SynWoodScape [28] | 10.22 | 7.20 | ↑29.5% | high |
| Panorama | OmniFlowNet [22] | 8.47 | 5.75 | ↑32.1% | high |
| Panorama | Replica360 [23] | 19.76 | 18.63 | ↑5.7% | medium/low |
| Panorama | Ours (FlowScape) | 5.96 | 3.07 | ↑48.7% | high |
TABLE III: Quantitative results on the FlowScape dataset (EPE). * denotes the model trained with FDA-R; ** denotes the model trained with FDA-E.

| Training Data | Method | Sunny | Cloud | Fog | Rain | All (test) | Diff. |
|---|---|---|---|---|---|---|---|
| C+T | RAFT [19] | 16.57 | 11.16 | 15.04 | 17.00 | 15.64 | - |
| C+T | PanoFlow (RAFT)* | 14.93 | 11.25 | 13.88 | 13.36 | 14.03 | ↑10.3% |
| C+T | PanoFlow (RAFT)** | 14.66 | 11.10 | 13.57 | 13.38 | 13.81 | ↑11.7% |
| C+T | CSFlow [20] | 16.32 | 11.16 | 14.99 | 16.04 | 15.35 | - |
| C+T | PanoFlow (CSFlow)* | 14.74 | 11.18 | 13.64 | 13.42 | 13.89 | ↑9.5% |
| C+T | PanoFlow (CSFlow)** | 14.27 | 10.74 | 13.03 | 13.34 | 13.45 | ↑12.4% |
| C+T+F | RAFT [19] | 4.77 | 1.52 | 4.84 | 6.07 | 4.50 | - |
| C+T+F | PanoFlow (RAFT)* | 3.62 | 1.38 | 3.60 | 4.25 | 3.39 | ↑24.7% |
| C+T+F | PanoFlow (RAFT)** | 3.58 | 1.41 | 3.63 | 4.17 | 3.36 | ↑25.3% |
| C+T+F | CSFlow [20] | 4.70 | 1.46 | 4.79 | 6.24 | 4.47 | - |
| C+T+F | PanoFlow (CSFlow)* | 3.56 | 1.47 | 3.56 | 3.94 | 3.31 | ↑26.0% |
| C+T+F | PanoFlow (CSFlow)** | 3.46 | 1.35 | 3.59 | 3.98 | 3.25 | ↑27.3% |
TABLE IV: Ablations on flow distortion augmentation (EPE).

| Augmentation | Sunny | Cloud | Fog | Rain | All (test) |
|---|---|---|---|---|---|
| none | 18.53 | 12.88 | 17.00 | 18.02 | 17.43 |
| FDA-R | 17.86 | 12.63 | 16.60 | 17.59 | 16.89 |
| FDA-E | 16.65 | 11.67 | 15.48 | 16.50 | 15.75 |
TABLE V: Ablations on the core components of PanoFlow (EPE). FDA-E: flow distortion augmentation in ERP format; CFE: cyclic flow estimation; DCN: deformable receptive field encoder.

| FDA-E | CFE | DCN | Sunny | Cloud | Fog | Rain | All (test) |
|---|---|---|---|---|---|---|---|
| - | - | - | 18.53 | 12.88 | 17.00 | 18.02 | 17.43 |
| ✓ | - | - | 16.65 | 11.67 | 15.48 | 16.50 | 15.75 |
| - | ✓ | - | 16.56 | 12.46 | 15.25 | 15.35 | 15.62 |
| - | - | ✓ | 18.11 | 12.69 | 16.66 | 17.78 | 17.08 |
| ✓ | ✓ | - | 15.93 | 12.13 | 14.69 | 15.03 | 15.08 |
| ✓ | - | ✓ | 16.44 | 11.41 | 15.25 | 16.07 | 15.50 |
| - | ✓ | ✓ | 14.72 | 11.28 | 13.69 | 13.84 | 13.96 |
| ✓ | ✓ | ✓ | 14.55 | 11.09 | 13.57 | 13.42 | 13.75 |
TABLE VI: Cyclic flow estimation ablation (EPE on the FlowScape test split).

| CFE Settings | Sunny | Cloud | Fog | Rain | Avg. | Diff. | Latency |
|---|---|---|---|---|---|---|---|
| Baseline | 4.77 | 1.52 | 4.84 | 6.07 | 4.50 | - | 0.10s |
| Circular Convolution | 5.72 | 2.73 | 6.02 | 7.50 | 5.59 | ↓19.5% | 0.11s |
| Double Estimation | 3.81 | 1.68 | 3.86 | 4.29 | 3.58 | ↑20.4% | 0.18s |
| Half Zero Padding | 3.82 | 1.54 | 3.74 | 4.57 | 3.59 | ↑20.2% | 0.24s |
| Half Same Padding | 31.5 | 23.5 | 22.1 | 35.8 | 29.6 | ↓558% | 0.13s |
| Default | 3.58 | 1.41 | 3.63 | 4.17 | 3.36 | ↑25.3% | 0.13s |
TABLE VII: Comparison with the state of the art. "-ft" denotes models fine-tuned on FlowScape. EPE is reported on the OmniFlowNet dataset (CartoonTree (Cart.), Forest, LowPolyModels (Poly.)) and on the FlowScape test split.

| Method | Cart. | Forest | Poly. | Avg. | Diff. | FlowScape Avg. | Diff. | Latency |
|---|---|---|---|---|---|---|---|---|
| OmniFlowNet [22] | 5.37 | 8.68 | 7.32 | 7.12 | - | 22.16 | - | 0.02s |
| Yuan et al. [23] | 9.13 | 14.27 | 10.22 | 11.21 | ↓57.4% | 20.35 | ↑8.17% | 10.48s |
| OmniFlowNet (RAFT) | 4.84 | 8.70 | 6.74 | 6.76 | ↑5.06% | 19.61 | ↑11.5% | 0.43s |
| OmniFlowNet (CSFlow) | 4.74 | 8.66 | 6.52 | 6.64 | ↑6.74% | 19.47 | ↑12.1% | 0.44s |
| OmniFlowNet (RAFT)-ft | 3.55 | 7.28 | 5.28 | 5.37 | ↑24.6% | 14.33 | ↑35.3% | 0.43s |
| OmniFlowNet (CSFlow)-ft | 3.57 | 7.21 | 5.50 | 5.43 | ↑23.7% | 15.33 | ↑30.8% | 0.44s |
| PanoFlow (RAFT)* | 3.95 | 4.77 | 6.78 | 5.17 | ↑27.4% | 14.03 | ↑36.7% | 0.13s |
| PanoFlow (RAFT)** | 2.71 | 4.14 | 5.29 | 4.05 | ↑43.1% | 13.81 | ↑37.7% | 0.13s |
| PanoFlow (RAFT)*-ft | 2.31 | 3.53 | 4.91 | 3.58 | ↑49.8% | 3.39 | ↑84.7% | 0.13s |
| PanoFlow (RAFT)**-ft | 1.97 | 3.29 | 4.24 | 3.17 | ↑55.5% | 3.36 | ↑84.8% | 0.13s |
| PanoFlow (CSFlow)* | 3.81 | 4.76 | 6.92 | 5.16 | ↑27.5% | 13.89 | ↑37.3% | 0.14s |
| PanoFlow (CSFlow)** | 2.83 | 4.58 | 5.57 | 4.33 | ↑39.2% | 13.45 | ↑39.3% | 0.14s |
| PanoFlow (CSFlow)*-ft | 2.02 | 3.51 | 4.48 | 3.34 | ↑53.1% | 3.31 | ↑85.1% | 0.14s |
| PanoFlow (CSFlow)**-ft | 1.92 | 3.53 | 4.37 | 3.27 | ↑54.1% | 3.25 | ↑85.3% | 0.14s |
Fig. 9. Error heatmap visualizations on the FlowScape test split and the OmniFlowNet dataset [22]. PanoFlow can easily cope with the challenges introduced by image distortion in high-latitude regions and provides a clear and smooth panoramic flow field in one shot.
TABLE VIII: Running time, parameters, and memory requirement.

| Method | Parameters | GPU Memory | Time | ΔAccuracy |
|---|---|---|---|---|
| RAFT [19] | 5.3M | 2.67GB | 0.10s | - |
| PanoFlow (RAFT)* | 5.3M | 2.78GB | 0.13s | ↑24.7% |
| PanoFlow (RAFT)** | 5.3M | 2.78GB | 0.13s | ↑25.3% |
| CSFlow [20] | 5.6M | 3.42GB | 0.10s | - |
| PanoFlow (CSFlow)* | 5.6M | 4.04GB | 0.14s | ↑26.0% |
| PanoFlow (CSFlow)** | 5.6M | 4.04GB | 0.14s | ↑27.3% |
REFERENCES

[1] G. Wang, C. Zhang, H. Wang, J. Wang, Y. Wang, and X. Wang, "Unsupervised learning of depth, optical flow and pose with occlusion from 3D geometry," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 1, pp. 308-320, 2022.
[2] H. Liu, T. Lu, Y. Xu, J. Liu, W. Li, and L. Chen, "CamLiFlow: Bidirectional camera-LiDAR fusion for joint optical flow and scene flow estimation," in Proc. CVPR, 2022, pp. 5791-5801.
[3] V. Brebion, J. Moreau, and F. Davoine, "Real-time optical flow for vehicular perception with low- and high-resolution event cameras," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 9, pp. 15066-15078, 2022.
[4] G. Wang, S. Ren, and H. Wang, "Unsupervised learning of optical flow with non-occlusion from geometry," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 11, pp. 20850-20859, 2022.
[5] J. Fang, J. Qiao, J. Bai, H. Yu, and J. Xue, "Traffic accident detection via self-supervised consistency learning in driving scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9601-9614, 2022.
[6] K. Yang, J. Zhang, S. Reiß, X. Hu, and R. Stiefelhagen, "Capturing omni-range context for omnidirectional segmentation," in Proc. CVPR, 2021, pp. 1376-1386.
[7] R. Gadde, V. Jampani, and P. V. Gehler, "Semantic video CNNs through representation warping," in Proc. ICCV, 2017, pp. 4463-4472.
[8] N. Hirose, F. Xia, R. Martín-Martín, A. Sadeghian, and S. Savarese, "Deep visual MPC-policy learning for navigation," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3184-3191, 2019.
[9] D. Caruso, J. Engel, and D. Cremers, "Large-scale direct SLAM for omnidirectional cameras," in Proc. IROS, 2015, pp. 141-148.
[10] X. Zhu, Y. Xiong, J. Dai, L. Yuan, and Y. Wei, "Deep feature flow for video recognition," in Proc. CVPR, 2017, pp. 4141-4150.
[11] Z. Min, Y. Yang, and E. Dunn, "VOLDOR: Visual odometry from log-logistic dense optical flow residuals," in Proc. CVPR, 2020, pp. 4897-4908.
[12] Z. Teed and J. Deng, "DROID-SLAM: Deep visual SLAM for monocular, stereo, and RGB-D cameras," in Proc. NeurIPS, 2021, pp. 16558-16569.
[13] S. Gao, K. Yang, H. Shi, K. Wang, and J. Bai, "Review on panoramic imaging and its applications in scene understanding," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-34, 2022.
[14] L. Chen et al., "Surrounding vehicle detection using an FPGA panoramic camera and deep CNNs," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 12, pp. 5110-5122, 2020.
[15] R. Liu, G. Zhang, J. Wang, and S. Zhao, "Cross-modal 360° depth completion and reconstruction for large-scale indoor environment," IEEE Transactions on Intelligent Transportation Systems, 2022.
[16] A. Petrovai and S. Nedevschi, "Semantic cameras for 360-degree environment perception in automated urban driving," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 10, pp. 17271-17283, 2022.
[17] J. S. Berrio, M. Shan, S. Worrall, and E. Nebot, "Camera-LIDAR integration: Probabilistic sensor fusion for semantic mapping," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 7637-7652, 2022.
[18] D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, "PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume," in Proc. CVPR, 2018, pp. 8934-8943.
[19] Z. Teed and J. Deng, "RAFT: Recurrent all-pairs field transforms for optical flow," in Proc. ECCV, vol. 12347, 2020, pp. 402-419.
[20] H. Shi, Y. Zhou, K. Yang, X. Yin, and K. Wang, "CSFlow: Learning optical flow via cross strip correlation for autonomous driving," in Proc. IV, 2022.
[21] N. Zioulis, A. Karakottas, D. Zarpalas, and P. Daras, "OmniDepth: Dense depth estimation for indoors spherical panoramas," in Proc. ECCV, vol. 11210, 2018, pp. 453-471.
[22] C.-O. Artizzu, H. Zhang, G. Allibert, and C. Demonceaux, "OmniFlowNet: a perspective neural network adaptation for optical flow estimation in omnidirectional images," in Proc. ICPR, 2021, pp. 2657-2662.
[23] M. Yuan and C. Richardt, "360° optical flow using tangent images," in Proc. BMVC, 2021.
[24] K. Yang, X. Hu, L. M. Bergasa, E. Romera, and K. Wang, "PASS: Panoramic annular semantic segmentation," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 10, pp. 4171-4185, 2020.
[25] K. Yang, X. Hu, H. Chen, K. Xiang, K. Wang, and R. Stiefelhagen, "DS-PASS: Detail-sensitive panoramic annular semantic segmentation through SwaftNet for surrounding sensing," in Proc. IV, 2020, pp. 457-464.
[26] J. Dai et al., "Deformable convolutional networks," in Proc. ICCV, 2017, pp. 764-773.
[27] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, "CARLA: An open urban driving simulator," in Proc. CoRL, vol. 78, 2017, pp. 1-16.
[28] A. R. Sekkat et al., "SynWoodScape: Synthetic surround-view fisheye camera dataset for autonomous driving," IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 8502-8509, 2022.
[29] T. Bertel, M. Yuan, R. Lindroos, and C. Richardt, "OmniPhotos: casual 360° VR photography," ACM Transactions on Graphics (TOG), vol. 39, no. 6, pp. 1-12, 2020.
[30] B. K. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185-203, 1981.
[31] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. IJCAI, 1981, pp. 674-679.
[32] A. Dosovitskiy et al., "FlowNet: Learning optical flow with convolutional networks," in Proc. ICCV, 2015, pp. 2758-2766.
[33] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, "Deep End2End Voxel2Voxel prediction," in Proc. CVPRW, 2016, pp. 402-409.
[34] A. Ahmadi and I. Patras, "Unsupervised convolutional neural networks for motion estimation," in Proc. ICIP, 2016, pp. 1629-1633.
[35] J. Wulff and M. J. Black, "Efficient sparse-to-dense optical flow estimation using a learned basis and layers," in Proc. CVPR, 2015, pp. 120-130.
[36] F. Zhang, O. J. Woodford, V. Prisacariu, and P. H. S. Torr, "Separable flow: Learning motion cost volumes for optical flow estimation," in Proc. ICCV, 2021, pp. 10787-10797.
[37] S. Jiang, D. Campbell, Y. Lu, H. Li, and R. Hartley, "Learning to estimate hidden motions with global motion aggregation," in Proc. ICCV, 2021, pp. 9752-9761.
[38] S. Zhao, L. Zhao, Z. Zhang, E. Zhou, and D. Metaxas, "Global matching with overlapping attention for optical flow estimation," in Proc. CVPR, 2022, pp. 17592-17601.
[39] S. Bai, Z. Geng, Y. Savani, and J. Z. Kolter, "Deep equilibrium optical flow estimation," in Proc. CVPR, 2022, pp. 620-630.
[40] A. Luo, F. Yang, X. Li, and S. Liu, "Learning optical flow with kernel patch attention," in Proc. CVPR, 2022, pp. 8906-8915.
[41] Z. Zheng et al., "DIP: Deep inverse patchmatch for high-resolution optical flow," in Proc. CVPR, 2022, pp. 8925-8934.
[42] X. Sui et al., "CRAFT: Cross-attentional flow transformer for robust optical flow," in Proc. CVPR, 2022, pp. 17602-17611.
[43] H. Xu, J. Zhang, J. Cai, H. Rezatofighi, and D. Tao, "GMFlow: Learning optical flow via global matching," in Proc. CVPR, 2022, pp. 8121-8130.
[44] J. Jeong, J. M. Lin, F. Porikli, and N. Kwak, "Imposing consistency for optical flow estimation," in Proc. CVPR, 2022, pp. 3181-3191.
[45] P. Liu, M. Lyu, I. King, and J. Xu, "SelFlow: Self-supervised learning of optical flow," in Proc. CVPR, 2019, pp. 4571-4580.
[46] H.-Y. Tung, H.-W. Tung, E. Yumer, and K. Fragkiadaki, "Self-supervised learning of motion capture," in Proc. NeurIPS, 2017, pp. 5236-5246.
[47] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, "FlowNet 2.0: Evolution of optical flow estimation with deep networks," in Proc. CVPR, 2017, pp. 1647-1655.
[48] T.-W. Hui, X. Tang, and C. C. Loy, "A lightweight optical flow CNN - Revisiting data fidelity and regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 8, pp. 2555-2569, 2021.
[49] Z. Huang, X. Shi, C. Zhang, Q. Wang, K. C. Cheung, H. Qin, J. Dai, and H. Li, "FlowFormer: A transformer architecture for optical flow," arXiv preprint arXiv:2203.16194, 2022.
[50] K. Bhandari, Z. Zong, and Y. Yan, "Revisiting optical flow estimation in 360 videos," in Proc. ICPR, 2021, pp. 8196-8203.
[51] K. Bhandari, B. Duan, G. Liu, H. Latapie, Z. Zong, and Y. Yan, "Learning omnidirectional flow in 360-degree video via siamese representation," in Proc. ECCV, 2022, pp. 557-574.
[52] S. Im, H. Ha, F. Rameau, H.-G. Jeon, G. Choe, and I. S. Kweon, "All-around depth from small motion with a spherical panoramic camera," in Proc. ECCV, vol. 9907, 2016, pp. 156-172.
[53] H. Jiang, Z. Sheng, S. Zhu, Z. Dong, and R. Huang, "UniFuse: Unidirectional fusion for 360° panorama depth estimation," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1519-1526, 2021.
[54] C. Sun, M. Sun, and H.-T. Chen, "HoHoNet: 360 indoor holistic understanding with latent horizontal features," in Proc. CVPR, 2021, pp. 2573-2582.
[55] J. Zhang, K. Yang, C. Ma, S. Reiß, K. Peng, and R. Stiefelhagen, "Bending reality: Distortion-aware transformers for adapting to panoramic semantic segmentation," in Proc. CVPR, 2022, pp. 16917-16927.
[56] L. Deng, M. Yang, H. Li, T. Li, B. Hu, and C. Wang, "Restricted deformable convolution-based road scene semantic segmentation using surround view cameras," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 10, pp. 4350-4362, 2020.
[57] A. Jaus, K. Yang, and R. Stiefelhagen, "Panoramic panoptic segmentation: Insights into surrounding parsing for mobile agents via unsupervised contrastive learning," arXiv preprint arXiv:2206.10711, 2022.
[58] I. Armeni, S. Sax, A. R. Zamir, and S. Savarese, "Joint 2D-3D-semantic data for indoor scene understanding," arXiv preprint arXiv:1702.01105, 2017.
[59] J. Zhang, C. Ma, K. Yang, A. Roitberg, K. Peng, and R. Stiefelhagen, "Transfer beyond the field of view: Dense panoramic semantic segmentation via unsupervised domain adaptation," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9478-9491, 2022.
[60] Y. Liao, J. Xie, and A. Geiger, "KITTI-360: A novel dataset and benchmarks for urban scene understanding in 2D and 3D," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[61] S. Yogamani et al., "WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving," in Proc. ICCV, 2019, pp. 9307-9317.
[62] A. R. Sekkat, Y. Dupuis, P. Vasseur, and P. Honeine, "The OmniScape dataset," in Proc. ICRA, 2020, pp. 1603-1608.
[63] J. Mei et al., "Waymo open dataset: Panoramic video panoptic segmentation," in Proc. ECCV, 2022.
[64] R. Seidel, A. Apitzsch, and G. Hirtz, "OmniFlow: Human omnidirectional optical flow," in Proc. CVPRW, 2021, pp. 3678-3681.
[65] J. Straub et al., "The Replica dataset: A digital replica of indoor spaces," arXiv preprint arXiv:1906.05797, 2019.
[66] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, "A naturalistic open source movie for optical flow evaluation," in Proc. ECCV, vol. 7577, 2012, pp. 611-625.
[67] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," International Journal of Computer Vision, vol. 92, no. 1, pp. 1-31, 2011.
[68] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[69] D. Kondermann et al., "The HCI benchmark suite: Stereo and flow ground truth with uncertainties for urban autonomous driving," in Proc. CVPRW, 2016, pp. 19-28.
[70] N. Mayer et al., "A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation," in Proc. CVPR, 2016, pp. 4040-4048.
[71] H. Xu, J. Yang, J. Cai, J. Zhang, and X. Tong, "High-resolution optical flow from 1D attention and correlation," in Proc. ICCV, 2021, pp. 10478-10487.
[72] B. McCane, K. Novins, D. Crannitch, and B. Galvin, "On benchmarking optical flow," Computer Vision and Image Understanding, vol. 84, no. 1, pp. 126-143, 2001.
[73] G. Yang, J. Manela, M. Happold, and D. Ramanan, "Hierarchical deep stereo matching on high-resolution images," in Proc. CVPR, 2019, pp. 5515-5524.
[74] M. Contributors, "MMFlow: OpenMMLab optical flow toolbox and benchmark," https://github.com/open-mmlab/mmflow, 2021.
ABSTRACT:
This paper is concerned with the question of the existence of composition laws in the sum-overhistories approach to relativistic quantum mechanics and quantum cosmology, and its connection with the existence a canonical formulation. In non-relativistic quantum mechanics, the propagator is represented by a sum over histories in which the paths move forwards in time. The composition law of the propagator then follows from the fact that the paths intersect an intermediate surface of constant time once and only once, and a partition of the paths according to their crossing position may be affected. In relativistic quantum mechanics, by contrast, the propagators (or Green functions) may be represented by sums over histories in which the paths move backwards and forwards in time. They therefore intersect surfaces of constant time more than once, and the relativistic composition law, involving a normal derivative term, is not readily recovered. The principal technical aim of this paper is to show that the relativistic composition law may, in fact, be derived directly from a sum over histories by partitioning the paths according to their first crossing position of an intermediate surface. We review the various Green functions of the Klein-Gordon equation, and derive their composition laws. We obtain path integral representations for all Green functions except the causal one. We use the proper time representation, in which the path integral has the form of a non-relativistic sum over histories but integrated over time. The question of deriving the composition laws therefore reduces to the question of factoring the propagators of non-relativistic quantum mechanics across an arbitrary surface in configuration space. This may be achieved using a known result called the Path Decomposition Expansion (PDX). We give a proof of the PDX using a spacetime lattice definition of the Euclidean propagator. We use the PDX to derive the composition laws of relativistic quantum mechanics from the sum over histories. We also derive canonical representations of all of the Green functions of relativistic quantum mechanics, i.e., express them in the form x ′′ |x ′ , where the {|x } are a complete set of configuration space eigenstates. These representations make it clear why the Hadamard Green function G (1) does not obey a standard composition law. They also give a hint as to why the causal Green function does not appear to possess a sum over histories representation. We discuss the broader implications of our methods and results for quantum cosmology, and parameterized theories generally. We show that there is a close parallel between the existence of a composition law and the existence of a canonical formulation, in that both are dependent on the presence of a time-like Killing vector. We also show why certain naive composition laws that have been proposed in the past for quantum cosmology are incorrect. Our results suggest that the propagation amplitude between three-metrics in quantum cosmology, as constructed from the sum-over-histories, does not obey a composition law.
INTRODUCTION
Quantum theory, in both its development and applications, involves two strikingly different sets of mathematical tools. On the one hand there is the canonical approach, involving operators, states, Hilbert spaces and Hamiltonians. On the other, there is the path integral, involving sums over sets of histories. For most purposes, the distinction between these two methods is largely regarded as a matter of mathematical rigour or calculational convenience. There may, however, be a more fundamental distinction: one method could be more general than the other. If this is the case, then it is of particular interest to explore the connections between the two formulations, and discover the conditions under which a route from one method to the other can or cannot be found.
A particular context in which the possible distinction between these two quantization methods will be important is quantum cosmology. There, the canonical formulation suffers from a serious obstruction known as the "problem of time" [ish,kuch]. This is the problem that general relativity does not obviously supply the preferred time parameter so central to the formulation and interpretation of quantum theory. By contrast, in sum-over-histories formulations of quantum theory, the central notion is that of a quantummechanical history. The notion of time does not obviously enter in an essential way. Sum-over-histories formulations of quantum cosmology have therefore been promoted as promising candidates for a quantum theory of spacetime, because the problem of time is not as immediate or central, and may even be sidestepped completely [harnew]. In particular, as suggested by Hartle, a sum-over-histories formulation could exist even though a canonical formulation may not [harnew]. The broad aim of this paper is to explore this suggestion.
An object that one would expect to play an important role in sum-over-histories formulations of quantum cosmology is the "propagation amplitude" between three-metrics. Formally, it is given by a functional integral expression of the form,
G(h ′′ ij , h ′ ij ) = Dg µν exp (iS[g µν ])
Here, S[g µν ] is the gravitational action, and the sum is over a class of four-metrics matching the prescribed three-metrics h ′′ ij , h ′ ij on final and initial surfaces. The level of the present discussion is rather formal, so we will not go into the details of how such an expression is constructed (see Ref. [hh] for details), nor shall we address the important question of its interpretation. It is, however, important for present purposes to assume that a definition of the sum over histories exists that is not dependent on the canonical formalism.
The above expression is closely analogous in its construction to the sum-over-histories representations of the propagators (or Green functions) of relativistic quantum mechanics, G(x ′′ |x ′ ), where x denotes a spacetime coordinate. We shall make heavy use of this analogy in this paper.
In relativistic quantum mechanics, there exist both sum-over-histories and canonical formulations of the one-particle quantum theory. In the canonical formulation, one may introduce a complete set of configuration space states, {|x }. The propagators may then be shown to possess canonical representations, i.e., they may be expressed in the form,
G(x ′′ |x ′ ) = x ′′ |x ′
where the right-hand side denotes a genuine Hilbert space inner product. By insertion of a resolution of the identity, it may then be shown that the propagator satisfies a composition law, typically of the form
x ′′ |x ′ = dσ µ x ′′ |x ∂ µ x|x ′
where dσ µ denotes a normal surface element. The details of this type of construction will be given in later sections. For the moment, the point to stress is that the existence of a composition law is generally closely tied to the existence of canonical representations of G(x ′′ |x ′ ).
Now in a sum-over-histories formulation of quantum cosmology, the path integral representation of G(h ′′ ij , h ′ ij ) is taken to be the starting point. Relations such as the composition law, characteristic of canonical formulations, cannot be assumed but hold only if they can be derived directly from the sum over histories alone, without recourse to a canonical formulation. In particular, since the existence of a composition law seems to be a general feature of the canonical formalism, it is very reasonable to suppose that the existence of a composition law for a G(h ′′ ij , h ′ ij ) generated by the sum over histories is a necessary condition for the existence of an equivalent canonical formulation.
The object of this paper is to determine how a derivation of the composition law from the sum over histories may be carried out. We may then ask how this derivation might fail, i.e., whether the necessary condition for the recovery of a canonical formulation of quantum cosmology from a sum-over-histories formulation is fulfilled.
Of course, a full quantum theory of cosmology, even if it existed, would be exceedingly complicated. Like many authors declaring interest in quantum cosmology, therefore, we will focus on the technically simpler case of the relativistic particle. As stated above relativistic quantum mechanics possesses many of the essential features of quantum cosmology. Remarks on quantum cosmology of a more general and speculative nature will be saved until the end. We shall show how the composition laws of relativistic propagators may be derived directly from their sum-over-histories representations. To the best of our knowledge, this derivation has not been given previously. It is therefore of interest in the limited context of relativistic quantum mechanics, as well as being a model for the more difficult problem of quantum cosmology outlined above.
1(A). The Problem
In non-relativistic quantum mechanics the propagator, x ′′ , t ′′ |x ′ , t ′ , plays a useful and important role. It is defined to be the object which satisfies the Schrödinger equation with respect to each argument,
i ∂ ∂t ′′ −Ĥ ′′ x ′′ , t ′′ |x ′ , t ′ = 0,(1.1)
(and similarly for the initial point), subject to the initial condition
x ′′ , t ′ |x ′ , t ′ = δ (n) (x ′′ − x ′ ) . (1.2)
It determines the solution to the Schrödinger equation at time t ′′ , given initial data at time t ′ :
Ψ(x ′′ , t ′′ ) = d n x ′ x ′′ , t ′′ |x ′ , t ′ Ψ(x ′ , t ′ ) . (1.3)
From this follows the composition law (semigroup property),
x ′′ , t ′′ |x ′ , t ′ = d n x x ′′ , t ′′ |x, t x, t|x ′ , t ′ . (1.4)
In relativistic quantum mechanics, the most closely analogous object is at first sight the causal propagator, G(x ′′ |x ′ ). It is defined to be the object satisfying the Klein-Gordon equation with respect to each argument,
x ′′ + m 2 G(x ′′ |x ′ ) = 0 ,
(1.5) (and similarly for the initial point) and obeying the boundary conditions
G(x ′′ , x 0 ′′ |x ′ , x 0 ′ ) x 0′′ =x 0′ = 0 , ∂ ∂x 0 ′′ G(x ′′ , x 0 ′′ |x ′ , x 0 ′ ) x 0′′ =x 0′ = −δ (3) (x ′′ − x ′ ) .
It vanishes outside the lightcone. It determines the solution at a spacetime point x ′′ , given initial data on the spacelike surface Σ
φ(x ′′ ) = − Σ dσ µ G(x ′′ |x ′ ) ↔ ∂ µ φ(x ′ ) (1.8) where ↔ ∂ µ = → ∂ µ − ← ∂ µ ,(1.9)
and dσ µ is normal to the surface Σ in the future timelike direction. From (1.8) follows the composition law,
G(x ′′ |x ′ ) = − Σ dσ µ G(x ′′ |x) ↔ ∂ µ G(x|x ′ ) .
(1.10)
There are of course a number of other Green functions associated with the Klein-Gordon equation, and many of them also obey composition laws similar to (1.10), involving the derivative operator (1.9) characteristic of relativistic field theories. For example, the Feynman Green function obeys a slightly modified version of (1.10).
Because of the presence of the derivative operator (1.9) in (1.10), the relativistic and non-relativistic composition laws assume a somewhat different form. The difference is readily understood. The wave functions of non-relativistic quantum mechanics obey a parabolic equation, and so are uniquely determined by the value of the wave function on some initial surface. By contrast, the wave functions in the relativistic case obey a hyperbolic equation, so are uniquely determined by the value of the wave function and its normal derivative on some initial surface, hence the derivative term in (1.10).
A convenient way of representing the propagator in non-relativistic quantum mechanics is in terms of a sum-over-histories. Formally, one writes
x ′′ , t ′′ |x ′ , t ′ = p(x ′ ,t ′ →x ′′ ,t ′′ ) exp [iS(x ′ , t ′ → x ′′ , t ′′ )] .
(1.11)
Here, p(x ′ , t ′ → x ′′ , t ′′ ) denotes the set of paths beginning at x ′ at time t ′ and ending at x ′′ at t ′′ , and S(x ′ , t ′ → x ′′ , t ′′ ) denotes the action of each individual such path. The propagator of non-relativistic quantum mechanics is obtained by restricting to paths x(t) that are single-valued functions of t, that is, they move forwards in time. There are many ways of defining a formal object like (1.11). A common method worth keeping in mind is the time-slicing definition, in which the time interval is divided into N equal parts of size ǫ, N ǫ = (t ′′ − t ′ ), and one writes
x ′′ , t ′′ |x ′ , t ′ = lim N →∞ N k=1 d n x k (2πiǫ) n 2 exp [iS(x k+1 , t k+1 |x k , t k )] .
Here,
x 0 = x ′ , t 0 = t ′ , x N +1 = x ′′ , t N +1 = t ′′ and S(x k+1 , t k+1 |x k , t k )
is the action of the classical path connecting (x k , t k ) to (x k+1 , t k+1 ). More rigorous definitions also exist, such as that in which (the Euclidean version of) (1.11) is defined as the continuum limit of a sum over paths on a discrete spacetime lattice. Indeed, we will find it necessary to resort to such a rigorous definition below.
Given the representation (1.11) of the propagator, it becomes pertinent to ask whether the composition law (1.4) may be derived directly from the sum-over-histories representation, (1.11). This is indeed possible. The crucial notion permitting such a derivation is that of an exhaustive partition of the histories into mutually exclusive alternatives. For consider the surface labeled by t, where t ′ ≤ t ≤ t ′′ . Because the paths move forwards in time, each path intersects this surface once and only once, at some point x t , say. The paths may therefore be exhaustively partitioned into mutually exclusive sets, according to the value of x at which they intersect the surface labeled by t (see Fig. 1). We write this as
p(x ′ , t ′ → x ′′ , t ′′ ) = xt p(x ′ , t ′ → x t , t → x ′′ , t ′′ ) p(x ′ , t ′ → x t , t → x ′′ , t ′′ ) ∩ p(x ′ , t ′ → y t , t → x ′′ , t ′′ ) = ∅ if x t = y t .
Each path from (x ′ , t ′ ) to (x ′′ , t ′′ ) may then be uniquely expressed as the composition of a path from (x ′ , t ′ ) to (x t , t) with a path from (x t , t) to (x ′′ , t ′′ ). Consider what this implies for the sum-over-histories. First of all, any sensible definition of the measure in the sum-over-histories should satisfy
p(x ′ ,t ′ →x ′′ ,t ′′ ) = xt p(x ′ ,t ′ →xt,t→x ′′ ,t ′′ ) = xt p(x ′ ,t ′ →xt,t) p(xt,t→x ′′ ,t ′′ )
.
(1.15) This is readily shown to be true of the time-slicing definition, for example. Secondly, the action should satisfy
S(x ′ , t ′ → x ′′ , t ′′ ) = S(x ′ , t ′ → x t , t) + S(x t , t → x ′′ , t ′′ ) .
(1.16)
Combining (1.15) and (1.16), it is readily seen that one has,
x ′′ , t ′′ |x ′ , t ′ = xt p(x ′ ,t ′ →xt,t) p(xt,t→x ′′ ,t ′′ ) exp [iS(x ′ , t ′ → x t , t) + iS(x t , t → x ′′ , t ′′ )] = xt x ′′ , t ′′ |x t , t x t , t|x ′ , t ′ .
( 1.17) The composition law therefore follows directly from the partitioning of the sets of paths in the sum over histories.
Turn now to the relativistic particle. There also, certain Green functions may be represented by sumsover-histories. Formally, one writes
G(x ′′ |x ′ ) = p(x ′ →x ′′ ) exp [iS(x ′ → x ′′ )]
(1.18) (we will be precise later about which Green function G may be). In fact, a number of such representations are available, since the classical relativistic particle is a constrained system, and there is more than one way of constructing the path integral for constrained systems [G]. Here we shall be largely concerned with those constructions for which the set of paths summed over in (1.18) is all paths in spacetime. In particular, unlike the non-relativistic case, the paths will generally move forwards and backwards in the time coordinate, x 0 (see Fig. 2).
It again becomes pertinent to ask whether a composition law of the form (1.10) may be derived from the sum-over-histories representation. However, because the paths move both backwards and forwards in time, they typically intersect an intermediate surface of constant x 0 many times, and the points at which they intersect the intermediate surface therefore do not affect a partition of the paths into exclusive sets. The argument for the non-relativistic case, therefore, cannot be carried over directly to the relativistic case. Furthermore, even if this partition did work, it would then not be clear how the derivative term in the composition law might arise from the path representation (1.18). We are thus led to the question, is there a different way of partitioning the paths, that leads to a composition law of the form (1.10), and explains the appearance of the derivative term? This question is the topic of this paper.
In detail, we will study sum-over-histories expressions of the form (1.18) for relativistic Green functions. We will focus on the "proper time" sum-over-histories, in which the Green functions are represented by an expression of the form
G(x ′′ |x ′ ) = dT g(x ′′ , T |x ′ , 0) .
(1.19)
Here g(x ′′ , T |x ′ , 0) is a Schrödinger propagator satisfying (1.1), (1.2) and (1.4), with the Hamiltonian taken to be the Klein-Gordon operator in (1.5). g may therefore be represented by a sum over paths of the form (1.11). We will derive (1.19) below, but for the moment note that (1.19) will be a solution to (1.5) if T is taken to have an infinite range, and will satisfy (1.5) but with a delta-function on the right-hand side if T is taken to have a half-infinite range.
1(B). Outline
We begin in Section 2 by reviewing the various Green functions associated with the Klein-Gordon equation and their properties. We determine which Green functions satisfy a composition law of the form (1.8).
We briefly describe the sum over histories representation, and derive (1.19). An important question we address is that of which Green functions are obtained by the sum over paths (1.19). We also discuss the connection of sum over histories representations with canonical representations. By this we mean representations in which the propagators may be expressed in the form x|x ′ , where the {|x } with a single time argument x 0 form a complete set of configuration space eigenstates.
In the representation (1.19), the time coordinate x 0 is treated as a "spatial" coordinate, when g is thought of as an ordinary Schrödinger propagator like that of non-relativistic quantum mechanics. Comparing (1.19) with the expression to be derived from it, (1.10), we therefore see that our problem of factoring the sum over histories (1.19) across a surface of constant x 0 is very closely related to that of factoring the sum over histories (1.11) not across a surface of constant parameter time t, as in (1.4), but across a surface on which one of the spatial coordinates is constant. It turns out that a solution to this problem exists, and the result goes by the name of the Path Decomposition Expansion (PDX) [AK]. The crucial observation that leads to this result is that although the paths may cross the factoring surface many times, they may nevertheless be partitioned into exclusive sets according to the parameter time and spatial location of their first crossing of the surface. We describe this result in Section 3, and give a rigorous derivation of it.
In Section 4, we give our main result. This is to show how the composition law (1.10) follows from the sum over histories (1.19), using the PDX. We also explain why certain naive composition laws that have been proposed in the past are problematic.
Our principal result is admittedly simple, and has been derived largely by straightforward application of the PDX. However, it has broader significance in the context of the sum-over-histories approach to quantum theory. In particular, it is closely related to the question of the conditions under which a sum-over-histories formulation of quantum theory implies the existence of a Hilbert space formulation. In Section 5, we therefore discuss the generalizations and broader implications of our result.
THE PROPAGATORS OF RELATIVISTIC QUANTUM MECHANICS 2(A). Green Functions of the Klein-Gordon Equation
We begin this section with a review of the various Green functions of the Klein-Gordon equation in Minkowski space relevant to our discussion. The section is intended to set out the conventions we shall use throughout this paper, and to list the relevant properties of the Green functions. A metric of signature (+, −, −, −) is used throughout. Readers familiar with the intricacies of this subject may wish to move directly to section B.
The kernel, G(x|y) of the operator ( + m 2 ) satisfying x + m 2 G(x|y) = −δ 4 (x − y)
(2.1) where x and y are four-vectors, may be shown by Fourier transformation to be given by the expression
G(x|y) = 1 (2π) 4 d 4 k e −ik·(x−y) k 2 − m 2 .
(2.2) G(x|y) is not uniquely defined in Minkowski space due to the presence of poles in the integrand. The k 0 integration
∞ −∞ dk 0 e ik0(x 0 −y 0 ) k 0 2 − k 2 − m 2
has poles on the real axis at k 0 = ±(k 2 + m 2 ) 1/2 , and the various possible deformations of this contour determine the possible solutions to (2.1), each with different support properties. Below we shall list some possible contours and their corresponding Green functions. Closed contours yield solutions to the Klein-Gordon equation. We also discuss these below since they play an important role in relativistic quantum mechanics.
Wightman functions: G + (x|y) and G − (x|y)
A closed anti-clockwise contour around one or other of the poles yields the two Wightman functions ±iG ± (x|y), which are solutions of the Klein-Gordon equation, and of its positive and negative square roots respectively
i ∂ ∂x 0 ∓ m 2 − ∇ 2 x 1/2 G ± (x|y) = 0.
They are given by
G ± (x|y) = 1 (2π) 3 d 4 kθ(k 0 )δ(k 2 − m 2 )e ∓ik·(x−y) or G ± (x|y) = 1 (2π) 3 k0=±ω k d 3 k 2ω k e −ik·(x−y)
and are related by G + (x|y) = G − (y|x).
The two Wightman functions satisfy relativistic composition laws
G ± (x ′′ |x ′ ) = ±i Σ dσ µ G ± (x ′′ |x) ↔ ∂ µ G ± (x|x ′ ) (2.60) (
where dσ µ is normal to Σ and future pointing) and are orthogonal in the sense that
Σ dσ µ G ± (x ′′ |x) ↔ ∂ µ G ∓ (x|x ′ ) = 0.
In field theory they are given by the expressions
G + (x|y) = 0|φ(x)φ(y)|0
and G − (x|y) = 0|φ(y)φ(x)|0 .
Feynman propagator: G F (x|y)
A contour going under the left pole and above the right gives the Feynman propagator. This satisfies equation (2.1), and may be written as
iG F (x|y) = θ(x 0 − y 0 )G + (x|y) + θ(y 0 − x 0 )G − (x|y).
Alternatively,
G F (x|y) = −i (2π) 4 ∞ 0 dT d 4 ke −i[k·(x−y)−T (k 2 −m 2 +iε)] = 1 (2π) 4 d 4 k e −ik·(x−y) k 2 − m 2 + iε .
It may be checked that G F obeys a relativistic composition law
G F (x ′′ |x ′ ) = − Σ dσG F (x ′′ |x) ↔ ∂ n G F (x|x ′ ) , (2.300)
where Σ is an arbitrary spacelike 3-surface, and ∂ n = n µ ∂ µ with n µ now the normal to Σ in the direction of propagation. In free scalar field theory, the Feynman propagator is of course given by
iG F (x|y) = 0|T (φ(x)φ(y)) |0 .
Causal Green function: G(x|y)
A closed clockwise contour around both poles gives what is generally known as the commutator or causal Green function, which is written simply as G(x|y). It is a solution of the Klein-Gordon equation. G(x|y) has the following representations
G(x|y) = −i (2π) 3 d 4 kε(k 0 )δ(k 2 − m 2 )e −ik·(x−y) (2.68) or G(x|y) = −1 (2π) 3 d 3 k ω k sin ω k (x 0 − y 0 ) e ik·(x−y) . Since G(x, x 0 |y, y 0 ) x 0 =y 0 = 0, ∂ ∂x 0 G(x, x 0 |y, y 0 ) x 0 =y 0 = −δ 3 (x − y),
and G is Lorentz invariant, it has support only within the light cone of x − y. G also obeys the relativistic composition law
G(x ′′ |x ′ ) = − Σ dσ µ G(x ′′ |x) ↔ ∂ µ G(x|x ′ ).
and, as mentioned in the Introduction, it propagates solutions φ(x) of the Klein-Gordon equation via
φ(y) = − Σ dσ µ G(y|x) ↔ ∂ µ φ(x).
In field theory, G is given by the commutator
iG(x|y) = 0| [φ(x), φ(y)] |0 = [φ(x), φ(y)] .
Finally, note that
iG(x|y) = G + (x|y) − G − (x|y) .
Hadamard Green function: G (1) (x|y)
A closed figure of eight contour around the two poles gives the Hadamard or Schwinger Green function iG (1) (x|y), which is a solution of the Klein-Gordon equation. It may be written as
G (1) (x|y) = 1 (2π) 3 d 4 kδ(k 2 − m 2 )e −ik·(x−y) = 1 (2π) 4 ∞ −∞ dT d 4 ke −i[k·(x−y)−T (k 2 −m 2 )] (2.3) or G (1) (x|y) = 1 (2π) 3 d 3 k ω k cos ω k (x 0 − y 0 ) e ik·(x−y) .
Perhaps the most important property of G (1) (x|y) is that it does not satisfy the standard relativistic composition law. In fact
G (1) (x ′′ |x ′ ) = − Σ dσ µ G(x ′′ |x) ↔ ∂ µ G (1) (x|x ′ ) = − Σ dσ µ G (1) (x ′′ |x) ↔ ∂ µ G(x|x ′ ) (2.110) and G(x ′′ |x ′ ) = Σ dσ µ G (1) (x ′′ |x) ↔ ∂ µ G (1) (x|x ′ ).
In field theory, G (1) is given by the anti-commutator
G (1) (x|y) = 0| {φ(x), φ(y)} |0 .
It is related to the Wightman functions via
G (1) (x|y) = G + (x|y) + G − (x|y).
Newton-Wigner propagator:
G N W (x, x 0 |y, y 0 )
The Newton-Wigner propagator is a solution of the Klein-Gordon equation, and indeed of its first order positive square root. It is not given by the integral (2.2) for any contour. We nevertheless include it since it plays an important role in the quantum mechanics of the relativistic particle. G N W is defined by
G N W (x, x 0 |y, y 0 ) = 1 (2π) 3 k0=ω k d 3 ke −ik·(x−y) . (2.4)
The support property
G N W (x, x 0 |y, y 0 ) x 0 =y 0 = δ 3 (x − y)
shows that G N W is analogous to the quantum mechanical propagator (1.2). It propagates solutions of the first order Schrödinger equation with Hamiltonian H = (k 2 − m 2 ) 1/2 . G N W also obeys the usual quantum mechanical composition law
G N W (x ′′ , x 0 ′′ |x, x 0 ) = d 3 x ′ G N W (x ′′ , x 0 ′′ |x ′ , x 0 ′ )G N W (x ′ , x 0 ′ |x, x 0 ) . (2.70) Finally, note that G N W is not Lorentz invariant.
An analogous operator, which we shall call the negative frequency Newton-Wigner propagator, may also be defined. It is given byG
N W (x, x 0 |y, y 0 ) = 1 (2π) 3 k0=−ω k d 3 ke −ik·(x−y)
and has the same support properties as G N W (x, x 0 |y, y 0 ). It solves the negative frequency square root of the Klein-Gordon equation and propagates its solutions.
2(B). Sum Over Histories Formulation of Relativistic Quantum Mechanics
We are interested in Green functions which may be represented by sums over histories of the form (1.18). We will take sum-over-histories expressions of the form (1.18) as our starting point and determine which Green functions they give rise to. The expression (1.18) is rather formal as it stands, and various aspects of it need to be specified more precisely before it is properly and uniquely defined. These include the action, class of paths, gauge-fixing conditions, and the domains of integration of certain variables. The particular Green function obtained will depend on how these particular features are specified. We note, however, that there is no guarantee that all known Green functions may be obtained in this way, and indeed, we are not able to obtain the causal propagator, G.
The action for a relativistic particle is usually written as
S = −m τ ′′ τ ′ dτ ∂x µ ∂τ ∂x ν ∂τ η µν 1/2 , (2.5)
the length of the worldline of the particle in Minkowski space. τ parameterizes the worldline, and S is invariant under reparameterizations τ → f (τ ). Since (2.5) is highly non-linear, its quantization presents certain difficulties which have hitherto prevented its direct use in a sum over histories. These difficulties may be bypassed by the introduction of an auxiliary variable N , which can be thought of as a metric on the particle worldline. The action may then be rewritten as
S = − τ ′′ τ ′ dτ ẋ 2 4N + m 2 N
where a dot denotes a derivative with respect to τ . Passing to a Hamiltonian form, the action becomes
S = τ ′′ τ ′ dτ [p µẋ µ + N H] (2.6)
where N is now a Lagrange multiplier which enforces the constraint H = (p 2 − m 2 ) = 0. The Hamiltonian form (2.6) of the action is still invariant under reparameterizations. Infinitesimally, these are generated by the constraint H,
δx = ε(τ ){x, H}, δp = ε(τ ){p, H}, δN =ε(τ )
for some arbitrary parameter ε(τ ). Since H is quadratic in momentum, the action is only invariant up to a surface term [HT]
δS = ε(τ ) p ∂H ∂p − H τ ′′ τ ′
which constrains the reparameterizations at the end points to obey
ε(τ ′′ ) = 0 = ε(τ ′ ).
We shall discuss briefly the use of a sum over histories to evaluate the amplitude for a transition from x ′ to x ′′ , which we shall write as G(x ′′ |x ′ ). The sum is over paths beginning at x ′ at parameter time τ ′ , and ending at x ′′ at parameter time τ ′′ . Trajectories may in principle move forwards and backwards in the physical time x 0 , although it is also possible to define an amplitude constructed from paths that move only forwards in x 0 , as we shall discuss below.
It is necessary to fix the reparameterization invariance, and this may be done in a number of ways. We shall give a brief description of the two most commonly used prescriptions: the so-called proper time gaugeṄ = 0, and the canonical gauge x 0 = τ . The proper time gauge is a good prescription in the Gribov sense [G]. The canonical gauge has the feature that it restricts the class of paths in configuration space to move forwards in the time coordinate x 0 . These two gauge-fixing conditions lead to quite different results. There is, however, no conflict with the standard result that the path integral is independent of the choice of gauge-fixing [BFV]. That result applies only to families of gauge-fixing conditions which may be smoothly deformed into each other, which is not true of the two gauges described above.
Proper Time GaugeṄ = 0
The proper time gauge has been extensively discussed in the literature [G,HT,T,JJH], and we shall therefore only state some well-known results.
The conditionṄ = 0 is implemented by adding a gauge fixing term ΠṄ to the Lagrangian. The BFV prescription also requires the addition of a ghost term (details may be found in [JJH]). The path integration over the ghosts factorises, and the gauge fixing condition, realised by the integration over the Lagrange multiplier Π, reduces the functional integration over N to a single integration, leaving
G(x ′′ |x ′ ) = dN (τ ′′ − τ ′ ) DpDx exp i τ ′′ τ ′ dτ [pẋ − N H] .
(2.50)
Redefining T = N (τ ′′ − τ ′ )
, this may be rewritten as
G(x ′′ |x ′ ) = dT g(x ′′ , T |x ′ , 0) where g(x ′′ , T |x ′ , 0) is an ordinary quantum mechanical transition amplitude with Hamiltonian H = p 2 −m 2 .
The amplitude is given explicitly by
G(x ′′ |x ′ ) = 1 (2π) 4 dT d 4 p e i[p·(x ′′ −x ′ )−T (p 2 −m 2 )] .
All that remains is to specify the range of T integration. If T is integrated over an infinite range, then the Hadamard Green function is obtained,
G(x ′′ |x ′ ) = G (1) (x ′′ |x ′ )
On the other hand, if the range of integration is limited to T ∈ [0, ∞), then, introducing a regulator to make the T integration converge, the Feynman Green function is obtained,
G(x ′′ |x ′ ) = iG F (x ′′ |x ′ ).
From this the sum-over-histories representations of G ± are readily obtained. G + is obtained by taking T > 0 and x 0 ′ > x 0 , or T < 0 and x 0 ′ < x 0 , with the reverse yielding G − . In all of these cases, the class of paths is taken to be all paths in spacetime connecting the initial and final points. Note that the causal propagator G(x ′′ |x ′ ) is not obtained by these means. We will return to this point later.
Canonical Gauge x 0 = τ It is also of interest to consider a sum over histories in which the paths are restricted to move forwards in the physical time x 0 . On this class of paths, x 0 = τ may be shown to be a valid gauge choice, provided that one sets up the parameter time interval so that τ ′ = x 0 ′ and τ ′′ = x 0 ′′ . It may be implemented in the action by the addition of a gauge fixing term Π(x 0 − τ ). An evaluation of the path integral, using an infinite range for N , leads to the amplitude
G(x ′′ , x 0 ′′ |x ′ , x 0 ′ ) = G N W (x ′′ , x 0 ′′ |x ′ , x 0 ′ ) +G N W (x ′′ , x 0 ′′ |x ′ , x 0 ′ ).
(2.100)
This includes contributions from both positive and negative frequency sectors of the relativistic particle, in the sense that trajectories with both positive and negative p 0 are summed over. If the integrations over N are restricted to positive N (equivalently, a factor θ(N ) is included on every time slice), then only the positive frequency sector is included. In this case the amplitude is given by the Newton-Wigner propagator
G(x ′′ |x ′ ) = G N W (x ′′ |x ′ ).
The choice of a canonical gauge leads in both cases to an amplitude which is not Lorentz invariant, a consequence of the preferred status acquired by the co-ordinate x 0 . A comprehensive discussion of this gauge may be found in [HK].
2(C). Canonical Formulation of Relativistic Quantum Mechanics
We have listed the various Green functions, their composition laws, and their path integral representations, where they exist. In this subsection we discuss the connection of these considerations with the canonical quantization of the relativistic particle. In particular, we ask whether the various Green functions have canonical representations of the form x ′′ |x ′ , where the {|x ′ } are complete sets of configuration space eigenstates for any particular value of x 0 , and are constructed by taking suitable superpositions of physical states (i.e. ones satisfying the constraint). We will find that essentially all of the Green functions may be so represented. Which Green function is obtained depends on the choice inner product in the space of physical states, and on which states are included in the superposition (positive frequency, negative frequency or both). These considerations will shed some light on various features of the composition laws.
Dirac quantization of the relativistic particle leads to a space of states which may be expressed in terms of a complete set of momentum eigenstatesp
µ |p = p µ |p subject to the additional constraint (p 2 − m 2 )|p = 0.
The solutions to the constraints may be labelled by the eigenstates of the 3-momentum p, and we denote them |p . The states |p are not complete, since there remains an ambiguity in the action of
p 0 |p = ± p 2 + m 2 1/2 |p .
For free particles, the positive and negative frequency states decouple. Canonical representations are therefore possible involving the positive and negative frequency sectors separately, or both together. We consider each in turn. Our aim is to find canonical representations in which each of the Green functions may be represented in the form x ′′ |x ′ .
Positive Frequency Sector
In the positive frequency sector, p 0 > 0, the |p such that p 0 |p = p 2 + m 2 1/2 |p form a complete basis. The appropriate choice of inner product is
p|p ′ = 2ω p δ(p − p ′ )
and the completeness relation
1 = d 3 p 2ω p |p p|
follows, where ω p = (p 2 + m 2 ) 1/2 . Two choices of configuration space representations of this Hilbert space are possible: the Newton-Wigner representation, and the relativistically invariant representation.
Newton-Wigner Representation
From the basis |p , we may change to the Newton-Wigner basis defined by the states
|x, x 0 = 1 (2π) 3/2 p0=ωp d 3 p (2ω p ) 1/2 e ip·x |p .
They are not Lorentz invariant, but they are orthogonal at equal times and satisfy the completeness relation
1 = d 3 x |x, x 0 x, x 0 |.
(2.7)
Any Newton-Wigner wave function Ψ(x, x 0 ) = x, x 0 |Ψ satisfies the positive square root of the Klein-Gordon equation, i ∂ ∂x 0 Ψ = (m 2 − ∇ 2 ) 1/2 Ψ which reflects the fact that we are only considering the positive frequency excitations. The inner product on wave functions is the usual one
Φ|Ψ = d 3 x Φ † (x, x 0 )Ψ(x, x 0 )
and the propagator x, x 0 |x ′ , x 0 ′ is precisely the Newton-Wigner propagator (2.4). Its composition law (2.70) follows immediately from (2.7).
Relativistic Representation
It is possible to define a Lorentz invariant configuration space representation, using the basis states
|x = 1 (2π) 3/2 p0=ωp d 3 p 2ω p e ip·x |p
where a the states |x with x 0 fixed form a basis on the space of physical states. They are not orthogonal, since at equal times x 0 one has
x|x ′ = 1 (2π) 3 po=ωp d 3 p 2ω p e −ip·(x−x ′ ) .
(2.67)
They satisfy the relativistic completeness relation
1 = i Σ dσ µ |x ↔ ∂ µ x|. (2.8)
where Σ is an arbitrary spacelike 3-surface. The corresponding wave functions ψ(x) = x|ψ solve the positive square root of the Klein-Gordon equation and their positive definite inner product is given by the usual relativistic expression
φ|ψ = i Σ dσ µ φ † (x) ↔ ∂ µ ψ(x). (2.9)
The propagator x ′ |x given in Eq. (2.67) is equal to G + (x ′ |x).
Similarly, by restricting to the negative frequency sector, it is readily shown that x ′ |x is equal to G − (x ′ |x). Canonical representations of G ± are therefore readily obtained. A canonical representation of the Feynman Green function comes from those for G + and G − , although it is not immediately apparent how to construct a more direct one than this. These propagators all obey suitably modified versions of the relativistic composition law (1.10), as readily follows from the completeness relation (2.8).
Positive and Negative Frequency Sectors
The discussion above provides a canonical description of both the Feynman and Newton-Wigner propagators which arose in the path integral formulation of section 2(B), with N > 0. However, if the range of integration of the lapse function N is not restricted to a half-infinite range for the proper time gauge, we saw that the path integral leads to the propagator G(x ′′ |x ′ ) = G (1) (x ′′ |x ′ ) where G (1) is the Hadamard Green function (2.3). Since restricting N to be positive (or negative) in the sum over histories appears to correspond to the positive (or negative) frequency sectors in the canonical representations, it is very plausible that a canonical representation of G (1) will involve both sectors simultaneously. These is indeed the case, as we now show.
In momentum space, there are two orthogonal copies of the space of states |p . We label these two copies |p, ± wherep 0 |p, ± = ±(p 2 + m 2 ) 1/2 |p, ± .
The space of states is now a sum of the two copies, on which we choose the completeness relation
1 = d 3 p 2ω p [|p, + p, +| + |p, − p, −|] .
(2.10)
The corresponding inner product is positive definite for all states
p, i|p ′ , j = 2ω p δ 3 (p − p ′ )δ ij (2.11) where i, j = ± [HT].
Newton-Wigner Representation
We define Newton-Wigner states as
|x, x 0 = p0=ωp d 3 p (2ω p ) 1/2 e ip·x |p, + + p0=−ωp d 3 p (2ω p ) 1/2 e ip·x |p, − where p = (ω p , p)
. This definition is compatible with (2.10) and (2.11) provided that the usual completeness relation (2.7) is amended. Defining
|x, x 0 , + = p0=ωp d 3 p (2ω p ) 1/2 e ip·x |p, + , |x, x 0 , − = p0=−ωp d 3 p (2ω p ) 1/2 e ip·x |p, − ,
as the positive and negative frequency parts of |x, x 0 , (2.7) is replaced by
1 = d 3 x |x, x 0 , + x, x 0 , +| + |x, x 0 , − x, x 0 , −| .
The propagator for Newton-Wigner states is then given by
x, x 0 |x ′ , x 0 ′ = G N W (x, x 0 |x ′ , x 0 ′ ) +G N W (x, x 0 |x ′ , x 0 ′ ) .
This is precisely the amplitude derived in section 2(B), Eq.(2.100).
Note that now the wave function Ψ(x, x 0 ) = x, x 0 |Ψ solves only the second order Klein-Gordon equation. The inner product on wave functions Ψ(x, x 0 ) is
Φ|Ψ = d 3 x Φ † + (x, x 0 )Ψ + (x, x 0 ) + Φ † − (x, x 0 )Ψ − (x, x 0 ) .
Relativistic Representation
Lorentz invariant states in this canonical representation involving positive and negative frequency states may be defined by
|x = p0=ωp d 3 p 2ω p e ip·x |p, + + p0=−ωp d 3 p 2ω p e ip·x |p, − , (2.12)
The usual treatment of the relativistic particle involves these states along with the usual relativistic completeness relation (2.8), which is equivalent to the usual relativistic inner product (2.9) on wave functions [Schiff]. However, it is well known that (2.9) is not positive definite on the class of functions with both positive and negative frequency parts. Hence, (2.8) and (2.12) are not compatible with the positive definite inner product (2.11). In fact, working backwards, they imply that
p, ±|p ′ , ± = ±2ω p δ(p − p ′ ) ,(2.200)
and
1 = d 3 p 2ω p [|p, + p, +| − |p, − p, −|] .
If we are to keep (2.8) and (2.12), therefore, we must use the indefinite inner product (2.200) in place of (2.11). In this way, we do in fact obtain the canonical representation for the causal Green function, for it is readily shown that one has, x|x ′ = iG(x|x ′ ).
Its composition law follows from inserting the resolution of the identity, (2.8).
A different canonical representation may be obtained by keeping (2.12) and the positive definite inner product (2.11), but modifying the completeness relation (2.8). Define
|x, + = p0=ωp d 3 p 2ω p e ip·x |p, + , |x, − = p0=−ωp d 3 p 2ω p e ip·x |p, − ,
and replace (2.8) by
1 = i Σ dσ µ |x, + ↔ ∂ µ x, +| − |x, − ↔ ∂ µ x, −| (2.13)
which is compatible with (2.10) and (2.11). The appropriate relativistic propagator is
x|x ′ = G (1) (x|x ′ ) = G + (x|x ′ ) + G − (x|x ′ ) ,
this giving a canonical representation of the Hadamard Green function. The wave functions ψ(x) = x|ψ satisfy the Klein-Gordon equation and obey
ψ(x ′ ) = i Σ dσ µ G + (x ′ |x) ↔ ∂ µ ψ + (x) − G − (x ′ |x) ↔ ∂ µ ψ − (x)
where ψ ± are the positive and negative frequency parts of ψ. Since
Σ dσ µ G + ↔ ∂ µ ψ − = Σ dσ µ G − ↔ ∂ µ ψ + = 0, it follows that ψ(x ′ ) = − Σ dσ µ G(x ′ , x) ↔ ∂ µ ψ(x),
where iG = G + − G − is the causal Green function. This is precisely the evolution equation we expect for ψ a solution to the Klein-Gordon equation with both positive and negative frequency parts. The unusual form (2.13) of the completeness relation explains how it is that the Green function G (1) , which does not propagate solutions of the Klein-Gordon equation, is nevertheless compatible with causal evolution of a wave function ψ(x). The inner product on ψ(x) is now not (2.9) but rather
φ|ψ = i Σ dσ µ φ † + (x) ↔ ∂ µ ψ + (x) − φ † − (x) ↔ ∂ µ ψ − (x) (2.77)
which is by construction positive definite.
We note that a significant and seemingly anomalous property of G (1) is that, unlike all the other Green functions, it does not obey a composition law of the usual form, but instead obeys (2.110) 1 . Our study of canonical representations now makes it clear why this is. The composition laws of G F , G ± and G readily follow from their canonical representations x|x ′ by simply inserting the resolution of the identity, Eq. (2.8).
Recall, however, that the canonical representation of G (1) involves dropping (2.8) in favour of (2.13), from which follows the result,
G (1) (x ′′ |x ′ ) = i Σ dσ µ G + (x ′′ |x) ↔ ∂ µ G + (x|x ′ ) − G − (x ′′ |x) ↔ ∂ µ G − (x|x ′ ) (2.80)
which is readily shown to be equivalent to (2.110). The important point, therefore, is that the unusual form of the composition law for G (1) is explained by the non-standard resolution of the identity in its canonical representation, which is in turn necessitated by the assumed positive definite inner product on both positive and negative frequency states.
We have therefore derived canonical representations of all the Green functions. Our results, together with the composition laws and path integral representations are summarized in Table 1.
Finally, we make the following comments on the connection between the sum over histories and canonical formulations of relativistic quantum mechanics. A sum over histories representation of a given propagator may be derived from its canonical representation, by a standard procedure, which involves inserting resolutions of the identity into the canonical expression x|x ′ (except for the causal Green function -see below). It is then reasonable to ask how one might proceed in the opposite direction, i.e., given a propagator, as supplied by the sum over histories, how does one derive the Hilbert space inner product from which the canonical representation is constructed? The answer to this question lies in the observation that the inner products given above for the relativistic representations all have the form,
φ|ψ = − Σ,Σ ′ dσ µ dσ µ ′ φ † (x) ↔ ∂ µ G(x|x ′ ) ↔ ∂ µ ′ ψ(x ′ )
So, for example, by taking G to be G + , one obtains the inner product (2.9). This observation is a natural starting point for the possible derivation of a canonical formulation from a sum over histories, as we shall discuss further in Section 5.
2(D). Summary of Section 2
In words, our results may be summarized as follows:
(a) The Green functions G ± and G F obey standard composition laws (Eqs. (2.60) and (2.300)). They may be obtained by sums over histories over either positive or negative proper time. Their canonical representations may be obtained by restriction to the positive or negative frequency sector, with a positive definite inner product and with the usual resolution of the identity.
(b) The causal Green function G obeys the standard composition law. It does not obviously have a sum over histories representation. Its canonical representation involves both the positive and negative frequency sectors, with an indefinite inner product and the usual resolution of the identity.
(c) The Hadamard Green function G (1) does not obey the standard composition law. It may be obtained by a sum over histories over both positive and negative proper time. Its canonical representation involves both positive and negative frequency sectors, with a positive definite inner product and a non-standard resolution of the identity. The latter explains the absence of the usual composition law.
(d) The Newton-Wigner propagator obeys the composition law of the non-relativistic type. It may be obtained by a sum over histories of the form (2.50) in which the paths move forwards in the physical time x 0 . It has a canonical representation in the positive frequency sector with a positive definite inner product, with the usual quantum mechanical resolution of the identity. We will find below that an alternative, rather novel representation in the proper time gauge is also available.
It is striking that unlike all the other Green functions, the causal Green function is represented canonically with an indefinite inner product. We conjecture that this is the reason why it does not have an obvious sum-over-histories representation in configuration space of the form (1.18). Briefly, a phase space path integral representation may be constructed by inserting resolutions of the identity into the canonical expression x|x ′ , and the configuration space path integral is obtained by integrating out the momenta. For the causal Green function, however, the indefinite inner product leads to the appearance of factors of ε(p 0 ) in the phase space path integral (c.f. Eq. (2.68)). This prevents the momenta from being integrated out in the usual way, and a configuration space sum over histories of the form (1.18) is not obviously obtained.
The relativistic particle is frequently studied as a toy model for quantum gravity, and this is indeed part of the motivation for the study described in this paper. In such investigations, it is often stated that the problem with the Klein-Gordon equation is that the standard inner product is indefinite, and thus it is necessary to discard half of the solutions [kuch]. We would like to point out, however, that it is not necessary to view the problem in this way. As we have seen, there does in fact exist a positive definite inner product on the set of all solutions to the Klein-Gordon equation, namely (2.77). It is therefore not necessary to discard any of the solutions if one uses this inner product. Of course, the real problem with the Klein-Gordon equation is that it is not possible to sort out the solutions into positive and negative frequency, except in the simplest of situations. This problem is present whatever view one takes.
THE PATH DECOMPOSITION EXPANSION
Our ultimate task is to derive the various relativistic composition laws from the sum over histories (2.50). The sum over histories for the relativistic particle readily reduces to the proper time representation (1.19). The derivation of the desired composition law is therefore intimately related to that of factoring a sum over histories of the non-relativistic form (1.11) across an arbitrary surface in configuration space. As noted in the Introduction, the solution to this problem already exists, and goes by the name of the path decomposition expansion (PDX). In this section, we will describe this result, and give a rigorous derivation of it.
3(A). The PDX as a Partitioning of Paths
Consider non-relativistic quantum mechanics in a configuration space C (here taken to be IR n ), described by a propagator g(x ′′ , T |x ′ , 0). The propagator may be expressed as a sum over histories, which we write,
g(x ′′ , T |x ′ , 0) = Dx(t) exp i T 0 dt 1 2 Mẋ 2 − V (x) . (3.1)
The sum is taken over all paths in configuration space, x(t), satisfying the boundary conditions x(0) = x ′ and x(T ) = x ′′ . Denote this set of paths by p(x ′ , 0 → x ′′ , T ).
Let Σ be a surface between x ′′ and x ′ . It therefore divides C into two parts, C 1 and C 2 , say, with x ′ ∈ C 1 and x ′′ ∈ C 2 . Σ may be closed or infinite. We would like to factor the sum-over-histories across the surface Σ.
Consider the set of paths p(x ′ , 0 → x ′′ , T ). Every path crosses Σ at least once, but will generally cross it many times (see Fig. 3). Unlike surfaces of constant time in spacetime, therefore, the position of crossing does not label each path in a unique and unambiguous manner. However, each path is uniquely labeled by the time and location of its first crossing of Σ. This means that there exists a partition of the paths according to their time t and location x σ of first crossing (see Fig. 4). We write,
p(x ′ , 0 → x ′′ , T ) = xσ ∈Σ t∈[0,T ] p(x ′ , 0 → x σ , t → x ′′ , T ) and p(x ′ , 0 → x σ , t → x ′′ , T ) ∩ p(x ′ , 0 → y σ , s → x ′′ , T ) = ∅, if x σ = y σ , t = s .
Each path in each part p(x ′ , 0 → x σ , t → x ′′ , T ) of the partition may then be split into two pieces:
(i) a restricted path lying entirely in C 1 , beginning at x ′ at time 0 and ending on Σ at x σ at its first-crossing time t;
(ii) an unrestricted path exploring C 1 and C 2 , beginning on Σ at x σ at time t and ending at x ′′ at time T .
This suggests that there exists a composition of (3.1) across Σ, consisting of a restricted propagator in C 1 from (x ′ , 0) to (x σ , t), composed with a standard unrestricted propagator in C from (x σ , t) to (x ′′ , T ), with summations over both x σ and t. There is indeed such a composition law. It is the path decomposition expansion [AK,vB]:
g(x ′′ , T |x ′ , 0) = T 0 dt Σ dσ g(x ′′ , T |x σ , t) i 2M n · ∇g (r) (x, t|x ′ , 0) x=xσ . (3.4)
Here, dσ is the integration over the surface Σ. The quantity g (r) is the restricted propagator in C 1 , and satisfies the boundary condition that it vanish on Σ. Its normal derivative n · ∇g (r) , however, does not vanish on Σ. Also note that n is defined to be the normal to Σ pointing away from the region of restricted propagation, in this case C 1 . The reason for the appearance of the normal derivative term will become fully apparent in the rigorous derivation given below. For the moment we comment that it is related to the fact that we are interested in restricted propagation to a final point which actually lies on the boundary.
The path decomposition expansion is central to this paper, and we will be making heavy use of it in what follows.
We now record some useful closely related results. First of all, it is also possible to partition the paths according to their last crossing times. This would lead to the composition law,
g(x ′′ , T |x ′ , 0) = − T 0 dt Σ dσ i 2M n · ∇ g (r) (x ′′ , T |x, t) x=xσ g(x σ , t|x ′ , 0) (3.5)
where t is the last crossing time. The overall minus sign arises because the restricted propagator is now in the region C 2 , and the normal n (whose definition is unchanged) now points into the region of restricted propagation.
Secondly, it is of interest to consider the case in which the surface Σ does not lie between the initial and final points, x ′ , x ′′ ∈ C 1 , say. Then it is no longer true that every path crosses Σ. In this case, one first partitions the paths into paths that never cross Σ and paths that always cross. The paths that always cross may then be further partitioned as above. The sum over paths which never cross simply yields a restricted propagator in the region C 1 that vanishes on Σ. One thus obtains,
g(x ′′ , T |x ′ , 0) =g (r) (x ′′ , T |x ′ , 0) + T 0 dt Σ dσ g(x ′′ , T |x σ , t) i 2M n · ∇g (r) (x, t|x ′ , 0) x=xσ (3.6)
where t is the first crossing time, and g (r) is the restricted propagator in C 1 . n is again the normal pointing away from C 1 . Similarly, in the case that the paths are partitioned according to their final crossing times, one obtains
g(x ′′ , T |x ′ , 0) =g (r) (x ′′ , T |x ′ , 0) + T 0 dt Σ dσ i 2M n · ∇g (r) (x ′′ , T |x, t) x=xσ g(x σ , t|x ′ , 0) . (3.7)
Here t is the final crossing time, g^(r) is again the restricted propagator in C_1 and n is again the normal pointing away from C_1. Note that there is no minus sign in the second term in Eq. (3.7), in contrast to Eq. (3.5). This is because in both (3.6) and (3.7) the region of restricted propagation is C_1, whereas in (3.4) and (3.5) it is C_1 and C_2, respectively. These subtle differences will turn out to be significant in Section 4.
3(B). A Lattice Derivation of the PDX
The sum over histories (3.1) must be regarded as no more than a formal expression. Certain formal properties can sometimes be deduced from (3.1) as it stands, but care is generally necessary. In particular, the path decomposition expansion cannot be derived directly from the sum over histories without recourse to a more precise mathematical definition. The purpose of this section, therefore, is to give a rigorous derivation of the PDX from a properly-defined sum over histories.
Real time path integrals cannot be rigorously defined [C], so we first rotate to the imaginary time (Euclidean) version, by writing t = −iτ (note that the Euclidean time τ bears no relation to the parameter time τ of the previous section), yielding
$$g_E(x'',\tau|x',0) = \int \mathcal{D}x(\tau')\, \exp\left(-\int_0^\tau d\tau'\left[\tfrac{1}{2}M\dot{x}^2 + V(x)\right]\right). \tag{X.1}$$
Euclidean sums over histories may be rigorously defined as the continuum limit of a discrete sum over histories on a spacetime lattice. The discrete sum over histories is then viewed as a sum of probability measures on the space of paths on the lattice for some suitable stochastic process. To illustrate the key features of the derivation of the PDX, we will first consider the case of the free particle, V (x) = 0, and define the sum over histories using one particularly simple stochastic process, namely the random walk.
Consider a spacetime lattice with temporal spacing ∆τ and spatial spacing ∆x. We follow the methods of Itzykson and Drouffe [ID]. Let the n-dimensional spatial lattice be generated by n orthonormal vectors, e µ , with e µ · e ν = ∆x 2 δ µν . Each site is located at x = x µ e µ , where the x µ are integers.
We propose to regard g E (x ′′ , τ |x ′ , 0) as the continuum limit of a probability density p(x ′′ , τ |x ′ , 0). This quantity is defined to be the probability density that in a random walk on the spacetime lattice, the system (a particle, say) will be found at x ′′ at time τ given that it was at x ′ initially. On the lattice it is meaningful to talk about the probability of an individual history from x ′ at time zero to x ′′ at time τ . The probability density p(x ′′ , τ |x ′ , 0) is therefore given by the sum of the probabilities for the individual histories connecting the initial and final points. Formally we write,
$$p(x'',\tau|x',0) = \sum_{\text{histories}} p(\text{history}). \tag{X.3}$$
It is in this sense that it corresponds to a sum over histories.
p(x ′′ , τ |x ′ , 0) satisfies the relations,
$$(\Delta x)^n\, p(x'',0|x',0) = \delta_{x'',x'} \tag{X.4}$$
$$(\Delta x)^n \sum_{x''} p(x'',\tau|x',0) = 1 \tag{X.5}$$
where δ x ′′ ,x ′ denotes a product of Kronecker deltas. The factors of (∆x) n enter because p is a density. Eq. (X.4) expresses the initial condition, and (X.5) says that the particle must be somewhere at time τ .
In a random walk, the probabilities of stepping from any one site to any one of the adjacent sites are all equal, and equal to 1/2n in n spatial dimensions. The probability of an entire history in (X.3) is then just 1/2n raised to the power of the number of steps in that history. Proceeding in this way, one may evaluate (X.3) and calculate the probability density p. However, we find it instead to be more convenient to calculate p using the recursion relation,
$$p(x,\tau+\Delta\tau|x',0) - p(x,\tau|x',0) = \frac{1}{2n}\sum_{\mu=1}^{n}\Big[\,p(x+e_\mu,\tau|x',0) + p(x-e_\mu,\tau|x',0) - 2\,p(x,\tau|x',0)\,\Big]. \tag{X.6}$$
This relation follows from the fact that if the walker is on site x at time τ + ∆τ , he must have been on one of the immediately adjacent sites at time τ . Eq. (X.6) is a discrete version of the diffusion equation. It may be solved by Fourier transform, yielding the result
$$p(x'',\tau|x',0) = \int_{-\pi/\Delta x}^{\pi/\Delta x} \frac{d^n k}{(2\pi)^n}\; e^{ik\cdot(x''-x')} \left[\frac{1}{n}\sum_{\mu=1}^{n}\cos(\Delta x\, k_\mu)\right]^{\tau/\Delta\tau}.$$
Taking the continuum limit, ∆τ, ∆x → 0, and holding fixed the combination,
$$\frac{(\Delta x)^2}{2n\,\Delta\tau} = \frac{1}{2M}, \tag{X.8}$$
one obtains
$$g_E(x'',\tau|x',0) = \left(\frac{M}{2\pi\tau}\right)^{n/2} \exp\left(-\frac{M (x''-x')^2}{2\tau}\right)$$
where we use g_E to denote the continuum limit of p. The diffusion limit of this stochastic process therefore yields the Euclidean propagator for a free non-relativistic particle of mass M.
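To make the continuum limit explicit (a small filling-in step, using only (X.8)): for small ∆x,
$$\left[\frac{1}{n}\sum_{\mu=1}^{n}\cos(\Delta x\, k_\mu)\right]^{\tau/\Delta\tau} \simeq \left[1 - \frac{(\Delta x)^2 k^2}{2n}\right]^{\tau/\Delta\tau} \longrightarrow \exp\left(-\frac{(\Delta x)^2\, k^2\, \tau}{2n\,\Delta\tau}\right) = \exp\left(-\frac{k^2\,\tau}{2M}\right),$$
and the remaining Gaussian integral over k produces the quoted result.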
Armed with a more precise notion of a discrete sum over histories, we may now proceed to the derivation of the PDX. For simplicity, we first restrict attention to the case in which the intermediate surface Σ is flat 2 . We view p(x ′′ , τ |x ′ , 0) as a sum of the probabilities for each path on the lattice from the initial to the final point. As described in Section 3(A), the paths may be partitioned according to their position x σ and time τ c of first crossing of an intermediate surface Σ. We therefore expect a composition law on the lattice expressing the statement, "The probability of going from x ′ at time zero to x ′′ at time τ is the sum over x σ and τ c of the probabilities of going from the initial point to final point crossing the surface Σ for the first time at time τ c at the point x σ ". The composition law is,
$$p(x'',\tau|x',0) = (\Delta x)^n \sum_{x_\sigma\in\Sigma}\;\sum_{\tau_c=0}^{\tau} \tilde p(x'',\tau|x_\sigma,\tau_c)\; q(x_\sigma,\tau_c|x',0). \tag{X.10}$$
Here, q(x_σ, τ_c|x', 0) is defined to be a lattice sum over paths which never cross Σ but end on it at position x_σ at time τ_c. After reaching the surface at the point x_σ at time τ_c, the paths must then actually step across it, by definition of the partition. The quantity $\tilde p(x'',\tau|x_\sigma,\tau_c)$ is therefore a lattice sum over all paths from the surface to the final point, but with the restriction that the very first step moves off the surface in the normal direction. It is therefore given by,
$$\tilde p(x'',\tau|x_\sigma,\tau_c) = \frac{1}{2n}\; p(x'',\tau|x_\sigma + \Delta x\,\mathbf{n},\, \tau_c+\Delta\tau), \tag{X.11}$$
since 1/2n is the probability of stepping off the surface, and $p(x'',\tau|x_\sigma+\Delta x\,\mathbf{n},\tau_c+\Delta\tau)$ is the probability of going from the point just off the surface to the final point. Strictly, the sum over τ_c should not begin at zero, because on the lattice it takes a finite amount of time for the first path to reach the surface, but this time interval goes to zero in the continuum limit.
Because q(x'', τ|x', 0) is a sum over paths that never cross Σ (but may touch it), it will satisfy the boundary condition
$$q(x_\sigma + \Delta x\,\mathbf{n},\,\tau_c|x',0) = 0 \tag{X.12}$$
where n is the normal to the surface. That is, the probability of making one step beyond Σ is zero. Now write
$$q(x_\sigma,\tau_c|x',0) = q(x_\sigma+\Delta x\,\mathbf{n},\tau_c|x',0) - \Delta x\left[\frac{q(x_\sigma+\Delta x\,\mathbf{n},\tau_c|x',0) - q(x_\sigma,\tau_c|x',0)}{\Delta x}\right]. \tag{X.13}$$
The boundary condition (X.12) implies that the first term vanishes. The part of the second term in square brackets converges to the normal derivative of q in the continuum limit. Inserting this in (X.10), one obtains, with some rearrangement,
$$p(x'',\tau|x',0) = \sum_{x_\sigma\in\Sigma} (\Delta x)^{n-1} \sum_{\tau_c=0}^{\tau} \Delta\tau\;\; p(x'',\tau|x_\sigma+\Delta x\,\mathbf{n},\tau_c+\Delta\tau)\;\times\; \frac{(\Delta x)^2}{2n\,\Delta\tau}\left[\frac{q(x_\sigma+\Delta x\,\mathbf{n},\tau_c|x',0) - q(x_\sigma,\tau_c|x',0)}{\Delta x}\right]. \tag{X.14}$$
Now, using the continuum limits
$$\sum_{x_\sigma\in\Sigma} (\Delta x)^{n-1} \to \int_\Sigma d\sigma, \qquad \sum_{\tau_c=0}^{\tau} \Delta\tau \to \int_0^\tau d\tau_c \tag{X.15}$$
and using (X.8), we derive
$$g_E(x'',\tau|x',0) = \int_0^\tau d\tau_c \int_\Sigma d\sigma\; g_E(x'',\tau|x_\sigma,\tau_c)\; \frac{1}{2M}\,\mathbf{n}\cdot\nabla g^{(r)}_E(x,\tau_c|x',0)\Big|_{x=x_\sigma} \tag{X.16}$$
This is the Euclidean version of the path decomposition expansion. The desired result (3.4) is then readily obtained by continuing back to real time. The closely related results (3.5)-(3.7) are derived in a similar manner.
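The lattice composition law (X.10) is easy to verify numerically. The following sketch (an illustration we add here; the lattice sizes and endpoints are arbitrary choices with ∆x = ∆τ = 1) iterates the one-dimensional version of the recursion (X.6), builds the restricted density q by imposing the absorbing condition (X.12) one site beyond the surface site, and compares the two sides of (X.10).

```python
import numpy as np

# One-dimensional random walk on sites -N..N (Delta x = Delta tau = 1).
N, tau = 60, 40
xp, xpp, xs = -5, 11, 0          # start x', end x'', surface site x_sigma

def step(dist):
    """One application of the n = 1 recursion (X.6): p -> (p_left + p_right)/2."""
    out = np.zeros_like(dist)
    out[1:-1] = 0.5 * (dist[2:] + dist[:-2])
    return out

idx = lambda x: x + N            # site -> array index

def evolve(x0, tmax):
    """Unrestricted lattice density p(., t | x0, 0) for t = 0..tmax."""
    d = np.zeros(2 * N + 1); d[idx(x0)] = 1.0
    hist = [d]
    for _ in range(tmax):
        d = step(d); hist.append(d)
    return hist

lhs = evolve(xp, tau)[tau][idx(xpp)]           # direct p(x'', tau | x', 0)

# Restricted density q: walks from x' that have never visited xs + 1.
q = np.zeros(2 * N + 1); q[idx(xp)] = 1.0
q_hist = [q.copy()]
for _ in range(tau):
    q = step(q); q[idx(xs) + 1] = 0.0          # absorbing condition (X.12)
    q_hist.append(q.copy())

# Right-hand side of (X.10): sum over first-crossing times tau_c; the factor
# 0.5 is the probability of stepping off the surface, as in (X.11).
rhs = 0.0
for tc in range(tau):
    rhs += q_hist[tc][idx(xs)] * 0.5 * evolve(xs + 1, tau - tc - 1)[-1][idx(xpp)]

print(lhs, rhs)                                # agree up to rounding
```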
It is perhaps worth noting that this result cannot be derived from formal manipulation of the continuum sum over histories (3.1). Each part of the composition law (X.10) is well-defined and non-zero on the lattice, but not every part has a continuum analogue. In particular q(x_σ, τ_c|x', 0), where x_σ is on Σ, formally goes to zero in the continuum limit. The desired result arises because the various parts of (X.10) fortuitously conspire to give a result which is well-defined in the continuum limit, even though the separate parts may not be.

Now consider the case of non-zero potential, V(x) ≠ 0. We will argue that the inclusion of a potential does not affect the key points of the derivation of the path decomposition expansion. The random walk process described above supplies a measure on the set of paths on the lattice. (In fact it is an important result that it also defines a measure in the continuum limit, but we prefer to work on the lattice.) Using this measure, one can compute the average value of various functions of the histories of the system. In particular, it is a standard result that the amplitude (X.1) may be defined as the average value of exp(−∫dτ V(x(τ))) in this measure [Kac].
A different way of doing essentially the same calculation is more convenient for our purposes. The amplitude (X.1) may be calculated directly by constructing a measure on the set of paths different to that given above, which includes the effect of the potential. A weight w(history) may be defined for each history, and the density w(x'', τ|x', 0) is again given by
$$w(x'',\tau|x',0) = \sum_{\text{histories}} w(\text{history}).$$
Loosely speaking, w is defined by weighting the probability p of going from one lattice point to the next by exp (−∆τ V (x)). w is of course no longer a probability density, and does not define a stochastic process. It obeys the recursion relation
$$w(x,\tau+\Delta\tau|x',0) - w(x,\tau|x',0) = \frac{1}{2n}\sum_{\mu=1}^{n}\Big[\,w(x+e_\mu,\tau|x',0) + w(x-e_\mu,\tau|x',0) - 2\,w(x,\tau|x',0)\,\Big] + \Delta\tau\, V(x)\, w(x,\tau|x',0) \tag{3.77}$$
which differs from (X.6) in that the 'walker' may now stay at site x with a weight ∆τ V (x). This recursion relation yields the Euclidean Schrödinger equation with potential V (x) in the continuum limit, as expected.
The issue is now to determine whether the derivation (X.10)-(X.16) goes through for w as it did for p. It is relatively easy to see that it will. The quantities analogous to $\tilde p$ and q are defined in the obvious way, and all the steps go through as before. The important point is that (X.10) and (X.11) are not modified, since the weight for stepping off the surface is still 1/2n, as may be seen from the recursion relation (3.77).
An equivalent approach is to rescale the weights w so that they describe a stochastic process, and can be regarded as probability densities [Y]. The random walk is then characterised by a non-zero drift, that is by unequal probabilities of stepping in different directions due to the asymmetry of the potential. In the continuum limit, the rescaled w satisfies a Fokker-Planck equation. A composition law involving an object analogous to q may then be derived, which is a rescaled version of the path decomposition expansion. We will not pursue this here 3 .
3(C). An Important Simplification
The restricted propagator appearing in (3.4)-(3.7) is somewhat inconvenient and for our purposes it is useful to re-express it in terms of the usual propagator [S]. This is certainly possible if the potential V (x) in (3.1) possesses a translational symmetry in a direction that we shall refer to as x 0 , and Σ is a surface of constant x 0 . To this end, consider g (r) (x, t|x ′ , 0) in (3.4), where both x and x ′ are in C 1 , the region of restricted propagation. By the imposed symmetry of the random walk, it is possible to rewrite the restricted propagator as
$$g^{(r)}(x'',t|x',0) = g(x'',t|x',0) - g\big(x_\sigma + (x_\sigma - x''),\, t\,|x',0\big) \tag{3.8}$$
where x σ is the point on Σ closest to x ′′ . This is of course just the familiar method of images. That this is equivalent to a restricted sum over paths may be seen as follows. The full propagator is given by a sum over all paths from initial to final point. The sum over all paths g may be written as a sum over paths that never cross the surface, g (r) , plus a sum over paths that do cross the surface at least once, g (a) (c.f. Eq. (3.6)). The paths that cross have a last crossing position. Because of the symmetry, the segment of the path after the last crossing may be reflected about the surface without changing the value of the sum over paths (see Fig. 5). g (a) is therefore equal to the sum over all paths from the initial point to the reflection about the surface of the final point. Hence, with a little rearrangement, one obtains (3.8).
Given (3.8), the normal derivative of the restricted propagator on Σ, which is the quantity that appears in (3.4), is just
$$\mathbf{n}\cdot\nabla g^{(r)}(x,t|x',0)\Big|_{x=x_\sigma} = 2\,\mathbf{n}\cdot\nabla g(x,t|x',0)\Big|_{x=x_\sigma}. \tag{3.10}$$
We conclude that in the special case of a symmetric potential and a flat surface, (3.4) becomes
$$g(x'',T|x',0) = \int_0^T dt \int_\Sigma d\sigma\; g(x'',T|x_\sigma,t)\; \frac{i}{M}\,\mathbf{n}\cdot\nabla g(x,t|x',0)\Big|_{x=x_\sigma} \tag{3.9}$$
and likewise for (3.5). We will use this result in all subsequent applications of the PDX.
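In the Euclidean setting, where the propagator is the real heat kernel of Section 3(B), both the image construction (3.8) and the factor of two in (3.10) can be checked directly. The following sketch (our illustration; the parameter values are arbitrary) does this for a flat "surface" (a point) in one dimension.

```python
import numpy as np

M, t, xs = 1.0, 0.7, 1.0          # mass, Euclidean time, surface position
xprime = -0.3                     # source point, inside C1 = {x < xs}

def g(x, x0, t):
    """Euclidean free propagator (heat kernel) of Section 3(B)."""
    return np.sqrt(M / (2 * np.pi * t)) * np.exp(-M * (x - x0) ** 2 / (2 * t))

def g_r(x, x0, t):
    """Restricted propagator in C1 by the method of images, Eq. (3.8)."""
    return g(x, x0, t) - g(2 * xs - x, x0, t)

print(g_r(xs, xprime, t))         # vanishes on Sigma, as required

h = 1e-6                          # central differences for the normal derivative
dgr = (g_r(xs + h, xprime, t) - g_r(xs - h, xprime, t)) / (2 * h)
dg = (g(xs + h, xprime, t) - g(xs - h, xprime, t)) / (2 * h)
print(dgr, 2 * dg)                # Eq. (3.10): the two agree
```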
The analysis so far is for flat surfaces Σ. (3.10) will also follow for curved surfaces (in flat configuration spaces with constant potential), because the analysis leading to it is essentially local.
4. DERIVATION OF RELATIVISTIC COMPOSITION LAWS
We now show how the path decomposition expansion is used to derive the relativistic composition laws for certain Green functions.
4(A). Composition Laws for G F , G + and G −
Consider the Feynman Green function. As discussed earlier, its sum-over-histories representation readily reduces to,
$$iG_F(x''|x') = \int_0^\infty dT\; g(x'',T|x',0). \tag{4.1}$$
Here g(x'', T|x', 0) is a propagator of the non-relativistic type, and is given by a sum-over-histories of the form (3.1), but with 2M = 1, V(x) = m² (which means that the results of section 3(C) apply for any flat surface), and with $\dot x^2$ replaced by $\dot x^\mu \dot x^\nu \eta_{\mu\nu}$, where $\eta_{\mu\nu}$ is the Minkowski metric, with signature (+ - - -). It therefore obeys the Schrödinger equation,
$$\left(-i\frac{\partial}{\partial T} + \Box_{x''} + m^2\right) g(x'',T|x',0) = 0$$
subject to the initial condition
$$g(x'',0|x',0) = \delta^{(4)}(x''-x').$$
From this, it readily follows that G F (x ′′ |x ′ ) satisfies Eq. (2.1). An explicit expression for g(x ′′ , T |x ′ , 0) is readily obtained:
$$g(x'',T|x',0) = \frac{1}{(2\pi i T)^2}\, \exp\left(-i\,\frac{(x''-x')^2}{4T} - i m^2 T\right).$$
These basics out of the way, we may now derive the composition law. Consider first the case in which the initial and final points are on opposite sides of the surface Σ. Apply the path decomposition expansion (3.4) to (4.1). One obtains,
$$G_F(x''|x') = -i\int_0^\infty dT \int_0^T dt \int_\Sigma d\sigma\; g(x'',T|x,t)\; 2i\,\overrightarrow{\partial}_n\, g(x,t|x',0). \tag{4.6}$$
Here, $\overrightarrow{\partial}_n$ denotes the normal derivative pointing away from x' and operating to the right, and we have used (3.10) to express the derivative of the restricted propagator in terms of the unrestricted propagator. Also, we use a simple x to denote the coordinates in the surface Σ. Now, in the integrals over time, one may perform the change of coordinates v = T − t and u = t. Eq. (4.6) then becomes,
$$G_F(x''|x') = 2\int_0^\infty dv \int_0^\infty du \int_\Sigma d\sigma\; g(x'',v|x,0)\; \overrightarrow{\partial}_n\, g(x,u|x',0). \tag{4.7}$$
Comparing with (4.1), it is then readily seen that
$$G_F(x''|x') = -2\int_\Sigma d\sigma\; G_F(x''|x)\; \overrightarrow{\partial}_n\, G_F(x|x'). \tag{4.8}$$
Although this is a correct property of the Feynman Green function, it is not quite the expected result. Furthermore, it does not manifestly exhibit the usual property of independence of the location of the factoring surface. To this end, we repeat the above with (3.5) instead of (3.4), obtaining,
$$G_F(x''|x') = 2\int_\Sigma d\sigma\; G_F(x''|x)\; \overleftarrow{\partial}_n\, G_F(x|x'). \tag{4.9}$$
Finally, averaging (4.8) and (4.9) leads to the desired result:
$$G_F(x''|x') = -\int_\Sigma d\sigma\; G_F(x''|x)\; \overleftrightarrow{\partial}_n\, G_F(x|x'). \tag{4.10}$$
Define iG F (x ′′ , x ′ ) to be G + (x ′′ , x ′ ) when x ′′ is in the future cone of x ′ , and to be G − (x ′′ , x ′ ) when x ′′ is in the past cone of x ′ . Then it readily follows that G + and G − each satisfy suitably modified versions of (4.8) and (4.9) and hence their composition laws (2.60). Now consider the case in which the initial and final points lie on the same side of the surface Σ. We therefore apply the path decomposition expansions (3.6), (3.7). Direct application of either of these expressions to Eq. (4.1), does not lead to an obviously useful result, since it still involves a restricted propagator. However, equating (3.6) and (3.7), one obtains
$$\int_0^T dt \int_\Sigma d\sigma\; g(x'',T|x,t)\; \overrightarrow{\partial}_n\, g(x,t|x',0) = \int_0^T dt \int_\Sigma d\sigma\; g(x'',T|x,t)\; \overleftarrow{\partial}_n\, g(x,t|x',0). \tag{4.11}$$
Suppose that x ′′ is in the future cone of x ′ , which in turn lies to the future of the surface Σ. Then performing the integral over T in (4.11) leads to the result,
$$\int_\Sigma d\sigma^\mu\; G_+(x''|x)\; \overleftrightarrow{\partial}_\mu\; G_-(x|x') = 0 \tag{4.12}$$
demonstrating the expected orthogonality of G + and G − .
It might appear that the above derivation of the composition law is valid for any choice of factoring surface. This impression would be false: the derivation holds only for spacelike surfaces. To see this, note that the integral representation of the Feynman Green function (4.1) is properly defined only in the Euclidean regime. The Euclidean version of (4.1) is obtained by rotating both the parameter time T and the physical time x 0 . Write T E = iT and x 0 E = ix 0 . The first rotation is just a matter of distorting the integration contour in (4.1) and does not change the result of evaluating the integral. Indeed, (4.1) may be defined by an integral over real T E . The second rotation actually changes the answer, so needs to be rotated back afterwards. Performing the rotations, one obtains,
$$g_E(x'',T_E|x',0) = \frac{1}{(2\pi T_E)^2}\, \exp\left(-\frac{1}{4T_E}\left[(x_E^0)^2 + \mathbf{x}^2\right] - m^2 T_E\right) \tag{4.66}$$
for the time-dependent propagator, where x denotes x ′′ − x ′ for both its time and space components. The Euclidean path decomposition expansion (X.16) is then clearly well-defined for (4.66) -the integral over the surface Σ is clearly convergent. A composition law for the Euclidean Feynman propagator is therefore obtained across any surface. But suppose now we try to continue back the Euclidean PDX (X.16) to the Lorentzian spacetime. Leave T E as it is, but continue back x 0 . The integrand, previously exponentially decaying in all directions, becomes exponentially growing in the x 0 direction. This is not a problem if the surface Σ is spacelike, since x 0 is not integrated over. It is a problem if x 0 is integrated over, which it would be if Σ is timelike. It follows that the Euclidean composition law, valid for any surface, may be continued to a well-defined composition law for the Lorentzian propagator only if the surface Σ is spacelike in the Lorentzian regime.
At this stage it is perhaps useful to summarize how we have arrived at the results (4.10), (4.12) from the sum-over-histories. First the sum over histories was written in the proper time representation, (4.1). This is essentially a partition of the set of all paths from x ′ to x ′′ , according to their total parameter time (which is effectively the same as their length). Then the paths were further partitioned according to the parameter time and position of their first (or last) crossing of Σ. The path decomposition expansion then led to the desired result. In the final result (4.10), however, no reference is made to the parameter time involved in this sequence of partitions; only the first (or last) crossing position x is referred to. In the results (4.10), (4.12), therefore, there is only one partition of the paths that is important, namely the partition according to the position x of first or last crossing. Differently put, suppose there existed a sum over histories representation of G F referring only to the spacetime coordinates x µ , and not requiring the explicit introduction of a parameter t. Then the composition law (4.10) could be derived by a single partitioning of the paths according to their first or last crossing position.
By way of a short digression, let us explore this idea further. Suppose one simply assumes that a sum over histories representation of G F (x ′′ |x ′ ) is available, in which there is a sum over all paths in spacetime from x ′ to x ′′ . As described above, one can therefore partition the paths according to their first crossing position x of an intermediate surface Σ. It is therefore reasonable to postulate a relation of the form
$$G_F(x''|x') = \int_\Sigma d\sigma\; G_F(x''|x)\,\Delta(x|x')$$
where ∆(x|x ′ ) is defined by a restricted sum over paths beginning at x ′ which never cross Σ, but end on it at x. Comparing with Eq. (4.8), or by explicit calculation, one has,
$$\Delta(x|x') = 2\,\partial_n G_F(x|x'). \tag{4.14}$$
This gives a rather intriguing representation of ∂_n G_F(x|x') in terms of a restricted sum over paths in spacetime.
It is also interesting to note that when x 0 > x 0 ′ , by explicit calculation,
$$2i\,\partial_n G_F(x|x') = G_{NW}(x|x')$$
where G N W is the Newton-Wigner propagator [H]. Via (4.14), this therefore gives a novel path integral representation of the Newton-Wigner propagator. It is novel because G N W is really a propagator of the Schrödinger type, and is therefore normally obtained by a sum over paths moving forwards in time, as we saw in section 2(B). By contrast, in the path integral representation of ∆(x|x ′ ), the paths move backwards and forwards in time, although are restricted to lie on one side of the surface Σ in which x lies.
Note that using this representation of the Newton-Wigner propagator, its composition law (2.70) is easily derived. The sum over paths from x ′ to x ′′ ending on Σ and remaining below it, may be partitioned across an intermediate surface Σ ′ according to the point x σ ′ of first crossing of Σ ′ . That is
$$p(x' \to x'') = \bigcup_{x_{\sigma'}} p(x' \to x_{\sigma'} \to x''),$$
$$p(x' \to x_{\sigma'} \to x'')\;\cap\; p(x' \to y_{\sigma'} \to x'') = \emptyset \quad \text{if } x_{\sigma'} \neq y_{\sigma'}.$$
The sum over paths factorises into a sum over paths from x ′ to x σ ′ , ending on Σ ′ and remaining below it, and over paths from x σ ′ to x ′′ , ending on Σ and remaining below it. This is precisely a composition of type (1.17), and leads directly to (2.70).
These observations may merit further investigation. They are, however, only incidental to the rest of this paper.
4(B). Other Green Functions
By integrating T over an infinite range in (1.19) the Green function G (1) (x ′′ |x ′ ) is obtained. Let us therefore repeat the steps (4.6) to (4.10) for this case. The integration over T and t in (4.6) is now
$$\int_{-\infty}^{\infty} dT \int_0^T dt = \int_0^\infty dT \int_0^T dt + \int_{-\infty}^0 dT \int_0^T dt. \tag{4.16}$$
The first term in (4.16) leads to a composition of two Feynman Green functions, as before. The second term can be cast in a similar form by letting T → −T and t → −t, which introduces an overall minus sign, and using the fact that g(x, −t|x', 0) = g*(x, t|x', 0). One thus obtains
$$G^{(1)}(x''|x') = -i\int_\Sigma d\sigma\left[G_F(x''|x)\,\overleftrightarrow{\partial}_n\, G_F(x|x') - G_F^*(x''|x)\,\overleftrightarrow{\partial}_n\, G_F^*(x|x')\right] = i\int_\Sigma d\sigma^\mu\left[G_+(x''|x)\,\overleftrightarrow{\partial}_\mu\, G_+(x|x') - G_-(x''|x)\,\overleftrightarrow{\partial}_\mu\, G_-(x|x')\right] \tag{4.17}$$
where dσ µ and ∂ n are defined as in section 2. The result (4.17) may seem somewhat trivial, since it follows from (4.10) and the use of
$$G^{(1)}(x''|x') = i\left[G_F(x''|x') - G_F^*(x''|x')\right] = G_+(x''|x') + G_-(x''|x').$$
However, the key point is that the composition law (2.80) for G (1) arises directly in the sum over histories. The splitting into positive and negative frequency parts, in the language of section 2, arises naturally from the identity (4.16).
Finally, consider the causal Green function. In terms of G ± it is defined by
$$iG(x''|x') = G_+(x''|x') - G_-(x''|x').$$
Then it straightforwardly follows that G obeys (1.10), since G ± obey (2.60) and (4.12). However, there is no natural, quantum mechanical derivation of the composition law for G directly from a sum over histories. This is because we do not have a direct path-integral representation of G -only an indirect one in terms of the path-integral representations of G ± (which may be read off from (4.1)). The question of finding a direct sum-over-histories representation of the causal propagator is, to the best of our knowledge, a question for which no entirely satisfactory answer exists at present. Indeed, as we conjectured in Section II, such a representation may not exist.
4(C). Why the Naive Composition Law Fails
In the context of quantum gravity, and parameterized theories generally, composition laws different in form to (1.10) have occasionally been proposed. In particular, a composition law of the form
$$G(x''|x') = \int d^4x\; G(x''|x)\, G(x|x') \tag{4.20}$$
has often been considered [??]. However, it is readily seen that there are difficulties associated with (4.20) [JJH2]. The methods of this paper help to understand the reason why it cannot hold as it stands.
Let us first illustrate the problem with (4.20). Consider the proper time representation (1.19). It is a property of g that,
$$g(x'',T''+T'|x',0) = \int d^4x\; g(x'',T''|x,0)\; g(x,T'|x',0).$$
Integrating both sides over T ′′ and T ′ , one obtains
$$-\frac{1}{2}\int du\, dv\; g(x'',u|x',0) = \int d^4x\; g(x''|x)\, g(x|x') \tag{4.22}$$
where we have introduced u = T'' + T', v = T'' − T'.
If T is taken to have an infinite range, then u and v have an infinite range, and the left hand side of (4.22) is equal to G (1) (x ′′ |x ′ ) multiplied by an infinite factor. If T is taken to have a half-infinite range, then things are yet more problematic. In that case v ranges from −u to +u, and the left-hand side of (4.22) becomes,
$$-\int_0^\infty du\; u\; g(x'',u|x',0). \tag{4.23}$$
This may converge, but it does not converge to the left-hand side of (4.20).
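The overcounting is easy to exhibit in a simple Euclidean analogue (an illustration we add, not the relativistic case itself). In one dimension, with $g_E(x,u) = (4\pi u)^{-1/2} e^{-x^2/4u - m^2 u}$, the proper-time integral gives the Green function $G_E(x) = e^{-m|x|}/2m$ of $(-\partial_x^2 + m^2)$; the naive composition over an intermediate point then reproduces the u-weighted integral, the analogue of (4.23), rather than $G_E$ itself:

```python
import numpy as np
from scipy.integrate import quad

m, x1, x2 = 1.0, -0.4, 0.9        # mass and endpoints (illustrative values)

def gE(x, u):
    """1D Euclidean proper-time propagator with a mass term."""
    return np.exp(-x**2 / (4*u) - m**2 * u) / np.sqrt(4*np.pi*u)

def GE(x):
    """Green function of (-d^2/dx^2 + m^2): the u-integral of gE."""
    return np.exp(-m*abs(x)) / (2*m)

# Naive composition over an intermediate point (the analogue of (4.20)):
naive, _ = quad(lambda x: GE(x2 - x) * GE(x - x1), -30, 30)

# The u-weighted proper-time integral (Euclidean analogue of (4.23)):
weighted, _ = quad(lambda u: u * gE(x2 - x1, u), 0, np.inf)

print(naive, weighted)            # these two agree ...
print(GE(x2 - x1))                # ... but neither equals G_E itself
```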
It should be clear from the discussion given in the Introduction that (4.20) should not be expected to hold. The reason is, quite simply, that it does not correspond to a proper partitioning of the paths in the sum-over-histories (1.18). For in proposing an expression of the form (4.20), one is evidently contemplating a partitioning of the paths p(x ′ → x ′′ ) in which the paths are labeled according to an intermediate spacetime point x through which they pass. That is, the set of all paths is regarded as the union over all x of paths passing through x,
$$p(x' \to x'') = \bigcup_{x}\; p(x' \to x \to x''). \tag{4.25}$$
But this is not a proper partition, because it is not exclusive:
$$p(x' \to x \to x'')\;\cap\; p(x' \to y \to x'') \neq \emptyset \quad \text{for } x \neq y,$$
since passing through an intermediate point x does not prohibit the path from also passing through a different intermediate point y; the intermediate spacetime point x therefore does not supply the paths with a unique and unambiguous label. Of course, the exhaustivity condition (4.25) is still in some sense true, but the failure of the exclusivity condition means that there is a vast amount of overcounting. It is this that leads to the divergent factor appearing in (4.23) in the case where T takes an infinite range.
The fact that (4.22) is equal to G (1) times an infinite factor is, however, suggestive. A similar feature was found in the Dirac quantisation of the relativistic particle by Henneaux and Teitelboim [HT]. They found that for functions ψ(x) = x|ψ solving the Klein-Gordon equation,
$$\langle\phi|\psi\rangle = \int d^4x\; \phi^\dagger(x)\,\psi(x) \tag{4.88}$$
is a positive definite inner product independent of x 0 , and with all the necessary symmetry properties for an inner product on physical states. The only problem with (4.88) is that it is formally divergent. In fact, it is equal to the inner product (2.77) times a factor δ(0), which may be removed in a Lorentz invariant way [HT,ct]. This inner product may therefore be of some value, despite the fact that it is not associated with a partition of the sum over histories. It is yet to be seen whether these features continue to hold in more complicated parameterized systems, such as quantum gravity.
It is also clear that a composition law in which the $d^4x$ in (4.20) is replaced by a $d^3x$ cannot be correct. This would at first sight be more in keeping with conventional quantum mechanics, since one of the four $x^\mu$'s is time, and the composition law (1.4) is at a fixed moment of time. However, it corresponds to contemplating a partition in which the paths are labeled according to the position $x^i$ at which they cross a surface $x^0 = $ constant. This fails because, as discussed in the Introduction, it is not a proper partition. The paths typically cross such a surface many times, and the crossing location does not label the paths in a unique and unambiguous way.
We have seen in this paper that there is a partition that does work, and does lead to the desired composition law. It is to partition the paths according to their position of first crossing of an intermediate surface.
5. DISCUSSION
The principal technical aim of this paper was to show that the composition laws of relativistic quantum mechanics may be derived directly from a sum over histories by partitioning the paths according to their first crossing position of an intermediate surface. We also derived canonical representations of the propagators. These representations showed why the Hadamard Green function G (1) , which is the propagator picked out by the sum over histories, does not obey a standard composition law. They also indicate why it is not obviously possible to construct a sum-over-histories representation of the causal Green function.
The notion of a sum over histories is extremely general. Indeed, as discussed in the Introduction, it has been suggested that sum-over-histories formulations of quantum theory are more general than canonical formulations. Central to such generalized formulations of quantum mechanics is the notion of a partition of paths. This simple but powerful notion replaces and generalizes the notion of a complete set of states at a fixed moment of time used in canonical formulations [harnew].
In this paper we have investigated a particular aspect of the correspondence between these two different approaches to quantum theory. Namely, we demonstrated the emergence of the composition law from the sum-over-histories approach, in the context of relativistic quantum mechanics in Minkowski space. Quite generally, such a derivation will be an important step in the route from a sum-over-histories formulation to a canonical formulation in a reparameterization invariant theory. We have admittedly not determined the exact status of the composition law along this route. In particular, it is not clear whether the existence of the composition law alone is a sufficient condition for the recovery of a canonical formulation. This would be an interesting question to pursue, perhaps taking as a starting point the comments at the end of Section 2(C), on the recovery of the canonical inner product given the propagator. However, as argued in the Introduction, it is at least a necessary condition. It is therefore of interest to find a situation in which this necessary condition is not satisfied.
Such a situation is provided by the case of relativistic quantum mechanics in curved spacetime backgrounds with a spacetime dependent mass term (i.e. a potential). Let us consider the generalization of our results to this case.
The path decomposition expansion (3.4) is a purely kinematical result. As we have shown, it arises solely from partitioning the paths in the sum over histories, and does not depend on the detailed dynamics. We would therefore expect it to hold in a very general class of configuration spaces, including curved ones. It follows that for the relativistic particle, one would always expect a composition law of the form
$$G(x''|x') = \int_\Sigma d\sigma^\mu\; G(x''|x)\; \partial_\mu G^{(r)}(x|x') \tag{5.1}$$
where G (r) is the restricted relativistic propagator 4 . In the case of flat backgrounds, with constant potential, it was possible to express the restricted Green functions in terms of unrestricted ones, using (3.8)-(3.10).
The important point, however, is that in general backgrounds, and with arbitrary potentials, the steps (3.8)-(3.10) are not possible, and a composition law of the desired type (1.4) is not recovered. Of course, (5.1) is still a composition law of sorts, but G and G (r) are quite different types of object, and (5.1) is not compatible with regarding G(x ′′ |x ′ ) as a canonical expression of the form x ′′ |x ′ , since there is no known canonical representation for G (r) (x ′′ |x ′ ).
What is needed for the steps (3.8)-(3.10) to work? The main issue is understanding how the method of images can be generalized. First of all, consider the case of one dimension with a potential. The method of images yields the restricted propagator for any potential which is symmetric about the factoring surface (actually a point in one dimension). For example, the restricted propagator in x > 0 for the harmonic oscillator is readily obtained in this way. However, we would like to obtain the restricted propagator on one side of any factoring surface. The only potential invariant under reflections about any point is a constant. So in one dimension, (3.10) follows if the potential is constant. Similarly, it is easy to see that in flat spaces of arbitrary dimension, with a flat factoring surface, (3.10) will follow if the potential is constant in the direction normal to the surface.

Now consider the case of curved spacetimes with a Lorentzian signature (although our conclusions will not be restricted to this situation). From the above, we have seen that the method of images will work if the propagator is symmetric about each member of a family of factoring surfaces. We will now argue that this will be true if there is a timelike Killing vector.
Consider first the case of static spacetimes. This means there is a timelike Killing vector field normal to a family of spacelike hypersurfaces. It is therefore possible to introduce coordinates such that
$$g_{\mu\nu}\,dx^\mu dx^\nu = g_{00}(x^i)\,(dx^0)^2 + g_{kl}(x^i)\,dx^k dx^l \tag{5.2}$$
where i, k, l = 1, 2, 3. The action in the sum-over-histories representation of g(x'', T|x', 0) is
$$S = \int_0^T dt\,\left[g_{\mu\nu}\,\dot x^\mu \dot x^\nu - V(x^\mu)\right] \tag{5.3}$$
If the metric is of the form (5.2) and if the potential is independent of x 0 , then the action (5.3) will be invariant under reflections about any surface of constant x 0 . It is reasonable to expect that the path-integral measure will be similarly invariant, and hence the method of images may be used to construct the restricted propagator in a region bounded by x 0 = constant. We therefore expect (3.10) to hold in static spacetimes in which the potential is invariant along the flow of the Killing field. We anticipate that this argument may be generalized to stationary spacetimes (for which there is a Killing field that is not hypersurface orthogonal), but we have not proved this.
What we find, therefore, is that the existence of a timelike Killing vector field, along which the potential is constant, is a sufficient condition for the existence of a composition law for the sum over histories. We cannot conclude from the above argument that it is also a necessary condition, although this is plausibly true for a general class of configuration spaces, with the possible exception of a limited number of cases in which special properties of the space avoid the need for a Killing vector 5. Modulo these possible exceptions, we have therefore achieved our desired aim: we have found a situation - spacetimes with no Killing vectors - in which the necessary condition for the recovery of a canonical formulation from a sum over histories is generally not satisfied.
This is a desirable conclusion: the existence of a timelike Killing vector field is a sufficient condition for a consistent one-particle quantization in the canonical theory (see Section 9 of Ref. [kuch] and references therein). Again, it is not obviously a necessary condition, because there could be spacetimes with no Killing vectors but some special properties permitting quantization in them. We therefore find close agreement (although not an exact correspondence) between our approach, in which the canonical formulation is regarded as derived from a sum over histories, and standard lore, in which it is constructed directly.
Turn now to quantum cosmology. As noted in the Introduction, relativistic quantum mechanics is frequently used as a model for quantum cosmology. In quantum cosmology, the wave function for the system -the universe -obeys the Wheeler-DeWitt equation. This is a functional differential equation which has the form of a Klein-Gordon equation in which the four x α 's are replaced by the three-metric field, h ij (x), the "mass" term is dependent on the three-metric, and the "background" (superspace, the space of three-metrics) is curved.
As outlined in the Introduction, one may construct the propagator between three-metrics. The object obtained is most closely analogous to either the Feynman or the Hadamard propagators, as noted above. One can then ask whether it obeys a composition law. An important result due to Kuchař [kuch] is that there are no Killing vectors associated with the Wheeler-DeWitt equation. We therefore find that there is no composition law for the propagator between three-metrics generated by a sum over histories 6 . It follows that we do not expect to recover a canonical formulation. Again this is in agreement with standard lore on the canonical quantization of quantum cosmology, which holds that there is no consistent "one-universe" quantization [kuch].
Our final conclusions on the existence of a canonical scheme for quantum cosmology are therefore not new. However, what has not been previously appreciated, as far as we are aware, is the close connection of this question with the question of the existence of a composition law for the sum over histories.
Finally, we may comment on the suggestion of Hartle discussed in the Introduction -that the sum over histories is more general than the canonical scheme. Our results are not inconsistent with this claim: the absence of Killing vectors associated with the Wheeler-DeWitt equation probably rules out a canonical quantization, but does not obviously prevent the construction of sums over histories. Of course, there still remains the question of how the sums over histories are to be used to construct probabilities, i.e., the question of interpretation. This is a difficult question and will not be addressed here.
We emphasize that these arguments are intended to be suggestive, rather than rigorous. These issues will be considered in greater detail in future publications.

Table 1. The various Green functions, and their roles in non-relativistic quantum mechanics. G • G and G × G denote relativistic and non-relativistic composition laws respectively. Unless otherwise stated, sums over histories are over arbitrary paths in spacetime from x to y.

Green Fn | Composition Law | Sum Over Histories | Canonical Rep. <x|y>
G_+(x,y), G_-(x,y) | G_+ = G_+ • G_+; G_- = G_- • G_-; G_+ • G_- = 0 | N > 0, x^0 > y^0; N < 0, x^0 < y^0; N > 0, x^0 < y^0; N < 0, x^0 > y^0 | p^0 > 0; p^0 < 0; <p|p'> > 0, 1l = usual.
G_F(x,y) | G_F = G_F • G_F | N > 0, x^0 ≷ y^0 | see G_+ and G_-.
G^(1)(x,y) | G = G^(1) • G^(1); G^(1) = G^(1) • G; G^(1) = G_+ • G_+ − G_- • G_- | ∞ > N > −∞ | p^0 ≷ 0; <p|p'> > 0, 1l = unusual.
G_NW(x,y) | G_NW = G_NW × G_NW | (i) paths moving forward in x^0, N > 0; (ii) all paths not crossing the final surface |

Figure 1. Paths for the non-relativistic propagator in the set p(x', t' → x_t, t → x'', t'').
Figure 2. Paths for the relativistic propagator.
Figure 3. The surface Σ divides the configuration space C into two components, C_1 and C_2. A path typically crosses Σ many times; the point of first crossing is at x_σ.
Figure 4. The path crosses the surface Σ for the first time at x = x_σ and t = t_σ, and is in the set p(x', 0 → x_σ, t_σ → x'', T).
Figure 5. A path crossing the surface Σ and ending at x'' is cancelled by a path crossing the surface and ending at x_σ + (x_σ − x''), provided that V(x) is independent of x^0.

FIGURES ARE AVAILABLE FROM THE AUTHORS
That this is somewhat puzzling was, to our knowledge, first noticed by Ikemori[J].
The validity of the PDX for arbitrary surfaces has been demonstrated in Ref.[vB] by the use of a generalized Green's theorem.
It is also possible to prove the path decomposition expansion for non-zero potential by using the Wiener measure on the set of all continuous Brownian paths[sam] (for a description of the Wiener measure, and its role in the sum-over-histories, see for example Ref. SB).
One might reasonably ask which Green function is involved in (5.1), and to what extent it is defined in general spacetimes. One can think of G as being the Feynman or Hadamard propagators. These may be formally defined in any spacetime, using the proper time representation, (4.1), although their interpretation in terms of positive and negative frequencies is generally not possible [ct].
Note that none of these claims are in contradiction with the fact that the causal Green function is well-defined and obeys a composition law on any globally hyperbolic spacetime, even those possessing no Killing vectors [fulling]. The causal Green function does not appear to have a sum-over-histories representation, whilst the above conclusions specifically concern propagators generated by sums over histories.
This conclusion does not exclude the possibility of the existence of a propagator not given by a sum over histories which obeys a composition law, in analogy with the causal Green function. Note also that the propagation amplitude generated by a sum over histories will obey the Wheeler-DeWitt equation[hh], but this does not imply the existence of a composition law or a canonical formulation.
ACKNOWLEDGEMENTS
We would like to thank Arlen Anderson, Franz Embacher, Eddie Farhi, Jeffrey Goldstone, Sam Gutmann, Jim Hartle, Chris Isham, Samir Mathur and Claudio Teitelboim for useful conversations. We would particularly like to thank Larry Schulman for useful conversations, and for introducing us to the path decomposition expansion. J.J.H. was supported in part by a Royal Society Fellowship. M.O. was supported by an SERC (UK) fellowship while part of this work was carried out.
ish. C. Isham, 'Canonical quantum gravity and the problem of time', Imperial College preprint TP/91-92/25, gr-qc/9210011.
hh. J. J. Halliwell and J. B. Hartle, Phys. Rev. D43, 1170 (1991).
AK. A. Auerbach and S. Kivelson, Nucl. Phys. B257, 799 (1985).
vB. P. van Baal, 'Tunneling and the path decomposition expansion', Utrecht Preprint THU-91/19 (1991).
G. J. Govaerts, Hamiltonian Quantisation and Constrained Dynamics (Leuven University Press, Leuven, Belgium, 1991).
HT. M. Henneaux and C. Teitelboim, Ann. Phys. (N.Y.) 143, 127 (1982).
HK. J. B. Hartle and K. V. Kuchař, Phys. Rev. D 34, 2323 (1986).
JJH. J. J. Halliwell, Phys. Rev. D 38, 2468 (1988).
JJH2. J. J. Halliwell, in Conceptual Problems of Quantum Gravity, edited by A. Ashtekhar and J. Stachel (Birkhäuser, Boston, 1991).
S. L. S. Schulman and R. W. Ziolkowski, in Path Integrals from meV to MeV, edited by V. Sa-yakanit, W. Sritrakool, J. Berananda, M. C. Gutzwiller, A. Inomata, S. Lundqvist, J. R. Klauder and L. S. Schulman (World Scientific, Singapore, 1989).
Y. N. Yamada, Sci. Rep. Tôhoku Uni., Series 8, 12, 177 (1992).
J. H. Ikemori, Phys. Rev. D 40, 3512 (1989).
C. R. H. Cameron, J. Math. Phys. 39, 126 (1960).
H. J. B. Hartle, private communication.
sam. S. Gutmann, private communication.
SB. L. S. Schulman, Techniques and Applications of Path Integration (Wiley, New York, 1982).
Schiff. L. I. Schiff, Quantum Mechanics (McGraw-Hill, New York, 1955).
ct. C. Teitelboim, Phys. Rev. D 25, 3159 (1982).
ID. C. Itzykson and J. M. Drouffe, Statistical Field Theory Vol. 1: From Brownian Motion to Renormalization and Lattice Gauge Theory (Cambridge University Press, Cambridge, 1989).
??. The naive composition law, in the context of quantum gravity, has appeared in numerous papers over the years. Just a few of them are: S. W. Hawking, in General Relativity: An Einstein Centenary Survey, eds S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979); J. A. Wheeler, in Relativity, Groups and Topology, eds B. S. DeWitt and C. M. DeWitt (Gordon and Breach, New York, 1963); S. Coleman, Nucl. Phys. B310, 643 (1988); S. Giddings and A. Strominger, Nucl. Phys. B321, 481 (1988). Many of the above authors appreciated that it may be incorrect, but not for the reasons given in this paper.
Kac. M. Kac, Probability and Related Topics in the Physical Sciences (Interscience, New York, 1959).
BFV. E. S. Fradkin and G. A. Vilkovisky, Phys. Lett. 55B, 224 (1975); I. A. Batalin and G. A. Vilkovisky, Phys. Lett. 69B, 309 (1977).
kuch. K. Kuchař, 'Time and interpretations of quantum gravity', to appear in the Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics, eds G. Kunstatter, D. Vincent and J. Williams (World Scientific, Singapore, 1992).
fulling. S. Fulling, Aspects of Quantum Field Theory in Curved Spacetime (Cambridge University Press, Cambridge, 1989).
harnew. The idea that the sum over histories may supply a more general formulation of quantum mechanics has been emphasized by Hartle. See, for example, J. B. Hartle, Phys. Rev. D38, 2985 (1988); Phys. Rev. D44, 3173 (1991); in Quantum Cosmology and Baby Universes, Proceedings of the Jerusalem Winter School on Theoretical Physics, eds S. Coleman, J. Hartle, T. Piran and S. Weinberg (World Scientific, Singapore, 1991); and in Proceedings of the International Symposium on Quantum Physics and the Universe, Waseda University, Tokyo, Japan (1992).
C. Teitelboim, private communication.
| [] |
Index Terms - Photovoltaic cells, thermodynamics, intensity, absorption.
A simple thermodynamic argument related to a (weakly absorbing) finite dielectric slab illuminated by sunlight, originally suggested by Yablonovich, leads to the conclusion that the absorption in a dielectric can at best be increased by a factor 4n². Therefore, the absorption in these materials is always imperfect; the Shockley-Queisser limit can be achieved only asymptotically. In this paper, we make the connection between the degradation in efficiency and the Yablonovich limit explicit and re-derive the 4n² limit by intuitive geometrical arguments based on Snell's law and elementary rules of probability. Remarkably, the re-derivation suggests strategies for breaking the traditional limit and improving PV efficiency by enhanced light absorption.
I. INTRODUCTION
The thermodynamic argument proposed by Shockley-Queisser (S-Q) [1] allows us to calculate the maximum efficiency of solar cells. In an earlier paper [2], we have shown that the essential features of the thermodynamic limit (as well as various practical approaches proposed to approach or exceed it) can be understood by shining sunlight onto a box of atoms characterized by two energy levels, E_1 and E_2, see Fig. 1.
) E E ω = − ℏ
entering the dielectric are absorbed with probability one. Our previous derivation of the S-Q limit for the 2-level system presumed perfect absorption [2]. For imperfect absorption, we should have written the upward and downward transition rates as, 2 1 (1 )
$$U = P \times \theta_S\, f_1 (1 - f_2)\, n_{ph}, \tag{1}$$
$$D = \theta_D\, f_2 (1 - f_1)\,(n_{ph} + 1), \tag{2}$$
allowing for the fact that some photons of the right energy may exit the solar cell without being absorbed (P < 1). Here, n_ph is the Bose-Einstein distribution related to the radiation from the sun (with appropriate ∆µ and T). θ_S and θ_D are the input and output radiation angles. If we equate Eq. (1) and (2) and follow the procedure in Ref. [2] to recalculate the efficiency (η) of the simplified 2-level model, we find
$$\eta = \left(1 - \frac{T_D}{T_S}\right) - \frac{k T_D}{E_g}\,\log\frac{\theta_D}{\theta_S} - \frac{k T_D}{E_g}\,\log\frac{1}{P}. \tag{3}$$
A reduction in absorption (P) reduces η below the thermodynamic limit, an intuitive result.
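For a rough sense of scale, Eq. (3) can be evaluated directly; the sketch below uses representative numbers (T_S = 6000 K, T_D = 300 K, E_g = 1.1 eV, θ_D = θ_S), which are our illustrative assumptions rather than values from this paper.

```python
import numpy as np

k = 8.617e-5                      # Boltzmann constant in eV/K
TS, TD, Eg = 6000.0, 300.0, 1.1   # illustrative temperatures (K) and gap (eV)

def eta(P, theta_ratio=1.0):
    """Two-level efficiency of Eq. (3); theta_ratio = theta_D/theta_S."""
    return (1 - TD/TS) - (k*TD/Eg)*np.log(theta_ratio) - (k*TD/Eg)*np.log(1.0/P)

for P in (1.0, 0.5, 0.1):
    print(f"P = {P:4.1f}  ->  eta = {eta(P):.3f}")
# Imperfect absorption costs (k*T_D/E_g)*log(1/P): a mild logarithmic penalty.
```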
Figure 1: (a) A system comprising 2-level atoms; (b) a 2-level atom interacting with incident light.
In this paper, we will show that the absorption probability (absorptance) of a weakly absorbing material is given by
$$P = \frac{f_A\,\alpha L}{1 + f_A\,\alpha L}, \tag{4}$$
where L is the thickness of the dielectric slab defined by the electrical design of the cell, α is the absorption coefficient of the material under consideration, and f_A is the absorption enhancement factor defined by the optical design of the cell.
When f_A αL ≫ 1, P → 1 and the efficiency approaches the 2-level efficiency limit (analogous to the S-Q limit), see Eq. (3). Unfortunately, the cell cannot be made arbitrarily thick, because the photogenerated carriers in a thick film will recombine before being collected by the contact, and the short-circuit current J_SC will be reduced. Instead, we should focus on increasing P by increasing f_A, with a clever arrangement of mirrors, reflectors, concentrators, photonic crystals and metamaterials.
In 1982, Yablonovich used the theory of detailed balance of photons to provide a surprising answer [3][4]: in essence, no matter how clever or sophisticated the optical design, f_A cannot exceed 4n² (n is the refractive index of the solar cell), and therefore the absorption probability P of a finite cell of thickness L can never be perfect. The theory suggests several practical ways to approach the limit. For a poorly designed cell, f_A → 1 and f_A αL ≪ 1, so that P ≈ αL; a more thoughtful design enhances f_A → 4n², so that even with f_A αL < 1, P ≈ 4n² αL. While P < 1 in either case, a good design could at least increase P by a factor of 4n² over a poorly designed one.
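A quick numerical illustration of the contrast (the index and single-pass absorption below are assumed values):

```python
n, alphaL = 3.5, 0.01             # refractive index and single-pass absorption

P = lambda fA: fA*alphaL / (1 + fA*alphaL)   # Eq. (4)

print(P(1.0))                     # poorly designed cell: ~0.0099
print(P(4*n**2))                  # Yablonovich limit f_A = 4n^2: ~0.33
```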
In Sec. II, we re-derive the Yablonovich limit by elementary geometrical arguments based on Snell's law. We explain the absorption enhancement factor
$$f_A = f_I \times f_L.$$
Here, f_L is the average absorption path length enhancement factor per trip through the dielectric (normalized to the cell thickness), to account for the fact that a ray at a random angle θ_i has a higher probability of absorption than a ray passing vertically through the cell, i.e., f_L ≡ ⟨L_eff(θ_i)⟩_i / L. And f_I = 2β is the intensity enhancement factor, calculated from the average number of bounces β a photon experiences before it escapes the cell. In turn, β depends on the material index n and the dimensionality D_S of the surfaces defining the scattering of photons.
Therefore, f_A in a weakly absorbing material is found by determining, through simple geometrical arguments, the two parameters ⟨L_eff(θ_i)⟩_i and β. The derivation of Eq. (4) will also suggest techniques to beat the Yablonovich limit, i.e., f_A > 4n², by restricting the emission angle or changing the statistics of photon scattering, see Sec. III. Some of these have also been discussed in [5]; however, we offer intuitive interpretations and significant generalizations of the key results to conventional structures and some recent light trapping configurations. Our conclusions are summarized in Sec. IV.
II. STATISTICS OF LIGHT RAYS
A. A Summary of the key Results
The essence of Yablonovich's argument in [3][4] is understood by the following scenario: Consider a dielectric with an inlet and an outlet for photons. At steady state, the incoming and outgoing flow rates (#/sec) are equal, to ensure there is no constant buildup of energy inside the dielectric. If the incoming flux (#/sec/area) F_i is fixed, the steady-state number of photons inside the layer can be controlled by changing the outgoing flux F_o. In principle, an arbitrary decrease in F_o would be accompanied by a corresponding increase in the photon density inside the dielectric layer. Generalizing Yablonovich's argument, we can show that when a solar cell is illuminated by sunlight of flux F_i (Fig. 2), the emission (outgoing flux) cannot be reduced arbitrarily by optical design, and therefore the 'pressure' of the photon gas inside the cell (i.e., the intensity enhancement) cannot exceed
$$f_I = c_{D_S}\, n^{D_S - 1} \times \frac{\theta_S}{\theta_{esc}},$$
$$f_A(n, D_S) = f_L \times c_{D_S}\, n^{D_S - 1} \times \frac{\theta_S}{\theta_{esc}}. \tag{5}$$
As we will see below that in a conventional 3D solar cells with a Lambertian back mirror,
B. Derivation of Eq.(4) for Classical Cells
Consider the fate of a photon trapped within a finite
Figure 2: Outgoing flux F_o for cells bounded by randomizing surfaces of dimensionality D_S. For 1D, F_o = F_i; for 2D, F_o = F_i/(n c_2); for 3D, F_o = F_i/(n² c_3); in general, F_o = F_i/(n^{D_S−1} c_{D_S}).
dielectric slab as it tries to escape the dielectric region by repeated bouncing between the two (one reflecting, one random) surfaces, as in Fig. 3(a). Snell's law
$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$
dictates that the maximum angle at which a photon incident on the dielectric/air interface can escape the dielectric is given by
$$\theta_C = \sin^{-1}(1/n).$$
The probability P_esc that a ray will escape through the escape cone (0 < θ < θ_C) depends on the dimensionality of the confining surfaces (Fig. 3).
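For orientation (an illustrative evaluation with an assumed index), the escape cone is remarkably narrow for a high-index semiconductor:

```python
import numpy as np

n = 3.5                            # silicon-like index (an assumed value)
theta_c = np.arcsin(1.0 / n)       # escape-cone half angle, theta_C
print(np.degrees(theta_c))         # ~16.6 degrees

# Fraction of an isotropic internal hemisphere lying inside the cone:
frac = 1 - np.cos(theta_c)         # (solid angle of cone)/(2*pi)
print(frac, 1/(2*n**2))            # ~0.042 vs the small-angle estimate 1/(2n^2)
```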
If a ray is incident outside the escape cone, it will bounce back, by total internal reflection, at a random angle defined by the local orientation of the top interface. The average number of bounces a photon experiences before it escapes the dielectric is defined by the escape probability per bounce as
$$\beta = P_{esc}^{-1}. \tag{6}$$
Note that the number of bounces before escape equals the enhancement of photon intensity per round trip inside the dielectric layer.
eff i L L f L α α × ≡ where eff L i f L L ≡ .
Now, to complete the derivation of Eq. (4), consider a solar cell in which photons bounce $\beta$ times between the top and bottom interfaces before exiting a dielectric slab of thickness $L$ and absorption coefficient $\alpha$. The probability of absorption per round trip is $\sim 2 f_L \alpha L$, and in every round trip a fraction $1/\beta$ of the photons escape through the top surface without being absorbed. Therefore, the absorption probability, or absorptance, is

$$P \equiv A = \frac{2 f_L \alpha L}{\beta^{-1} + 2 f_L \alpha L} = \frac{2\beta f_L \alpha L}{1 + 2\beta f_L \alpha L} = \frac{f_I f_L \alpha L}{1 + f_I f_L \alpha L} \equiv \frac{f_A \alpha L}{1 + f_A \alpha L}. \quad (4)$$
Once the two parameters, $f_L$ and $\beta$, are determined, the absorptance follows directly from Eq. (4).
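As a quick numerical illustration of Eq. (4), the sketch below (in Python, with illustrative parameter values that are our own assumptions, not taken from the text) evaluates the absorptance for a planar slab and for the Lambertian limit:

```python
def absorptance(alpha, L, f_L, beta):
    """Eq. (4): P = 2*f_L*alpha*L / (beta**-1 + 2*f_L*alpha*L)."""
    x = 2.0 * f_L * alpha * L
    return x / (1.0 / beta + x)

n = 3.49                  # refractive index of silicon (quoted in the text)
alpha, L = 1.0e2, 200e-6  # assumed absorption coefficient (1/m) and thickness (m)

# Planar slab (Sec. II-C): f_L ~ 1, beta = 1, so P ~ 2*alpha*L for small alpha*L
print(absorptance(alpha, L, f_L=1.0, beta=1.0))
# Lambertian back reflector (Sec. II-E): f_L = 2, beta = n**2, so f_A = 4*n**2
print(absorptance(alpha, L, f_L=2.0, beta=n**2))
```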
C. A Planar Bottom Mirror with no Randomness ($D_S = 0$)
Consider a dielectric defined by two parallel, planar surfaces. If a ray of sunlight refracts into the dielectric as in Fig. 4(a), it must enter the dielectric within the 'escape cone' (see the arrow labeled '0' in Fig. 4(c)). If the ray is neither decayed (negligible perturbation due to absorption) nor scattered within the dielectric, the ray will be incident on the bottom surface within the escape cone and will escape to air (arrow '1', Fig. 4(c)), with no further reflection. The path-length enhancement depends on the angle of the refracted ray ($\theta_D$), and the photon makes a single pass, $\beta = 1$. Only a single ray is associated with any point 'A', so that the density of photons at each point inside the dielectric is exactly equal to that in air ($f_I = \beta = 1$). Therefore, the absorption enhancement is

$$f_A = f_I \times f_L = f_L = \frac{1}{\cos\theta_D},$$

an intuitive result. If we make the back surface fully reflecting, as in Fig. 4(b), the ray still enters the dielectric within $\theta_{max} = \theta_C$ (Fig. 4(d)), bounces once on the back mirror (red dot, marked '1'), and then escapes through the top interface (arrow marked '2'), never once leaving the escape cone. The photon makes two trips ($f_I = 2$) through the dielectric before it escapes, so that

$$f_A = f_I \times f_L = \frac{2}{\cos\theta_D}.$$
By blocking off the exit from the bottom, the internal photon density has been raised by a factor of 2, because every point 'A' is traversed by two rays: one on its way down to the mirror, the other, after bouncing back, on its way to escape.
The results above are consistent with Eq. (4); with $D_S = 0$, Eq. (5) suggests that $f_A$ is independent of the index of the dielectric, consistent with the results derived in the preceding paragraph. Finally, one sees that a dielectric defined by parallel, planar surfaces makes a poor absorber, i.e., $P \sim 2\alpha L$ when $\alpha L$ is small. Absorption is enhanced considerably by roughening the bottom reflector, as discussed below.

D. Bottom Mirror with 1D Randomness ($D_S = 1$)

Fig. 5(a) shows a dielectric with a planar surface on the top and a roughened (only along the x direction), fully reflective surface in the back. Let us assume that the incident ray is restricted to planes parallel to the xz-plane. The incident ray enters the dielectric through the top planar surface within the escape cone ($\theta \le \theta_C$) of the top surface; this is represented as state-0 in Fig. 5(b). The ray is scattered and reflected by the bottom rough surface. If the angle following the scattering is outside the escape cone ($\theta > \theta_C$), the state of the ray is characterized by a point within the blue region in Fig. 5(b). Since the ray is outside the escape cone, it will be internally reflected from the top surface (total internal reflection). The bouncing between the surfaces will continue, and the photon will remain trapped within the dielectric as long as the ray occupies a state outside the escape cone (the blue region in Fig. 5(b)). Statistically, on average, the photon will bounce $\beta$ times (i.e., hop through $\beta$ states in Fig. 5(b)) before it is randomly scattered into the escape cone and finally exits the structure (arrow $\beta$ in Fig. 5(b)). Note that $\beta$ should be understood as an average: some photons may escape after a single bounce, while others may be trapped for many more bounces than $\beta$.
The escape probability and $\beta$ can be calculated by integrating over the angle statistics of Fig. 5(b), which gives $\beta = \pi n/2$. Therefore, by Eq. (4) we find

$$P \equiv A = \frac{2 f_L \alpha L}{\beta^{-1} + 2 f_L \alpha L} = \frac{\pi n \alpha L}{1 + \pi n \alpha L} \quad (7)$$
for a reflective rough surface (in one direction). The appearance of $\pi$ and $n\,(>1)$ suggests improved absorption: even for a poor absorber like silicon ($n = 3.49$), the 1D rough bottom reflector increases the absorptance by a factor of $\pi n \approx 11$, and a 200 µm thick silicon layer will appear optically as a 2 mm thick film.
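The factor-of-11 claim above can be checked directly; the following minimal sketch is our own arithmetic, using the paper's $n = 3.49$:

```python
import math

n = 3.49                       # silicon
f_A_1d = math.pi * n           # 1D rough-reflector enhancement, f_A = pi*n
print(round(f_A_1d, 2))        # ~10.96, the "factor of 11" in the text
print(200e-6 * f_A_1d * 1e3)   # 200 um film -> effective thickness in mm (~2.2)
```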
E. Random Surfaces ($D_S = 2$)
It is possible to roughen the bottom reflector in both the x and y directions; see Fig. 5(c). The light comes in through the top planar surface and gets scattered by the rough back reflector. The scattering of light and the trapping concept are the same as explained in the previous section. The light cannot escape from the dielectric if it is scattered into, and then stays within, the states in the blue region of Fig. 5(d). The light escapes after $\beta$ bounces, when a random scattering by the bottom interface scatters the ray into the escape cone.
To calculate $P_{esc}$ and $\beta$, we now integrate over the full solid angle of the escape cone (Fig. 5(d)); this gives $\beta = n^2$. For silicon, on average, the ray travels an astonishing ~25 times inside the layer before it can escape. For a very weakly absorbing dielectric, the absorption in a single pass by the randomly scattered light is $f_L \alpha L = 2\alpha L$, and the absorptance is now given by

$$P = \frac{4 n^2 \alpha L}{1 + 4 n^2 \alpha L}. \quad (8)$$

The formula implies that sunlight entering a 200 µm thick silicon film will see effectively 1 cm of optical thickness for absorption! Even for very weakly absorbed light, $P \to 1$; such is the power of a single roughened surface. However, for improved electrical properties of the solar cell, a much thinner absorber layer is desired; for such cases, even higher absorption enhancement is required to reach $P \to 1$.
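The numbers quoted above can be verified with a few lines; this sketch is our own arithmetic, with an assumed absorption coefficient for the final print:

```python
n = 3.49                 # silicon
beta = n**2              # bounces before escape, 2D Lambertian reflector
f_A = 4 * n**2           # Yablonovitch limit, f_A = f_I * f_L = (2*beta) * 2
L = 200e-6               # 200 um silicon film (m)

print(round(2 * beta))         # ~24 trips through the layer ("~25" in the text)
print(round(L * f_A * 100, 2)) # effective optical thickness in cm (~1 cm)

def absorptance_eq8(alpha):    # Eq. (8): P = 4*n^2*alpha*L / (1 + 4*n^2*alpha*L)
    return f_A * alpha * L / (1 + f_A * alpha * L)

print(absorptance_eq8(1.0e3))  # assumed alpha = 1e3 / m
```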
We have used a configuration with a planar, refracting surface facing the sun and the roughened mirror at the back, because the ray tracing is intuitive and the analysis easy to explain. The results are unchanged if the configuration is reversed: a roughened refracting surface on top and a planar, reflecting surface at the back.
F. Planar Surfaces and Photon recycling
The angle diagram (Fig. 5(d)) suggests that it is not necessary to have a random surface to achieve high photon intensity. Any process that scatters the photons away from the escape cone can achieve similar amplification. For example, if a photon is absorbed and immediately re-emitted at a random angle, Fig. 6 shows that the number of repeated bounces will be identical to that from scattering by rough surfaces [6]. The randomization of the angles by a process called photon recycling has been used with great success in creating ultra-high-efficiency cells that do not require rough surfaces [7].
III. EXCEEDING THE $4n^2$ LIMIT
A. Intensity Enhancement
One can increase $f_A \equiv f_I \times f_L\;(>4n^2)$ by increasing $f_I$, as follows. The idea of this approach is to reduce the escape angle $\theta_{esc}$ below $\theta_C$, so that $P_{esc}$ is reduced and the number of photons within the box is enhanced. If the output emission angle is reduced by a factor $N_{out}$ by an angle-selective layer, so that light can only be emitted within an angle $\theta_{Air} = \pi/(2N_{out})$, as in Fig. 7(a), the escape cone inside the dielectric will likewise be reduced by a factor $N$ ($\theta_{esc} = \theta_C/N$). Thus, from Snell's law,

$$n \sin\frac{\theta_C}{N} = \sin\frac{\pi}{2N_{out}}. \quad (10)$$
We can simplify this relationship. For practical dielectrics we can approximate the critical angle as $\sin\theta_C = (1/n) \approx \theta_C$. Now, if $N_{out}$ is large enough, we can rewrite (10) as

$$n\,\frac{\theta_C}{N} \approx \frac{1}{N} \approx \frac{\pi}{2N_{out}};$$

therefore, $N \approx 2N_{out}/\pi$ for $N_{out} \gg 1$.
The roughened back reflector continues to randomize the light angles inside the dielectric, so that the angle space in Fig. 7(b) is populated with equal probability. Following the derivation of Eq. (8), but now for the restricted escape angle, we find $\beta = n^2 N^2$, so that

$$f_A = 4 \times n^2 \times N^2,$$

which is significantly higher than the Yablonovitch limit. This expression is in the form of Eq. (5), with $\theta_{esc} = \theta_C/N$; the same restricted-angle geometry arises for concentrator solar cells [8] or cells with restricted emission [9][10][11]. Although the absorption path-length enhancement is the same as before ($f_L = 2$), the intensity enhancement is very high in this case. The photons are virtually guaranteed to be absorbed with probability 1 ($P \to 1$), because even the weakest absorbing materials have an absorption coefficient of $\sim 10^{-5}\,\mathrm{m}^{-1}$.

Returning to Eq. (3), we find that suppressing $\theta_{esc}$ not only improves $P$ (it reduces the third term on the right), but simultaneously suppresses the angle anisotropy and increases the open-circuit voltage close to the bandgap [2], [10]. In practice, $N_{out} \to 10^5$ may be both impractical and unnecessary for absorption enhancement. The quadratic improvement of absorption with angle restriction ensures that even for moderate angle restriction, the absorptance increase is significant.
Note that Eq. (5) only holds when the rays are scattered such that all possible photon densities of state are accessible. For the following cases, this condition is not fulfilled and hence the absorption enhancement cannot be described by Eq. (5).
B. 'Intensity' enhancement versus 'absorption' enhancement.
The second approach to obtain $f_A \equiv f_I \times f_L > 4n^2$ is to increase $f_L \gg 2$ by preferential low-angle scattering of the rays. For these cases, $f_I$ is often reduced below $2n^2$, but $f_A \equiv f_I \times f_L$ still exceeds $4n^2$. We will now discuss the theory and two specific implementations that have been discussed in the literature [12], [13].
1) Theory
In all the preceding discussions, we have assumed the light to be scattered isotropically inside the dielectric; the intensity enhancement was $f_I = 2\beta = 2n^2$. Note that the effective path length for light is much higher if it is scattered into large angles, allowing the rays to get absorbed in a single pass through the dielectric. Now, assume a surface scatters the rays anisotropically. As shown in Fig. 8(a), rays with a smaller angle will require a larger number of bounces before absorption, and have a higher probability of escaping. A ray at angle $\theta$ undergoes absorption according to a probability distribution $P(\Omega)$. Therefore, the average absorption path-length enhancement is

$$f_L = \frac{1}{\alpha L}\int \left(1 - e^{-\alpha L/\cos\theta}\right) P(\Omega)\, d\Omega. \quad (11)$$

If the surface is designed such that it preferentially scatters the rays nearly parallel ($\theta \to \pi/2$) to the surface, the absorption path enhancement $f_L$ can be very high. The strong absorption implies fewer bounces ($\beta < n^2$); the overall absorption, however, exceeds the Yablonovitch limit. Fig. 8(b) shows that we can obtain a very high $f_L$ as $L \to 0$, highlighting the importance of evanescent-mode absorption.
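Eq. (11) is straightforward to evaluate numerically; the sketch below assumes a Lambertian interior angular distribution, $P(\theta) = 2\sin\theta\cos\theta$ on $[0, \pi/2]$, which is our own assumption for illustration:

```python
import numpy as np

def f_L_avg(alpha, L, n_grid=200_000):
    """Eq. (11) for an assumed Lambertian interior distribution,
    P(theta) = 2*sin(theta)*cos(theta), normalized on [0, pi/2]."""
    theta = np.linspace(1e-6, np.pi / 2 - 1e-6, n_grid)
    p = 2.0 * np.sin(theta) * np.cos(theta)
    absorbed = 1.0 - np.exp(-alpha * L / np.cos(theta))
    return np.trapz(p * absorbed, theta) / (alpha * L)

print(f_L_avg(alpha=1.0, L=1e-3))   # weak absorption: f_L -> 2, the Lambertian value
print(f_L_avg(alpha=1e4, L=1e-3))   # strong absorption: P -> 1, so f_L -> 1/(alpha*L)
```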
2) Beating the Limit by Anisotropic Scattering.
Several recent works beat the $4n^2$ limit of light absorption with innovative optical structures that scatter light predominantly into guided modes [12], as in Fig. 9(a). To understand this approach intuitively, we calculate the integral for $f_L$ shown above in (11) by partitioning the rays into two groups: one for 'very low angle' evanescent-wave absorption ($A_{ev}$), and one for bulk absorption ($A_{bulk}$). The intensity enhancement $f_I$ is calculated by turning off the absorption: the ray of light goes into the dielectric (arrow '0' in Fig. 9(b)), bounces at the back (states shown by the set of dots marked '1'), and then goes out (arrow '2'). Therefore, $f_I = 2\beta = 2$.

In summary, while $f_I$ does not increase, $f_L$ does, so that overall $f_A$ exceeds the Yablonovitch limit. Another form of such a slot waveguide structure has been proposed by Green [13]; see Fig. 9(c). The cladding layers with higher refractive index increase the evanescent-mode coupling to the active layer with lower refractive index (see Fig. 9(c)), and thereby increase $A_{ev}$. As the active layer is made thinner, $L \to 0$, $A_{ev} \gg A_{bulk}$. As in Ref. [12], the increase in $f_L$ compensates for the decrease in $f_I$ to beat the Yablonovitch limit. The rays enter the active layer from the two cladding layers through the escape cones of the cladding/active-layer interface, as shown in the angle-statistics diagram of Fig. 9(d). The rays escaping into the active layer are distributed such that the evanescent modes are enhanced. We note in passing that the evanescent-mode coupling in Ref. [13] is purely a wave-optics phenomenon, and there $A_{ev}$ can only be calculated by solving Maxwell's equations.
Note that the absolute value of the absorptance of these arrangements may not be high (i.e., not close to unity), although the absorption enhancement appears to be even orders of magnitude larger than the $4n^2$ limit.
IV. SUMMARY
For a weakly absorbing layer with a back mirror, the intensity enhancement limit is found to be of the form $f_I = C_{D_S} \times n^{D_S} \times (\theta_C/\theta_{esc})^{D_S}$, where $n$, $C_{D_S}$, and $\theta_{esc}$ are the refractive index, a proportionality constant, and the maximum escape angle, respectively, as defined by the dimensionality $D_S$ of the scattering surface; the classical result is recovered as a special case with a random refracting surface. Additional gain beyond this limit is possible if we observe the following: the essence of light trapping and intensity enhancement is the reduction of the escape probability of the photons. This can be achieved either by increasing the number of states occupied by the photons inside the dielectric, or by decreasing the number of available states that allow photon escape. The example discussed in Sec. III(A) suggests that angle restriction provides significant additional gain, because it improves not only the absorption but also the open-circuit voltage/efficiency of a solar cell. It is important to remember that this additional gain is achieved only for normal incidence of sunlight, obtained by orienting the cell towards the sun throughout the day.
Fig. 1. At equilibrium, a system comprising a collection of 2-level atoms can be described by Fermi-Dirac statistics, with $E_i$ and $\mu_i$ the energy and chemical potential of the $i$-th state ($i = 1, 2$). The photons, on the other hand, follow the Bose-Einstein (B-E) distribution. The open-circuit voltage of the solar cell made out of the system of atoms is given by the splitting of the chemical potentials.

Figure 2: Input and extraction of photons to fill up a 'container/photon-tank'.

Figure 3: (a) Definition of the loss cone (escape cone) and path lengths; (b) illustration of the escape-cone solid angle in a 3D case.

Figure 4: Angle statistics (c, d) of photons in a 1D object (a) without and (b) with a back reflector.

Figure 5: Photon scattering from (a) a 1D and (c) a 2D Lambertian reflector at the back. The corresponding angle statistics are shown in (b) and (d).

Figure 6: Trapping of recycled photons.

Figure 7: (a) Conventional structure with a Lambertian back reflector yielding $4n^2$ absorption enhancement; an extra angle-selective transmitter/reflector layer on this structure can yield $>4n^2$ absorption enhancement as well as a reduction in angle-entropy loss. (b) The angle statistics show that the suppressed escape angle allows more states for photons inside the dielectric.

Figure 8: (a) Absorption of scattered light in the dielectric. (b) Absorption enhancement as a function of dielectric layer thickness $L$.

Figure 9: (a) Coupling of light into evanescent modes to reduce the probability of photon escape. (c) Thin active layer surrounded by high-refractive-index cladding for enhanced evanescent-mode absorption. (b) and (d) show the angle statistics for (a) and (c), respectively.
Detailed Balance Limit of Efficiency of p-n Junction Solar Cells. W Shockley, H J Queisser, J. Appl. Phys. 323510W. Shockley and H. J. Queisser, "Detailed Balance Limit of Efficiency of p-n Junction Solar Cells," J. Appl. Phys., vol. 32, no. 3, p. 510, 1961.
M A Alam, M R Khan, arXiv:1205.6652Fundamentals of PV Efficiency Interpreted by a Two-Level Model. M. A. Alam and M. R. Khan, "Fundamentals of PV Efficiency Interpreted by a Two-Level Model," arXiv:1205.6652, May 2012.
Intensity enhancement in textured optical sheets for solar cells. E Yablonovitch, G D Cody, IEEE Transactions on. 292Electron DevicesE. Yablonovitch and G. D. Cody, "Intensity enhancement in textured optical sheets for solar cells," Electron Devices, IEEE Transactions on, vol. 29, no. 2, pp. 300-305, 1982.
Statistical ray optics. E Yablonovitch, J. Opt. Soc. Am. 727E. Yablonovitch, "Statistical ray optics," J. Opt. Soc. Am., vol. 72, no. 7, pp. 899-907, Jul. 1982.
The confinement of light in solar cells. A Luque, Solar Energy Materials. 232-4A. Luque, "The confinement of light in solar cells," Solar Energy Materials, vol. 23, no. 2-4, pp. 152-163, Dec. 1991.
Intense Internal and External Fluorescence as Solar Cells Approach the Shockley-Queisser Efficiency Limit. O D Miller, E Yablonovitch, S R Kurtz, arXiv:1106.1603v3O. D. Miller, E. Yablonovitch, and S. R. Kurtz, "Intense Internal and External Fluorescence as Solar Cells Approach the Shockley-Queisser Efficiency Limit," arXiv:1106.1603v3, Jun. 2011.
Approaching the Shockley-Queisser Limit in GaAs Solar Cells. Xufeng Wang, M Ryyan Khan, Muhammad A Alam, Mark Lundstrom, presented at the PVSCXufeng Wang, M. Ryyan Khan, Muhammad A. Alam, and Mark Lundstrom, "Approaching the Shockley-Queisser Limit in GaAs Solar Cells," presented at the PVSC, 2012.
The limiting efficiency of silicon solar cells under concentrated sunlight. P Campbell, M A Green, IEEE Transactions on. 332Electron DevicesP. Campbell and M. A. Green, "The limiting efficiency of silicon solar cells under concentrated sunlight," Electron Devices, IEEE Transactions on, vol. 33, no. 2, pp. 234 -239, Feb. 1986.
Angular confinement and concentration in photovoltaic converters. M Peters, J C Goldschmidt, B Bläsi, Solar Energy Materials and Solar Cells. 948M. Peters, J. C. Goldschmidt, and B. Bläsi, "Angular confinement and concentration in photovoltaic converters," Solar Energy Materials and Solar Cells, vol. 94, no. 8, pp. 1393-1398, Aug. 2010.
The effect of photonic bandgap materials on the Shockley-Queisser limit. J N Munday, Journal of Applied Physics. 1126J. N. Munday, "The effect of photonic bandgap materials on the Shockley-Queisser limit," Journal of Applied Physics, vol. 112, no. 6, p. 064501-064501-6, Sep. 2012.
Fundamental limit of light trapping in grating structures. Z Yu, A Raman, S Fan, Opt. Express. 18S3Z. Yu, A. Raman, and S. Fan, "Fundamental limit of light trapping in grating structures," Opt. Express, vol. 18, no. S3, p. A366-A380, 2010.
Fundamental limit of nanophotonic light trapping in solar cells. Z Yu, A Raman, S Fan, Z. Yu, A. Raman, and S. Fan, "Fundamental limit of nanophotonic light trapping in solar cells," Proceedings of the National Academy of Sciences, vol. 107, no. 41, pp. 17491-17496, 2010.
Enhanced evanescent mode light trapping in organic solar cells and other low index optoelectronic devices. M A Green, Progress in Photovoltaics: Research and Applications. 19M. A. Green, "Enhanced evanescent mode light trapping in organic solar cells and other low index optoelectronic devices," Progress in Photovoltaics: Research and Applications, vol. 19, no. 4, pp. 473-477, 2011.
| [] |
[
"ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation",
"ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation",
"ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation",
"ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation"
] | [
"Daitao Xing [email protected] \nNew York University\nUSA\n",
"Jinglin Shen \nOPPO US Research Center\nUSA\n",
"Chiuman Ho [email protected] \nOPPO US Research Center\nUSA\n",
"Anthony Tzes [email protected] \nDhabi and Center for Artificial Intelligence and Robotics\nNew York University Abu\nUAE\n",
"Daitao Xing [email protected] \nNew York University\nUSA\n",
"Jinglin Shen \nOPPO US Research Center\nUSA\n",
"Chiuman Ho [email protected] \nOPPO US Research Center\nUSA\n",
"Anthony Tzes [email protected] \nDhabi and Center for Artificial Intelligence and Robotics\nNew York University Abu\nUAE\n"
] | [
"New York University\nUSA",
"OPPO US Research Center\nUSA",
"OPPO US Research Center\nUSA",
"Dhabi and Center for Artificial Intelligence and Robotics\nNew York University Abu\nUAE",
"New York University\nUSA",
"OPPO US Research Center\nUSA",
"OPPO US Research Center\nUSA",
"Dhabi and Center for Artificial Intelligence and Robotics\nNew York University Abu\nUAE"
] | [] | The exploration of mutual-benefit cross-domains has shown great potential toward accurate self-supervised depth estimation. In this work, we revisit feature fusion between depth and semantic information and propose an efficient local adaptive attention method for geometric aware representation enhancement. Instead of building global connections or deforming attention across the feature space without restraint, we bound the spatial interaction within a learnable region of interest. In particular, we leverage geometric cues from semantic information to learn local adaptive bounding boxes to guide unsupervised feature aggregation. The local areas preclude most irrelevant reference points from attention space, yielding more selective feature learning and faster convergence. We naturally extend the paradigm into a multi-head and hierarchic way to enable the information distillation in different semantic levels and improve the feature discriminative ability for fine-grained depth estimation. Extensive experiments on the KITTI dataset show that our proposed method establishes a new state-of-the-art in self-supervised monocular depth estimation task, demonstrating the effectiveness of our approach over former Transformer variants. | 10.48550/arxiv.2212.05729 | [
"https://export.arxiv.org/pdf/2212.05729v3.pdf"
] | 254,564,688 | 2212.05729 | 373959536e023e451b46e6e3d60228b59568a5ac |
ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation
Daitao Xing [email protected]
New York University
USA
Jinglin Shen
OPPO US Research Center
USA
Chiuman Ho [email protected]
OPPO US Research Center
USA
Anthony Tzes [email protected]
Dhabi and Center for Artificial Intelligence and Robotics
New York University Abu
UAE
ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation
The exploration of mutual-benefit cross-domains has shown great potential toward accurate self-supervised depth estimation. In this work, we revisit feature fusion between depth and semantic information and propose an efficient local adaptive attention method for geometric aware representation enhancement. Instead of building global connections or deforming attention across the feature space without restraint, we bound the spatial interaction within a learnable region of interest. In particular, we leverage geometric cues from semantic information to learn local adaptive bounding boxes to guide unsupervised feature aggregation. The local areas preclude most irrelevant reference points from attention space, yielding more selective feature learning and faster convergence. We naturally extend the paradigm into a multi-head and hierarchic way to enable the information distillation in different semantic levels and improve the feature discriminative ability for fine-grained depth estimation. Extensive experiments on the KITTI dataset show that our proposed method establishes a new state-of-the-art in self-supervised monocular depth estimation task, demonstrating the effectiveness of our approach over former Transformer variants.
Introduction
Accurate depth estimation is critical for many applications in the computer vision and robotics fields, such as perception, navigation, and path planning. Advancements in deep learning have brought significant breakthroughs in the accuracy of depth estimation methods in recent years. While supervised learning-based methods like (Ranftl, Bochkovskiy, and Koltun 2021a) achieve remarkable performance on pixel-wise dense predictions from monocular images, the requirement of a large number of dense labels for training, and the excessive cost of acquiring those labels using LiDAR, constrain their usage in real-world applications. Instead, self-supervised depth estimation methods, which learn depth values using only monocular or stereo image sequences, have become more popular.
Figure 1: Depth prediction (top two rows) from a single image on KITTI with semantic guidance shows geometry-consistency preservation in locally uncertain areas. The ROIFormer overview (bottom row) shows that the depth of the red star requires attention from various semantic regions.

Monocular self-supervised depth estimation utilizes the photometric loss and smoothness constraints of consecutive frames in image sequences to simultaneously learn the depth and pose networks. Despite notable achievements, self-supervised methods, which rely only on similarity constraints, still have a large performance gap with respect to supervised methods. (Lyu et al. 2021) shows that the bottlenecks come from inaccurate depth estimation, especially at object boundaries, due to moving objects, ambiguity in low-texture regions, reflective surfaces, occlusion, and the uncertainty of pose estimation. However, the consistency constraints of RGB images alone are insufficient to reduce these effects; stronger geometric information from external modalities is required.
Recent works instead leverage semantic information to improve monocular depth predictions by incorporating geometric guidance. While most of those works incorporate the semantic information explicitly, fewer works focus on designing cross-domain feature aggregation strategies to optimize the intermediate depth representations. Recently, Transformer-based attention methods demonstrated their superiority over traditional CNNs in many vision tasks. (Jung, Park, and Yoo 2021) proposes a cross-modality attention module to refine depth features progressively at multiple scales and obtains significant improvement; however, the interaction is restricted to corresponding features to avoid the computational overhead of classic Transformer methods. (Zhu et al. 2020b) provides an alternative to the Transformer with significantly fewer parameters and faster convergence. However, we found that the performance of deformable attention drops dramatically when high-resolution images are used as inputs. We argue that this is due to attention module collapse and a failure to locate relevant information as the feature space becomes too large.
The aforementioned analysis of semantics-guided depth estimation and the success of attention-based feature aggregation indicate that performance is mainly determined by two aspects: (1) providing a dynamic attention region for each reference point that covers the intact local semantic information and excludes irrelevant points, regardless of the spatial size of the feature maps; and (2) locating the semantic positions within the search areas used to update the reference point. Accordingly, we propose a Region-of-Interest (ROI) guided deformable attention module, named ROIFormer, which performs deformable attention within learnable, adaptive local region proposals. Inspired by (Yang et al. 2018) and related work, these object-aware proposals can be inferred directly from semantic feature maps via lightweight networks. Unlike (Yang et al. 2018), which creates proposals by deforming meta-anchors, we merge the proposal generation inside the multi-head attention modules and define attention areas implicitly according to information from different semantic levels. The deformable attention is then performed within the constrained regions to find the most relevant semantic features. With the search space restrained to object-aware local areas, the search complexity is dramatically reduced, resulting in robust feature enhancement and fast convergence.
Although the semantic features provide depth consistency within each connected segment, instance-level information is still missing, which results in uncertainty at the boundaries. Thus, we consider the spatial relationship between instance objects and the crowded areas (roads, sidewalks, and buildings) in 3D space. Points projected into 3D space, based on the estimated depth and the intrinsic camera model, should stay close to nearby points of the same category and keep a reasonable distance to the reference crowded areas. Points far away from the reference points are instead masked as outliers, which should not be used to calculate the photometric similarity loss. Overall, our main contributions are summarized as follows:
• We provide a detailed comparison between different feature fusion strategies for efficient self-supervised depth estimation, indicating that search space complexity is critical for model convergence and performance improvement.
• We propose ROIFormer, which guides the attention in local areas to the most relevant semantic information in an unsupervised and efficient way.
• The proposed self-supervised depth estimation network with semantic guidance achieves state-of-the-art performance under various settings.
Related Work
Self-Supervised Monocular Depth Estimation
Significant improvement has been made since (Zhou et al. 2017) proposed the generalized framework which enables supervision from consecutive frames via ego-motion. Later works focus on more elegant loss designs to filter unreliable supervision: modeling the photometric uncertainties of pixels on input images, or introducing a feature-metric loss to stabilize the loss landscape. (Bian et al. 2019) upgraded the geometry consistency loss for scale-consistent predictions. In order to provide more geometric information for self-supervised learning, (Ranjan et al. 2019; Wang et al. 2019; Zhao et al. 2020; Petrovai and Nedevschi 2022; Zhu et al. 2020a) integrate optical flows and pseudo labels as extra constraints. (Guizilini et al. 2020a) and (Lyu et al. 2021) proposed optimized architectures for more efficient depth estimation.
Semantic-guidance for Depth Estimation
Semantic segmentation, with its strong geometric knowledge, is widely used to promote depth estimation. (Lee et al. 2021) improves the performance using an instance-aware geometric consistency loss. (Zhu, Brazil, and Liu 2020) explicitly measures the border consistency between segmentation and depth and minimizes it in a greedy manner. (Casser et al. 2019; Klingner et al. 2020) stabilize the photometric loss by removing moving dynamic-class objects. (Chen et al. 2019) performs region-aware depth estimation by enforcing semantic consistency, while (Guizilini et al. 2020b) uses pixel-adaptive convolutions to produce semantic-aware depth features by assigning weights to features within a local window. (Jung, Park, and Yoo 2021) designs a cross-task attention module to refine depth features progressively at multiple scales. (Tosi et al. 2020) and (Cai et al. 2021) apply knowledge distillation from semantic segmentation to depth estimation with a learnable domain transfer network.
Efficient Attention Network
Transformers (Carion et al. 2020) have shown stronger performance than traditional CNNs. Since then, many works, including (Dai et al. 2017; Xia et al. 2022; Xie et al. 2020; Chen et al. 2021; Yue et al. 2021; Liu et al. 2021; Wang et al. 2021), design efficient multi-scale attention for detection and classification tasks, and (Ranftl, Bochkovskiy, and Koltun 2021b) customized the Transformer with encoder and decoder frameworks for dense prediction tasks. More recently, (Bae, Moon, and Im 2022) and others migrated Transformers into supervised depth estimation. (Johnston and Carneiro 2020; Zhao et al. 2022) and (Jung, Park, and Yoo 2021) integrate attention into self-supervised depth estimation and obtain significant improvements. (Nguyen et al. 2022) samples points over 3×3 transformed grids guided by ground-truth boxes for efficient detection. However, those methods rely on either global dependencies or local interactions; instead, we select attention areas adaptively for optimal efficiency.
Proposed Methods
Self-Supervised Depth Estimation
Self-supervised monocular depth estimation utilizes the source images $I_{t-1}$ and $I_{t+1}$ to build a reference image $\hat{I}_{t'\to t}$ for the target image $I_t$ via a geometric transformation. We learn a scale-ambiguous depth map $\hat{D}_t$ and a corresponding semantic segmentation map $\hat{S}_t$ from the multi-task network. With a known intrinsic camera parameter matrix $K \in \mathbb{R}^{3\times 3}$, the pixels $p \in I_t$ with homogeneous coordinates $u$ are projected into 3D space, resulting in a point cloud $P_t$ with pixel values and semantic information. The pose network, instead, outputs the 6-DOF transformation $T_{t\to t'}$, $t' \in \{t-1, t+1\}$, which rotates, shifts, and re-projects the point cloud to obtain the pixel coordinates in the image $I_{t'}$. The correspondence between $I_t$ and $I_{t'}$ can be summarized as:
$$p' = K\, T_{t\to t'}\, D_t\, K^{-1} p. \quad (1)$$
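A minimal PyTorch-style sketch of the view synthesis implied by Eq. (1) is given below; the function names, tensor shapes, and the use of grid_sample are our own illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def reproject(depth, K, K_inv, T):
    """Back-project I_t pixels with the predicted depth, apply the pose T,
    and project into I_t'. depth: (B,1,H,W); K, K_inv: (B,3,3); T: (B,4,4).
    Returns a sampling grid normalized to [-1, 1]."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()
    pix = pix.view(1, 3, -1).expand(B, -1, -1)
    cam = (K_inv @ pix) * depth.view(B, 1, -1)          # 3D points, camera frame
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], 1)
    proj = K @ (T @ cam_h)[:, :3]                       # apply pose, then intrinsics
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)      # perspective division
    grid = uv.view(B, 2, H, W).permute(0, 2, 3, 1).clone()
    grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1       # normalize for grid_sample
    grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1
    return grid

def warp(I_src, grid):
    """Bilinear interpolation of I_t' at the re-projected coordinates p'."""
    return F.grid_sample(I_src, grid, align_corners=True, padding_mode="border")
```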
The reference image $\hat{I}_{t'\to t}$ for $I_t$ is obtained via interpolation on $I_{t'}$ according to the re-projected pixel coordinates $p'$. The objective is to minimize the photometric discrepancy between $\hat{I}_{t'\to t}$ and $I_t$, measured by a combination of the Structural Similarity (SSIM) loss and the L1 distance:
$$L_{p,t'} = \alpha\,\frac{1 - \mathrm{SSIM}\big(I_t, \hat{I}_{t'\to t}\big)}{2} + (1-\alpha)\,\big|I_t - \hat{I}_{t'\to t}\big|. \quad (2)$$
We incorporate the minimum reprojection error by selecting the per-pixel minimum value across all similarity losses, i.e., $L_p = \min_{t'} L_{p,t'}$. Following (Godard et al. 2017), we include an edge-aware term to smooth the depth in low-gradient areas, defined as:
$$L_s = |\partial_x D_t|\, e^{-|\partial_x I_t|} + |\partial_y D_t|\, e^{-|\partial_y I_t|}. \quad (3)$$
The smoothness loss reinforces the depth similarity between pixels with small differences in grey values. It has been shown that multi-task joint training stimulates mutual benefits and results in significant improvements to both tasks. Therefore, we adopt pre-computed segmentation maps from (Jung, Park, and Yoo 2021) as ground truth and train the semantic segmentation branch with a cross-entropy loss.
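The following hedged sketch puts Eqs. (2) and (3) into code; the 3×3 SSIM window, the function names, and the tensor layout are common conventions we assume here, not details taken from the paper:

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Mean-pooled SSIM map (3x3 windows), as commonly used for Eq. (2)."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).clamp(0, 1)

def photometric_loss(I_t, warped_list, alpha=0.85):
    """Eq. (2) per source frame, then the per-pixel minimum over frames."""
    maps = [alpha * (1 - ssim(I_t, w)).mean(1, True) / 2
            + (1 - alpha) * (I_t - w).abs().mean(1, True) for w in warped_list]
    return torch.cat(maps, 1).min(1, keepdim=True).values

def smoothness_loss(disp, I_t):
    """Eq. (3): edge-aware first-order smoothness on the depth/disparity map."""
    dx = (disp[..., :, 1:] - disp[..., :, :-1]).abs()
    dy = (disp[..., 1:, :] - disp[..., :-1, :]).abs()
    wx = torch.exp(-(I_t[..., :, 1:] - I_t[..., :, :-1]).abs().mean(1, True))
    wy = torch.exp(-(I_t[..., 1:, :] - I_t[..., :-1, :]).abs().mean(1, True))
    return (dx * wx).mean() + (dy * wy).mean()
```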
Feature Enhancement with ROI Attention
The principle of jointly training depth estimation and segmentation is to distill the semantic and position information and obtain a more discriminative depth representation. Consider a depth feature map $F_d \in \mathbb{R}^{H\times W\times C}$ and a segmentation feature map $F_s \in \mathbb{R}^{H\times W\times C}$ from level $l$, where $C$ is the feature dimension and $W$, $H$ are the width and height of the feature maps at level $l$. The enhanced geometric representation can be generalized as:
$$\mathrm{Fusion}(f_i, F_s) = \sum_{j\in\Omega(F_s)} A_{i,j}\, W_{i,j}\, f_j, \quad (4)$$
where $f_i \in F_d$ is the query feature from the depth branch at position $i$, $A_{i,j}$ is the weight assigned to the $j$-th feature point $f_j$ from the sample space $\Omega(F_s)$, which is a subset of $F_s$ defined by the sampling function $\Omega$, and $W_{i,j} \in \mathbb{R}^{C\times C}$ is the feature projection matrix. In (Guizilini et al. 2020b), the distribution of the weights is calculated as the correlation between the guiding features and a Gaussian kernel; the projection matrices $W$ are convolutional weights with kernel size $k$, and $\Omega$ is a $k\times k$ convolutional window. Transformer attention, instead, is designed to capture global dependencies and build spatial interactions over enlarged areas. Specifically, the sampling space $\Omega$ includes all feature points of $F_s$. The attention function is performed on the query vector $W_q f_d$, key vector $W_k f_s$, and value vector $W_v f_s$, where $W_q$, $W_k$, and $W_v$ are three separate linear transforms. The attention weight is expressed as

$$A_{i,j} \propto \exp\!\left(\frac{f_i^T W_q^T W_k f_j}{\sqrt{C}}\right), \quad \text{where } \sum_{j\in\Omega} A_{i,j} = 1.$$
Since the attention weights are distributed over the entire feature space, the cross-attention module suffers from slow convergence and computational overhead on feature maps with large spatial sizes. An alternative to cross-attention is deformable attention (Zhu et al. 2020b), which only attends to a small set of key sampling points around the query point:
$$\mathrm{Fusion}(f_i, F_s) = \sum_{j\in\Omega(F_s)} A_{i,j}\, W_{i,j}\, f_{p_i + \Delta p_i}. \quad (5)$$
Specifically, $\Omega$ samples $M$ pairs of key-value vectors from $F_s$. These vectors are obtained from $F_s$ via interpolation at positions $p_i + \Delta p_i$, where $p_i$ is the position of the query point $f_i$ and $\Delta p_i \in \mathbb{R}^2$ denotes a sampling offset with unconstrained range, learned by a linear function. While the computational cost, especially at large spatial resolutions, is significantly reduced by using sampled key-value pairs, the sample space is still the same size as $F_s$. Our experimental results show that this sample space is too large for the linear function to find the most relevant points, yielding divergent attention and worse performance in high-resolution settings, as shown in Figure 3.
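To make the deformable sampling of Eq. (5) concrete, here is a hedged sketch; offset_net and weight_net are assumed 1×1-conv heads producing (B, 2M, H, W) and (B, M, H, W), which is our own illustrative layout:

```python
import torch
import torch.nn.functional as F

def deformable_fusion(F_d, F_s, offset_net, weight_net, M=8):
    """Eq. (5)-style sampling: each query predicts M unconstrained offsets and
    weights; values are gathered from F_s by bilinear interpolation."""
    B, C, H, W = F_d.shape
    off = offset_net(F_d).view(B, M, 2, H, W)            # delta p_i, unbounded
    att = weight_net(F_d).view(B, M, 1, H, W).softmax(1) # A_{i,j} over M points
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], -1).view(1, H, W, 2)    # query positions p_i
    out = 0.0
    for m in range(M):
        grid = base + off[:, m].permute(0, 2, 3, 1)
        out = out + att[:, m] * F.grid_sample(F_s, grid, align_corners=True)
    return out                                           # (B, C, H, W)
```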
To improve the attention efficiency and facilitate the location of relevant information in a limited number of iterations, we constrain the attention to a local region which is much smaller than the original sample space. Inspired by (Yang et al. 2018), we generate Regions of Interest (ROIs) for each query point. Thanks to the supervision from segmentation, the local semantic-aware ROIs can be inferred from $F_s$ efficiently. Specifically, we denote the ROI for a query point $f_i$ as a bin $b_i = [d_l, d_t, d_r, d_b] \in [0,W]\times[0,H]\times[0,W]\times[0,H]$, which represents the distances from $f_i$ to the edges of the ROI box; $[w = d_l + d_r,\; h = d_t + d_b]$ are the normalized width and height. In practice, we constrain the width and height of the ROIs with minimum and maximum values $r_{min}$ and $r_{max}$.

Figure 4: ROI attention overview: a local region of interest is first inferred from semantic cues, together with sampling offsets and attention weights from two separate linear functions. The features are sampled via interpolation, without cropping the ROI region explicitly.

Therefore, the feature fusion within a specific region of interest can be expressed as:
$$\mathrm{Fusion}(f_i, b_i) = \sum_{j\in\Omega(b_i)} A_{i,j}\, W_{i,j}\, f_{p_i + \frac{1}{2}\Delta p_i \cdot b_i^{wh}}, \quad (6)$$
where the key-value pairs are sampled from $b_i$ via interpolation at the positions $p_i + \frac{1}{2}\Delta p_i \cdot b_i^{wh}$; $\Delta p_i \in \mathbb{R}^2$ denotes the normalized sampling offset and $p_i$ is the center position of the ROI. The ROI size $b_i^{wh}$ guarantees that the sampling offset stays within $[-\frac{w}{2}, \frac{w}{2}] \times [-\frac{h}{2}, \frac{h}{2}]$. Without cropping and pooling explicitly, the sampling points can be obtained via interpolation over the neighboring positions inside the ROIs. We simplify the ROI generation with a linear function which outputs a bounding box, based on the hypothesis that the most relevant information is distributed around the query point. Moreover, the deformable attention mechanism enables sampling at any position within the ROIs. The attention module is shown in Figure 4. By introducing this lightweight module, our semantically bounded attention improves the attention efficiency and concentrates the computational resources by narrowing down the search space.
ROI Former Module
Inspired by (Carion et al. 2020), the representational ability of ROI attention can be boosted with a multi-head setting to capture guidance information from different semantic levels. We therefore employ a multi-head structure to generate multiple proposals, which provide separate ROIs for local attention. The multi-head ROI attention feature is:
$$f_i^d = \mathrm{concat}\big(\mathrm{Fusion}(f_i, b_i^1), \ldots, \mathrm{Fusion}(f_i, b_i^M)\big)\, W^O, \quad (7)$$
where $\mathrm{Fusion}(f_i, b_i^m)$ is the fusion feature after ROI attention on the $m$-th head. The features from the $M$ attention heads are merged by concatenation, followed by an output projection matrix $W^O$. Besides the attention from depth to segmentation features, we apply the same multi-head ROI attention over the depth features to update the semantic representation for mutual enhancement. To save computational overhead when computing the guiding feature maps, we stack $f_s$ and $f_d$ at the beginning and use the result as a shared guidance feature map after the convolutional layers, as shown in Figure 2.
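A hedged sketch of Eq. (7) as a module follows; head_fns stands in for per-head ROI-attention closures (e.g., roi_fusion above), and a 1×1 conv plays the role of $W^O$. This layout is an assumption, not the released implementation:

```python
import torch
import torch.nn as nn

class MultiHeadROIAttention(nn.Module):
    """Run one ROI-attention head per proposal, then concatenate the head
    outputs and project them with W_O, as in Eq. (7)."""
    def __init__(self, dim, head_fns):
        super().__init__()
        self.head_fns = head_fns
        self.proj = nn.Conv2d(dim * len(head_fns), dim, kernel_size=1)  # W_O

    def forward(self, F_d, F_g):
        outs = [fn(F_d, F_g) for fn in self.head_fns]
        return self.proj(torch.cat(outs, dim=1))
```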
The shared guidance feature map is then fed into two attention blocks with $N$ stacked multi-head ROI attention layers to update the depth and semantic features separately. The outputs are fed into the upper level after upsampling to refine the features of the higher-resolution layers.
Network Architecture
Our segmentation and depth estimation networks have the same architecture as in (Godard et al. 2019), i.e., a U-Net with skip connections, except for the depth fusion module. The encoder feature maps $P = \{C_6, C_5, C_4, C_3, C_2\}$, from 1/32 to 1/2 of the input resolution, are extracted and fed into the different levels of the decoder for dense prediction. These feature maps are projected into $P' = \{P_6, P_5, P_4, P_3, P_2\}$ of dimensions $C = \{256, 128, 64, 32, 16\}$ with five separate convolutional layers. The decoder consists of five upsampling stages: in each stage, the feature from the previous decoding level is pre-processed with a convolutional layer, then upsampled and concatenated with the features of the current level, and the concatenated features are fed into another convolutional layer. The same operations are applied to the segmentation branch. Finally, the semantic features and depth features are fed into the ROIFormer module to obtain the fusion features, as shown in Figure 2. Within ROIFormer, the depth feature and semantic feature are first concatenated into a common feature map; for efficiency, we stack the feature maps from the segmentation and depth branches directly and use them as a common attention memory.
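One decoder stage, as described above, could look like the following sketch; the exact layer layout (ELU activations, nearest upsampling) is our own assumption based on common Monodepth-style decoders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStage(nn.Module):
    """Pre-conv, x2 upsample, concatenate the skip feature, post-conv."""
    def __init__(self, c_in, c_skip, c_out):
        super().__init__()
        self.pre = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ELU())
        self.post = nn.Sequential(
            nn.Conv2d(c_out + c_skip, c_out, 3, padding=1), nn.ELU())

    def forward(self, x, skip):
        x = F.interpolate(self.pre(x), scale_factor=2, mode="nearest")
        return self.post(torch.cat([x, skip], dim=1))
```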
Semantic Guided Re-projection Mask
Similar to (Godard et al. 2019), we also predict depth maps $\hat{D}_L$ at intermediate layers to calculate the projection loss at multiple scales; for the segmentation map, we only use the final predictions. Photometric loss calculation is less accurate near object boundaries and in regions where instance segments connect to other areas, due to depth estimation uncertainties. To overcome this boundary contamination problem, we create a mask for each point with a penalty coefficient according to the distance from the instance points to the reference areas. To this end, we build a graph from the instance point set $S_{Ins}$ to the reference point set $S_{Ref}$ and apply K-nearest-neighbors to sample the $K$ nearest reference points and obtain the average relative distance $d_{t,i\to S_{Ref}}$. Consequently, after re-projection into the consecutive images, we get the confidence mask as:
$$\mu_{t,i} = \begin{cases} \dfrac{1}{e^{\alpha d_{t,i}}}, & i \in S_{ins} \\ 1, & i \notin S_{ins} \end{cases} \quad (8)$$
We assign 1 to each $i$-th pixel position not belonging to the instance point set $S_{ins}$, and a penalty weight to the instance points, obtained from an exponentially decreasing function with a scale factor $\alpha$. As shown in Figure 5, compared with the automask of (Godard et al. 2019), our semantically bounded mask concentrates on the border areas, where the depth value varies dramatically, and on reflective areas such as car windows. In practice, we only consider the salient objects as reference points. The loss function of our model is

$$L = \mu \cdot L_p + \beta\, L_s + \gamma\, L_{sem}, \quad (9)$$

where $\beta$ and $\gamma$ are weighting factors for the smoothness loss and the segmentation loss.
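A hedged sketch of the mask in Eq. (8) follows; it uses brute-force KNN for clarity, and the function name and index-tensor interface are our own assumptions:

```python
import torch

def semantic_mask(points, inst_idx, ref_idx, alpha=1.0, K=8):
    """Per-pixel confidence from the mean 3D distance of each instance point
    to its K nearest reference points. points: (N,3) back-projected 3D
    coordinates; inst_idx/ref_idx: long index tensors into points."""
    mu = torch.ones(points.shape[0])                     # 1 for non-instance pixels
    dists = torch.cdist(points[inst_idx], points[ref_idx])
    d_mean = dists.topk(K, dim=1, largest=False).values.mean(1)
    mu[inst_idx] = torch.exp(-alpha * d_mean)            # Eq. (8): 1 / e^{alpha d}
    return mu
```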
Experimental Results
Implementation Details
Our encoder is built using Resnet-18 and Resnet-50 backbones pre-trained on ImageNet. The pose network is a pre-trained Resnet-18 model. The input image size is set to a medium resolution (MR) of 192 × 640 and a high resolution (HR) of 320 × 1024, following the same settings as the former methods for a fair comparison. We train our model with a batch size of 12 on a single NVIDIA Tesla V100 GPU. Following (Godard et al. 2019), all experiments are trained for 20 epochs with a learning rate of $10^{-4}$, decayed by 10 at the 10th and 15th epochs; all experiments use the ADAM optimizer. For supervised segmentation training, we employ the cross-entropy loss with a hyper-parameter $\gamma = 0.5$ to control the weight of the segmentation loss. The $\alpha$ for the photometric loss is set to 0.85, and $\beta = 1\times 10^{-3}$. The ROI range is set to $r_{min} = \{0.3W, 0.3H\}$ and $r_{max} = \{0.7W, 0.7H\}$. For evaluation, the input image size is the same as for the training set, and the final output of the model is resized to the ground-truth resolution (384 × 1280). We apply the ground-truth scale technique to restore the absolute depth, and the depth values are restricted to the range of 0 to 80 meters. No further post-processing is required.
The KITTI dataset is used for self-supervised depth evaluation, using the same settings as in (Zhou et al. 2017) to remove the static frames from the data split of (Eigen and Fergus 2015), resulting in 39810 images used for training, 4424 for validation, and 697 for final evaluation.

ROIFormer Module. ROIFormer is the core operation in the proposed method, which guarantees the efficient mutual information fusion between semantics and depth and the feature representation enhancement. We first investigate the impact of the number of attention layers, where the attention block is applied on the P3, P4, and P5 levels with N attention layers stacked for both segmentation and depth queries. Thanks to the shared memory and local attention design, the memory cost is linear in the input feature size, making it possible to test more stacked layers. As shown in Table 1, removing the attention blocks degenerates the fusion layer into a simple concatenation operation; the interaction between the two domains then relies on stacked convolutional operations, which provide limited feature enhancement. Adding one attention layer yields significant improvement over all metrics. We found that using two stacked layers achieves the best trade-off between accuracy and complexity, while additional layers contribute less or even hinder the performance. We further investigate the impact of attention layers on different pyramid levels and show the results in Table 2. The experiments show that feature fusion only on deep layers with limited resolution leads to a minor improvement in depth precision, whereas shallow layers with fewer feature channels bring more fine-grained details, resulting in compelling results. We further test our attention module on the P2 level, with 1/2 of the input size, obtaining similar results. Finally, we compare the two sampling strategies and the effect of the number of attention points in the attention module. Similar to the aforementioned cases, the features in shallow layers are vital for the final outputs; thus, it is reasonable that assigning more attention points in shallow layers brings more performance benefits. In our experiments, the combination of [8,16,32] is the best setting considering the precision/efficiency trade-off.

Table 4: Comparison between different attention types.

Figure 6: Speed-Accuracy trade-off curve.
Complexity Comparison. The model efficiency of our proposed method is explored by comparing it with other attention-based feature fusion methods, i.e., transformer attention and deformable attention. As shown in Table 4, the transformer performs better than deformable attention but exceeds the memory limit on HR inputs. The performance of deformable attention on HR inputs drops dramatically; we argue that the sample space is too large to select valid key-value pairs, resulting in divergence. ROIFormer achieves the best performance across all settings, demonstrating its superiority over the other attention variants. Figure 7 shows the attention areas and sampled key values for the different attention variants: ROIFormer samples the relevant features more efficiently. We further summarize the model complexity against the performance of all SOTA methods and plot the trade-off curve in Figure 6, where the model size is shown as the area of each circle.
Impact of Individual Components. We summarize the impact on depth estimation accuracy of each main component in Table 5.
Comparison with State-Of-The-Art Methods
We evaluate ROIFormer on the KITTI dataset based on the metrics from (Eigen and Fergus 2015). As shown in Table 6, our proposed method outperforms all existing SOTA self-supervised monocular depth estimation methods, including approaches utilizing semantic information. ROIFormer enables flexible local feature interaction, resulting in much higher performance under HR settings, with more fine-grained details. We also compare the model complexity by calculating the MACs and the number of model parameters; the trade-off curve is shown in Figure 6. We achieve the same performance with only 30% of the MACs compared to the recent SOTA method (Zhao et al. 2022), which utilizes a Transformer as the backbone. With Resnet-18, ROIFormer runs at 51 fps on MR and 33 fps on HR; with Resnet-50, it runs at 45 and 24 fps, respectively. Finally, Figure 8 illustrates the qualitative performance comparison with other SOTA methods.
Conclusion
In this paper, we propose a novel attention module to improve self-supervised depth estimation accuracy. Our method enhances the representative ability of features by learning spatial dependencies from local semantic areas. We leverage geometric cues from segmentation feature maps to learn regions of interest that guide feature aggregation across domains. The attention module is employed in a multi-head and multi-scale scheme to enable feature learning from different semantic levels. Furthermore, we introduce a semantics-aware projection mask to improve the model robustness in uncertain areas. We conducted extensive experiments on the KITTI dataset and achieved new SOTA performance, which demonstrates the effectiveness of our approach.
Figure 2: Proposed framework for self-supervised monocular depth estimation with semantic guidance. The segmentation and depth branches share the same backbone as the encoder, and capture mutually beneficial information from the other domain with adaptive local attention modules.

Figure 3: Comparison of feature fusion strategies for depth estimation: adaptive convolution (top row), global dense attention (second row), global sparse attention (third row), and our adaptive sparse attention (bottom row).

Figure 5: Visualization comparison between the semantic-guided re-projection mask (second column) and the auto mask (third column).

Figure 7: Attention visualization of ROIFormer and deformable attention on multi-scale heads in different colors. The query points are marked as yellow rectangles. Our ROIFormer learned the most relevant references, while deformable attention diverged.

Figure 8: Qualitative self-supervised monocular depth estimation performance comparing ROIFormer with the previous state-of-the-art.
Table 2: Ablation study on the impact of layers for feature fusion. Including P2 improves the performance; P3-P5 were used in all other experiments.
Table 3: Ablation study on attention point sampling strategies and the number of points. The features of the pyramid layers are fixed, as well as the number of attention heads.
For the baseline, we keep only the self-supervised depth estimation modules, including the smoothness loss and the photometric loss. Applying our attention module only to the intermediate depth features brings over 6% improvement on the relative errors, which implies the significant impact of attention on feature representation enhancement. Together with the semantics information, the attention module is capable of extracting most of the mutual benefits from the segmentation and depth domains.

M | Sem | ROIFormer | Mask | AbsRel | SqRel | RMSE | RMSElog | δ<1.25
✓ |  |  |  | 0.115 | 0.903 | 4.863 | 0.192 | 0.877
✓ |  | ✓ |  | 0.108 | 0.79 | 4.595 | 0.184 | 0.888
✓ | ✓ | ✓ |  | 0.1005 | 0.6733 | 4.351 | 0.1756 | 0.895
✓ | ✓ | ✓ | ✓ | 0.1002 | 0.654 | 4.356 | 0.175 | 0.898

Table 5: Ablation study for the contribution of the three main components: monocular (M) training, semantics (Sem) information, ROIFormer, and the semantics-guided mask loss.
5Ablation study for the contribution of three main components: monocular (M) training, semantics (Sem) information, ROIFormer and semantics guided mask loss.Method
Backbone Sem Resolution AbsRel SqRel RMSE RMSElog θ < 1.25 θ < 1.25 2 θ < 1.25 3
(Zou et al. 2020)
Resnet-18
192 × 640
0.115
0.871
4.778
0.191
0.874
0.963
0.984
(Johnston and Carneiro 2020) Resnet-18
192 × 640
0.111
0.941
4.817
0.185
0.885
0.961
0.981
(Hui 2022)
Resnet-18
192 × 640
0.108
0.71
4.513
0.183
0.884
0.964
0.983
(Jung, Park, and Yoo 2021)
Resnet-18
192 × 640
0.105
0.722
4.547
0.182
0.886
0.964
0.984
Ours
Resnet-18
192 × 640
0.103
0.6959 4.438
0.1778
0.8892
0.9648
0.9836
(Klingner et al. 2020)
Resnet-18
384 × 1280
0.107
0.768
4.468
0.186
0.891
0.963
0.982
(Choi et al. 2020)
Resnet-18
320 × 1024
0.106
0.743
4.489
0.181
0.884
0.965
0.984
(Lyu et al. 2021)
Resnet-18
320 × 1024
0.106
0.755
4.472
0.181
0.892
0.966
0.984
(Jung, Park, and Yoo 2021)
Resnet-18
320 × 1024
0.102
0.687
4.366
0.178
0.895
0.967
0.984
Ours
Resnet-18
320 × 1024
0.100
0.6749 4.335
0.1757
0.8962
0.9665
0.9836
(Godard et al. 2019)
Resnet-50
192 × 640
0.115
0.903
4.863
0.193
0.877
0.959
0.981
(Guizilini et al. 2020b)
Resnet-50
192 × 640
0.113
0.831
4.663
0.189
0.878
0.971
0.983
(Kumar et al. 2021)
Resnet-50
192 × 640
0.109
0.718
4.516
0.18
0.896
0.973
0.986
(Yan et al. 2021)
Resnet-50
192 × 640
0.105
0.769
4.535
0.181
0.892
0.964
0.983
(Li et al. 2021)
Resnet-50
192 × 640
0.103
0.709
4.471
0.18
0.892
0.966
0.984
(Jung, Park, and Yoo 2021)
Resnet-50
192 × 640
0.102
0.675
4.393
0.178
0.893
0.966
0.984
Ours
Resnet-50
192 × 640
0.100
0.6733 4.351
0.1756
0.8958
0.9665
0.9848
(Godard et al. 2019)
Resnet-50
320 × 1024
0.115
0.882
4.701
0.19
0.879
0.961
0.982
(Shu et al. 2020)
Resnet-50
320 × 1024
0.104
0.729
4.481
0.179
0.893
0.965
0.984
(Gurram et al. 2021)
Resnet-50
320 × 1024
0.104
0.721
4.396
0.185
0.88
0.962
0.983
(Kumar et al. 2021)
Resnet-50
320 × 1024
0.102
0.701
4.347
0.166
0.901
0.98
0.99
(Cai et al. 2021)
Resnet-50
320 × 1024
0.102
0.698
4.439
0.18
0.895
0.965
0.983
(Chanduri et al. 2021)
Resnet-50
320 × 1024
0.102
0.723
4.374
0.178
0.898
0.966
0.983
(Petrovai and Nedevschi 2022) Resnet-50
320 × 1024
0.101
0.72
4.339
0.176
0.898
0.967
0.984
Ours
Resnet-50
320 × 1024
0.096
0.6161 4.148
0.1697
0.9045
0.9692
0.9856
Table 6 :
6Comparison with the state-of-the-art on KITTI Eigen test set.
MonoFormer: Towards Generalization of self-supervised monocular depth estimation with Transformers. J Bae, S Moon, S Im, arXiv:2205.11083arXiv preprintBae, J.; Moon, S.; and Im, S. 2022. MonoFormer: Towards Generalization of self-supervised monocular depth estima- tion with Transformers. arXiv preprint arXiv:2205.11083.
Unsupervised scale-consistent depth and ego-motion learning from monocular video. J Bian, Z Li, N Wang, H Zhan, C Shen, M.-M Cheng, I Reid, Advances in neural information processing systems. 32Bian, J.; Li, Z.; Wang, N.; Zhan, H.; Shen, C.; Cheng, M.- M.; and Reid, I. 2019. Unsupervised scale-consistent depth and ego-motion learning from monocular video. Advances in neural information processing systems, 32.
X-distill: Improving self-supervised monocular depth via cross-task distillation. H Cai, J Matai, S Borse, Y Zhang, A Ansari, F Porikli, arXiv:2110.12516arXiv preprintCai, H.; Matai, J.; Borse, S.; Zhang, Y.; Ansari, A.; and Porikli, F. 2021. X-distill: Improving self-supervised monocular depth via cross-task distillation. arXiv preprint arXiv:2110.12516.
End-to-end object detection with transformers. N Carion, F Massa, G Synnaeve, N Usunier, A Kirillov, S Zagoruyko, European conference on computer vision. SpringerCarion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In European conference on computer vi- sion, 213-229. Springer.
Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. V Casser, S Pirk, R Mahjourian, A Angelova, Proceedings of the AAAI conference on artificial intelligence. the AAAI conference on artificial intelligence01Casser, V.; Pirk, S.; Mahjourian, R.; and Angelova, A. 2019. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. In Pro- ceedings of the AAAI conference on artificial intelligence, 01, 8001-8008.
S S Chanduri, Z K Suri, I Vozniak, C Müller, arXiv:2110.14347CamLessMonoDepth: Monocular Depth Estimation with Unknown Camera Parameters. arXiv preprintChanduri, S. S.; Suri, Z. K.; Vozniak, I.; and Müller, C. 2021. CamLessMonoDepth: Monocular Depth Estima- tion with Unknown Camera Parameters. arXiv preprint arXiv:2110.14347.
Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. P.-Y Chen, A H Liu, Y.-C Liu, Y.-C F Wang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionChen, P.-Y.; Liu, A. H.; Liu, Y.-C.; and Wang, Y.-C. F. 2019. Towards scene understanding: Unsupervised monocu- lar depth estimation with semantic-aware representation. In Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, 2624-2632.
Dpt: Deformable patch-based transformer for visual recognition. Z Chen, Y Zhu, C Zhao, G Hu, W Zeng, J Wang, M Tang, Proceedings of the 29th ACM International Conference on Multimedia. the 29th ACM International Conference on MultimediaChen, Z.; Zhu, Y.; Zhao, C.; Hu, G.; Zeng, W.; Wang, J.; and Tang, M. 2021. Dpt: Deformable patch-based transformer for visual recognition. In Proceedings of the 29th ACM In- ternational Conference on Multimedia, 2899-2907.
Safenet: Selfsupervised monocular depth estimation with semantic-aware feature extraction. J Choi, D Jung, D Lee, C Kim, arXiv:2010.02893arXiv preprintChoi, J.; Jung, D.; Lee, D.; and Kim, C. 2020. Safenet: Self- supervised monocular depth estimation with semantic-aware feature extraction. arXiv preprint arXiv:2010.02893.
Deformable convolutional networks. J Dai, H Qi, Y Xiong, Y Li, G Zhang, H Hu, Y Wei, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionDai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Pro- ceedings of the IEEE international conference on computer vision, 764-773.
Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. D Eigen, Fergus , R , Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionEigen, D.; and Fergus, R. 2015. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE in- ternational conference on computer vision, 2650-2658.
Digging into self-supervised monocular depth estimation. C Godard, O Mac Aodha, M Firman, G J Brostow, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionGodard, C.; Mac Aodha, O.; Firman, M.; and Brostow, G. J. 2019. Digging into self-supervised monocular depth estima- tion. In Proceedings of the IEEE/CVF International Confer- ence on Computer Vision, 3828-3838.
3D Packing for Self-Supervised Monocular Depth Estimation. V Guizilini, R Ambrus, S Pillai, A Raventos, A Gaidon, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Guizilini, V.; Ambrus, R.; Pillai, S.; Raventos, A.; and Gaidon, A. 2020a. 3D Packing for Self-Supervised Monoc- ular Depth Estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Semantically-guided representation learning for self-supervised monocular depth. V Guizilini, R Hou, J Li, R Ambrus, A Gaidon, arXiv:2002.12319arXiv preprintGuizilini, V.; Hou, R.; Li, J.; Ambrus, R.; and Gaidon, A. 2020b. Semantically-guided representation learning for self-supervised monocular depth. arXiv preprint arXiv:2002.12319.
Monocular depth estimation through virtualworld supervision and real-world SFM self-supervision. A Gurram, A F Tuna, F Shen, O Urfalioglu, A M López, IEEE Transactions on Intelligent Transportation Systems. Gurram, A.; Tuna, A. F.; Shen, F.; Urfalioglu, O.; and López, A. M. 2021. Monocular depth estimation through virtual- world supervision and real-world SFM self-supervision. IEEE Transactions on Intelligent Transportation Systems.
RM-Depth: Unsupervised Learning of Recurrent Monocular Depth in Dynamic Scenes. T.-W Hui, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionHui, T.-W. 2022. RM-Depth: Unsupervised Learning of Re- current Monocular Depth in Dynamic Scenes. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1675-1684.
Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume. A Johnston, G Carneiro, Proceedings of the ieee/cvf conference on computer vision and pattern recognition. the ieee/cvf conference on computer vision and pattern recognitionJohnston, A.; and Carneiro, G. 2020. Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume. In Proceedings of the ieee/cvf conference on computer vision and pattern recognition, 4756-4765.
Fine-grained semantics-aware representation enhancement for selfsupervised monocular depth estimation. H Jung, E Park, S Yoo, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionJung, H.; Park, E.; and Yoo, S. 2021. Fine-grained semantics-aware representation enhancement for self- supervised monocular depth estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 12642-12652.
Self-supervised monocular depth estimation: Solving the dynamic object problem by semantic guidance. M Klingner, J.-A Termöhlen, J Mikolajczyk, T Fingscheidt, European Conference on Computer Vision. SpringerKlingner, M.; Termöhlen, J.-A.; Mikolajczyk, J.; and Fin- gscheidt, T. 2020. Self-supervised monocular depth estima- tion: Solving the dynamic object problem by semantic guid- ance. In European Conference on Computer Vision, 582- 600. Springer.
Syndistnet: Selfsupervised monocular fisheye camera distance estimation synergized with semantic segmentation for autonomous driving. V R Kumar, M Klingner, S Yogamani, S Milz, T Fingscheidt, P Mader, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. the IEEE/CVF Winter Conference on Applications of Computer VisionKumar, V. R.; Klingner, M.; Yogamani, S.; Milz, S.; Fingscheidt, T.; and Mader, P. 2021. Syndistnet: Self- supervised monocular fisheye camera distance estimation synergized with semantic segmentation for autonomous driving. In Proceedings of the IEEE/CVF Winter Confer- ence on Applications of Computer Vision, 61-71.
Learning monocular depth in dynamic scenes via instance-aware projection consistency. S Lee, S Im, S Lin, I S Kweon, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence3Lee, S.; Im, S.; Lin, S.; and Kweon, I. S. 2021. Learning monocular depth in dynamic scenes via instance-aware pro- jection consistency. In Proceedings of the AAAI Conference on Artificial Intelligence, 3, 1863-1872.
Learning depth via leveraging semantics: Self-supervised monocular depth estimation with both implicit and explicit semantic guidance. R Li, X He, D Xue, S Su, Q Mao, Y Zhu, J Sun, Y Zhang, arXiv:2102.06685arXiv preprintLi, R.; He, X.; Xue, D.; Su, S.; Mao, Q.; Zhu, Y.; Sun, J.; and Zhang, Y. 2021. Learning depth via leveraging se- mantics: Self-supervised monocular depth estimation with both implicit and explicit semantic guidance. arXiv preprint arXiv:2102.06685.
DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation. Z Li, Z Chen, X Liu, J Jiang, arXiv:2203.14211arXiv preprintLi, Z.; Chen, Z.; Liu, X.; and Jiang, J. 2022. DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation. arXiv preprint arXiv:2203.14211.
Swin transformer: Hierarchical vision transformer using shifted windows. Z Liu, Y Lin, Y Cao, H Hu, Y Wei, Z Zhang, S Lin, B Guo, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionLiu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vi- sion transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 10012-10022.
Hr-depth: High resolution self-supervised monocular depth estimation. X Lyu, L Liu, M Wang, X Kong, L Liu, Y Liu, X Chen, Y Yuan, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence3Lyu, X.; Liu, L.; Wang, M.; Kong, X.; Liu, L.; Liu, Y.; Chen, X.; and Yuan, Y. 2021. Hr-depth: High resolution self-supervised monocular depth estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, 3, 2294- 2301.
BoxeR: Box-Attention for 2D and 3D Transformers. D.-K Nguyen, J Ju, O Booij, M R Oswald, C G Snoek, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionNguyen, D.-K.; Ju, J.; Booij, O.; Oswald, M. R.; and Snoek, C. G. 2022. BoxeR: Box-Attention for 2D and 3D Trans- formers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4773-4782.
Exploiting Pseudo Labels in a Self-Supervised Learning Framework for Improved Monocular Depth Estimation. A Petrovai, S Nedevschi, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionPetrovai, A.; and Nedevschi, S. 2022. Exploiting Pseudo Labels in a Self-Supervised Learning Framework for Im- proved Monocular Depth Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1578-1588.
Vision transformers for dense prediction. R Ranftl, A Bochkovskiy, V Koltun, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionRanftl, R.; Bochkovskiy, A.; and Koltun, V. 2021a. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 12179-12188.
Vision transformers for dense prediction. R Ranftl, A Bochkovskiy, V Koltun, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionRanftl, R.; Bochkovskiy, A.; and Koltun, V. 2021b. Vision transformers for dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 12179-12188.
Competitive collaboration: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. A Ranjan, V Jampani, L Balles, K Kim, D Sun, J Wulff, M J Black, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionRanjan, A.; Jampani, V.; Balles, L.; Kim, K.; Sun, D.; Wulff, J.; and Black, M. J. 2019. Competitive collabora- tion: Joint unsupervised learning of depth, camera motion, optical flow and motion segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 12240-12249.
Featuremetric loss for self-supervised learning of depth and egomotion. C Shu, K Yu, Z Duan, Yang , K , European Conference on Computer Vision. SpringerShu, C.; Yu, K.; Duan, Z.; and Yang, K. 2020. Feature- metric loss for self-supervised learning of depth and ego- motion. In European Conference on Computer Vision, 572- 588. Springer.
Sparse r-cnn: End-to-end object detection with learnable proposals. P Sun, R Zhang, Y Jiang, T Kong, C Xu, W Zhan, M Tomizuka, L Li, Z Yuan, C Wang, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionSun, P.; Zhang, R.; Jiang, Y.; Kong, T.; Xu, C.; Zhan, W.; Tomizuka, M.; Li, L.; Yuan, Z.; Wang, C.; et al. 2021. Sparse r-cnn: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 14454-14463.
Distilled semantics for comprehensive scene understanding from videos. F Tosi, F Aleotti, P Z Ramirez, M Poggi, S Salti, L D Stefano, S Mattoccia, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionTosi, F.; Aleotti, F.; Ramirez, P. Z.; Poggi, M.; Salti, S.; Ste- fano, L. D.; and Mattoccia, S. 2020. Distilled semantics for comprehensive scene understanding from videos. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4654-4665.
Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. W Wang, E Xie, X Li, D.-P Fan, K Song, D Liang, T Lu, P Luo, L Shao, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionWang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; and Shao, L. 2021. Pyramid vision trans- former: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 568-578.
Unos: Unified unsupervised optical-flow and stereodepth estimation by watching videos. Y Wang, P Wang, Z Yang, C Luo, Y Yang, W Xu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionWang, Y.; Wang, P.; Yang, Z.; Luo, C.; Yang, Y.; and Xu, W. 2019. Unos: Unified unsupervised optical-flow and stereo- depth estimation by watching videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8071-8081.
Vision transformer with deformable attention. Z Xia, X Pan, S Song, L E Li, G Huang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionXia, Z.; Pan, X.; Song, S.; Li, L. E.; and Huang, G. 2022. Vision transformer with deformable attention. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4794-4803.
SegFormer: Simple and efficient design for semantic segmentation with transformers. E Xie, W Wang, Z Yu, A Anandkumar, J M Alvarez, P Luo, Advances in Neural Information Processing Systems. 34Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J. M.; and Luo, P. 2021. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34: 12077-12090.
Spatially adaptive inference with stochastic feature sampling and interpolation. Z Xie, Z Zhang, X Zhu, G Huang, S Lin, European conference on computer vision. SpringerXie, Z.; Zhang, Z.; Zhu, X.; Huang, G.; and Lin, S. 2020. Spatially adaptive inference with stochastic feature sampling and interpolation. In European conference on computer vi- sion, 531-548. Springer.
Channel-Wise Attention-Based Network for Self-Supervised Monocular Depth Estimation. J Yan, H Zhao, P Bu, Jin , Y , 2021 International Conference on 3D Vision (3DV). IEEEYan, J.; Zhao, H.; Bu, P.; and Jin, Y. 2021. Channel-Wise Attention-Based Network for Self-Supervised Monocular Depth Estimation. In 2021 International Conference on 3D Vision (3DV), 464-473. IEEE.
D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry. N Yang, L V Stumberg, R Wang, D Cremers, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionYang, N.; Stumberg, L. v.; Wang, R.; and Cremers, D. 2020. D3vo: Deep depth, deep pose and deep uncer- tainty for monocular visual odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1281-1292.
Metaanchor: Learning to detect objects with customized anchors. T Yang, X Zhang, Z Li, W Zhang, J Sun, Advances in neural information processing systems. 31Yang, T.; Zhang, X.; Li, Z.; Zhang, W.; and Sun, J. 2018. Metaanchor: Learning to detect objects with customized an- chors. Advances in neural information processing systems, 31.
Vision transformer with progressive sampling. X Yue, S Sun, Z Kuang, M Wei, P H Torr, W Zhang, D Lin, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionYue, X.; Sun, S.; Kuang, Z.; Wei, M.; Torr, P. H.; Zhang, W.; and Lin, D. 2021. Vision transformer with progressive sampling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 387-396.
MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer. C Zhao, Y Zhang, M Poggi, F Tosi, X Guo, Z Zhu, G Huang, Y Tang, S Mattoccia, International Conference on 3D Vision. Zhao, C.; Zhang, Y.; Poggi, M.; Tosi, F.; Guo, X.; Zhu, Z.; Huang, G.; Tang, Y.; and Mattoccia, S. 2022. MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer. In International Conference on 3D Vision.
Towards better generalization: Joint depth-pose learning without posenet. W Zhao, S Liu, Y Shu, Y.-J Liu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZhao, W.; Liu, S.; Shu, Y.; and Liu, Y.-J. 2020. To- wards better generalization: Joint depth-pose learning with- out posenet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9151-9161.
Unsupervised learning of depth and ego-motion from video. T Zhou, M Brown, N Snavely, D G Lowe, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionZhou, T.; Brown, M.; Snavely, N.; and Lowe, D. G. 2017. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1851-1858.
MDA-Net: Memorable Domain Adaptation Network for Monocular Depth Estimation. J Zhu, Y Shi, M Ren, Y Fang, BMVC. Zhu, J.; Shi, Y.; Ren, M.; and Fang, Y. 2020a. MDA- Net: Memorable Domain Adaptation Network for Monoc- ular Depth Estimation. In BMVC.
The edge of depth: Explicit constraints between segmentation and depth. S Zhu, G Brazil, X Liu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionZhu, S.; Brazil, G.; and Liu, X. 2020. The edge of depth: Ex- plicit constraints between segmentation and depth. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13116-13125.
Deformable detr: Deformable transformers for end-to-end object detection. X Zhu, W Su, L Lu, B Li, X Wang, J Dai, arXiv:2010.04159arXiv preprintZhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020b. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159.
Learning monocular visual odometry via selfsupervised long-term modeling. Y Zou, P Ji, Q.-H Tran, J.-B Huang, M Chandraker, European Conference on Computer Vision. SpringerZou, Y.; Ji, P.; Tran, Q.-H.; Huang, J.-B.; and Chandraker, M. 2020. Learning monocular visual odometry via self- supervised long-term modeling. In European Conference on Computer Vision, 710-727. Springer.
| [] |
[
"Gauge symmetry enhancing-breaking from a Double Field Theory perspective",
"Gauge symmetry enhancing-breaking from a Double Field Theory perspective"
] | [
"G Aldazabal \nCentro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina\n\nInstituto Balseiro Centro Atómico Bariloche\nAv. Bustillo 9500BarilocheArgentina\n",
"E Andrés \nCentro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina\n",
"Martín Mayo \nCentro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina\n",
"J A Rosabal \nB.W. Lee Center for Fields, Gravity & Strings Institute for Basic Sciences\n34047DaejeonKOREA\n"
] | [
"Centro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina",
"Instituto Balseiro Centro Atómico Bariloche\nAv. Bustillo 9500BarilocheArgentina",
"Centro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina",
"Centro Atómico Bariloche\nG. Física CAB-CNEA and CONICET\nAv. Bustillo 9500BarilocheArgentina",
"B.W. Lee Center for Fields, Gravity & Strings Institute for Basic Sciences\n34047DaejeonKOREA"
] | [] | Gauge symmetry enhancing, at specific points of the compactification space, is a distinguished feature of string theory. In this work we discuss the breaking of such symmetries with tools provided by Double Field Theory (DFT). As a main guiding example we discuss the bosonic string compactified on a circle where, at the self-dual radio the generic U(1) × U(1) gauge symmetry becomes enhanced to SU(2) × SU(2). We show that the enhancing-breaking of the gauge symmetry can be understood through a dependence of gauge structure constants (fluxes in DFT) on moduli. This dependence, in DFT description, is encoded in the generalized tangent frame of the double space. The explicit T-duality invariant formulation provided by DFT proves to be a helpful ingredient. The link with string theory results is discussed and generalizations to generic tori compactifications are addressed. | 10.1007/jhep07(2017)045 | [
"https://arxiv.org/pdf/1704.04427v2.pdf"
] | 119,228,126 | 1704.04427 | d9a26883abb295d9a179209576974efe6121a947 |
Gauge symmetry enhancing-breaking from a Double Field Theory perspective
24 Apr 2017 April 25, 2017
G Aldazabal
Centro Atómico Bariloche
G. Física CAB-CNEA and CONICET
Av. Bustillo 9500BarilocheArgentina
Instituto Balseiro Centro Atómico Bariloche
Av. Bustillo 9500BarilocheArgentina
E Andrés
Centro Atómico Bariloche
G. Física CAB-CNEA and CONICET
Av. Bustillo 9500BarilocheArgentina
Martín Mayo
Centro Atómico Bariloche
G. Física CAB-CNEA and CONICET
Av. Bustillo 9500BarilocheArgentina
J A Rosabal
B.W. Lee Center for Fields, Gravity & Strings Institute for Basic Sciences
34047DaejeonKOREA
Gauge symmetry enhancing-breaking from a Double Field Theory perspective
24 Apr 2017 April 25, 2017arXiv:1704.04427v2 [hep-th]
Gauge symmetry enhancing, at specific points of the compactification space, is a distinguished feature of string theory. In this work we discuss the breaking of such symmetries with tools provided by Double Field Theory (DFT). As a main guiding example we discuss the bosonic string compactified on a circle where, at the self-dual radio the generic U(1) × U(1) gauge symmetry becomes enhanced to SU(2) × SU(2). We show that the enhancing-breaking of the gauge symmetry can be understood through a dependence of gauge structure constants (fluxes in DFT) on moduli. This dependence, in DFT description, is encoded in the generalized tangent frame of the double space. The explicit T-duality invariant formulation provided by DFT proves to be a helpful ingredient. The link with string theory results is discussed and generalizations to generic tori compactifications are addressed.
Introduction
The extended nature of strings is responsible for several amazing phenomena that are not conceivable from a field theory of point particles. When moving on compact space, besides the expected states associated to KK compact momenta, a string can wind around non-contractible cycles leading to the so-called winding states, with the winding number being an integer counting the number of times that the cycle is wrapped by the string.
Quantum states are thus labelled by specific values of KK momenta and windings.
The interplay among winding and momentum modes underlies T-duality, a genuine stringy feature. Such interplay manifests itself by connecting the physics of strings defined on geometrically very different backgrounds. At specific points of moduli of the compact space, states in some combinations of windings and momenta become massless and can give rise to enhanced gauge symmetries (see for instance [1,2]). The simplest example is provided by the compactification of the bosonic string on a circle of radio R. The resulting theory, which contains a U(1) × U(1) gauge group, is equivalent to a string compactified on a circle of radioR = α ′ R (where α ′ is the string constant) if momenta and winding are exchanged. At the self-dual point R =R = √ α ′ the gauge symmetry is enhanced to SU(2) × SU (2).
When the compact space is a r dimensional torus T r , characterized by some background moduli (internal metric and anti-symmetric fields), T-duality implies that backgrounds related by the non-compact group O(r, r, Z) are physically equivalent. Generically a richer structure of points of gauge enhancing appear.
Recall that, from the world sheet point of view, states are created by vertex operators involving both coordinates associated with momentum excitations and dual coordinates associated to winding excitations or, equivalently, to left (L) and right (R) moving coordinates. For generic values of moduli an Abelian symmetry U(1) r L × U(1) r R appears. However, at specific points, the symmetry becomes enhanced to a gauge symmetry G L × G R where G L(R) are non-Abelian gauge groups of rank r. For example, in a two torus T 2 , a generic (U(1) × U(1)) L × (U(1) × U(1)) R is enhanced to SU(3) L × SU(3) R or (SU(2) × SU(2)) L × (SU(2) × SU(2)) R etc. at different points.
Let us sketch, as motivation of our work, the case of circle compactification at self-dual point 1 . The effective action in d dimensional space, computed from string theory 3-point amplitudes [3] reads
S = 1 2κ 2 d d d x √ ge −2ϕ R + 4∂ µ ϕ∂ µ ϕ − 1 12 H µνρ H µνρ − 1 8 δ ij F iµν F j µν + δ ijF iµνF j µν − 1 2 g d √ α ′ M ij F i µνF jµν − D µ M ij D ν M ij g µν + 16g d √ α ′ det M + O(M 4 ),(1.1)
where the first row contains the universal gravity contribution, the second one contains the gauge field strength for the vector fields of SU(2) L and SU(2) R (that we denote here as A i Lµ , A i Rµ respectively). M ij is the matrix of scalars living in the (3,3) representation. This is discussed in Ref [3] (and briefly reviewed below) where it was observed that the spectrum of the bosonic string has (d + 3) 2 massless states: d 2 from g µν and B µν , 6d from the vector states and 9 the scalar states. The number of degrees of freedom precisely agrees with the dimension of the coset
D µ M ij = ∂ µ M ij + g d f k li A l µ M kj + g d f k lj A l µ M ikO(d + 3, d + 3) O(d + 3) × O(d + 3) (1.2)
that counts the number of degrees of freedom in the DFT formulation with symmetry
O(d + 3, d + 3).
In general a DFT action with O(D, D) symmetry with D = d + n, can be written as
S ef f = 1 2κ 2 d d d x √ ge −2ϕ R + 4∂ µ ϕ∂ µ ϕ − 1 12 H µνρ H µνρ − 1 8 H IJ F Iµν F J µν + 1 8 (D µ H) IJ (D µ H) IJ − 1 12 f IJ K f LM N H IL H JM H KN − 3 H IL η JM η KN +2 η IL η JM η KN ) − Λ . (1.3)
after a generalized Scherk-Schwarz [5] like n dimensional compactification. In this expression H IJ with I, J = 1, . . . , 2n is the, so-called, generalized metric containing the scalar fields coming from the internal components of the n-dimensional metric and B-field. R is the d-dimensional Ricci scalar and the field strengths F A µν and H µνρ are
F I = dA I + 1 √ 2 f JK I A J ∧ A K H = dB + F I ∧ A I , (1.4)
The covariant derivative of the scalars is
(D µ H) IJ = (∂ µ H) IJ + 1 √ 2 f K LI A L µ H KJ + 1 √ 2 f K LJ A L µ H IK (1.5)
The structure constants f N LI = η N K f K LI are completely antisymmetric and η N K is the O(n, n) metric
η P Q = 1 n 0 0 −1 n .
(1.6)
In our example D = d + 3, thus I = 1, . . . 6. The gauge fields are A I µ = (A i Lµ , −A i Rµ ) and the structure constant splits into
f IJ K = ( 2 α ′ ) 1 2 ǫ ijk −( 2 α ′ ) 1 2ǭ ijk .
After expanding around a fixed background the internal generalized metric H IJ can be written as
H ≃ 1 3 M M T 1 3 = I + 0 M M T 0 (1.7)
By replacing above expressions into the action (1.3), and after absorbing constants into the fields, the SU(2) ×SU(2) theory given in (1.1) is reproduced. Of course, any reference to DFT could be omitted and just present the above (1.3) action as an interesting way of writing the original expression.
It is worth looking at the term containing the derivatives of scalar fields. Since the metric H = I + . . . contains a constant term, the identity, the action could have a contribution,
|D µ H| 2 ≡ (· · · + 1 √ 2 f K LI A L µ δ KJ + 1 √ 2 f K LJ A L µ δ IK ) 2 (1.8) = · · · + 1 2 ([f J LI + f I LJ ]A L µ ) 2 .
Namely, a potential "mass term" for the vector bosons.
Moreover, by splitting the O(n, n) indices into Left and Right indices, that we denote as A = (a,â) and by using that f ABC = η AA ′ f A ′ BC is completely antisymmetric, the above term can be recast as
f ABC A B µ δ C D + f DBC A B µ δ C A 2 ∼ A B µ A E µ f ABC f DEF η AD η CF − δ AD δ CF ∼ A B µ A E µ f aBĉ f aEĉ (1.9)
where a sum over repeated indices is understood.
Since in our example f IJK = (f ijk ,fˆiĵk) the first three "Left" indices do not mix with the last three "Right" ones, such terms vanish.
f ij k = ǫ ijk 1 √ 2α ′ m + = −fˆiĵk f 123 = f12 3 = f 13 2 = f1 32 = f 321 = f3 2 1 = − 1 √ 2α ′ m − (1.10)
withî ≡ i + 3 and
m ± = 1 R ± 1 R (1.11)
If we go back to equation (1.9) and replace above flux values we find
A B µ A E µ f aBĉ f aEĉ ∝ m 2 − (A + µ ) 2 + m 2 − (A − µ ) 2 (1.12) with A ± µ = A 1 µ ± iA 2
µ acquiring a mass m − whereas A 3 andĀ 3 remain massless, indicating that the gauge group is spontaneously broken to U(1) L × U(1) R . Moreover, by looking at In Section 2 we review the basic ideas of Double Field Theory, with special emphasis on the symmetry enhancing situation and by highlighting the ingredients needed in our construction. In particular we discuss how to extend the frame description, away from points of enhancing, from the circle compactification example.
the couplings η IN η JM ∂ µ H IJ f K LI δ N S A L µ we notice that there is a coupling ∝ m − ∂ µ M ±,3 A ∓ µ (1.13) with M ±,3 = M 1,3 + iM 2,3 , identifying M ±,
In Section 3 we discuss how to extend the DFT construction to describe the enhancingbreaking of gauge symmetries at different points of m dimensional toroidal compactification. The structure of the gauge groups associated to fixed points is known to be of the form G L × G R where G L (R) are non Abelian gauge groups of rank m.
Concluding remarks and a brief outlook are presented in Section 4.
DFT and enhanced gauge symmetries
Generalized Complex Geometry (GCG) [7,8,9] and Double Field Theory (DFT) [10] are proposals that aim at integrating T-duality as a geometric symmetry. In DFT the presence of windings, an essential ingredient of T-duality, is achieved by introducing new coordinates associated to the winding numbers. Thus in DFT fields depend on a double set of coordinates. This idea, first proposed in [11,12,13], received new impulse in recent years [14,15,16] (see [4,17] for some reviews on the subject and references therein).
Generically, these double field theories are constrained theories since some consistency conditions must be satisfied to ensure closure of generalized diffeomorphism algebra. A quite restrictive condition, the so called section condition (or strong constraint), ensures consistency at the price of eliminating half of the coordinates and, therefore, abandoning the original motivation. However, it is worth emphasising that this constrained DFT, which in this case essentially coincides with GCG, still provides an interesting description for understanding underlying symmetries and stringy features (for instance α ′ corrections have been recently incorporated [18,19] in these formulations). An alternative constraint is provided by generalized Scherk-Schwarz like compactifications [20] of DFT [5]. These compactifications contain the generic gaugings of gauged supergravity theories [21,22] allowing for a geometric interpretation of all of them. In this framework, the double coordinates enter in a very particular way through the twist matrix. Constant gaugings are computed from this matrix and, generically, closure of the algebra is ensure if these gaugings satisfy some quadratic constraints [23] with no need of a strong constraint requirement. A generalization of this formalism was proposed in [3] in order to account for the description of gauge enhancing. The proposal of [3], discussed for the example of circle compactification on D = d + 1 and inspired in the relation with the coset (1.2), requires to introduce an extended tangent space with d + 1 → d + 1 + 2. However, the "physical space" of DFT is still a double circle. The frame vectors do depend on both circle compact coordinates y and its dualỹ thus being truly non-geometric. We strongly rely on these results below in order to describe the breaking of enhanced symmetries when moduli do slide slightly away from the fixed points. In this process slightly massive states In what follows we, briefly, review some basic features of GCG and/or DFT. The theory is defined on a generalized tangent bundle which locally is T M ⊕ T * M and whose sections, the generalized vectors V , are direct sums of vectors v plus one forms ξ, V = v+ξ. Here M = 0, · · · , 2D andμ = 0, · · · , D − 1.
A generalized frame E
A natural pairing between generalized vectors is defined by
V 1 · V 2 = ι v 1 ξ 2 + ι v 2 ξ 1 = η(V 1 , V 2 ) = V M 1 η MN V N 2 , (2.1)
where the O(D, D) metric η MN has the following off-diagonal form
η MN = 0 1 D 1 D 0 , (2.2) where 1 D is the D × D identity matrix. Note that η MN is invariant under ordinary diffeomorphisms. Defining η AB = η(E A , E B ) where A, B = 0, 1, .., 2D are frame indices it
results that η AB has the same numerical form as (2.2).
A generalized metric can be constructed as
H MN = E A M S AB E B N , where S AB =
diag(s ab , s ab ), s ab being the Minkowski metric.
The generalized metric can be parametrized as
H MN (X) = g −1 −g −1 B B g −1 g − B g −1 B , (2.3) satisfying H MP η PQ H QN = η MN . (2.4)
where gμν(X), Bμν(X) are a symmetric and an anti-symmetric tensor, respectively.
The generalized vectors transform under generalized diffeomorphisms as
L V W M = V P ∂ P W M + (∂ M V P − ∂ P V M )W P . (2.5)
The dilaton field ϕ is incorporated through density field e −2d = |g|e −2ϕ that transforms like a measure
L V e −2d = ∂ P (V P e −2d ) . (2.6)
The algebra of generalized diffeomorphisms closes provided a set of constraints is satisfied.
The generalized diffeomorphisms allow to define the generalized dynamical fluxes [4]
F ABC = (L E A E B ) M E CM . (2.7)
Fluxes are totally antisymmetric in ABC (flat indices) and transform as scalars under generalized diffeomorphisms, up to the closure constraints.
In generalized Scherk-Schwarz compactifications [5,4,6] the frame is split into a space-time piece and an internal one. The former depends on the external d-dimensional
coordinates 2 x µ while the latter strictly depends on the internal n-dimensional (where
D = d + n) coordinates Y I = (Ỹ i , Y i ), defined in the fundamental representation of O(n, n). Here I = 1, · · · , 2n and E A (x, Y,Ỹ ) = U A A ′ (x)E ′ A ′ (Y,Ỹ ) . (2.8)
The matrix U encodes the field content in the effective theory, while E ′ is a generalized frame that depends on the internal coordinates. All the dependence on the internal coordinates is through the frame. By using this splitting ansatz the generalized metric
becomes H = S AB U A A ′ E ′ A ′ U B B ′ E ′ B ′ = H A ′ B ′ E ′ A ′ E ′ B ′
where all the field dependence on space time coordinates is encoded in
H A ′ B ′ (x) = S AB U A A ′ U B B ′ . (2.9)
parametrizing the moduli space. In particular, we will deal with the "internal piece" H IJ ,
where I, J = 1, ..., 2n are frame indices on the internal part of the double tangent space.
It proves useful to rotate to a Right-Left basis C where left and right coordinates are
y Lm = 1 2 [(g + B) mn y n +ỹ m ] (2.10) y Rm = 1 2 [(g − B) mn y n −ỹ m ]
in terms of Y M = (y m ,ỹ m ). Namely, the rotation matrix reads
R = (g + B) 1 (g − B) −1 , (2.11) and therefore from E A → (E C ) A = R A B E B we see that η becomes diagonal (RηR T ) AB = 1 D 0 0 −1 D . (2.12)
Since the internal piece of H lies in O(n, n)/O(n) × O(n) it is possible to show [3,33] that the scalar matrix, in the Left-Right basis C can be written as an expansion in scalar fluctuations
H C = 1 n + MM T M M T 1 n + M T M + O(M 3 ) (2.13)
with n 2 independent degrees of freedom.
By using the expression for the generalized Lie derivative in the specific case of the frame
L E ′ A E ′ B = 1 2 E ′ P A ∂ P E ′ M B − E ′ P B ∂ P E ′ M A + η M N η P Q ∂ N E ′ P A E ′ Q B D M (2.14) [E ′ I , E ′ J ] = L E ′ I E ′ J = f IJ K E ′ K . (2.15)
where the fluxes f IJ K , for the generalised Scherk-Schwarz reduction, must be constants and must satisfy the constraints
f IJK ≡ η KL f IJ L = f [IJK] , f [IJ L f K]L R = 0 . (2.16)
The information about the internal space is encoded in these constants. When replacing above results into the initial DFT action (1.3) the expression presented in (1.1) is obtained.
Enhanced gauge symmetry on the circle
In Ref. [3] a specific DFT frame was presented 3 in order to reproduce the effective action, obtained from string theory compactification on the circle, at the self-dual point. As mentioned it requires to enhance the tangent space to D = d + 1 + 2 but the frame only depends on the circle coordinate and its dual. In a Cartan-Weyl basis the frame vectors read,
E ± = c(e ∓i 2 √ α ′ y L , ie ∓i 2 √ α ′ y L , 0, 0, 0, 0), E 3 = −c(0, 0, 1, 0, 0, 0, 0) E± = c(0, 0, 0, e ∓i 2 √ α ′ y R , ie ∓i 2 √ α ′ y R , 0)Ē3 = −c(0, 0, 0, 0, 0, 1) (2.17)
The directions E± ≡ E 1 + iE 2 (andĒ± = E1 + iE2 ) encode the extension of the tangent space. It is easy to check that, by using (2.14) (setting c = i √ α ′ ) and by noticing that the only contributions to the partial derivative are
∂ A = (0, 0, ∂ y L , 0, 0, ∂ y R ), (2.18)
the SU(2) L × SU(2) R coupling constants (1.7) are obtained. In the Cartan-Weyl basis they read,
1 2 f +− 3 = 1 2 f+−3 = f 3+ + = f3++ = −f 3− − = −f− 3− = 1 (2.19) − 1 2 f+− 3 = − 1 2 f +−3 = f 3++ = −f 3−− = f3 + + = −f3 − − = 0
where we have used a hat to denote the indices constructed up from 4, 5, 6 Right indices.
The construction of the frame is inspired in the coset structure (1.2) and on the structure of vertex operators in string theory 4 . Namely the correspondence among frame vectors and string current generators [3] can be established (here e ±L = e 1 ± ie 2 , e ±R =
e 1 ± ie 2 )Ē ± = e ∓ 2i √ α ′ y R e ±R ↔ e ∓ 2i √ α ′ y R (z) dz =J ∓ dz , (2.20) E 3 = i/ √ α ′ dy R ↔ dy R (z) =J 3 dz E ± = e ∓ 2i √ α ′ y L e ±L ↔ e ∓ 2i √ α ′ y L (z) dz = J ∓ dz , (2.21) E 3 = i/ √ α ′ dy L ↔ dy L (z) = J 3 dz . where J ∓ (z) = e ∓ 2i √ α ′ y L (z) , J 3 (z) = ∂ z Y (z)V ±,3 (z,z) = i g ′ c α ′1/2 ǫ ±,3 µ : J ±,3 (z)∂X µ e iK·X : (2.22)
where i = ±, 3 and K µ is the space time momentum.
A similar construction was presented in [26] (see also [27]) for the case of the S 3 reduction in the context of the WZW model, inspired by [28]. The purely geometric case was studied in [29]. For the non-geometric one [26], the authors were able to show that allowing for a non trivial dependence on the dual coordinate of the Hopf fibre, non-geometric gaugings can be obtained [30]. However, unlike the toroidal construction presented here (and in [3]) where a clear world sheet picture arises, the S 3 does not have non-contractible cycle and, therefore, no winding states were really considered in [26].
For general compactification radios, the dependence on moduli is encoded in the exponential part of the vertex operators
: exp[ik L y L (z) + ik R y R (z)]e iK·X : (2.23) where k (p,p) L = p R +p R , k (p,p) R = p R −p R . (2.24)
in terms of KK momenta p and winding numberp satisfying the level matching condition
N − N = pp. N = N x + N y (N =N x +N y ) is the Left (Right) moving number operator,
involving the sum of the number operator along the circle N y (N y ) and the number operator for the non-compact space-time directions, denoted by N x (N x ). At the self-dual radio R =R = √ α ′ , the vertices separate into a Left part with k R = 0 or into a Right vertices with k L = 0. The three vector states generating SU(2) L correspond toN x = 1,
N x =N y = 0. The assignment N y = 1, p =p = 0 corresponds to the KK (Cartan field) mode A 3 Lµ , while for N y = 0, p =p = ±1 (namely, k (±1,±1) L = ± 2 √ α ′ ; k (±1,±1) R = 0) the
charged vectors A ± Lµ are obtained (and similarly for SU(2) R ). When moving away from the fixed point, Left and Right parts mix up and, generically, the original vertex operator becomes ill defined as a conformal field. It must combine with other vertex operators, that have the same exponential contribution, in order to produce a new consistent vertex. Interestingly enough, these combinations encode the Higgs mechanism by absorption of a vertex corresponding to a would be Goldstone boson field [3].
With this picture in mind we generalize the frame (2.17) by including the dependence k L y L + k R y R for the found values of momenta and windings. and
m ± = k (1,±1) R = 1 R ± 1 R . Notice that m + → 2 √ α ′ at self-dual radio R =R = √ α ′ .
Again, by using (2.14) we obtain
E + , E − = 2 (a + E 3 − a −Ē3 ), E+,Ē− = 2 (a +Ē3 − a − E 3 ), E 3 , E + = a + E + , Ē3 ,Ē+ = a +Ē+ E 3 , E − = −a + E − , E3,Ē− = −a +Ē− Ē3 , E + = a − E + , E 3 ,Ē+ = a − E+ Ē3 , E − = −a − E − , E 3 ,Ē− = −a −Ē− (2.27) with a ± = √ α ′ m ± 2 .
(2.28)
Thus, we find that, by computing the fluxes (2.16), and up to a normalization factor α ′ 3 2 √ 2, the constants proposed in (1.10) are obtained (here written in a complex combination). Notice that, if R →R then a − (a + ) → 0(1) and the original SU(2) × SU (2) algebra is recovered. Moreover, it is easy to check that the algebra is invariant under T-duality transformation R ↔R.
As mentioned, by systematically replacing the above structure constants (
− 4 √ α ′ M +− M −+ M 33 ( √ α ′ R ) 2 + 4 √ α ′ M ++ M −− M 33 ( √ α ′ R ) 2 (2.30) with m 2 + + m 2 − + 2m + m − = (m + + m − ) 2 = 4 R 2 .
Coming back to the expressions (2.27), it is worth noticing that the above brackets close into a Lie algebra for arbitrary values of R. Indeed, by recalling that f IJK = η KL f L IJ are totally antisymmetric, it is easy to check that Jacobi identity is satisfied. Of course, the found algebra should correspond to one of the known semi-simple algebras. Since it involves six charged generators and two Cartan ones the only possibility is SU(2)×SU(2).
Actually, this can be explicitly shown by performing the linear combinations of generators
E ′ ± =E ± ;Ē ′± =Ē± E ′ 3 =a + E 3 − a −Ē3 E ′ 3 = − a − E 3 + a +Ē3 ,(2. 1 2 0 0 0 0 a + 0 −a − 0 0 1 2 0 0 −a − 0 a + (2.32)
and using that a 2 + − a 2 − = 1. We thus see that, even in the broken phase, there is still an underlying SU(2) symmetry (now mixing massive and massless states). However, once the above frame is chosen, the O(3, 3) full symmetry gets broken and, therefore, it can not be rotated to the starting point. Recall also that, in terms of fields, the combination of U(1) gauge bosons
A 3 ′ µ = a − A 3µ L + a + A3 µ R = V µ + B µ (2.33)
is the right combination in terms of
V µ = 1 2R (A 3 µ +Ā 3 µ ) , B µ = 1 2R (A 3 µ −Ā 3 µ ) (2.34)
which are the KK reductions of the metric and antisymmetric fields and with respect to which massive states carry integer charge (see [3]).
It is instructive to look at the above results from the string theory point of view.
There, the structure constants can be essentially read from the 3-gauge vector bosons vertices with vertex operators V i . For the massless case they read (see [3] for notations and explicit computations), for Left vectors,
< V i L V j L V k L > = πg c i √ α ′ ǫ ijk (ǫ k 3 · K 1 )(ǫ i 1 · ǫ j 2 ) − (ǫ j 2 · K 1 )(ǫ i 1 · ǫ k 3 ) + (ǫ i 1 · K 2 )(ǫ k 3 · ǫ j 2 )
where K 1 , K 2 , K 3 are the space time momenta of vertices i, j, k respectively. Namely, we can read the ǫ ijk structure constants of SU(2) L (and similarly for SU(2) R ) and there is no mixing between L-R sectors.
On the other hand, away from the self-dual point we find the three-point coupling of
Left and Right vectors can be written as
V + L V − L V 3 L = πg ′ c 2 √ α ′ (a + )E(k i , ǫ i ) V + L V − L V 3 R = πg ′ c 2 √ α ′ (a − )E(k i , ǫ i ) V + R V − R V 3 L = πg ′ c 2 √ α ′ (a − )E(k i , ǫ i ) where E(K i , ǫ i ) = (ǫ ′ 1+ · ǫ ′ 2− )(K 1 · ǫ 3 ) + (ǫ ′ 1+ · ǫ 3 )(K 3 · ǫ ′ 2 ) + (ǫ 3 · ǫ ′ 2− )(K 2 · ǫ ′ 1+
) is a factor that depends on space time momenta and vector polarizations. Thus, if by analogy with the dual point case, we interpret the coefficients as the moduli dependent coupling constants we find; f +−3 (R) = a + , f +−3 (R) = a − etc. Moreover, by considering the combinations (2.33) above, we can again identify the underlying SU(2) structure. The SU(2) controls the allowed three point functions through conservation of internal Right and Left momenta.
Enhancing-breaking of gauge symmetries for generic toroidal compactifications
In this section we briefly discuss possible realizations of the enhanced symmetry breaking mechanism, through moduli dependent structure constants, for general toroidal compactifications. Bosonic string compactification [2] on a T r torus of r dimensions gives rise to a gauge symmetry group G L × G R of rank 2r (r coming from Left and Right vectors associated to the metric and B field degrees of freedom). At generic points of the compactified manifold this group is simply U(1) r L × U(1) r R but, at special moduli points, G L is a non abelian group with dimG L = n = n c +r. Here n c counts the number of charged generators associated to the presence of non trivial winding and KK momenta. By reasoning as in the circle case, if we assume that the number of massless degrees of freedom at some point of enhancing is given by g mn = e a m e a n defines the internal metric whereas B mn are the internal components of the Kalb-Ramond field.
dim( O(d + n, d + n) O(d + n) × O(d + n) ) = d 2 + 2nd + n 2
Notice that, by using (2.11) the following relation holds
k L .y L + k R .y R = P.Y (3.4)
Gauge symmetry enhancing occurs at specific values of moduli (g 0 , B 0 ), encoded in the frame vectors e a m (g, B) and of windings and momenta (encoded in the generalized momentum P = (p 1 , p 2 , . . . ;p 1 ,p 2 . . . ). At such values, k a L become roots of a semi simple algebra (k a R = 0) and similarly for the right sector. Namely, at such points, the internal part of vertex operators in (3.2) becomes For generic points in the compact manifold we will have internal directions e a m (g, B) depending on the moduli fields and, therefore, so do k
E α ≃ e ik P L .y(z)θ (j) = k m (j)L y Lm + k m (j)R y Rm = k 1 (j)L y L1 + k 2 (j)L y L2 + k 1 (j)R y R1 + k 2 (j)L y R2 (3.9)
Here (j) encodes the P = (p 1 , p 2 ,p 1 ,p 2 ) values that would lead to SU(2) j at the self-dual point. For instance, P = (±1, 0, ±1, 0) generates a k m (1)L and k m (1)R (where k m (1)R = 0 at self-dual point) etc. Overall we find
P = (±1, 0, ±1, 0) → k mE +(j) = 0 3(j−1) ; v +(j) ; 0 3(4−j) e −iθ j = E * −(j) (3.11) E 0(j) = 0 3(j−1) ; v 0(j) ; 0 3(4−j) ) (3.12)
where v ±j = (0, 1, ±i) (v 0j = (i, 0, 0)) is a 3 dim vector inserted at position j. Notice that E +(j+3) ≡Ē +(j) correspond to Right vectors. At the self-dual point these vectors lead to SU(2) L × SU(2) R algebra for each value of j.
Moving away from the SU(2) 4 fixed point generically mix the twelve generators leading to moduli dependent structure constants f IJK (g, B) (I, J, K = 1, . . . 12). Actually, due to the frame structure (3.12), the mixing occurs between Left and Right components for a given value of (j), namely for the same would be SU(2) j frame.
For instance, by setting for simplicity for B = 0 but for generic metric, we find
f +−0 (1)(G) ∝ k R (1) = √ 2[G 11 + G 22 det(G) − 2] 1 2 (3.13) f +−0 (2)(G) ∝ k R (2) = √ 2[G 11 + G 11 det(G) − 2] 1 2 (3.14)
which generalizes the expression (1.10) found for the circle. By inserting these constants into the generic DFT action it is possible to check, as sketched in the introduction, that the action for a generic spontaneous symmetry breaking to U(1) 4 is achieved. The complete computation was performed by using a computer program.
The masses of the Left-vectors bosons are
m 2 1 = f 2 +−0 (1)(G) (3.15) m 2 2 = f 2 +−0 (2)(G) (3.16)
and (similarly for the R-vectors). They coincide with the masses computed from string theory (A.2). The values G 12 = 0, G 11 = G 22 = 1 lead to m 2 1 = m 2 2 = 0 thus leading to the SU(2) 4 enhancing. Also, G 12 = 0, G 11 = 1, G 22 = (
R (2) √ α ′ ) 2 corresponds to a partial breaking stage to SU(2) 1L × U(1) 2L × SU(2) R × U(1) 2R etc.
Recall that, generically, for a given point of enhancing (g 0 , B 0 ) with G L × G R gauge group, once the values of fluxes f ABC (g, B) are found, we just have to plug them into the DFT action to obtain the effective gauge symmetry broken action. We have shown how to compute these fluxes from a generalized tangent frame construction. However, we can easily read them from string theory 3-vector bosons amplitudes, as we saw for the circle case. Namely, at a given fixed point, as mentioned k where we have used hatted indices for Right generators. Thus, we propose the algebra
V (k (P 1 ) L )V (k (P 2 ) L )V (k (P 3 ) L ) ∝ f α (P 2 ) α (P 2 ) α (P 3 ) (g, B) (3.17) where f α (P 2 ) α (P 2 ) α (P 3 ) (g, B) = 1 if P 3 = −P 1E α , E −α = k (α)I L H I + k (α)Î RĤÎ Êα ,Ê −α = k (α)I L H I + k (α)I RĤ I H I , E α = k (α)I L E α ĤÎ ,Êα = k (α)Î RÊα H I ,Êα = k (α)I LÊα Ĥ I , E α = k (α)I R E α (3.19)
where we have used α = α (P) to alleviate the notation. It is easy to show that (3.19) satisfies Jacobi identities. and therefore defines a Lie algebra.
At the self dual point (where k α R (g 0 , B 0 ) = kα L (g 0 , B 0 ) = 0) and f α−αI = α I , (and similarly for Right sector) the algebra reduces to to the gauge algebra of G L × G R in the As mentioned, when replacing these moduli dependent fluxes into the generic DFT action the effective string theory action is reproduced, as long as up to slightly massive states are kept. Therefore, DFT is providing us with a generic field theory action that leads to an accurate description of string theory results even in a non trivial stringy situation of gauge symmetry enhancing-breaking when massive states with associated momenta and winding are present. As discussed in [3] for the circle case (and extended in [33] moving oscillators is implied in string theory (see also [33]). Also in [24] a generalized KK toroidal compactification (GKK) of DFT containing towers of massive states with generic windings and KK momenta was considered, for the case N −N = 0, namely with the level matching condition P 2 = 0. The present work is a contribution in between, in the sense that it incorporates slightly massive states with paired and unpaired oscillators but disregards higher massive states.
The tangent space extra dimensions in the above construction are associated to states with non vanishing momenta and windings, actually with P 2 = ±1. It may appear somewhat awkward that moving continuously from one point of enhancing to another could lead to a discrete change in the number of these extra tangent dimensions, even if these are just tangent directions and not physical dimensions at all. In string theory the vector fields that become massless to lead to gauge enhancing are part of the spectrum and they are associated to N −N = ±1. It appears that in this situation DFT in lower dimensions should allow for the presence of new vector fields, say A ν L(R) (x, Y) where Y are coordinates on a double torus.
A possible way these jumps could be actually understood is through a GKK mode expansion, as considered in [24], but allowing for states with LMC δ(P 2 ) = ±1, 0. For instance,
A Lν (x, Y) = P A (P) Lν (x)e iP M Y M δ(P 2 , 1) (4.1) = P A I(P)
Lν (x)e ik L .y L +k R .y R δ(P 2 , 1),
(4.2)
where P L , P R depend on moduli (3.3). When moving continuously along the moduli space, for certain values of P, GKK modes k R = 0 and the corresponding fields A (P)
Lν (x) become massless. For instance for the T 2 ×T 2 the six modes (3.6) become massless for g 11 = g 22 = −2g 12 = −2B 12 = 1 leading to the charged operators of SU(3) L . Sliding away from this point the masses of these modes vary continuously from zero. When reaching the moduli point g 11 = g 22 = 1; B 12 = 0 other modes (the six modes shown in (3.11)) become massless 9 and lead to SU(2) 2 L enhancing. The massless vector fields are those captured by the extended tangent frame vector in DFT. Moreover, we saw that at the neighbourhood of the point of enhancing associated to a gauge generator algebra G, there is still an underlying global G algebra, mixing massless (Cartans) and slightly massive states. When moving away from that point other fields, now with comparable masses, will come into play and will have non neglectable 3-point amplitudes indicating a possible infinite enhancing of the global algebra. This appears to be an indication of the presence of a Generalized Kac-Moody algebra of the kind discussed in [24] but including unpaired LMC conditions. Of course these ideas need further investigation.
For the sake of simplicity we have dealt with the bosonic string example. However the reasoning should be straightforwardly applicable to the (bosonic sector) of Heterotic theories ( [14]) or Type II theories obtained from U-dual Extended Field Theories (EFT) [32].
It could also be interesting to explore the inclusion of extra tangent dimensions directly in gauged supergravity theories [21,22].
K µ stands for the space-time momentum while k L(R) are the internal L(R) momenta.
It is convenient to use coordinates y a L(R) = e m a y m L(R) with tangent space indices a, b, ..., defined in terms of the vielbein e m a (δ ab = e m a g mn e n b ) since they have the standard OPEs.
Namely, the propagators read
X µ (z,z)X ν (w,w) = − α ′ 2 η µν ln|z − w| 2 , Y a (z)Y b (w) = −δ ab α ′ 2 ln(z − w) , Ȳ a (z)Ȳ b (w) = −δ ab α ′ 2 ln(z −w) .
and the vertex operator momenta are
k a L = e a m p m L , k a R = e a m p m R , (A.1) where p m L =p m + g mn (p n − B nkp k ) , p m R = −p m + g mn (p n − B nkp k ) .
The stress energy tensor is
T (z) = − 1 α ′ (η µν : ∂ z X µ (z)∂ z X ν (z) : +δ ab : ∂ z Y a (z)∂ z Y b (z) :) ,
The mass of the string states is
M 2 = 1 2 m 2 L + 1 2 m 2 R = 1 2 k aL .k aL + 1 2 k aR .k aR + 2(N +N − 2) (A.2)
where N,N are the number of string oscillators and the level matching condition reads
1 4 k aL .k aL − 1 4 k aR .k aR − (N −N) = p np n − (N −N) = 0 (A.3)
and similarly for the right moving one.
A.1 Torus example
The frame base can be written as (as mention the factor √ 2 is included to maintain the normalization conditions α 2 = 2 for simple roots)
e 1 = 1 √ 2 (0, G 11 ), e 2 = 1 √ 2 ( √ detG √ G 11 , G 12 √ G 11 ), (A.4)
leading to the matrix g mn = e m .e n = 1 2 G mn . with dual lattice vectors (e * m = e m )
e * 1 = √ 2(− G 12 √ detG √ G 11 , 1 √ G 11 ), e * 2 = √ 2( √ G 11 √ detG ,α 1 = (0, √ 2), α 2 = ( √ 3 √ 2 , − 1 √ 2 ) (A.6) (A.7)
are the SU(3) simple roots.
On the other hand, G 22 = G 11 = 2; G 12 = 0 corresponds to an SU(2) × SU(2) algebra.
Metric and B field define the complex structure U = U 1 + iU 2 and Khaler structure T = T 1 + iT 2 of the torus with U 1 = g 12 g 22 , U 2 =
B General enhancing groups
We show here that, in the general case of an enhancing from U(1) r L × U(1) r R to a gauge group G L × G R the generalized fluxes lead to the the exact vector and scalar massive terms. Namely, the corresponding masses coincide with the masses computed from string theory. Consider the L-R splitting of indices in the C base A = (a,â) where the first (second) entries belong to left group G L (right group G R ). Let us focus on G L and further split left indices as a = (α, I) corresponding to charged generators and Cartan generators I = 1, . . . r (and similarly for Right group).
B.1 Vector masses
The vector mass terms in the Lagrangian read
f ABC A B µ M C D + f DBC A B µ M C A 2 ∼ A B µ A E µ f ABC f DEF η AD η CF − δ AD δ CF ∼ A B µ A E µ f aBĉ f aEĉ (B.1)
If the fluxes do not mix Left and Right sectors (as it happens at the self dual point) then all vectors are massless. From momentum conservation we know that f aIĉ = f aĪĉ = 0.
Moreover a andĉ can not be charged indices simultaneously.
Then
f aBĉ f aEĉ = f IBγ f IEγ + f αBÎ f αEÎ (B.2)
We conclude that indices B, E in the previous expression must be charged indices and, moreover, they must be equal by momentum conservation
A B µ A E µ f aBĉ f aEĉ ∼ γ Aγ µ Aγ µ r I=1 f I −γγ f I −γγ + α A α µ A α µ r Î =1 f α −αÎ f α −αÎ = γ Aγ µ Aγ µ m 2 γ + α A α µ A α µ m 2 α (B.3)
where the sum runs over the positive roots. By using that (see (3.18)) f I −γγ = K I L,γ
i.e. the I-component of the Left γ momentum (similar for the right case) we can write the masses as m 2 γ = r I=1 (K I L,γ ) 2 and for the γ-left vector is m 2 γ = r I=1 (K I R,γ ) 2 , that coincide with vector masses computed from (A.2).
B.2 Scalar masses
We denote the $({\rm dim}\,G - r)^2$ massless scalars charged under the Left and Right gauge groups as $M^{\alpha\hat\beta}$. In string compactifications they are described by the vertex operators $V^{\alpha\hat\beta}(z,\bar z) \propto J^\alpha(z)\, \bar J^{\hat\beta}(\bar z)$ with $J^\alpha(z) = e^{ik_{L\alpha}\cdot y}$. When moving away from the self-dual point a non-vanishing Right contribution $k_{R\alpha}$ ($m_-$ in the circle example) appears and, similarly, a $k_{L\beta}$ from the Right sector. Therefore, the scalar Left and Right internal momenta become $k_{L\alpha\beta} = k_{L\alpha} + k_{L\beta}$, $k_{R\beta\alpha} = k_{R\beta} + k_{R\alpha}$ (B.4), giving (since $N = \bar N = 0$) a mass $M^2_{\alpha\beta} = k_{R\alpha}\cdot\left(k_{R\alpha} + k_{R\beta}\right) + k_{L\beta}\cdot\left(k_{L\beta} + k_{L\alpha}\right)$ (B.7) that, as expected, vanishes at the fixed point. By using the identification with fluxes (3.18) this expression can be recast as
$M^2_{\alpha\beta} = f_{\hat I\alpha-\alpha}\left(f_{\hat I\alpha-\alpha} + f_{\hat I\beta-\beta}\right) + f_{I\beta-\beta}\left(f_{I\beta-\beta} + f_{I\alpha-\alpha}\right) \qquad$ (B.8)
This is exactly the combination of fluxes that appears in front of the quadratic scalar term when we mimic the steps we followed for the circle case (2.30). Namely, insert the expansion in scalar fluctuations M (2.13) into the third row of the DFT action (1.3) and use the values (1.10).
are the usual covariant derivatives and $f_{ijk} = -f^{ijk} \propto \epsilon_{ijk}$ ($i, j = 1, 2, 3$) are the structure constants, where $\epsilon_{ijk}$ is the usual Levi-Civita completely antisymmetric tensor and $g_d = \kappa_d \sqrt{2/\alpha'}$. Interestingly enough, this action can be embedded into an $O(d+3, d+3)$ framework.
A generalized frame $E_A$ on this bundle is a set of linearly independent generalized vectors that belong to the vector space of representations of the group $G = O(D, D)$. It parametrizes the coset $G/G_c$, the quotient being over the maximal compact subgroup of $G$. A Lorentz signature is assumed on the $D$-dimensional space-time, i.e. $G_c = O(1, D-1) \times O(D-1, 1)$. In DFT, this generalized tangent bundle is locally parametrized with a double set of coordinates, $X^M = (\tilde x_\mu, x^\mu)$, defined in the fundamental representation of $O(D, D)$.
are the string currents satisfying the Operator Product Expansion (OPE) algebra of SU(2)$_L$ (and similarly for the Right sector). The corresponding string vertex operators $V^i(z,\bar z)$ for vectors are
$E^{w}_{\pm} = c\left(e^{\mp i w}, \pm i e^{\mp i w}, 0, 0, 0, 0\right), \qquad \bar E^{\bar w}_{\pm} = c\left(0, 0, 0, e^{\mp i \bar w}, \pm i e^{\mp i \bar w}, 0\right), \qquad w = m_+ y_L + m_- y_R , \quad \bar w = m_- y_L + m_+ y_R \qquad$ (2.26)
Inserting these fluxes into the general DFT action expression (1.3), the exact spontaneously broken action, with $U(1) \times U(1)$ gauge symmetry, as computed from string theory (see Eq. (3.31) in [3]), is found. In particular, the vector fields $A^\pm_{L\mu}$ and $A^\pm_{R\mu}$ become massive, with masses $m_-$, by "eating" the would-be Goldstone bosons $\partial_\mu M^{\pm 3}$ (and $\partial_\mu M^{3\pm}$) that disappear from the spectrum. It appears instructive to see how some of the terms in the broken symmetry action arise. For instance, by inserting the expansion in scalar fluctuations M in the generalized scalar matrix (2.13) into the third row of the DFT action (1.3) and using the values (1.10) for the structure constants, we find the quadratic terms $2\left(m_+ m_- + m_-^2\right)|M^{\pm\pm}|^2 - 2\left(m_+ m_- - m_-^2\right)|M^{\pm\mp}|^2 = \frac{4}{R} m_- |M^{\pm\pm}|^2 - \frac{4}{R} m_- |M^{\pm\mp}|^2$, reproducing the exact values $m^2_{\pm\pm} = \frac{4}{R} m_-$ and $m^2_{\pm\mp} = -\frac{4}{R} m_-$ as computed from the string mass formula (A.2)$^5$. The terms proportional to $m_+ m_-$ and $m_-^2$ come from linear and quadratic terms in the M expansion in (2.13), respectively. In the same way it can be checked that the masses of the would-be Goldstone bosons $M^{\pm 3}$, $M^{3\pm}$ coincide, as they should, with the masses $m_-$ of the massive vector bosons. Moreover, the same row in (1.3) for cubic terms in M leads to
to correspond to the $d^2$ degrees of freedom of gravity (plus B field), the $2n$ vectors of a $G_L \times G_R$ and $n^2$ scalars in bi-adjoint representations. If $n = r$ it gives the correct counting for $U(1)^r_L \times U(1)^r_R$ degrees of freedom. For a circle compactification $r = 1$ and by choosing $n = 2+1$ the counting corresponds to an $SU(2)_L \times SU(2)_R$ gauge group with scalars in the $(3, 3)$ representation, as is the case for the self-dual point $R = \tilde R$. For a $T^2$ toroidal compactification, when $n = 2 \times 3$, the number of massless degrees of freedom for $SU(2)_L \times SU(2)_L \times SU(2)_R \times SU(2)_R$ with scalars in $(3, 3)$ representations, corresponding to a possible torus enhancing point, is reproduced. Also, the correct counting occurs for $n_c = 6$, for the degrees of freedom of $SU(3)_L \times SU(3)_R$ with scalars in the $(8, 8)$ representation at the point of maximal enhancing [2]. The generalization of the exponential contribution (2.23) to the string vertex operators for a general torus (with lattice vectors $e^a_m$) reads (see Appendix for notation): $:e^{ik_L\cdot y_L(z) + ik_R\cdot y_R(\bar z)}\, e^{iK\cdot X}:$ with $p^m_L = \tilde p^m + g^{mn}\left(p_n - B_{nk}\tilde p^k\right)$, $p^m_R = -\tilde p^m + g^{mn}\left(p_n - B_{nk}\tilde p^k\right)$. (3.3)
root of the semi-simple algebra and where $m = 1, 2, \dots$ (associated to P values) labels the charged operators. These vertex operators, together with the corresponding Cartan operators, close the OPE of a $G_L$ group affine algebra. Let us consider the 2-torus example discussed in the Appendix. For generic values of $E = g + B$ the gauge group is $U(1)^2_L \times U(1)^2_R$, but enhancings occur at different points [2]. For instance, by choosing the basis$^6$ $e_m = \frac{1}{\sqrt2}\alpha_m$ with $m = 1, 2$, with $\alpha_{1,2}$ the simple roots of SU(3), and $B_{12} = g_{12} = -\frac12$, we see that there are six generalized momentum vectors P satisfying the LMC and such that $P^m_R = 0$. They give rise to six extra massless states with $P_L = \pm(1, 0), \pm(0, 1), \pm(1, 1)$. (3.7) Similarly $P = \pm(-1, 1, 1, 0), \pm(0, -1, 0, 1), \pm(1, 0, -1, -1)$ lead to the same roots for $P_R$ while $P_L = 0$. At the end, the enhanced $SU(3)_L \times SU(3)_R$ gauge group is generated. For $G_{12} = 0$ and $B = 0$ an enhancing to $(SU(2) \times SU(2))_L \times (SU(2) \times SU(2))_R$ is obtained for $P = (\pm1, 0, \pm1, 0), (0, \pm1, 0, \pm1)$, etc. $^6$ The $\sqrt2$ is just a normalization factor in order to keep the usual convention $\alpha^2 = 2$ for the roots. The description of the enhancing-breaking of the gauge symmetry could be, in principle, described by generalizing the steps presented in the previous section for the circle situation. We have not pursued this construction systematically but we present some examples for the 2-torus case$^7$. For a general r-torus compactification, from a DFT point of view, we should consider a doubling of the internal manifold $T^r \times \tilde T^r$, incorporating both the torus coordinates $y^m$ as well as their duals $\tilde y_m$, with $m = 1, \dots, r$ (in an $O(r, r)$ writing it corresponds to the double coordinate $Y^M$ with $M = 1, \dots, 2r$). Following the counting (3.1) it appears that the tangent frame must be enlarged further in order to incorporate information about charged operators. Thus, if we were to describe a $G_L \times G_R$ point, besides the $r + r$ frame vectors associated to the internal-coordinate Cartan generators, $2n_c$ extra frame vectors should be incorporated, with $n_c = ({\rm dim}\,G - r)$, associated to the left charged generator vertices (3.5) (and similarly for Right vertices). Thus, in principle, we should have a $2\,{\rm dim}\,G$ tangent frame space where the frame vectors only depend on the Y internal coordinates. Each frame vector could be written in a given $2\,{\rm dim}\,G$ basis, and frame vectors associated to charged operators are expected to depend on an exponential factor $e^{ik^{(P)}_L\cdot y_L}$ (and similarly for R vectors), where P encodes the specific values of momenta and windings characterizing the enhanced vectors.
$K^{(P)}_{L(R)}(g, B)$$^8$. At selected values of $g, B$ these directions become the simple roots of the enhancing algebra. Therefore, away from fixed points we expect the frame vectors to depend on both $e^{ik^{(P)}_L\cdot y_L + ik^{(P)}_R\cdot y_R}$, as in fact we found in the circle case (recall that the possible values of P are fixed). When moving into the fixed point, P values will produce the roots of $G_L$ (with $k^{(P)}_R = 0$) and the roots of $G_R$ (with $k^{(P)}_L = 0$). This is indeed what we found in the circle case and we now illustrate it in its simplest generalization, the 2-torus case near the SU(2)$^4$ fixed point. Let us name $Y^M = (\tilde y_m, y^m)$, $m = 1, 2$, the double torus coordinates, or $(y_{Lm}, y_{Rm})$ in an L-R basis. The exponential contributions can now be written in terms of $e^{i\theta_j}$ where
at the corresponding self-dual point $k^m_{(1)R} = k^m_{(2)R} = 0$ and $k^m_{(3)L} = k^m_{(4)L} = 0$. Following the general steps sketched above we thus propose a generalized twelve-dimensional ($2\,{\rm dim}\,G_L = 12$) frame with frame vectors depending only on $Y^M$. A straightforward generalization of the circle case leads us to the frame vectors
$K^{(P)}_L(g_0, B_0) = \alpha^{(P)}(g_0, B_0)$ become simple roots of the $G_L$ group algebra and $K^{(0)}_{L(R)} = 0$ for Cartan vectors. Let us consider the 3-point amplitudes for massless bosons. For charged bosons we can write, up to an antisymmetric factor in vertex indices depending on vector polarizations (see (2.35)),
a phase $(-1)^{-P^2_{LR}/2}$ when the momenta add up to zero, and vanishing otherwise due to momentum conservation. The constants are antisymmetric. At the self-dual point this indicates that the structure constants $f_{\alpha_1\alpha_2\alpha_3}$ vanish unless $\alpha_1 + \alpha_2$ is a root (and similarly for the Right sector). For the same reason, mixings of Left and Right indices vanish. On the other hand, denoting by $V(I_{L(R)})$, with $I = 1, \dots, r$, the Cartan vectors, the only non-vanishing amplitudes are $V(k^{(P)}_L)\, V(k^{(-P)}_L)\, V(I_{L(R)}) \propto k^{(P)}_{L(R)}(g, B)^I$ and, by identifying the amplitude coefficients with algebra structure constants, $k^{(P)}_R(g, B)^{\hat I} = f_{\alpha^{(P)}\alpha^{(-P)}\hat I}(g, B)$, $k^{(P)}_L(g, B)^I = f_{\alpha^{(P)}\alpha^{(-P)}I}(g, B)$
Cartan-Weyl basis. For instance, notice that $[E_\alpha, E_{-\alpha}] = \alpha^I H_I$ for charged generators $E_\alpha$ and Cartan generators $H_I$, as expected. As an example let us specify to the $SU(3)_L \times SU(3)_R$ case (the expressions are, however, general). Since this algebra must be continuously connected with the $SU(3)_L \times SU(3)_R$ algebra at the fixed point and has four Cartan generators, the only possibility left is an $SU(3) \times SU(3)$. Again, away from the fixed point, we detect the same underlying algebra, now mixing massive and massless (associated to Cartan generators) vector fields. Let us underscore that, by replacing the above fluxes into the DFT action (1.3) and by performing the scalar expansion (2.13), as we did for the circle case example, the full broken $G_L \times G_R$ symmetry action is found. Recall that this is valid for an arbitrary fixed point in a general r-dimensional toroidal compactification. As a check, we show in the Appendix that the resulting masses for vector and scalar fields, as functions of the moduli, coincide with the string theory ones.
A well-known distinguished feature of string theory is the enhancing of gauge symmetries at certain values of the moduli backgrounds. In this work we have shown that the DFT formulation helps to identify an interesting description of enhancing phenomena. Namely, the enhancing information appears encoded into moduli-dependent generalized fluxes $f_{ABC}(g, B)$, with $A, B, C = 1, \dots, 2n$ indices in an $O(n, n)$ vector representation. Splitting indices in a Left-Right basis $A = (a, \hat a)$, it appears that enhancing occurs for moduli values $(g_0, B_0)$ such that generalized fluxes with mixed indices vanish. In this situation $f_{abc}(g_0, B_0)$ ($f_{\hat a\hat b\hat c}(g_0, B_0)$) become the structure constants of a $G_L$ ($G_R$), ${\rm dim}\,G_L = n$ dimensional non-Abelian gauge group. In fact, the vector boson masses are proportional to mixed-index fluxes (B.3).
for other situations) by giving vev's to some specific scalar fields, the string broken-symmetry action can be approximately obtained as an expansion in powers of the vev's. It is worth insisting that the DFT construction we are presenting here already produces the broken symmetry phase. Moreover, the different coefficients and masses in the string action are exactly reproduced as functions of the moduli and not as an expansion. In addition, we have shown (at least for some examples) that generalized fluxes can be computed by introducing a generalized frame in tangent space with extended tangent directions but depending only on the coordinates of the double "physical torus". The DFT generalized Lie algebra closes even though the strong constraint is not satisfied. In fact, the frame is explicitly non-geometric since it is a function of the double coordinates $Y = (Y, \tilde Y)$. The idea of doubling the number of coordinates in order to describe winding modes was one of the original motivations of DFT. However, only recently were windings actually included in DFT. In [3] a step in this direction was performed by showing that DFT can describe the massless sector of an enhanced gauge symmetry situation, with windings playing a fundamental role and where an unpaired number of Left and Right oscillators, $N - \bar N = \pm 1$,
In terms of complex moduli, $SU(3)_L \times SU(3)_R$ enhancing occurs at $T = U = e^{2i\pi/3}$, while $(SU(2) \times SU(2))_L \times (SU(2) \times SU(2))_R$ enhancing is achieved for $T = i = U$. (A.9)
However, we could envisage a situation where Left and Right indices do mix. In fact, this is what we expect from string theory when we move away from the self-dual point. Vertex operators that at the dual point depend only on Left coordinates (or Right coordinates) acquire a mixed dependence and the group breaks down to $U(1)_L \times U(1)_R$ (in the circle example). From this observation we could imagine a description of the symmetry breaking where the structure constants have a dependence on the moduli, namely $f_{IJ}{}^K(R)$, such that at the dual point $R = \tilde R = \sqrt{\alpha'}$ Left and Right indices do not mix but generically do, away from the fixed point. Let us propose, out of the blue, the following constants
In fact, by replacing the proposed structure constants in the action (1.3) and after some redefinitions it can be shown that the full string theory effective action [3], computed away from the self-dual point (and keeping slightly massive terms), is reproduced. Interestingly enough, the structure constants in the action (1.3) can be understood from a DFT perspective. In generalized Scherk-Schwarz reductions of DFT [5, 4] they appear as the generalized fluxes of the algebra associated to a generalized vielbein on a doubled internal space. Indeed, it was shown in [3] that such a generalized frame can be explicitly constructed to account for the description of the circle compactification at the self-dual point. In the following sections we indicate how to generalize this frame in order to provide a description valid also (slightly) away from the point of enhancing. ($^3$ As would-be Goldstone bosons.) The constants presented in (1.10) are then obtained as generalized fluxes from this frame. It
is worth emphasizing that, therefore, the resulting DFT construction involves, besides
massless states, massive states that become massless at the fixed point. The breaking
is not achieved by giving vacuum expectation values (vev's) to scalar fields. We also
indicate how to extend the construction to toroidal compactifications in more dimensions
and provide some examples for the $T^2$ case. An interpretation from the string theory
point of view is provided as well.
Details are presented in the next section.
The $x^\mu$ duals are dropped, or equivalently the strong constraint is imposed in the space-time sector.
The choice of frame was inspired by a previous work [25] set in a different context. $^4$ Basic ingredients and notation conventions for string theory vertices are briefly presented in the Appendix.
Recall that, depending on the value of R, half of the scalars become tachyonic. This is an artefact associated to the ill-defined bosonic string.
A systematic derivation is proposed in [33] with a modification of the generalized Lie derivative. $^8$ We avoid writing the dependence on moduli in order to lighten the notation.
Notice that there are two common modes P = (±1, 0, ±1, 0).
Acknowledgments. We thank G. Torroba, F. Schaposnik Massolo and D. Marqués for useful discussions and comments and, in particular, C. Nuñez who participated in the first steps of this research. Note added in proof: The same day this paper was made public, the article [33] appeared addressing similar issues from a rather complementary point of view. A Vertex operators and enhancing. We summarize here some string theory ingredients needed in the body of the article. A generic vertex operator contains an exponential contribution that can be written in terms of Left and Right moving coordinates $y_L(z)$, $y_R(\bar z)$ as $:e^{iK\cdot X + ik_L\cdot y_L + ik_R\cdot \bar y_R}:$ where
K. S. Narain, "New Heterotic String Theories in Uncompactified Dimensions < 10," Phys. Lett. B 169 (1986) 41.
A. Giveon, M. Porrati and E. Rabinovici, "Target space duality in string theory," Phys. Rept. 244 (1994) 77 [hep-th/9401139].
G. Aldazabal, M. Graña, S. Iguri, M. Mayo, C. Nuñez and J. A. Rosabal, "Enhanced gauge symmetry and winding modes in Double Field Theory," JHEP 1603 (2016) 093 [arXiv:1510.07644 [hep-th]].
G. Aldazabal, D. Marqués and C. Nuñez, "Double Field Theory: A Pedagogical Review," Class. Quant. Grav. 30 (2013) 163001 [arXiv:1305.1907 [hep-th]].
G. Aldazabal, W. Baron, D. Marqués and C. Nuñez, "The effective action of Double Field Theory," JHEP 1111 (2011) 052 [arXiv:1109.0290 [hep-th]].
D. Geissbuhler, "Double Field Theory and N=4 Gauged Supergravity," JHEP 1111 (2011) 116 [arXiv:1109.4280 [hep-th]].
D. Geissbuhler, D. Marqués, C. Nuñez and V. Penas, "Exploring Double Field Theory," JHEP 1306 (2013) 101 [arXiv:1304.1472 [hep-th]].
N. Hitchin, "Generalized Calabi-Yau manifolds," Quart. J. Math. Oxford Ser. 54 (2003) 281 [arXiv:math.DG/0209099].
M. Gualtieri, "Generalized Complex Geometry," Oxford University DPhil thesis (2004) [arXiv:math.DG/0401221].
M. Graña, R. Minasian, M. Petrini and D. Waldram, "T-duality, Generalized Geometry and Non-Geometric Backgrounds," JHEP 0904 (2009) 075 [arXiv:0807.4527 [hep-th]].
A. Coimbra, C. Strickland-Constable and D. Waldram, "Supergravity as Generalised Geometry I: Type II Theories," JHEP 1111 (2011) 091 [arXiv:1107.1733 [hep-th]].
C. Hull and B. Zwiebach, "Double Field Theory," JHEP 0909 (2009) 099 [arXiv:0904.4664 [hep-th]].
O. Hohm, C. Hull and B. Zwiebach, "Background independent action for double field theory," JHEP 1007 (2010) 016 [arXiv:1003.5027 [hep-th]].
O. Hohm, C. Hull and B. Zwiebach, "Generalized metric formulation of double field theory," JHEP 1008 (2010) 008 [arXiv:1006.4823 [hep-th]].
M. J. Duff, "Duality Rotations in String Theory," Nucl. Phys. B 335 (1990) 610.
M. J. Duff and J. X. Lu, "Duality Rotations in Membrane Theory," Nucl. Phys. B 347 (1990) 394.
A. A. Tseytlin, "Duality Symmetric Formulation of String World Sheet Dynamics," Phys. Lett. B 242 (1990) 163.
A. A. Tseytlin, "Duality symmetric closed string theory and interacting chiral scalars," Nucl. Phys. B 350 (1991) 395.
W. Siegel, "Superspace duality in low-energy superstrings," Phys. Rev. D 48 (1993) 2826 [hep-th/9305073].
W. Siegel, "Two vierbein formalism for string inspired axionic gravity," Phys. Rev. D 47 (1993) 5453 [hep-th/9302036].
O. Hohm and S. K. Kwak, "Double Field Theory Formulation of Heterotic Strings," JHEP 1106 (2011) 096 [arXiv:1103.2136 [hep-th]].
I. Jeon, K. Lee and J. H. Park, "Stringy differential geometry, beyond Riemann," Phys. Rev. D 84 (2011) 044022 [arXiv:1105.6294 [hep-th]].
D. S. Berman and D. C. Thompson, "Duality Symmetric String and M-Theory," arXiv:1306.2643 [hep-th].
O. Hohm, D. Lüst and B. Zwiebach, "The Spacetime of Double Field Theory: Review, Remarks, and Outlook," Fortsch. Phys. 61 (2013) 926 [arXiv:1309.2977 [hep-th]].
O. Hohm, W. Siegel and B. Zwiebach, "Doubled α′-geometry," JHEP 1402 (2014) 065 [arXiv:1306.2970 [hep-th]].
O. Hohm and B. Zwiebach, "Double field theory at order α′," JHEP 1411 (2014) 075 [arXiv:1407.3803 [hep-th]].
O. A. Bedoya, D. Marqués and C. Nuñez, "Heterotic α′-corrections in Double Field Theory," JHEP 1412 (2014) 074 [arXiv:1407.0365 [hep-th]].
D. Marqués and C. A. Nuñez, "T-duality and α′-corrections," JHEP 1510 (2015) 084 [arXiv:1507.00652 [hep-th]].
A. Coimbra, R. Minasian, H. Triendl and D. Waldram, "Generalised geometry for string corrections," arXiv:1407.7542 [hep-th].
J. Scherk and J. H. Schwarz, "How to Get Masses from Extra Dimensions," Nucl. Phys. B 153 (1979) 61.
H. Samtleben, "Lectures on Gauged Supergravity and Flux Compactifications," Class. Quant. Grav. 25 (2008) 214002 [arXiv:0808.4076 [hep-th]].
M. Trigiante, "Gauged Supergravities," arXiv:1609.09745 [hep-th].
M. Graña and D. Marqués, "Gauged Double Field Theory," JHEP 1204 (2012) 020 [arXiv:1201.2924 [hep-th]].
G. Aldazabal, M. Mayo and C. Nuñez, "Probing the String Winding Sector," JHEP 1703 (2017) 096 [arXiv:1611.04927 [hep-th]].
G. Dibitetto, J. J. Fernandez-Melgarejo, D. Marqués and D. Roest, "Duality orbits of non-geometric fluxes," Fortsch. Phys. 60 (2012) 1123 [arXiv:1203.6562 [hep-th]].
K. Lee, C. Strickland-Constable and D. Waldram, "New gaugings and non-geometry," arXiv:1506.03457 [hep-th].
R. Blumenhagen, F. Hassler and D. Lüst, "Double Field Theory on Group Manifolds," JHEP 1502 (2015) 001 [arXiv:1410.6374 [hep-th]].
M. B. Schulz, "T-folds, doubled geometry, and the SU(2) WZW model," JHEP 1206 (2012) 158 [arXiv:1106.6291 [hep-th]].
K. Lee, C. Strickland-Constable and D. Waldram, "Spheres, generalised parallelisability and consistent truncations," arXiv:1401.3360 [hep-th].
G. Dall'Agata, G. Inverso and M. Trigiante, "Evidence for a family of SO(8) gauged supergravity theories," Phys. Rev. Lett. 109 (2012) 201301 [arXiv:1209.0760 [hep-th]].
O. Hohm and S. K. Kwak, "Double Field Theory Formulation of Heterotic Strings," JHEP 1106 (2011) 096 [arXiv:1103.2136 [hep-th]].
O. Hohm and H. Samtleben, "Gauge theory of Kaluza-Klein and winding modes," Phys. Rev. D 88 (2013) 085005 [arXiv:1307.0039 [hep-th]].
G. Aldazabal, M. Graña, D. Marqués and J. A. Rosabal, "The gauge structure of Exceptional Field Theories and the tensor hierarchy," JHEP 1404 (2014) 049 [arXiv:1312.4549 [hep-th]].
Y. Cagnacci, M. Graña, S. Iguri and C. Nuñez, "The bosonic string on string-size tori from double field theory," arXiv:1704.04242 [hep-th].
| [] |
[
"High Energy Results from BeppoSAX",
"High Energy Results from BeppoSAX"
] | [
"Roberto Fusco-Femiano ",
"Daniele Dal Fiume ",
"Mauro Orlandini ",
"Silvano Molendi ",
"Luigina Feretti ",
"Paola Grandi ",
"Gabriele Giovannini ",
"\nTESRE/CNR\nIASF/CNR\nSabrina De Grandi Osservatorio di Merate\nIASF/CNR\nIASF/CNR\nRoma, Bologna, Bologna, Merate, MilanoItaly, Italy, Italy, Italy, Italy\n",
"\nIASF/CNR\nIRA/CNR\nBologna, RomaItaly, Italy\n",
"\nUniversita' di Bologna\nBolognaItaly\n"
] | [
"TESRE/CNR\nIASF/CNR\nSabrina De Grandi Osservatorio di Merate\nIASF/CNR\nIASF/CNR\nRoma, Bologna, Bologna, Merate, MilanoItaly, Italy, Italy, Italy, Italy",
"IASF/CNR\nIRA/CNR\nBologna, RomaItaly, Italy",
"Universita' di Bologna\nBolognaItaly"
] | [
"MATTER AND ENERGY IN CLUSTERS OF GALAXIES ASP Conference Series"
We review all the BeppoSAX results relative to the search for additional nonthermal components in the spectra of clusters of galaxies. In particular, our MECS data analysis of A2199 does not confirm the presence of the nonthermal excess reported by Kaastra et al. (1999). A new observation of A2256 seems to indicate quite definitely that the nonthermal fluxes detected in Coma and A2256 are due to a diffuse nonthermal mechanism involving the intracluster medium. We report marginal evidence (∼ 3σ) for a nonthermal excess in A754 and A119, but the presence of point sources in the field of view of the PDS makes a diffuse interpretation unlikely.
"https://export.arxiv.org/pdf/astro-ph/0207241v1.pdf"
] | 14,986,292 | astro-ph/0207241 | 6d8c4534dad3aa4acfeb9a3fa81c84f1ee6aeca8 |
High Energy Results from BeppoSAX
11 Jul 2002
Roberto Fusco-Femiano
Daniele Dal Fiume
Mauro Orlandini
Silvano Molendi
Luigina Feretti
Paola Grandi
Gabriele Giovannini
TESRE/CNR
IASF/CNR
Sabrina De Grandi Osservatorio di Merate
IASF/CNR
IASF/CNR
Roma, Bologna, Bologna, Merate, MilanoItaly, Italy, Italy, Italy, Italy
IASF/CNR
IRA/CNR
Bologna, RomaItaly, Italy
Universita' di Bologna
BolognaItaly
High Energy Results from BeppoSAX
MATTER AND ENERGY IN CLUSTERS OF GALAXIES ASP Conference Series
11 Jul 2002
We review all the BeppoSAX results relative to the search for additional nonthermal components in the spectra of clusters of galaxies. In particular, our MECS data analysis of A2199 does not confirm the presence of the nonthermal excess reported by Kaastra et al. (1999). A new observation of A2256 seems to indicate quite definitely that the nonthermal fluxes detected in Coma and A2256 are due to a diffuse nonthermal mechanism involving the intracluster medium. We report marginal evidence (∼ 3σ) for a nonthermal excess in A754 and A119, but the presence of point sources in the field of view of the PDS makes a diffuse interpretation unlikely.
Introduction
It is well known that X-ray measurements in the 1-10 keV energy range of thermal bremsstrahlung emission from the hot, relatively dense intracluster gas have already contributed in an essential way to our understanding of the cluster environment. However, recent research on clusters of galaxies has unveiled new spectral components in the intracluster medium (ICM) of some clusters, namely a cluster soft excess discovered by EUVE (Lieu et al. 1996) and a hard X-ray (HXR) excess detected by BeppoSAX and RXTE (Rephaeli, Gruber, & Blanco 1999). Observations at low and high energies can give additional insights into the physical conditions of the ICM.
Nonthermal emission was predicted at the end of the seventies in clusters of galaxies showing extended radio emission, radio halos or relics (see Rephaeli 1979). In particular, the same radio synchrotron electrons can interact with the CMB photons to give inverse Compton (IC) nonthermal X-ray radiation. Attempts to detect nonthermal emission from a few clusters of galaxies were performed with balloon experiments (Bazzano et al. 1984; 1990), with HEAO-1 (Rephaeli, Gruber & Rothschild 1987; Rephaeli & Gruber 1988), with the OSSE experiment onboard the Compton-GRO satellite (Rephaeli, Ulmer & Gruber 1994) and with RXTE & ASCA (Delzer & Henriksen 1998), but all these experiments reported essentially flux upper limits. However, we want to recall the conclusions of the paper regarding the OSSE observation of HXR radiation in the Coma cluster by Rephaeli, Ulmer & Gruber in 1994: "..It can be definitely concluded that the detection of the IC HEX (high energy X-ray) emission necessitates an overall sensitivity a few times 10⁻⁶ ph cm⁻² s⁻¹ keV⁻¹ in the 40-80 keV band. ..To reduce source confusion, detectors optimized specifically for HEX measurements of clusters should have ∼ 1° × 1° fields of view. A level of internal background more than a factor of 10 lower than that of OSSE is quite realistic. Obviously, another very desirable feature of any future experiment is wide energy coverage, starting near (or below) 15-20 keV, in order to independently measure the tail of the thermal emission". In these conclusions it is possible to recognize the spectral characteristics of the Phoswich Detector System (PDS) onboard BeppoSAX, which is able to detect hard X-ray emission in the 15-200 keV energy range. The PDS uses the rocking collimator technique for background subtraction, with a rocking angle of 3.5°. The strategy is to observe the X-ray source with one collimator and to monitor the background level on both sides of the source position with the other, in order to have a continuous monitoring of the source and background; the dwell time is 96 sec. The background level of the PDS is the lowest obtained so far with high-energy instruments onboard satellites (∼ 2 × 10⁻⁴ counts s⁻¹ keV⁻¹ in the 15-200 keV energy band), thanks to the equatorial orbit of BeppoSAX. The background is also very stable, again thanks to the favorable orbit, and no modelling of its time variation is required (Frontera et al. 1997).
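To make the quoted requirement concrete, here is a back-of-the-envelope conversion (our own illustration; the flat-spectrum assumption is ours) of the OSSE sensitivity figure into an energy flux:

```python
# Back-of-the-envelope conversion (our illustration) of the OSSE
# requirement quoted above -- a few times 1e-6 ph cm^-2 s^-1 keV^-1 over
# 40-80 keV -- into an energy flux, assuming a flat photon spectrum.
ERG_PER_KEV = 1.602e-9

def flat_band_flux(photon_density, e1, e2):
    """erg cm^-2 s^-1 for a flat photon spectrum over [e1, e2] keV."""
    return photon_density * 0.5 * (e2**2 - e1**2) * ERG_PER_KEV

print("%.1e erg cm^-2 s^-1" % flat_band_flux(3e-6, 40.0, 80.0))  # ~1.2e-11
```

This is indeed the order of the nonthermal fluxes later measured by the PDS in Coma and A2256 (see below).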
Hard X-ray observations of clusters of galaxies by BeppoSAX
BeppoSAX observed seven clusters of galaxies with the main objective to detect additional nonthermal components in their spectra.
Coma
The first cluster was Coma, observed in December 1997 for an exposure time of about 91 ksec. A nonthermal excess with respect to the thermal emission was observed (Fusco-Femiano et al. 1999) at a confidence level of about 4.5σ (see Fig. 1). The thermal emission was measured with the HPGSPC, also onboard BeppoSAX, in the 4-20 keV energy range with a FWHM (∼ 1° × 1°) comparable to that of the PDS (∼ 1.3°, hexagonal). The average gas temperature is 8.5 (+0.6/−0.5) keV, consistent with the Ginga temperature of 8.2 keV (David et al. 1993). The χ² value shows a significant decrement when a second component, a power law, is added. On the other hand, if we consider a second thermal component instead of the nonthermal one, the fit requires a temperature greater than 40-50 keV. This unrealistic value may be interpreted as a strong indication that the detected hard excess is due to a nonthermal mechanism. The data are not able to give a good determination of the photon spectral index (0.7-2.5; 90%), but the nonthermal flux, ∼ 2.2 × 10⁻¹¹ erg cm⁻² s⁻¹ in the 20-80 keV energy range, is rather stable against variations of the power-law index. Binning the PDS data between 40-80 keV, the nonthermal flux is lower by a factor of about 2 with respect to the upper limit derived by the OSSE experiment (see Fig. 1 of Fusco-Femiano et al. 1999). At the same time, an RXTE observation of the Coma cluster (Rephaeli, Gruber & Blanco 1999) showed evidence for the presence of a second component in the spectrum of this cluster; in particular, the authors argued that this component is more likely to be nonthermal rather than a second thermal component at lower temperature. The first possible explanation for the detected excess is emission by a point source in the field of view of the PDS. The most qualified candidate is X Comae, a Seyfert 1 galaxy (z = 0.092). ROSAT PSPC, EXOSAT and Einstein IPC observations report approximately the same flux level of 1.6 × 10⁻¹² erg cm⁻² s⁻¹ in the 2-10 keV energy band. With a typical photon index of 1.8, the variability factor of the source needed to account for the detected excess is of the order of 10, which could still be plausible. But luckily enough, X Comae is located just on the edge of the field of view of the MECS (see Fig. 3 of Fusco-Femiano et al. 1999). Considering the location of X Comae and the lack of detection, it is possible to estimate an upper limit to the flux of the source of ∼ 4 × 10⁻¹² erg cm⁻² s⁻¹ (2-10 keV) when BeppoSAX observed Coma, which is a factor ∼ 7 lower than the flux of ∼ 2.9 × 10⁻¹¹ erg cm⁻² s⁻¹ required to account for the nonthermal HXR emission in the PDS. A recent mosaic of the Coma cluster with XMM-Newton (Briel et al. 2001) reports a tentative identification of 3 quasars in the central region, but the estimated fluxes are insufficient to reproduce the excess detected by BeppoSAX. However, we cannot exclude that an obscured source, like Circinus, a Seyfert 2 galaxy very active at high X-ray energies, may be present in the field of view of the PDS. With the MECS image it is possible to exclude the presence of this kind of source only in the central region of about 30′ in radius, unless the obscured source is within 2′ of the central bright core. We have estimated that the probability of finding an obscured source in the field of view of the PDS is of the order of 10%, and Kaastra et al. (1999) independently arrived at the same estimate.
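The point-source arithmetic above can be summarized in a few lines (a sketch using only the fluxes quoted in the text):

```python
# Flux-ratio arithmetic for the X Comae hypothesis (our sketch; fluxes in
# erg cm^-2 s^-1 in the 2-10 keV band, all taken from the text above).
f_required = 2.9e-11    # needed to mimic the PDS excess
f_historic = 1.6e-12    # Einstein/EXOSAT/ROSAT level
f_mecs_max = 4.0e-12    # MECS upper limit during the BeppoSAX pointing

print("variability factor needed: %.0f" % (f_required / f_historic))  # ~18
print("margin over the MECS limit: %.1f" % (f_required / f_mecs_max)) # ~7.2
```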
Another interpretation is that the nonthermal emission is due to relativistic electrons scattering the CMB photons, in particular the same electrons responsible for the radio halo emission present in the central region of the cluster, Coma C. In this case we can derive the volume-averaged intracluster magnetic field, B_X, using only observables, combining the X-ray and radio data (see Eq. 1 of Fusco-Femiano et al. 1999). The value of B_X is of the order of 0.15 µG and, assuming a radio halo size of R = 1 Mpc at the distance of Coma, the electron energy density (∼ 7 × 10⁻¹⁴ erg cm⁻³) can also be derived. The value of the magnetic field derived from the BeppoSAX observation seems to be inconsistent with the measurements of Faraday rotation of the polarized radiation of sources seen through the hot ICM, which give a line-of-sight value B_FR of the order of 2-6 µG (Kim et al. 1990; Feretti et al. 1995). But Feretti and collaborators also inferred the existence of a weaker magnetic field component, ordered on a scale of about a cluster core radius, with a line-of-sight strength in the range 0.1-0.2 µG, consistent with the value derived from BeppoSAX. So, we can argue that the 6 µG component is likely present in local cluster regions, while the overall cluster magnetic field may be reasonably represented by the weaker and ordered component. However, there are still many and large uncertainties in the value of the magnetic field determined using the FR measurements (Newman, Newman, & Rephaeli 2002). Other determinations of B based on different methods are in the range 0.2-0.4 µG (Hwang 1997; Bowyer & Berghöfer 1998; Sreekumar et al. 1996; Henriksen 1998). The equipartition value is of the order of ∼ 0.4 µG (Giovannini et al. 1993).
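The logic of this estimate can be sketched with a minimal computation (our own illustration; it uses the monochromatic approximation L_IC/L_syn = u_CMB/u_B rather than the full Eq. 1 of Fusco-Femiano et al. 1999):

```python
# A minimal sketch of the energy-density argument behind B_X (our
# illustration).  For a single electron population,
# L_IC / L_syn = u_CMB / u_B, with u_CMB = a T^4 (1+z)^4.
import math

A_RAD, T_CMB = 7.566e-15, 2.725          # erg cm^-3 K^-4, K

def b_cmb_muG(z):
    """Field strength whose energy density equals u_CMB at redshift z."""
    u_cmb = A_RAD * T_CMB**4 * (1.0 + z) ** 4
    return math.sqrt(8.0 * math.pi * u_cmb) * 1e6   # gauss -> microgauss

b_cmb = b_cmb_muG(0.023)                 # Coma redshift
b_x = 0.15                               # muG, the BeppoSAX IC estimate
print("B_CMB = %.2f muG" % b_cmb)        # ~3.4 muG
print("L_IC/L_syn = %.0f" % ((b_cmb / b_x) ** 2))
```

With B_X well below the equivalent CMB field, the same electrons necessarily radiate far more power by IC scattering than by synchrotron emission, which is why a hard X-ray tail is expected at all.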
However, alternative interpretations to the IC model have been proposed, essentially motivated by the discrepancy between the values of B_X and B_FR. Blasi & Colafrancesco (1999) have suggested a secondary electron production due to cosmic-ray interactions in the ICM. However, this model implies a γ-ray flux larger than the EGRET upper limit, unless the hard excess and the radio halo emission are due to different populations of electrons. A different mechanism is given by nonthermal bremsstrahlung from suprathermal electrons formed through the current acceleration of the thermal gas (Ensslin, Lieu, & Biermann 1999; Dogiel 2000; Sarazin & Kempner 2000; Blasi 2000). At present, due to the
low efficiency of the proposed acceleration processes and of the bremsstrahlung mechanism, these models would require an unrealistically high energy input, as pointed out by Petrosian (2001). Regarding the discrepancy between B_X and B_FR, Goldshmidt & Rephaeli already suggested in 1993 that this discrepancy could be alleviated by taking into consideration the expected spatial profiles of the magnetic field and relativistic electrons. More recently, it has been shown that IC models that include the effects of more realistic electron spectra, combined with the expected spatial profiles of the magnetic fields and anisotropies in the pitch angle distribution of the electrons, allow higher values of the intracluster magnetic field, in better agreement with the FR measurements (Brunetti et al. 2001; Petrosian 2001).
A2199
The cluster was observed in April 1997 for 100 ksec (Kaastra et al. 1999). The MECS data in the range 8-10 keV seem to show the presence of a hard excess with respect to the thermal emission. Between 9′ and 24′, the count rate is 5.4 ± 0.6 counts ks⁻¹, while the best-fit thermal model predicts only 3.4 counts ks⁻¹, so the excess is at a confidence level of ∼ 3.3σ. The PDS data are instead not sufficient to prove the existence of a hard tail. There are some difficulties in accounting for the presence of a nonthermal excess in this cluster, because the electrons responsible for the hard emission would have an energy of ∼ 4 GeV and a resulting IC lifetime of only ∼ 3 × 10⁸ yr. So, these electrons have to be replenished by a continuous acceleration process, and this is particularly difficult to explain in A2199, which is a bright cooling flow cluster: a regular cluster without the merger events able to release a fraction of the input energy in particle acceleration. However, a source of relativistic electrons may be given via the decay of pions produced by proton-proton collisions between intracluster cosmic rays and gas, as suggested by Blasi & Colafrancesco (1999). We have re-analyzed the MECS data, and Fig. 2 shows only one point above the thermal model, at the level of ∼ 2σ. However, the cluster is planned to be observed by XMM-Newton, which should be able to discriminate between these two different results of the MECS data analysis, considering the low average gas temperature of about 4.5 keV (David et al. 1993).
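Both the electron energy and the lifetime quoted above follow from standard Thomson-regime IC loss formulas; a minimal order-of-magnitude check (our own illustration, CGS units):

```python
# Order-of-magnitude check (our illustration, CGS units) of the ~4 GeV
# electron energy and ~3e8 yr IC lifetime quoted above, using the
# standard Thomson-regime loss rate on the CMB alone.
M_E, C, SIGMA_T = 9.109e-28, 2.998e10, 6.652e-25   # g, cm/s, cm^2
U_CMB = 4.2e-13                                    # erg/cm^3 at z ~ 0
ERG_PER_GEV = 1.602e-3

gamma = 4.0 * ERG_PER_GEV / (M_E * C**2)           # Lorentz factor, ~7800
t_ic = 3.0 * M_E * C / (4.0 * SIGMA_T * gamma * U_CMB)  # seconds
print("gamma = %.0f, t_IC = %.1e yr" % (gamma, t_ic / 3.156e7))
```

The result, ∼ 3 × 10⁸ yr, reproduces the lifetime quoted in the text.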
A2256
The cluster A2256 is the second cluster where BeppoSAX detected a clear excess (see Fig. 3), at about 4.6σ above the thermal emission. The temperature is about 7.4 ± 0.23 keV, consistent with the value determined by previous observations of ASCA, Einstein and Ginga. The thermal emission is measured by the MECS, taking into account the difference between the fields of view of the two instruments. Also in this case the χ² value shows a significant decrement when a second component, a power law, is added, and also in this case the fit with a second thermal component gives an unrealistic temperature, which can be interpreted as evidence in favour of a nonthermal mechanism for the second component present in the X-ray spectrum of A2256. The range of the photon index at the 90% confidence level is very large: 0.3-1.7. The flux of the nonthermal component is ∼ 1.2 × 10⁻¹¹ erg cm⁻² s⁻¹ in the 20-80 keV energy range, rather stable against variations of the photon index.
There is only one QSO in the field of view of the PDS, QSO 4C+79.16, observed by ROSAT with a count rate of ∼ 0.041 c/s, while about 1.2 c/s would be necessary to reproduce the observed nonthermal excess; considering that the QSO is ∼ 52′ off-axis, an unusual variability of about 2 orders of magnitude is required. The MECS image excludes the presence of an obscured source in the central region (∼ 30′ in radius) of the cluster. We want to stress that the analysis of A2256 regards two observations (46 & 96 ksec) with a time interval of ∼ 1 yr (Feb. 98 and Feb. 99), and this analysis does not show significant flux variations. In addition, we have re-observed the cluster about two years after the previous observation, and the two spectra are consistent (see Fig. 4); also in this case the observation is composed of two pointings with a time interval of ∼ 1 month, and the analysis does not show significant flux variations. So, these results, and the fact that the two clusters with a detected hard excess, Coma and A2256, both have extended radio emission, make the point source interpretation less plausible and strongly support the idea of a diffuse nonthermal mechanism involving the ICM.
The diffuse radio emission of A2256 is very complex. It is composed of a relic at a distance of about 8′ from the center, a broad region (1 × 0.3 Mpc) with a rather uniform and flat spectral index of 0.8 ± 0.1 between 610 and 1415 MHz (Bridle et al. 1979). There is a second, fainter extended component in the cluster center with a steeper radio spectral index of ∼ 1.8 (Bridle et al. 1979; Rengelink et al. 1997). Markevitch & Vikhlinin (1997), in their analysis of the ASCA data, noted a second component in the spectrum of A2256 in the central spherical bin of radius 3′. Their best fit is a power law with a photon index of 2.4 ± 0.3, which therefore favors a nonthermal component. Considering that there are no bright point sources in the ROSAT HRI image, they suggested the presence of an extended source. Also the joint ASCA GIS & RXTE PCA analysis (Henriksen 1999) is consistent with the detection of a nonthermal component in addition to a thermal component. The MECS data do not show this steep nonthermal component in the central bin because the energy range is truncated at a lower limit of 2 keV. So, in conclusion, the power law with slope 2.4 found in the ASCA data and the upper limit of 1.7 determined by BeppoSAX suggest that two tails could be present in the X-ray spectrum of A2256. The former might be due to the radio halo with the steep index of 1.8, which is not visible in the PDS (we estimate a flux of ∼ 4 × 10⁻¹³ erg cm⁻² s⁻¹), and the latter might be due to the relic with a flatter radio index of 0.8 ± 0.1, which indicates a broad region of reaccelerated electrons, probably the result of the ongoing merger event shown by a Chandra observation (Sun et al. 2002).
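These spectral indices are tied together by the standard synchrotron/IC mapping (a textbook relation, not specific to this analysis): for a single power-law electron population the IC X-ray photon index is

$$\Gamma_X = \alpha_R + 1 ,$$

so the relic ($\alpha_R = 0.8 \pm 0.1$) corresponds to $\Gamma_X \simeq 1.8$, just above the BeppoSAX upper limit of 1.7, while the steep central halo ($\alpha_R \simeq 1.8$) would give $\Gamma_X \simeq 2.8$, broadly consistent with the soft ASCA component (2.4 ± 0.3) and negligible in the PDS band.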
A1367
A BeppoSAX observation of Abell 1367 has not detected hard X-ray emission in the PDS energy range above 15 keV (P.I.: Y. Rephaeli). A1367 is a nearby cluster (z = 0.0215) that shows a relic at a distance of about 22′ from the center and a low gas temperature of ∼ 3.7 keV (David et al. 1993), which might explain the lack of thermal emission at energies above 15 keV. We do not expect the presence of nonthermal radiation for two reasons: the radio spectral index α_R = 1.90 ± 0.27 (Gavazzi & Trinchieri 1983) seems to indicate the absence of high-energy reaccelerated electrons, and in any case the steep spectrum gives a negligible flux in the PDS. Besides, the radio region has a limited extent of 8′, corresponding to 300 kpc. The source has also been observed by XMM-Newton and the data analysis is still in progress.
A3667
A3667 is one of the most spectacular clusters of galaxies. It contains one of the largest radio sources in the southern sky, with a total extent of about 30′, which corresponds to about 2.6 h₅₀⁻¹ Mpc. A similar but weaker region is present also to the south-west (Robertson 1991; Röttgering et al. 1997). The Mpc-scale radio relics may be originated by the ongoing merger visible in the optical region, in the X-rays, as shown by the elongated isophotes, and in the weak lensing map. The ASCA observation reports an average gas temperature of 7.0 ± 0.6 keV (Markevitch, Sarazin, & Vikhlinin 1999). The temperature map shows that the hottest region is in between the two groups of galaxies, confirming the merger scenario. The PDS field of view includes only the radio region in the north of the cluster. A long observation with the PDS (effective exposure time 44+69 ksec) reports a clear detection of hard X-ray emission up to about 35 keV, at a confidence level of ∼ 10σ. However, the fit with a thermal component at the average gas temperature indicates an upper limit for the nonthermal flux of ∼ 6.4 × 10⁻¹² erg cm⁻² s⁻¹ in the 20-80 keV energy range, which is a factor ∼ 3.4 and ∼ 2 lower than the nonthermal fluxes detected in Coma and A2256, respectively (see Fig. 5). In the IC interpretation, this flux upper limit combined with the radio synchrotron emission determines a lower limit to the volume-averaged intracluster magnetic field of 0.41 µG.
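Schematically (the same single-population IC argument used for Coma; a sketch, not the full formula of the paper), for a fixed observed synchrotron flux the predicted IC flux scales as

$$F_{IC} \;\propto\; F_{syn}\; u_{CMB}\; B^{-(1+\alpha_R)} ,$$

so an upper limit on $F_{IC}$ translates directly into a lower limit on the volume-averaged field, here 0.41 µG.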
Given the presence of such a large radio region in the NW of the cluster, a robust detection of a nonthermal component might be expected, instead of the upper limit reported by BeppoSAX. One possible explanation may be related to the radio spectral structure of the NW relic. The sharp edge of the radio source (α_R ∼ 0.5) is the site of particle acceleration (Roettiger, Burns & Stone 1999), while the progressive index steepening with increasing distance from the shock (α_R ∼ 1.5) would indicate particle ageing because of radiative losses. In the narrow shocked region, where particle reacceleration is at work, the magnetic field is expected to be amplified by adiabatic compression, with the consequence that the synchrotron emission is enhanced, thus leaving a limited number of electrons able to produce IC X-rays. In the post-shock region of the relic the electrons suffer strong radiative losses with no reacceleration, considering also that the relic is well outside the cluster core. Therefore, the electron energy spectrum develops a high-energy cutoff at γ < 10⁴ and the electron energy is not sufficiently high to emit IC radiation in the hard X-ray
band. Synchrotron emission is still detected from the post-shocked region because the magnetic field, which likely takes a long time to relax, remains strong enough there.
A754 & A119
The last two clusters observed by BeppoSAX, A754 and A119, show evident merger activity. It is plausible that a considerable fraction of the input energy during a merger process can be released in particle acceleration and re-emitted in various energy bands. The scope of these observations was to verify whether clusters showing merger events can produce nonthermal X-ray radiation also in the absence of the clear diffuse radio emission seen in Coma and A2256.
The rich and hot cluster A754 is considered the prototype of a merging cluster. X-ray observations report a violent merger event in this cluster (Henry & Briel 1995; Henriksen & Markevitch 1996; De Grandi & Molendi 2001), probably a very recent merger, as shown by a numerical hydro/N-body model (Roettiger, Stone, & Mushotzky 1998). Therefore, the intracluster medium of A754 appears to be a suitable place for the formation of radio halos or relics. As a consequence, radio and HXR observations of this cluster are relevant to verify the suggested link between the presence of nonthermal phenomena and merger activity in clusters of galaxies. The cluster has recently been observed with the NRAO VLA (Kassim et al. 2001), after our BeppoSAX proposal, suggesting the existence of a radio halo and at least one radio relic. The presence of a radio halo is confirmed by a deeper observation at higher resolution (Fusco-Femiano et al., in preparation). A754 was observed in hard X-rays with RXTE in order to search for a nonthermal component (Valinia et al. 1999), and the fit to the PCA and HEXTE data set an upper limit of ∼ 1.4 × 10⁻¹² erg cm⁻² s⁻¹ in the 10-40 keV band to the nonthermal emission.
A long BeppoSAX observation of A754 shows an excess at energies above about 45 keV with respect to the thermal emission at the temperature of 9.4 keV (see Fig. 6). The excess is at a confidence level of 3σ. The nonthermal flux is ∼ 1 × 10⁻¹¹ erg cm⁻² s⁻¹ in the range 40-100 keV, consistent with the flux upper limit determined by RXTE (∼ 1.6 × 10⁻¹² erg cm⁻² s⁻¹ in the range 10-40 keV). There are two possible origins for the detected excess. One is tied to the presence of the diffuse radio emission, and the other is the presence, in the field of view of the PDS, of the radio galaxy 26W20, discovered in the Westerbork radio survey (Harris et al. 1980). This source shows X-ray characteristics similar to those of a BL Lac object. The radio galaxy has had several X-ray observations, due to its close proximity to A754, and all these observations give a flux of ∼ 2.3 × 10⁻¹² erg cm⁻² s⁻¹ in the 0.5-3 keV energy range. The source shows variability (18% in 5 days in 1992). The fit to the SED of 26W20 (see Fig. 7), where the highest energy points refer to the PDS observation under the assumption that this source is responsible for the detected excess, requires a flat index of about 0.3 to extrapolate the flux detected by ROSAT into the PDS energy range, taking into account the angular response of the detector. Unfortunately, the source is not in the field of view of the MECS, because it is hidden by one of the calibration sources of the instrument. The conclusion is that a HXR observation with spatial resolution is necessary to discriminate between these two interpretations.
Finally, A119 was the last cluster observed by BeppoSAX to search for an additional nonthermal component in its X-ray spectrum. ROSAT PSPC, ASCA and BeppoSAX observations have shown a rather irregular and asymmetric X-ray brightness, suggesting that the cluster is not completely relaxed and may have undergone a recent merger (Cirimele et al. 1997; Markevitch et al. 1998; Irwin, Bregman & Evrard 1999; De Grandi & Molendi 2001). The average cluster temperature measured by BeppoSAX is 5.66 ± 0.16 keV within 20′ and is consistent with previous measurements by Einstein and EXOSAT. The excess with respect to the thermal emission at the average gas temperature measured by the MECS is at a confidence level of ∼ 2.8σ (see Fig. 8). The nonthermal flux is in the range 7-8 × 10⁻¹² erg cm⁻² s⁻¹ in the 20-80 keV energy range and
3-4 × 10⁻¹² erg cm⁻² s⁻¹ in the 2-10 keV energy band, for a photon spectral index in the range 1.5-1.8.
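The conversion between the two bands follows from a standard power-law band integral; a short sketch (our own illustration, using the mid-range 20-80 keV flux):

```python
# Standard power-law band conversion (our sketch): the flux between E1 and
# E2 scales as the integral of E * E**(-Gamma) dE for photon index Gamma.
def band_ratio(gamma, band_from, band_to):
    def integral(lo, hi):
        return (hi**(2.0 - gamma) - lo**(2.0 - gamma)) / (2.0 - gamma)
    return integral(*band_to) / integral(*band_from)

f_hard = 7.5e-12                         # 20-80 keV flux, erg cm^-2 s^-1
for gamma in (1.5, 1.8):
    f_soft = f_hard * band_ratio(gamma, (20.0, 80.0), (2.0, 10.0))
    print("Gamma = %.1f -> F(2-10 keV) = %.1e" % (gamma, f_soft))
```

For Γ = 1.5 this gives ∼ 2.9 × 10⁻¹² and for Γ = 1.8 ∼ 5.6 × 10⁻¹² erg cm⁻² s⁻¹, bracketing the quoted 2-10 keV range.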
A119 does not show evidence of a radio halo or relic, but the presence of a recent merger event could accelerate particles able to emit nonthermal radiation in the PDS energy range. However, the presence of 7 QSOs with redshifts in the range 0.14-0.58 makes it very unlikely that this possible excess, at a flux level of 3-4 × 10⁻¹² erg cm⁻² s⁻¹ in the 2-10 keV energy band, is due to a diffuse source. We can instead exclude that the observed excess is due to the radio source 3C29, an FR I source located in the field of view of the MECS at a distance of about 21′ from the BeppoSAX pointing.
Conclusions
BeppoSAX observed clear evidence of nonthermal emission in two clusters, Coma and A2256, both showing extended radio regions. In particular, the two observations of A2256 strongly support the presence of a diffuse nonthermal mechanism involving the ICM. These detections, and the lack of detection in other clusters, seem to indicate that the essential requirement for observing additional nonthermal components at the level of the PDS sensitivity is the presence of large regions of reaccelerated electrons, with Lorentz factors ∼ 10⁴, due to the balance between radiative losses and reacceleration gains in turbulence generated by merger events, which must be very recent considering the short lifetime of the electrons.
BeppoSAX, as is well known, ceased its activity at the end of April 2002. The next missions able to search for nonthermal components are: • INTEGRAL. In particular, with IBIS, which has a spatial resolution of 12′, we have the opportunity a) to localize the source of the nonthermal X-ray emission.
In the case of a point source it is possible to identify it, while in the case of a diffuse source it is possible to verify whether the nonthermal emission is mainly concentrated in the cluster central region or in the external region, as predicted by the model for the Coma cluster of Brunetti et al. (2001), or whether it is uniformly spread over the whole radio halo present in the cluster; and b) to obtain a better determination of the photon spectral index.
• ASTRO-E. The Hard X-ray Detector (HXD) has a field of view of 34′ × 34′, similar to that of the MECS. A positive detection of the nonthermal emission already measured by BeppoSAX in Coma and A2256 would eliminate the ambiguity between a diffuse emission involving the intracluster gas and a point source, considering that the MECS images do not show evidence for point sources.
• The future missions are represented by NEXT and CONSTELLATION.
These missions will be operative in the coming years, but the energy range and the spectral capabilities of XMM-Newton/EPIC already give the possibility to localize nonthermal components in regions of low gas temperature, as shown by the simulation regarding the radio relic of A2256 performed using the nonthermal flux measured by BeppoSAX (see Fig. 9). This region has a gas temperature of 4 keV, likely associated with the ongoing merger shown by a Chandra observation (Sun et al. 2002). So, with XMM-Newton we should have the possibility, by comparing the X-ray and radio structures, to constrain the profiles of the magnetic field and of the relativistic electrons.
Figure 1. Coma cluster - HPGSPC and PDS data. The continuous line represents the best fit with a thermal component at the average cluster gas temperature of 8.5 (+0.6/−0.5) keV.
Figure 2. A2199 - MECS data. The points represent the ratio of the data to the MEKAL model in the energy range 8-10 keV.
Figure 3. Abell 2256 - MECS and PDS data. The continuous line represents the best fit with a thermal component at the average cluster gas temperature of 7.47 ± 0.35 keV.
Figure 4. A2256 - PDS data of two observations (Feb. 98/Feb. 99 and July 01/August 01). The continuous lines represent the best fit to the two data sets with a thermal component at the average cluster gas temperature of 7.47 ± 0.35 keV; the error bars are at 1σ.
Figure 5. A3667 - PDS data. The continuous lines represent the best fit with a thermal component at the average cluster gas temperature of 7.0 ± 0.6 keV.
Figure 6. A754 - MECS and PDS data. The continuous lines represent the best fit with a thermal component at the average cluster gas temperature of 9.42^{+0.16}_{-0.17} keV. The flux is ∼ 1 × 10^{-11} erg cm^{-2} s^{-1} in the range 40-100 keV, consistent with the flux upper limit determined by RXTE (∼ 1.6 × 10^{-12} erg cm^{-2} s^{-1} in the range 10-40 keV).
Figure 7. Spectral energy distribution for 26W20. The highest energy points refer to the PDS observation. The dotted line is the fit to the SED.
Figure 8. A119 - MECS and PDS data. The continuous lines represent the best fit with a thermal component at the average cluster gas temperature of 5.66 ± 0.16 keV.
Figure 9. A2256 - Residuals in the form of a ratio of data to a thermal MEKAL model. The best fit temperature for the simulated spectrum is ∼ 4 keV. Full circles and stars are for the PN single and double events spectra, and open circles are for the MOS spectrum.
Bazzano, A., Fusco-Femiano, R., La Padula, C., Polcaro, V.F., Ubertini, P., & Manchanda, R.K. 1984, ApJ, 279, 515
Bazzano, A., Fusco-Femiano, R., Ubertini, P., Perotti, F., Quadrini, E., Court, A.J., Dean, N.A., Dipper, A.J., Lewis, R., & Stephen, J.B. 1990, ApJ, 362, L51
Blasi, P., & Colafrancesco, S. 1999, APh, 12, 169
Blasi, P. 2000, ApJ, 532, L9
Bowyer, S., & Berghöfer, T.W. 1998, ApJ, 506, 502
Bridle, A., Fomalont, E., Miley, G., & Valentijn, E. 1979, A&A, 80, 201
Briel, U.G., et al. 2001, A&A, 365, L60
Brunetti, G., Setti, G., Feretti, L., & Giovannini, G. 2001, MNRAS, 320, 365
Cirimele, G., Nesci, R., & Trevese, D. 1997, ApJ, 475, 11
David, L.P., Slyz, A., Jones, C., Forman, W., & Vrtilek, S.D. 1993, ApJ, 412, 479
De Grandi, S., & Molendi, S. 2001, ApJ, 551, 153
Delzer, C., & Henriksen, M. 1998, AAS, 193, 3806
Dogiel, V.A. 2000, A&A, 357, 66
Ensslin, T., Lieu, R., & Biermann, P.L. 1999, A&A, 344, 409
Feretti, L., Dallacasa, D., Giovannini, G., & Tagliani, A. 1995, A&A, 302, 680
Frontera, F., Costa, E., Dal Fiume, D., Feroci, M., Nicastro, L., Orlandini, M., Palazzi, E., & Zavattini, G. 1997, A&AS, 122, 357
Fusco-Femiano, R., Dal Fiume, D., Feretti, L., Giovannini, G., Grandi, P., Matt, G., Molendi, S., & Santangelo, A. 1999, ApJ, 513, L21
Fusco-Femiano, R. 1999, in Proc. MPE Report 271, "Diffuse Thermal and Relativistic Plasma in Galaxy Clusters", ed. H. Böhringer, L. Feretti, & P. Schuecker, 191
Fusco-Femiano, R., Dal Fiume, D., De Grandi, S., Feretti, L., Giovannini, G., Grandi, P., Malizia, A., Matt, G., & Molendi, S. 2000, ApJ, 534, L7
Fusco-Femiano, R., Dal Fiume, D., Orlandini, M., Brunetti, G., Feretti, L., & Giovannini, G. 2001, ApJ, 552, L97
Gavazzi, G., & Trinchieri, G. 1983, ApJ, 270, 410
Giovannini, G., Feretti, L., Venturi, T., Kim, K.T., & Kronberg, P.P. 1993, ApJ, 406, 399
Goldshmidt, O., & Rephaeli, Y. 1993, ApJ, 411, 518
Harris, D.E., et al. 1980, A&A, 90, 283
Henry, J.P., & Briel, U.G. 1995, ApJ, 443, L9
Henriksen, M.J., & Markevitch, M.L. 1996, ApJ, 466, L79
Henriksen, M. 1998, PASJ, 50, 389
Henriksen, M. 1999, ApJ, 511, 666
Hwang, C.-Y. 1997, Science, 278, 1917
Irwin, J.A., Bregman, J.N., & Evrard, A.E. 1999, ApJ, 519, 518
Lieu, R., Mittaz, J.P.D., Bowyer, S., Lockman, F.J., Hwang, C.-Y., & Schmitt, J.H.M.M. 1996, ApJ, 458, L5
Kaastra, J.S., Lieu, R., Mittaz, J.P.D., Bleeker, J.A.M., Mewe, R., Colafrancesco, S., & Lockman, F.J. 1999, ApJ, 519, L119
Kassim, N.E., Clarke, T.E., Enßlin, T.A., Cohen, A.S., & Neumann, D.M. 2001, ApJ, 559, 785
Kim, K.T., Kronberg, P.P., Dewdney, P.E., & Landecker, T.L. 1990, ApJ, 355, 29
Matt, G., et al. 1999, A&A, 341, L39
Markevitch, M., & Vikhlinin, A. 1997, ApJ, 474, 84
Markevitch, M., Forman, W.R., Sarazin, C.L., & Vikhlinin, A. 1998, ApJ, 503, 77
Markevitch, M., Sarazin, C.L., & Vikhlinin, A. 1999, ApJ, 521, 526
Molendi, S., De Grandi, S., & Fusco-Femiano, R. 2000, ApJ, 534, 43
Newman, W.I., Newman, A.L., & Rephaeli, Y. 2002, astro-ph/0204451
Petrosian, V. 2001, ApJ, 557, 560
Rengelink, R.B., et al. 1997, A&AS, 124, 259
Rephaeli, Y. 1979, ApJ, 227, 364
Rephaeli, Y., Gruber, D.E., & Rothschild, R.E. 1987, ApJ, 320, 139
Rephaeli, Y., & Gruber, D.E. 1988, ApJ, 333, 133
Rephaeli, Y., Ulmer, M., & Gruber, D.E. 1994, ApJ, 429, 554
Rephaeli, Y., Gruber, D.E., & Blanco, P. 1999, ApJ, 511, L21
Robertson, J.G. 1991, Aust. J. Phys., 44, 729
Roettiger, K., Stone, J.M., & Mushotzky, R.F. 1998, ApJ, 493, 62
Roettiger, K., Burns, J.O., & Stone, J.M. 1999, ApJ, 518, 603
Röttgering, H., Snellen, I., Miley, G., de Jong, J.P., Hanish, R.J., & Perley, R. 1994, ApJ, 436, 654
Röttgering, H.J.A., Wieringa, M.H., Hunstead, R.W., & Ekers, R.D. 1997, MNRAS, 290, 577
Sarazin, C.L., & Kempner, J.C. 2000, ApJ, 533, 73
Sreekumar, P., et al. 1996, ApJ, 464, 628
Sun, M., Murray, S.S., Markevitch, M., & Vikhlinin, A. 2002, ApJ, 565, 867
Valinia, A., Henriksen, M.J., Loewenstein, M., Roettiger, K., Mushotzky, R.F., & Madejski, G. 1999, ApJ, 515, 42
| [] |
[
"Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu",
"Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu",
"Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu",
"Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu"
] | [
"Amit Kanigel \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n\nPaul Scherrer Institute\n5232Villigen PSICHSwitzerland\n",
"Amit Keren \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n",
"Yaakov Eckstein \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n",
"Arkady Knizhnik \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n\nRutherford Appleton Laboratory\nChilton DidcotOX11 0QXOxfordshireU.K\n",
"James S Lord \nPaul Scherrer Institute\n5232Villigen PSICHSwitzerland\n",
"Alex Amato ",
"Amit Kanigel \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n\nPaul Scherrer Institute\n5232Villigen PSICHSwitzerland\n",
"Amit Keren \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n",
"Yaakov Eckstein \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n",
"Arkady Knizhnik \nPhysics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael\n\nRutherford Appleton Laboratory\nChilton DidcotOX11 0QXOxfordshireU.K\n",
"James S Lord \nPaul Scherrer Institute\n5232Villigen PSICHSwitzerland\n",
"Alex Amato "
] | [
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Paul Scherrer Institute\n5232Villigen PSICHSwitzerland",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Rutherford Appleton Laboratory\nChilton DidcotOX11 0QXOxfordshireU.K",
"Paul Scherrer Institute\n5232Villigen PSICHSwitzerland",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Paul Scherrer Institute\n5232Villigen PSICHSwitzerland",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Physics Department\nTechnion-Israel Institute of Technology\n32000HaifaIsrael",
"Rutherford Appleton Laboratory\nChilton DidcotOX11 0QXOxfordshireU.K",
"Paul Scherrer Institute\n5232Villigen PSICHSwitzerland"
] | [] | We characterize the spontaneous magnetic field, and determine the associated temperature Tg, in the superconducting state of (CaxLa1−x)(Ba1.75−xLa0.25+x)Cu3Oy using zero and longitudinal field µSR measurements for various values of x and y. Our major findings are: (I) Tg and Tc are controlled by the same energy scale, (II) the phase separation between hole poor and hole rich regions is a microscopic one, and (III) spontaneous magnetic fields appear gradually with no moment size evolution. | 10.1103/physrevlett.88.137003 | [
"https://arxiv.org/pdf/cond-mat/0110346v2.pdf"
] | 1,494,963 | cond-mat/0110346 | 870fe591b0a29f1e874b9260c364daec0f70ae23 |
Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu
3 Mar 2002 (November 20, 2018)
Amit Kanigel
Physics Department
Technion-Israel Institute of Technology
32000HaifaIsrael
Paul Scherrer Institute
5232Villigen PSICHSwitzerland
Amit Keren
Physics Department
Technion-Israel Institute of Technology
32000HaifaIsrael
Yaakov Eckstein
Physics Department
Technion-Israel Institute of Technology
32000HaifaIsrael
Arkady Knizhnik
Physics Department
Technion-Israel Institute of Technology
32000HaifaIsrael
Rutherford Appleton Laboratory
Chilton DidcotOX11 0QXOxfordshireU.K
James S Lord
Paul Scherrer Institute
5232Villigen PSICHSwitzerland
Alex Amato
Common energy scale for magnetism and superconductivity in underdoped cuprates: a µSR investigation of (Ca x La 1−x )(Ba 1.75−x La 0.25+x )Cu
3 Mar 2002 (November 20, 2018)
We characterize the spontaneous magnetic field, and determine the associated temperature Tg, in the superconducting state of (CaxLa1−x)(Ba1.75−xLa0.25+x)Cu3Oy using zero and longitudinal field µSR measurements for various values of x and y. Our major findings are: (I) Tg and Tc are controlled by the same energy scale, (II) the phase separation between hole poor and hole rich regions is a microscopic one, and (III) spontaneous magnetic fields appear gradually with no moment size evolution.
There is growing evidence that at low temperatures (T), cuprates phase-separate into regions that are hole "poor" and hole "rich" 1. While hole rich regions become superconducting below T_c, the behavior of hole poor regions at these temperatures is not quite clear. Some data support the existence of magnetic moments in these regions. In impure cases, like Zn or Li doped YBCO, the impurity creates both the hole poor regions 2 and the magnetic moments 3,4. In pure cases, such as LSCO, these magnetic moments are created spontaneously and undergo a spin-glass-like freezing at T_g 5. However, there are still many open questions regarding these moments and the spontaneous magnetic fields associated with them. For example: Is there a true phase transition at T_g? What is the field profile and how is it different from, or similar to, a canonical spin glass? Is the field confined solely to the hole poor regions or does it penetrate the hole rich regions? Also, the interplay between magnetism and superconductivity is not clear. Is a strong magnetic background beneficial or detrimental to superconductivity?
We address these questions by performing zero and longitudinal field muon spin relaxation experiments on a series of polycrystalline (Ca_x La_{1-x})(Ba_{1.75-x} La_{0.25+x})Cu_3 O_y (CLBLCO) samples. This superconductor belongs to the 1:2:3 family and has several properties that make it ideal for our purpose. It is tetragonal throughout its range of existence 0 ≤ x ≲ 0.5, so there is no ordering of CuO chains. Simple valence sums 6, more sophisticated bond-valence calculations 7, and thermoelectric power measurements 8 show that the hole concentration is x independent. As shown in Fig. 1, by changing y, for a constant value of x, the full superconductivity curve, from the under-doped to the over-doped, can be obtained. Finally, for different Ca contents, parallel curves of T_c vs y are generated. Therefore, with CLBLCO one can move continuously, and with minimal structural changes, from a superconductor resembling YBCO to one similar to LSCO. The preparation of the samples is described elsewhere 9. Oxygen content was determined using iodometric titration. All the samples were characterized using X-ray diffraction and were found to be single phase. T_c is obtained from resistivity measurements. We also verified using TF-µSR that CLBLCO respects the Uemura relations 10 and that it is a bulk superconductor.
The µSR experiments were done at two facilities. When a good determination of the base line was needed we used the ISIS pulsed muon facility, Rutherford Appleton Laboratory, UK. When high timing resolution was required we worked at the Paul Scherrer Institute, Switzerland (PSI). Most of the data were taken with a ^4He cryostat. However, in order to study the internal field profile we had to avoid dynamical fluctuations by freezing the moments completely. For this purpose we used the ^3He cryostat at ISIS with a base temperature of 350 mK. Typical muon asymmetry [A(t)] depolarization curves, proportional to the muon polarization P_z(t), are shown in Fig. 2(a) for different temperatures in the x = 0.1 and y = 7.012 (T_c = 33.1 K) sample. The change of the polarization shape with temperature indicates a freezing process, and the data can be divided into three temperature regions. In region (I), given by T ≳ 8 K, the muon relaxes according to the well known Kubo-Toyabe (KT) function, typical of the case where only frozen nuclear moments are present 11. In region (II), bounded by 8 K ≳ T ≳ 3 K, part of the polarization relaxes fast and the rest as in the first region. As the temperature is lowered the fast portion increases at the expense of the slow one. Moreover, the relaxation rate in the fast portion seems independent of temperature. Finally, at long time the asymmetry relaxes to zero. In region (III), where T ≲ 3 K, the asymmetry at long times no longer relaxes to zero, but instead recovers to a finite value. This value is ≃ 1/3 of the initial asymmetry A_z(0).
To demonstrate that the internal field is static at base temperature, the muon polarization was measured with an external field applied parallel to the initial muon spin polarization. This geometry allows one to distinguish between dynamic and static internal fields. In the dynamic case the asymmetry is field independent 12. In contrast, in the static case the total field experienced by the muon is a vector sum of H and the internal fields, which are of order ⟨B^2⟩^{1/2}. For H ≫ ⟨B^2⟩^{1/2} the total field is nearly parallel to the polarization. Therefore, in the static case, as H increases, the depolarization decreases, and the asymmetry recovers to its initial value. Because we are dealing with a superconductor this field sweep was done in field-cooled conditions. Every time the field was changed the sample was warmed up above T_c and cooled down in a new field. The results are shown in Fig. 2(b). At an external field of 250 G, the total asymmetry is nearly recovered. Considering the fact that the internal field is smaller than the external one due to the Meissner effect, this recovery indicates that the internal field is static and of the order of tens of Gauss.
We divide the data analysis into two parts: high temperatures (region II), and base temperature. First we discuss region II. Here we focus on the determination of T g . For that purpose we fit a combination of a fast relaxing function and a KT function to the data 13
A_z(t) = A_m exp(-√(λt)) + A_n KT(t) ,   (1)
where A_m denotes the amplitude of the magnetic part, λ is the relaxation rate of the magnetic part, and A_n is the amplitude of the nuclear part. The relaxation rate of the KT part was determined at high temperatures and is assumed to be temperature independent. The sum A_m + A_n is constrained to be equal to the total initial asymmetry at high temperatures. The relaxation rate λ is common to all temperatures. The solid lines in Fig. 2 are the fits to the data using Eq. 1.
The success of this fit indicates the simultaneous presence of two phases in the sample; part of the muons probe the magnetic phase while others probe only nuclear moments. As the temperature decreases A_m, which is presented in the inset of Fig. 3, grows at the expense of A_n. At low temperatures A_m saturates to the full muon asymmetry. A similar temperature dependence of A_m is found in all samples. The origin of the magnetic phase is electronic moments that slow down and freeze in a random orientation. The fact that λ is temperature independent means that in the magnetic phase γ_µ⟨B^2⟩^{1/2}, where γ_µ is the muon gyromagnetic ratio, is temperature independent. In other words, as the temperature is lowered, more and more parts of the sample become magnetic, but the moments in these parts saturate upon freezing.
Our criterion for T_g is the temperature at which A_m is half of the total muon polarization, as demonstrated by the vertical line in the inset of Fig. 3. The phase diagram shown in Fig. 3 represents T_g for various samples differing in Ca and O contents. This diagram is systematic and rather smooth, suggesting good control of sample preparation. As expected, for constant x, higher doping gives lower T_g.
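As a concrete illustration of this criterion, the following minimal Python sketch (ours, not from the original analysis; the function name and the numbers are invented placeholders) extracts T_g by interpolating the measured magnetic amplitude to its 50% crossing:

```python
import numpy as np

def tg_from_amplitude(temps, a_m, a_total):
    """Tg criterion: the temperature at which the magnetic amplitude A_m
    reaches half of the total muon polarization (cf. inset of Fig. 3)."""
    frac = np.asarray(a_m, dtype=float) / a_total
    order = np.argsort(frac)  # A_m grows on cooling, so sort by fraction
    return float(np.interp(0.5, frac[order], np.asarray(temps, dtype=float)[order]))

# Illustrative data only: amplitudes rising as the sample is cooled
temps = [12.0, 10.0, 8.0, 6.0, 4.0, 2.0]
a_m   = [0.01, 0.04, 0.09, 0.14, 0.19, 0.21]
print(tg_from_amplitude(temps, a_m, a_total=0.22))  # ~7 K for these numbers
```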
We have singled out three groups of samples with a common T_g = 11, 8 and 5 K, as shown in Fig. 3 by the horizontal solid lines. These samples are represented in the phase diagram in Fig. 1 by the dotted lines. The phase diagram, containing both T_g and T_c, is the first main finding of this work. It provides clear evidence of the important role of the magnetic interactions in high temperature superconductivity. In fact, this phase diagram is consistent with recent theories 14 of hole-pair boson motion in an antiferromagnetic background. Those theories conclude that T_c ∝ J n_s, where n_s is the superconducting carrier density, and J is the antiferromagnetic coupling energy 15. From the measurements at constant x we see that T_g ∝ J f(n_s), where f is some decreasing function of n_s. We assume that n_s = n_s[K(x)∆y], where ∆y = y - 7.15 is the chemical doping measured from optimum, and K is a scaling parameter which relates chemical to mobile-charge doping. Since T_c^max ∝ J n_s(optimum), both T_c/T_c^max and T_g/T_c^max should be functions only of K(x)∆y. We find K(x) by making all T_c/T_c^max collapse onto one curve. This is demonstrated in the upper inset of Fig. 1. Using these values of K(x) we also plot T_g/T_c^max as a function of K(x)∆y in the lower inset of Fig. 1. Again all data sets collapse onto a single curve. This indicates that the same single energy scale J controls both the superconducting and magnetic transitions.

We now turn to discuss the muon depolarization at base temperature. In this case all the muons experience only a static magnetic field, as proven above. This allows one to reconstruct the internal field distribution out of the polarization curve. The polarization of a muon spin experiencing a unique field B is given by P_z(t) = cos^2(θ) + sin^2(θ) cos(γ|B|t), where θ is the angle between the field and the initial spin direction. When there is an isotropic distribution of fields, a 3D powder averaging leads to
P_z(t) = 1/3 + (2/3) ∫_0^∞ ρ(|B|) cos(γ|B|t) B^2 dB   (2)
where ρ(|B|) is the distribution of |B|. Therefore, the polarization is given by the Fourier transform of ρ(|B|)B^2 and has a 1/3 base line. When the distribution of B is centered around zero field, ρ(|B|)B^2 is a function with a peak at B̄ and a width ∆, and both these numbers are of the same order of magnitude [e.g. Fig. 4(b)]. Therefore we expect the polarization to have a damped oscillation and to recover to 1/3, a phenomenon known as the dip [e.g. the inset in Fig. 4(b)]. Gaussian, Lorentzian and even exponential random field distributions 16, and, more importantly, all known canonical spin glasses, produce polarization curves that have a dip before the 1/3 recovery. Furthermore, a dipless polarization curve that saturates to 1/3 cannot be explained using dynamical arguments. Therefore, the most outstanding feature of the muon polarization curve at base temperature is the fact that no dip is present, although there is a 1/3 tail. This behavior was found in all of our samples with T_c > 7 K, and also in Ca doped YBCO 17 and Li doped YBCO 18.
The lack of the dip in P_z(t) can tell much about the internal field distribution. It means that B̄ is much smaller than ∆. In that case the oscillations will be over-damped and the polarization dipless! In Fig. 4 we show, in addition to the B̄ ≃ ∆ case described above [panel (b)], a field distribution that peaks around zero [panel (a)]. Here B̄ is smaller than ∆, and, indeed, the associated polarization in the inset is dipless. Thus in order to fit the base temperature polarization curve we should look for ρ(|B|)B^2 with most of its weight around zero field. This means that ρ(|B|) diverges like 1/B^2 as |B| → 0, namely, there is an abnormally high number of low field sites.
It also means that the phase separation is not a macroscopic one. If it were, all muons in the field free part would probe only nuclear moments and their polarization curve should have a dip, or at least its beginning, as in the high temperature data. The same would apply for the total polarization curve, in contrast to observation. Thus, the superconducting and magnetic regions are intercalated on a microscopic scale (∼ 20 Å) 19. This is the second main finding of this work.
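The distinction between the two field profiles can be made concrete with a small numerical evaluation of Eq. (2). The following Python sketch is ours and purely illustrative: the field scales and widths are arbitrary choices, not fitted values. It computes P_z(t) for a weight ρ(|B|)B^2 peaked at a finite field, which produces a dip, and for a weight peaked at zero field, which produces the dipless, overdamped recovery to 1/3 described above:

```python
import numpy as np

GAMMA_MU = 2 * np.pi * 13.55e-3  # muon gyromagnetic ratio, rad / (microsecond Gauss)

def powder_pz(weight, B, t):
    """Powder-averaged muon polarization of Eq. (2):
    P_z(t) = 1/3 + (2/3) Int w(B) cos(gamma B t) dB, with w(B) = rho(|B|) B^2."""
    w = weight / np.trapz(weight, B)             # normalize the field weight
    cosines = np.cos(GAMMA_MU * np.outer(t, B))  # cos(gamma |B| t) on the (t, B) grid
    return 1.0 / 3.0 + 2.0 / 3.0 * np.trapz(cosines * w, B, axis=1)

B = np.linspace(0.0, 500.0, 5000)  # |B| grid in Gauss
t = np.linspace(0.0, 12.0, 600)    # time grid in microseconds

# Case of Fig. 4(b): rho(|B|) B^2 peaked at a finite field (mean ~ width)
# -> damped oscillation with a dip, as in canonical spin glasses.
w_peaked = np.exp(-0.5 * ((B - 80.0) / 30.0) ** 2)

# Case of Fig. 4(a): rho(|B|) ~ 1/B^2 at small |B| (many low-field sites),
# so rho B^2 is maximal at B = 0 -> overdamped, dipless recovery to 1/3.
w_zero_peaked = np.exp(-B / 60.0)

pz_dip = powder_pz(w_peaked, B, t)
pz_dipless = powder_pz(w_zero_peaked, B, t)
```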
The special internal field distribution, and the nature of the gradual freezing of the spins, can be explained by the intrinsic inhomogeneity of the hole concentration. The part of the sample that is hole poor, and for that reason is "more" antiferromagnetic, will freeze, while the part which is hole rich will not freeze at all. The variation in the freezing temperature of different parts of the sample can be explained by the distribution of sizes and hole concentrations in these antiferromagnetic islands 20. The large number of low field sites results from the fact that the magnetic field generated in the magnetic regions penetrates into the hole rich regions, but only partially.
To improve our understanding of the muon polarization, we performed simulations of a toy model aimed at reproducing the results described above. A 2D 100 × 100 square lattice is filled with two kinds of moments, nuclear and electronic. All the nuclear moments are of the same size, they are frozen, and they point in random directions. Out of the electronic moments only a small fraction p is assumed to be frozen; they represent magnetic regions with uncompensated antiferromagnetic interactions. Since these regions may vary in size, the moments representing them are random, up to a maximum size. The frozen electronic moments induce spin polarization in the other electronic moments surrounding them. Following the work of others 21, we use a decaying staggered spin susceptibility which we take to be exponential, namely,
χ′(r) = (-1)^{n_x + n_y} exp(-r/ξ)   (3)
where r = n_x a_x + n_y a_y represents the position of the neighboring Cu sites, a is the lattice vector, and ξ is the characteristic length scale. Because of this decay, at low frozen spin concentration, large parts of the lattice are practically field free (except for nuclear moments). However, the important point is that no clear distinction between magnetic and field free (superconducting) regions exists.
The muon polarization time evolution in this kind of field distribution is numerically simulated. The interaction between the muon and all the other moments is taken to be dipolar, and ξ is taken to be 3 lattice constants 1,2. The dashed line in Fig. 2 is a fit to the T = 350 mK data, which yields p = 15% and a maximum moment size ≃ 0.06 µ_B. As can be seen, the line fits the data very well. However, as expected, the fit is sensitive to pξ^2 only, namely the effective area of the magnetic islands, so a longer ξ would have given a smaller p. The field distributions and the polarization curve shown in Fig. 4 were actually generated using the simulation. In (a) the spin density is 15% while in (b) the density is 35%.
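The essential ingredients of such a toy model can be sketched in a few lines of Python. The sketch below is our own schematic reconstruction, not the authors' code: it drops the vector nature of the dipolar field and all physical prefactors, and the helper names are invented. It shows how frozen moments of random size polarize their surroundings via the staggered susceptibility of Eq. (3), and how the resulting |B| distribution at random muon sites can be histogrammed (cf. Fig. 4):

```python
import numpy as np

rng = np.random.default_rng(1)

L, XI, P_FROZEN, M_MAX = 100, 3.0, 0.15, 0.06  # lattice size, xi, frozen fraction, max moment (mu_B)

x = np.arange(L)
NX, NY = np.meshgrid(x, x, indexing="ij")

def induced_moments():
    """Scatter frozen electronic moments of random size on the lattice and let
    each one polarize its surroundings via the staggered chi'(r) of Eq. (3)."""
    m = np.zeros((L, L))
    n_frozen = int(P_FROZEN * L * L)
    sites = rng.choice(L * L, size=n_frozen, replace=False)
    for s, m0 in zip(sites, M_MAX * rng.random(n_frozen)):
        ix, iy = divmod(int(s), L)
        dx, dy = NX - ix, NY - iy
        m += m0 * (-1.0) ** (np.abs(dx) + np.abs(dy)) * np.exp(-np.hypot(dx, dy) / XI)
    return m

def field_magnitudes(m, n_mu=2000):
    """Scalar stand-in for the dipolar field, |sum_i m_i / r_i^3|, at random
    interstitial muon positions (physical prefactors absorbed into the units)."""
    out = np.empty(n_mu)
    for k in range(n_mu):
        mx, my = rng.random(2) * L
        r = np.hypot(NX + 0.5 - mx, NY + 0.5 - my)
        out[k] = abs(np.sum(m / np.maximum(r, 0.5) ** 3))
    return out

absB = field_magnitudes(induced_moments())
rho, edges = np.histogram(absB, bins=100, density=True)  # rho(|B|), cf. Fig. 4(a)
```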
In panel (c) of Fig. 2 we show the spin polarization for different concentrations of frozen moments, varying from 0% to 35%, with the same ξ = 3. The resemblance between the simulation results and the muon polarization as a function of temperature in panel (a) leads us to our third conclusion, that the freezing process is mostly a growth in the total area of the frozen AF islands.
We are now in a position to address the questions presented in the introduction. The appearance of a spontaneous magnetic field in CLBLCO is a gradual process. As the temperature is lowered, microscopic regions of frozen moments appear in the samples; their area increases but the moments do not. In the ground state the field profile is very different from that of a canonical spin glass or any other standard magnet. It could only be generated by a microscopic intercalation of an abnormal number of zero field regions with magnetic regions, without a clear distinction between the two. Finally, and most importantly, the phase diagram containing both T_c and T_g leads us to believe that these temperatures are determined by the same energy scale, given by J.
FIG. 1. T_c vs. y for (Ca_x La_{1-x})(Ba_{1.75-x} La_{0.25+x})Cu_3 O_y. The dashed lines indicate samples with equal T_g. Insets: T_c/T_c^max and T_g/T_c^max as a function of K(x)∆y, where ∆y = y - 7.15, and K(x) is chosen so that all T_c/T_c^max data sets collapse to a single curve.

FIG. 2. (a) ZF-µSR spectra obtained in a x = 0.1, y = 7.012 sample. The solid lines are fits to the data using Eq. 1; the dashed line is a fit using the simulation as described in the text. (b) µSR spectra obtained in longitudinal fields from the x = 0.4, y = 6.984 sample at 350 mK. (c) Polarization curves generated by the simulation program as described in the text.

FIG. 3. T_g vs. y. The horizontal solid lines are the equal-T_g lines appearing in Fig. 1. Inset: Magnetic amplitude as a function of temperature for a x = 0.3, y = 6.994 sample. The arrow indicates T_g of that sample.

FIG. 4. (a) The internal field distribution extracted from the simulations for the case of correlation length ξ = 3 lattice constants, maximum moment size of 0.06 µ_B, and magnetic moment concentration p = 15%. Inset: The muon spin polarization for that distribution. (b) The same as above for the case of p = 35%.
We would like to thank the PSI and ISIS facilities for their kind hospitality and continuing support of this project. We acknowledge very helpful discussions with Assa Auerbach and Ehud Altman. This work was funded by the Israeli Science Foundation, the EU-TMR program, and the Technion V. P. R fund -Posnansky, and P. and E. Nathen, research funds.
C. Howald et al., cond-mat/0101251; J.M. Tranquada et al., Nature 375, 561 (1995);
V.J. Emery and S.A. Kivelson, Physica C 209, 597 (1993).
B. Nachumi et al., Phys. Rev. Lett. 77, 5421 (1996);
S.H. Pan et al., Nature 401, 746 (2000).
P. Mendels et al., Phys. Rev. B 49, 10035 (1994).
J. Bobroff et al., Phys. Rev. Lett. 86, 4116 (2001).
C. Niedermayer et al., Phys. Rev. Lett. 80, 3843 (1998);
F.C. Chou et al., Phys. Rev. Lett. 71, 2323 (1993).
O. Chmaissem et al., Phys. Rev. B 63, 174510 (2001).
A. Knizhnik et al., Physica C 321, 199 (1999).
D. Goldschmidt et al., Phys. Rev. B 48, 532 (1993).
Muon Science: Muons in Physics, Chemistry and Materials, eds. S.L. Lee, S.H. Kilcoyne, and R. Cywinski (Institute of Physics, London, 1999).
As long as γ_µ H < ν, where ν is the fluctuation rate.
A.T. Savici et al., Physica B 289-290, 338 (2000).
S.C. Zhang et al., Phys. Rev. B 60, 13060 (1999);
E. Altman and A. Auerbach, cond-mat/0108087.
Y.J. Uemura et al., Phys. Rev. Lett. 62, 2317 (1989);
V.J. Emery and S.A. Kivelson, Nature 374, 434 (1995).
M.I. Larkin et al., Phys. Rev. Lett. 85, 1982 (2000).
C. Bernhard et al., Phys. Rev. B 58, 8937 (1998).
P. Mendels (private communication).
A moment of 1 µ_B will induce a 1 G field at a distance of ∼ 20 Å. This field is equivalent to the nuclear moments background.
J.H. Cho et al., Phys. Rev. B 46, 3179 (1992).
A.J. Millis, H. Monien, and D. Pines, Phys. Rev. B 42, 167 (1990);
J. Bobroff et al., Phys. Rev. Lett. 79, 2117 (1997);
M.H. Julien et al., Phys. Rev. Lett. 84, 3422 (2000).
| [] |
[
"Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays",
"Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays",
"Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays",
"Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays"
] | [
"J M Lindert \nDepartment of Physics and Astronomy\nUniversity of Sussex\nBN1 9QHBrightonUK\n",
"S Pozzorini \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n",
"M Schönherr \nInstitute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUK\n",
"J M Lindert \nDepartment of Physics and Astronomy\nUniversity of Sussex\nBN1 9QHBrightonUK\n",
"S Pozzorini \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n",
"M Schönherr \nInstitute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUK\n"
] | [
"Department of Physics and Astronomy\nUniversity of Sussex\nBN1 9QHBrightonUK",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"Institute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUK",
"Department of Physics and Astronomy\nUniversity of Sussex\nBN1 9QHBrightonUK",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"Institute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUK"
] | [] | We present next-to-leading order QCD and electroweak (EW) theory predictions for V + 2 jet production, with V = Z, W ± , considering both the QCD and EW production modes and their interference. We focus on phase-space regions where V + 2 jet production is dominated by vector-boson fusion, and where these processes yield the dominant irreducible backgrounds in searches for invisible Higgs boson decays. Predictions at parton level are provided together with detailed prescriptions for their implementation in experimental analyses based on the reweighting of Monte Carlo samples. The key idea is that, exploiting accurate data for W + 2 jet production in combination with a theory-driven extrapolation to the Z + 2 jet process can lead to a determination of the irreducible background at the few-percent level. Particular attention is devoted to the estimate of the residual theoretical uncertainties due to unknown higher-order QCD and EW effects and their correlation between the different V + 2 jet processes, which is key to improve the sensitivity to invisible Higgs decays. | 10.1007/jhep01(2023)070 | [
"https://export.arxiv.org/pdf/2204.07652v1.pdf"
] | 248,227,412 | 2204.07652 | b5ed79964675313bb158f82b91109662ddc00526 |
Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays
J M Lindert
Department of Physics and Astronomy
University of Sussex
BN1 9QHBrightonUK
S Pozzorini
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
M Schönherr
Institute for Particle Physics Phenomenology
Department of Physics
Durham University
DH1 3LEDurhamUK
Precise predictions for V+2 jet backgrounds in searches for invisible Higgs decays
We present next-to-leading order QCD and electroweak (EW) theory predictions for V + 2 jet production, with V = Z, W ± , considering both the QCD and EW production modes and their interference. We focus on phase-space regions where V + 2 jet production is dominated by vector-boson fusion, and where these processes yield the dominant irreducible backgrounds in searches for invisible Higgs boson decays. Predictions at parton level are provided together with detailed prescriptions for their implementation in experimental analyses based on the reweighting of Monte Carlo samples. The key idea is that, exploiting accurate data for W + 2 jet production in combination with a theory-driven extrapolation to the Z + 2 jet process can lead to a determination of the irreducible background at the few-percent level. Particular attention is devoted to the estimate of the residual theoretical uncertainties due to unknown higher-order QCD and EW effects and their correlation between the different V + 2 jet processes, which is key to improve the sensitivity to invisible Higgs decays.
In the Standard Model (SM), the invisible decay of the Higgs boson proceeds via H → ZZ* → 4ν, with a branching ratio of only about 10^{-3} [1]. In various extensions of the SM this invisible branching ratio can be strongly enhanced [2-4], in particular in scenarios where the Higgs boson can decay into a pair of weakly interacting massive particles, prime candidates of particle dark matter [5-10] (for a recent review see Ref. [11]). Therefore, experimental limits on invisible Higgs decays (H → inv) can be used to exclude regions of parameter space of these models. At the LHC any production mode where the Higgs boson is produced in association with visible SM particles can in principle be used to search for H → inv. The most stringent bounds have been obtained by combining searches in Higgs production via vector-boson fusion (VBF) and Higgs production in association with a vector boson (VH), performed by both ATLAS [12-15] and CMS [16-18]. These searches yield, as the current best limit on the invisible Higgs branching ratio, Br(H → inv) < 0.19 at the 95% confidence level [18]. The sensitivity in these searches is dominated by the VBF channel, i.e. the signature of two forward jets with large invariant mass together with sizeable missing transverse energy. This signature receives large contributions from irreducible SM backgrounds, originating in particular from Z-boson production and decay into neutrinos in association with two jets. Significant sensitivity improvements in H → inv searches can be achieved by controlling these backgrounds at the percent level. This in turn becomes possible via a theory-assisted data-driven strategy, where precision measurements are combined with state-of-the-art theoretical predictions for Z + 2 jet and W + 2 jet distributions and for their ratios. Using this approach for the V + jet backgrounds to monojet signals [19] made it possible to enhance the sensitivity of dark-matter searches at the LHC in a very significant way [20,21].
Besides controlling backgrounds in H → inv searches, V + 2 jet production is of importance in its own right. It serves as a laboratory for QCD dynamics and can be used to derive stringent bounds on anomalous triple gauge boson couplings and corresponding dimension-6 effective field theory coefficients [22-26]. With regard to the former, VBF production of vector bosons, which contributes to V + 2 jet production at large dijet invariant mass and/or rapidity separation, can provide important insights into the QCD dynamics of vector boson scattering (VBS) processes.
In this paper we present new theory predictions for V + 2 jet production, with V = Z, W^±, including higher-order QCD and electroweak (EW) corrections, together with detailed recommendations for their implementation for improving V + 2 jet backgrounds in searches for invisible Higgs decays. To be precise, we consider V + 2 jet production at next-to-leading order (NLO) QCD and EW. At leading order (LO), these processes receive three perturbative contributions. The leading one in the strong coupling constant α_s is customarily denoted as the QCD production mode, while the contribution with the lowest order in α_s is denoted as the EW production mode. The third LO contribution corresponds to the interference between the QCD and the EW modes. The EW mode receives contributions from VBF-type production as well as from diboson production with subsequent semi-leptonic decays, and in the case of W + 2 jet also from single-top production with leptonic decays. At NLO, four perturbative contributions emerge, of which only the highest and lowest orders in α_s can unambiguously be denoted as NLO QCD corrections to the QCD mode and NLO EW corrections to the EW mode, respectively. The remaining two contributions formally receive both O(α_s) and O(α) corrections and partly overlap. In this study we present predictions for all of these LO and NLO contributions, considering pp → W^± + 2 jets and pp → Z + 2 jets including off-shell leptonic decays, and invisible decays in the case of Z + 2 jet production. We critically investigate remaining higher-order uncertainties at the NLO level and their correlation between the different V + 2 jet processes. To this end we consider, besides remaining QCD and EW uncertainties, also uncertainties due to missing mixed QCD-EW corrections and due to the matching to parton showers (PS). For the implementation of these theoretical predictions in the framework of invisible Higgs searches we propose a procedure based on the reweighting of Monte Carlo samples, providing also detailed prescriptions for the estimate of theoretical uncertainties, including correlations between the Z + 2 jet and W + 2 jet processes.
The NLO QCD corrections to the V + 2 jet QCD production modes are widely available [27-29] (for pp → V + n jets with n > 2 see e.g. [30-36]) and even next-to-next-to-leading order (NNLO) corrections are within reach [37,38]. The NLO QCD corrections to the QCD modes are readily available within general purpose shower Monte Carlo (SMC) programs [39-42], where they typically enter Monte Carlo samples when NLO predictions for V + 0, 1, 2 jets production are merged and combined with parton showers at NLO [43-46]. Additionally, logarithmically enhanced corrections beyond fixed-order NLO due to wide-angle QCD emissions are available [47-49]. NLO EW corrections to the QCD modes are known at fixed order [50-53] and have also been combined with a QCD+QED parton shower using an approximation where only subleading QED effects are neglected [52]. The QCD corrections to the EW modes are only known in the so-called VBF approximation, where the VBF subprocess alone is considered, and the cross-talk between quark lines is neglected in the higher-order corrections [54,55].

Figure 1: Tower of perturbative contributions to V + 2 jet production at LO and NLO considered and evaluated in this study: the orders O(α_s^2 α^2), O(α_s α^3) and O(α^4) at LO, and O(α_s^3 α^2), O(α_s^2 α^3), O(α_s α^4) and O(α^5) at NLO. In this counting the O(α) vector-boson decays are included.

Within this approximation, NLO QCD corrections to the EW modes have been matched to parton showers [56,57]. The NLO EW corrections to the EW production modes are currently not known and are presented here for the first time. The paper is organised as follows. In Section 2 we discuss the structure of the NLO corrections to V + 2 jet production, considering both the QCD and EW production modes and their interference. In Section 3 we propose a reweighting procedure for the incorporation of the higher-order corrections into Monte Carlo samples. Theoretical predictions and uncertainties are presented in Section 4, and our conclusions can be found in Section 5.
2 V + 2 jet QCD and EW production modes at NLO
At LO the process pp → V + 2 jet, with

V = Z_ν   for pp → Z(νν) + 2 jets ,
    Z_ℓ   for pp → Z(ℓ^+ℓ^-) + 2 jets ,
    W^±   for pp → W^±(ℓ^±ν) + 2 jets ,   (1)
receives three perturbative contributions as illustrated in the top row of Fig. 1. Thus, the total LO differential cross section in a certain observable x can be written as
dσ^{V}_{LO}/dx = dσ^{V,QCD}_{LO}/dx + dσ^{V,EW}_{LO}/dx + dσ^{V,interf}_{LO}/dx .   (2)
The QCD mode contributes at O(α_s^2 α^2) and consists of the absolute squares of the coherent sum of diagrams of O(g_s^2 e^2), exemplified by Figs. 2a and 2b. In this counting the vector-boson decays (ℓ^+ℓ^-/νν/ℓ^±ν) are included. The EW mode, on the other hand, contributes at O(α^4) and comprises the absolute square of the coherent sum of all diagrams of O(e^4); see Figs. 2e-2l for example diagrams. Their interference contribution at O(α_s α^3) is then mostly composed of the interference of O(g_s^2 e^2) diagrams with O(e^4) diagrams. It, however, also contains genuine contributions consisting of absolute squares of O(g_s e^3) diagrams, typically containing an external gluon and an external photon; for an example see Figs. 2c and 2d.
The contributions to the EW mode (and consequently also to the interference) deserve some closer inspection. Diagrams illustrated in Figs. 2e and 2f contribute to VBF-type production, while diagrams as in Figs. 2g and 2h contribute to (off-shell) diboson production with one vector boson decaying hadronically and the other leptonically. In the literature these are often denoted as t-channel and s-channel contributions, respectively. At LO these t- and s-channel contributions can easily be separated in a gauge invariant way in the well known VBF approximation. For example, requiring at least one t-channel vector boson propagator and omitting t-u-channel interferences selects the VBF process, which includes contributions where the leptonically decaying vector boson couples directly to one of the external quark lines, as shown in Fig. 2e. In addition, the EW mode also features photon-induced processes, see Fig. 2i. Since we employ the five-flavour (5F) number scheme throughout in the PDFs, b-quarks are treated as massless partons, and channels with initial-state b-quarks are taken into account for all processes and perturbative orders. In the 5F scheme, the process pp → W + 2 jets includes partonic channels of type qb → q′bW that involve EW topologies corresponding to t-channel single-top production, qb → q′t(→ bW), as illustrated in Fig. 2k. Top resonances occur also in light-flavour channels of type qq̄′ → bb̄W, which receive contributions from s-channel single-top production, qq̄′ → b̄t(→ bW), illustrated in Fig. 2l. All these single-top contributions are consistently included in our predictions. At small m_{j1j2} their numerical impact can yield a substantial fraction of the total EW W + 2 jet cross section at LO. For example, the combined t-channel and s-channel pp → tj processes yield around 25% of the total EW W + 2 jet process at m_{j1j2} = 500 GeV. At higher m_{j1j2} the impact of the single-top modes is increasingly suppressed, and for m_{j1j2} > 2.5 TeV it is below 1% of the EW W + 2 jet process. More details on the impact of single-top contributions can be found in Sect. 4.2.1. The LO interferences between the QCD and EW modes that contribute at O(α_s α^3) are largely colour suppressed and yield very small contributions. This in particular holds in the VBF phase space, i.e. with large dijet invariant masses and large rapidity separation of the leading jets.
Figure 2: Representative LO diagrams for pp → V + 2 jets: (a,b) QCD production mode; (c,d) O(g_s e^3) contributions with an external gluon and an external photon; (e,f) t-channel (VBF-type) EW production; (g,h) s-channel diboson production with semi-leptonic decays; (i) photon-induced production; (j,k) t-channel and (l) s-channel single-top topologies.
As illustrated in Fig. 1 (bottom row), at NLO four perturbative contributions emerge. Out of these, only the contributions with the highest and lowest power of α_s, i.e. the ones of O(α_s^3 α^2) and O(α^5), can unambiguously be considered as, respectively, QCD corrections to the QCD mode and EW corrections to the EW mode. The former are well known in the literature [27-29], while the latter are considered here for the first time. The contribution of O(α_s^2 α^3) can be seen as the NLO EW correction to the QCD mode; however, it also receives QCD-like corrections with respect to the LO interference. Corresponding results have been presented in [50-53]. Similarly, the contribution of O(α_s α^4) can be seen as the NLO QCD correction to the EW mode, which likewise also receives EW-like corrections with respect to the LO interference. So far, QCD corrections to the EW mode are known in the literature only in the VBF approximation, where on top of the LO requirement of at least one t-channel vector-boson propagator (see above) QCD interactions between different quark lines are not allowed [54]. In this paper we present the first complete computation of the O(α_s α^4) contribution, i.e. the complete NLO QCD corrections to the EW mode, which go beyond the VBF approximation. This computation takes into account any contribution at the given perturbative order, i.e. it entails cross-talk between different quark lines; t-, u-, and s-channel contributions and their interference; interference effects between the QCD and EW modes; as well as s-channel and t-channel single-top contributions in the case of EW W^±(ℓ^±ν) + 2 jets production. Therefore, this computation can be seen as a unified NLO description of VBF vector-boson production, vector-boson pair production with semi-leptonic decays, and (in the case of W^±(ℓ^±ν) + 2 jets production) t-channel plus s-channel single-top production.
In W + 2 jet production at O(α_s α^4), top resonances occur, besides in t-channel and s-channel configurations, also in channels of type gb → W bqq̄′, which involve Wt-channel single-top production, gb → W t(→ bqq̄′). We have verified that these contributions are always at or significantly below the 1% level with respect to the EW LO mode for all considered observables. For m_{j1j2} > 2 TeV these contributions are suppressed to below the permil level. In all perturbative orders we consider QCD partons (quarks and gluons) on the same footing as photons, i.e. in the process definition pp → V jj we have j ∈ {q, q̄, g, γ}, and photon-induced production modes are included everywhere. However, we have verified that photon-induced production modes are at or below the 1% level for all considered observables.
The total differential NLO cross section for pp → V + 2 jet production in a certain observable x can be written as
dσ^{V}_{NLO}/dx = dσ^{V,QCD}_{NLO QCD+EW}/dx + dσ^{V,EW}_{NLO QCD+EW}/dx + dσ^{V,interf}_{LO}/dx ,   (3)
where
dσ^{V,M}_{NLO QCD+EW}/dx = dσ^{V,M}_{LO}/dx + dδσ^{V,M}_{NLO QCD}/dx + dδσ^{V,M}_{NLO EW}/dx ,   (4)

as well as pure NLO QCD predictions without EW corrections,

dσ^{V,M}_{NLO QCD}/dx = dσ^{V,M}_{LO}/dx + dδσ^{V,M}_{NLO QCD}/dx ,   (5)
and pure NLO EW predictions without QCD corrections,
dσ^{V,M}_{NLO EW}/dx = dσ^{V,M}_{LO}/dx + dδσ^{V,M}_{NLO EW}/dx .   (6)
As a natural approximation of mixed QCD-EW higher-order corrections we also define a factorised combination of NLO QCD and NLO EW corrections,
dσ^{V,M}_{NLO QCD×EW}/dx = dσ^{V,M}_{NLO QCD}/dx × [1 + κ^{V,M}_{EW}(x)] ,   (7)
with the NLO EW correction factors
κ^{V,M}_{EW}(x) = [dδσ^{V,M}_{NLO EW}/dx] / [dσ^{V,M}_{LO}/dx] .   (8)
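In practice, Eqs. (5), (7) and (8) amount to a simple bin-wise combination of histograms. The following Python sketch is our illustration of the factorised QCD×EW prescription; the input arrays and numbers are invented placeholders, not results from this paper:

```python
import numpy as np

def nlo_qcd_x_ew(sigma_lo, dsigma_nlo_qcd, dsigma_nlo_ew):
    """Bin-wise factorised QCD x EW combination of Eqs. (7)-(8).

    All inputs are differential cross sections per bin of the reweighting
    observable x (here m_j1j2), for a fixed process V and production mode M.
    """
    kappa_ew = dsigma_nlo_ew / sigma_lo        # EW correction factor, Eq. (8)
    sigma_nlo_qcd = sigma_lo + dsigma_nlo_qcd  # pure NLO QCD, Eq. (5)
    return sigma_nlo_qcd * (1.0 + kappa_ew)    # factorised combination, Eq. (7)

# Placeholder spectra (fb per m_j1j2 bin), chosen only to exercise the function
sigma_lo       = np.array([120.0, 60.0, 25.0, 8.0])
dsigma_nlo_qcd = np.array([ 15.0,  8.0,  3.5, 1.2])
dsigma_nlo_ew  = np.array([ -4.0, -3.0, -1.8, -0.9])  # growing Sudakov-type suppression

sigma_qcdxew = nlo_qcd_x_ew(sigma_lo, dsigma_nlo_qcd, dsigma_nlo_ew)
```

The additive combination of Eq. (4) is recovered by returning sigma_nlo_qcd + dsigma_nlo_ew instead; the difference between the two prescriptions provides one handle on the unknown mixed QCD-EW effects mentioned above.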
3 Reweighting of Monte Carlo samples
The reweighting of MC samples is a natural way of combining (N)LO MC simulations with (N)NLO QCD+EW perturbative calculations, and of accounting for the respective uncertainties and correlations in a systematic way. In the following we define a Monte Carlo reweighting procedure for the individual QCD and EW production modes in pp → V + 2 jet. For practical purposes the reweighting has to be performed based on a one-dimensional distribution in a certain observable x. To be precise, the relevant higher-order theory (TH) predictions for the observable at hand are defined as
dσ^{V,M}_{TH}(ε^{V,M}_{TH})/dx = ∫ dy θ^{V}_{cuts}(y) d^2σ^{V,M}_{TH}(ε^{V,M}_{TH})/(dx dy) ,   (9)
where V indicates the specific V +2 jet process in Eq. (1), and M = QCD, EW identifies the corresponding production mode. As reweighting observable for the case at hand, i.e. V + multijet production in the VBF phase-space, we choose the dijet invariant mass,
x = m_{j1j2} ,   (10)
which is defined in more detail in Sect. 3.2. The integration on the r.h.s. of Eq. (9) involves all degrees of freedom y that are independent of x. Such degrees of freedom include the fully differential kinematic dependence on the vector-boson decay products and the two leading jets, as well as the QED and QCD radiation that accompanies the VBF production process, i.e. extra jets and photons, and also possible extra leptons and neutrinos from hadron decays.
The function θ^{V}_{cuts}(y) on the r.h.s. of Eq. (9) describes selection cuts for pp → V + 2 jet, and the details of its definition (see Sects. 3.2-3.3) play an important role for the consistent implementation of the MC reweighting procedure. Such cuts are typically chosen in a very similar way for V = Z, W, but are not necessarily identical. For instance, in the case V = W the QED radiation from the lepton stemming from the W → ℓν decay is typically subject to a dressing prescription, while dressing is irrelevant for Z → νν decays. Note also that the cuts that are applied to the theoretical calculations in Eq. (9) do not need to be identical to the ones employed in the experimental analysis. They are typically rather similar to the actual experimental cuts but more inclusive. Theory uncertainties in Eq. (9) are parametrised through sets of nuisance parameters ε^{V,M}_{TH}, and variations of individual nuisance parameters in the range
ε^{V,M}_{i,TH} ∈ [-1, 1]   (11)
should be understood as 1σ Gaussian uncertainties. In a similar way as was proposed for monojet dark-matter searches [19], the theory predictions for the V + 2 jet x-distributions can be embodied into the corresponding MC simulations through a one-dimensional reweighting procedure. In this approach, the reweighted MC samples are defined as
d^2σ^{V,M}(ε^{V,M}_{MC}, ε^{V,M}_{TH})/(dx dy) := [dσ^{V,M}_{TH}(ε^{V,M}_{TH})/dx] / [dσ^{V,M}_{MC}(ε^{V,M}_{MC})/dx] × d^2σ^{V,M}_{MC}(ε^{V,M}_{MC})/(dx dy) .   (12)
On the r.h.s., σ^{V,M}_{MC} with M = QCD, EW correspond to the fully differential V + 2 jet Monte Carlo samples before reweighting, and the σ^{V,M}_{MC} terms in the numerator and denominator must correspond to the same MC samples used in the experimental analysis. Monte Carlo uncertainties, described by ε^{V,M}_{MC}, must be correlated in the numerator and denominator, while they can be kept uncorrelated across different processes, apart from Z(νν) + jets and Z(ℓℓ) + jets. As for the [dσ^{V,M}_{TH}/dx] / [dσ^{V,M}_{MC}/dx] ratio on the r.h.s. of Eq. (12), it is crucial that the numerator and the denominator are determined using the same definition of the x-distribution, which is provided in Sects. 3.2-3.3.
The method proposed in [19] foresees the separate reweighting of the various V + jet processes, while the correlations between different processes and different x-regions are encoded into the corresponding correlations between nuisance parameters. In this paper we adopt a simplified approach, which is designed for the case where experimental analyses do not exploit theoretical information on the shape of the x-distribution, but only on the correlation between different processes at fixed x. In this case, the relevant information can be encoded into the Z/W ratio
R^{Z/W,M}_{TH}(x, ε^{Z,M}_{TH}, ε^{W,M}_{TH}) = [dσ^{Z,M}_{TH}(ε^{Z,M}_{TH})/dx] / [dσ^{W,M}_{TH}(ε^{W,M}_{TH})/dx] ,   (13)
where Z = Z_ν or Z_ℓ, and W ≡ W^+ + W^-. In this ratio theory uncertainties largely cancel due to the very similar dynamics of the Z + 2 jet and W + 2 jet processes. This in particular holds for the uncertainties related to higher-order QCD effects. Such cancellations depend on the amount of correlation between the uncertainties of the individual distributions, which is encoded into the corresponding nuisance parameters. Our theory predictions to be used for MC reweighting are provided directly at the level of the ratio of Eq. (13). This ratio makes it possible to translate the MC prediction for the x-distribution in W + 2 jet into a corresponding Z + 2 jet prediction,
dσ^{Z,M}(ε^{W,M}_{MC}, ε^{Z,M}_{TH}, ε^{W,M}_{TH})/dx := R^{Z/W,M}_{TH}(x, ε^{Z,M}_{TH}, ε^{W,M}_{TH}) × dσ^{W,M}_{MC}(ε^{W,M}_{MC})/dx .   (14)
Here the idea is that the MC uncertainties in σ^{W,M}_{MC} can be strongly constrained through data, while theory uncertainties are strongly reduced through cancellations in the ratio, which results in an accurate prediction for the x-distribution in Z + 2 jets. The latter can be applied to the whole Z + jets sample via reweighting,
\[
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y}\,\sigma^{Z,M}\big(\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,W,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,Z,M}_{\mathrm{TH}},\vec{\varepsilon}^{\,W,M}_{\mathrm{TH}}\big)
:= \left(\frac{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}\big(\vec{\varepsilon}^{\,W,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,Z,M}_{\mathrm{TH}},\vec{\varepsilon}^{\,W,M}_{\mathrm{TH}}\big)}{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}_{\mathrm{MC}}\big(\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}}\big)}\right)
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y}\,\sigma^{Z,M}_{\mathrm{MC}}\big(\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}}\big)\,. \tag{15}
\]
Note that the double reweighting procedure defined in Eqs. (14)-(15) is equivalent to a single reweighting of the Z + 2 jet x-distribution,
\[
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y}\,\sigma^{Z,M}\big(\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,W,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,Z,M}_{\mathrm{TH}},\vec{\varepsilon}^{\,W,M}_{\mathrm{TH}}\big)
:= \frac{R^{Z/W,M}_{\mathrm{TH}}\big(x,\vec{\varepsilon}^{\,Z,M}_{\mathrm{TH}},\vec{\varepsilon}^{\,W,M}_{\mathrm{TH}}\big)}{R^{Z/W,M}_{\mathrm{MC}}\big(x,\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}},\vec{\varepsilon}^{\,W,M}_{\mathrm{MC}}\big)}\,
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y}\,\sigma^{Z,M}_{\mathrm{MC}}\big(\vec{\varepsilon}^{\,Z,M}_{\mathrm{MC}}\big)\,, \tag{16}
\]
where R^{Z/W,M}_{MC} is the MC counterpart of the Z/W ratio defined in Eq. (13). As discussed above, the definition of the variable x and the binning of its distribution need to be the same in all three terms on the r.h.s. of Eq. (16). Moreover, acceptance cuts must be identical in the numerator and denominator of the double ratio, while particle-level MC predictions can be subject to more exclusive or inclusive cuts in the experimental analysis.
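The stated equivalence is immediate to verify: inserting Eq. (14) into the prefactor of Eq. (15) gives
\[
\frac{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}}{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}_{\mathrm{MC}}}
= R^{Z/W,M}_{\mathrm{TH}}\,\frac{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{W,M}_{\mathrm{MC}}}{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}_{\mathrm{MC}}}
= \frac{R^{Z/W,M}_{\mathrm{TH}}}{R^{Z/W,M}_{\mathrm{MC}}}\,,
\]
where the last step uses that, binwise, R^{Z/W,M}_{MC} is the ratio of the Z and W MC x-distributions; this is precisely the prefactor of Eq. (16).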
In addition to the cancellation of theoretical uncertainties in the ratio R^{Z/W,M}_{TH}, correlated MC uncertainties tend to cancel in R^{Z/W,M}_{MC}; the reweighting procedure of Eq. (16) thus turns a precise W + 2 jet measurement into a precise prediction for Z + 2 jets.
The reweighting of Eq. (16) can be applied to a Z(νν) + 2 jets as well as to a Z(ℓ⁺ℓ⁻) + 2 jets MC sample; the former allows one to constrain the irreducible backgrounds in Higgs to invisible searches, while the latter allows for validation against data in control regions.
Reweighting observables and cuts
In this section we specify the observables, acceptance cuts, and physics objects relevant for the reweighting of Eq. (16). The theoretical calculations presented in Sect. 4 are based on these definitions, which need to be adopted also for the MC predictions that enter the denominator of the double ratio on the r.h.s. of Eq. (16). The details of this reweighting setup are designed such as to take full advantage of the precision of perturbative calculations, while excluding all effects that are better described by MC simulations (e.g. parton showering, hadronisation, and leptons or missing energy from hadron decays).
Observables and cuts
The reweighting of Eq. (16) should be performed based on the ratio of the one-dimensional distributions in the dijet invariant mass x = m_{j1j2}, where j₁, j₂ are the two hardest jets. The following binning is adopted for distributions in m_{j1j2},
\[
\frac{m_{j_1 j_2}}{\mathrm{GeV}} \in \{500, 550, \ldots, 950, 1000, 1100, \ldots, 1900, 2000, 2500, 3000, 3500, 4000, 6000, 13000\}\,. \tag{17}
\]
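In code, this binning is just a fixed list of edges. A possible numpy transcription of Eq. (17) (the constant name is ours) reads:

\begin{verbatim}
import numpy as np

# Bin edges of the m_j1j2 distribution in GeV, transcribing Eq. (17):
# 50 GeV steps up to 1 TeV, 100 GeV steps up to 2 TeV, then coarser bins.
MJJ_EDGES = np.concatenate([
    np.arange(500.0, 1000.0, 50.0),      # 500, 550, ..., 950
    np.arange(1000.0, 2000.0, 100.0),    # 1000, 1100, ..., 1900
    np.array([2000.0, 2500.0, 3000.0, 3500.0, 4000.0, 6000.0, 13000.0]),
])
\end{verbatim}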
Theoretical predictions for the m_{j1j2} distribution and their MC counterparts should be determined in the presence of the following cuts,
\[
p_{\mathrm{T},j_1} > 100~\mathrm{GeV}\,,\quad
p_{\mathrm{T},j_2} > 50~\mathrm{GeV}\,,\quad
m_{j_1 j_2} > 500~\mathrm{GeV}\,,\quad
\Delta\eta_{j_1 j_2} > 2.5\,,\quad
p_{\mathrm{T},V} > 150~\mathrm{GeV}\,, \tag{18}
\]
for V = W±, Z. The relevant definitions of jets and p_{T,V} are discussed in Sect. 3.3. Note that only the reconstructed vector-boson momenta are subject to cuts, while no restriction is applied to the individual momenta of their decay products. For pp → ℓ⁺ℓ⁻ + 2 jets the additional process-specific cut
\[
m_{\ell\ell} > 40~\mathrm{GeV} \tag{19}
\]
should be applied. For a realistic assessment of theoretical uncertainties, one should also consider the fact that, within experimental analyses, VBF cuts can be supplemented by a veto on additional jet radiation. In this case we recommend to perform two alternative reweightings, with and without jet veto. The difference between MC samples reweighted with jet veto and in the nominal setup of Eq. (18) should be small and can be taken as an additional uncertainty. In particular, we consider an additional veto on jet radiation,
\[
p_{\mathrm{T},j_3} < p_{\mathrm{T,cut}} = \max(500~\mathrm{GeV},\, m_{jj})/20\,. \tag{20}
\]
We choose to employ a dynamic jet veto to minimise possible large logarithms that may spoil the perturbative convergence of our results.
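As an illustration of how Eqs. (18)-(20) act together, here is a minimal Python sketch of the event selection; the dictionary keys for the per-event kinematics are assumptions of this example, not a prescribed format:

\begin{verbatim}
def passes_vbf_cuts(ev, apply_jet_veto=False):
    """Acceptance cuts of Eq. (18), the m_ll cut of Eq. (19) for the
    dilepton channel, and optionally the dynamic third-jet veto of Eq. (20).

    `ev` is a plain dict of per-event kinematics in GeV; the key names
    are illustrative only.
    """
    if not (ev["pt_j1"] > 100.0 and ev["pt_j2"] > 50.0):
        return False
    if not (ev["m_j1j2"] > 500.0 and ev["deta_j1j2"] > 2.5):
        return False
    if not ev["pt_V"] > 150.0:
        return False
    # Eq. (19), applies to pp -> l+l- + 2 jets only
    if ev.get("m_ll") is not None and not ev["m_ll"] > 40.0:
        return False
    # Eq. (20), dynamic veto on a third jet
    if apply_jet_veto and ev.get("pt_j3") is not None:
        pt_cut = max(500.0, ev["m_j1j2"]) / 20.0
        if ev["pt_j3"] >= pt_cut:
            return False
    return True
\end{verbatim}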
Finally, in order to address the limitations of the proposed one-dimensional reweighting in m_{j1j2}, we split the phase space into the following three ∆φ_{j1j2} regions, where ∆φ_{j1j2} is the azimuthal-angle separation between the two leading jets,
\[
\Phi_1 = \{\Delta\phi_{j_1 j_2} < 1\}\,,\qquad
\Phi_2 = \{1 < \Delta\phi_{j_1 j_2} < 2\}\,,\qquad
\Phi_3 = \{2 < \Delta\phi_{j_1 j_2}\}\,. \tag{21}
\]
These ∆φ_{j1j2} bins are motivated by the fact that the higher-order corrections to the reweighting ratios R^{Z/W,M}_{TH}, defined in Eq. (13), feature a non-negligible dependence on ∆φ_{j1j2}. As discussed in Sect. 4.3, this effect is taken into account through a theoretical uncertainty that is derived from the differences between the R^{Z/W,M}_{TH} ratios in the above ∆φ_{j1j2} regions.
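The assignment of an event to one of the regions of Eq. (21) is a trivial lookup; a possible transcription (ours) is:

\begin{verbatim}
def dphi_region(dphi_j1j2):
    """Map the azimuthal separation onto the regions of Eq. (21)."""
    if dphi_j1j2 < 1.0:
        return 1    # Phi_1
    elif dphi_j1j2 < 2.0:
        return 2    # Phi_2
    return 3        # Phi_3
\end{verbatim}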
Definition of physics objects
In the following we define the various physics objects relevant for the higher-order perturbative calculations and for their Monte Carlo counterparts in the reweighting of Eq. (16).
Neutrinos
In parton-level calculations of pp → V + 2 jet, neutrinos originate only from vector-boson decays, while in Monte Carlo samples they can arise also from hadron decays. In order to avoid any bias in the reweighting procedure, only neutrinos arising from Z and W decays at Monte Carlo truth level should be considered.
Charged leptons
Distributions in the lepton p_T and other leptonic observables are known to be highly sensitive to QED radiative corrections, and differences in the treatment of QED radiation on the Monte Carlo and theory sides can lead to a bias in the reweighting procedure. To avoid such a bias, dressed leptons should be used, i.e. all leptons are combined with all nearly collinear photons that lie within a cone of
\[
\Delta R_{\ell\gamma} = \sqrt{\Delta\phi_{\ell\gamma}^2 + \Delta\eta_{\ell\gamma}^2} < \Delta R_{\mathrm{rec}}\,. \tag{22}
\]
For the radius of the recombination cone we employ the standard value ∆R_rec = 0.1, which allows one to capture the bulk of the collinear final-state radiation, while keeping contamination from large-angle photon radiation from other sources at a negligible level. All lepton observables as well as the kinematics of the reconstructed W and Z bosons are defined in terms of dressed leptons, and, in accordance with standard experimental practice, both muons and electrons should be dressed. In this way differences between electrons and muons, ℓ = e, µ, become negligible, and the reweighting function needs to be computed only once for a generic lepton flavour ℓ. Similarly as for neutrinos, only charged leptons that arise from Z and W decays at Monte Carlo truth level should be considered. Concerning QCD radiation in the vicinity of leptons, no lepton isolation requirement should be imposed in the context of the reweighting procedure. Instead, in the experimental analysis lepton isolation cuts can be applied in the usual manner.
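A minimal sketch of this dressing prescription, assuming a simple dictionary-based event record with precomputed (η, φ) for each particle, could look as follows; the recombination strategy shown (each photon is added to the closest lepton within ∆R_rec) is one plausible reading of Eq. (22):

\begin{verbatim}
import math

def dress_leptons(leptons, photons, dr_rec=0.1):
    """Dress leptons with collinear photons (Eq. (22)).

    Each particle is a dict with 'px','py','pz','E','phi','eta';
    this event model is illustrative only.
    """
    dressed = [dict(l) for l in leptons]
    for ph in photons:
        if not dressed:
            break
        def dr(l):
            # angular distance in the (eta, phi) plane
            dphi = abs(l["phi"] - ph["phi"])
            dphi = min(dphi, 2.0 * math.pi - dphi)
            return math.hypot(dphi, l["eta"] - ph["eta"])
        closest = min(dressed, key=dr)
        if dr(closest) < dr_rec:
            # add the photon four-momentum to the closest lepton
            for k in ("px", "py", "pz", "E"):
                closest[k] += ph[k]
    return dressed
\end{verbatim}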
Z and W bosons
The off-shell four-momenta of W and Z bosons are defined as
\[
p^{\mu}_{W^+} = p^{\mu}_{\ell^+} + p^{\mu}_{\nu_\ell}\,,\qquad
p^{\mu}_{W^-} = p^{\mu}_{\ell^-} + p^{\mu}_{\bar\nu_\ell}\,,\qquad
p^{\mu}_{Z} = p^{\mu}_{\ell^+} + p^{\mu}_{\ell^-}\,,\qquad
p^{\mu}_{Z} = p^{\mu}_{\nu_\ell} + p^{\mu}_{\bar\nu_\ell}\,, \tag{23}
\]
where the leptons and neutrinos that result from Z and W decays are defined as discussed above.
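At the level of four-vectors, Eq. (23) amounts to componentwise sums of the dressed-lepton and neutrino momenta; a small numpy sketch with made-up numbers:

\begin{verbatim}
import numpy as np

def four_momentum_sum(p1, p2):
    """Off-shell boson momentum as the sum of two decay products, Eq. (23).

    p1, p2 are four-vectors (E, px, py, pz); the dressed-lepton and
    neutrino definitions discussed above are assumed.
    """
    return np.asarray(p1, dtype=float) + np.asarray(p2, dtype=float)

# e.g. p_W+ = p_l+ + p_nu (toy numbers in GeV)
p_lp = np.array([45.0, 10.0, -20.0, 38.0])
p_nu = np.array([50.0, -5.0, 25.0, 42.0])
p_W = four_momentum_sum(p_lp, p_nu)
pt_W = float(np.hypot(p_W[1], p_W[2]))   # transverse momentum p_T,V
\end{verbatim}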
Jets
Similarly as for the charged leptons, photons are recombined with collinear quarks within ∆R_{qγ} < ∆R_rec prior to jet clustering. Subsequently, QCD partons (quarks and gluons) together with the remaining photons are clustered into jets according to the anti-k_T algorithm [58] with R = 0.4 and ordered by their transverse momentum.
Table 1: Values of the various physical input parameters. The value of m_b depends on the employed flavour-number scheme as discussed in the text.

M_W = 80.399 GeV      Γ_W = 2.085 GeV
M_Z = 91.1876 GeV     Γ_Z = 2.495 GeV
M_H = 125 GeV         Γ_H = 4.07 MeV
m_b = 0 GeV           Γ_b = 0
m_t = 172.5 GeV       Γ_t = 1.32 GeV
G_µ = 1.1663787 · 10⁻⁵ GeV⁻²
Theoretical predictions and uncertainties
In this section we present our theoretical input for invisible-Higgs searches. The relevant input parameters are documented in Sect. 4.1, and in Sect. 4.2 we discuss NLO QCD+EW predictions for pp → V + 2 jets at parton level and matched to the parton shower. Our main results for Z/W ratios and their theoretical uncertainties are presented in Sect. 4.3.
All predictions presented in this paper have been obtained within the Sherpa+OpenLoops framework, which supports fully automated NLO QCD+EW calculations at parton level [41, 51, 59] as well as matching [60, 61] to Sherpa's parton shower [62] and multi-jet merging [43] at NLO as implemented in the Sherpa Monte Carlo framework [41, 63-65]. In particular, Sherpa+OpenLoops allows for the simulation of the entire tower of QCD and EW contributions of O(α_s^n α^m) that are relevant for multi-jet processes like pp → V + 2 jets at LO and NLO. All relevant renormalised virtual amplitudes are provided by the OpenLoops 2 program [66], which implements the techniques of [67, 68] and is interfaced with Collier [69] and OneLOop [70] for the calculation of scalar integrals.
Definition of numerical setup
In the following we specify input parameters and PDFs employed for theoretical predictions in this study. As discussed in Section 3, Monte Carlo samples used in the experimental analyses do not need to be generated with the same input parameters and PDFs used for higher-order theoretical predictions.
In the calculation of pp → νν/ℓν/ℓℓ + 2 jets we use the coupling constants, masses and widths as listed in Table 1. All unstable particles are treated in the complex-mass scheme [71], where width effects are absorbed into the complex-valued renormalised masses
\[
\mu_i^2 = M_i^2 - \mathrm{i}\,\Gamma_i M_i \qquad \text{for } i = W, Z, t\,. \tag{24}
\]
The EW couplings are derived from the gauge-boson masses and the Fermi constant G_µ using
\[
\alpha = \frac{\sqrt{2}\,\sin^2\theta_w\,\mu_W^2\,G_\mu}{\pi}\,, \tag{25}
\]
and the weak mixing angle θ_w. The latter is determined by
\[
\sin^2\theta_w = 1 - \cos^2\theta_w = 1 - \frac{\mu_W^2}{\mu_Z^2} \tag{26}
\]
in the complex-mass scheme. The G_µ scheme guarantees an optimal description of pure SU(2) interactions at the EW scale as it absorbs universal higher-order corrections to the weak mixing angle into the LO contribution already and, thus, minimises higher-order corrections. It is therefore the scheme of choice for W + multijet production, and it provides a very good description of Z + multijet production as well.
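The chain of Eqs. (24)-(26) is easily reproduced numerically. The sketch below evaluates the complex squared masses, the complex weak mixing angle, and the G_µ-scheme coupling from the inputs of Table 1; taking the absolute value of the complex coupling is an assumption of this illustration, since conventions for extracting a real α vary:

\begin{verbatim}
import cmath

# input masses and widths in GeV (Table 1)
MW, GW = 80.399, 2.085
MZ, GZ = 91.1876, 2.495
Gmu = 1.1663787e-5   # GeV^-2

# complex squared masses, Eq. (24)
mu2_W = MW**2 - 1j * GW * MW
mu2_Z = MZ**2 - 1j * GZ * MZ

# complex weak mixing angle, Eq. (26)
sin2_tw = 1.0 - mu2_W / mu2_Z

# EW coupling in the G_mu scheme, Eq. (25); |.| is our convention here
alpha = abs(cmath.sqrt(2) * sin2_tw * mu2_W * Gmu / cmath.pi)
print(f"Re sin^2(theta_w) = {sin2_tw.real:.6f}, 1/alpha = {1.0/alpha:.2f}")
\end{verbatim}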
The CKM matrix is assumed to be diagonal, and we checked at LO and NLO QCD that for W + multijet production the difference with respect to a non-diagonal CKM matrix is always well below 1%. As renormalisation scale µ_R and factorisation scale µ_F we set
\[
\mu_{R,F} = \xi_{R,F}\,\mu_0\,,\qquad \text{with}\quad \mu_0 = \tfrac{1}{2} H_{\mathrm{T}} \quad\text{and}\quad \tfrac{1}{2} \le \xi_R,\, \xi_F \le 2\,. \tag{27}
\]
Here H_T is defined as the scalar sum of the transverse energy of all parton-level final-state objects,
\[
H_{\mathrm{T}} = E_{\mathrm{T},V} + \sum_{i\,\in\,\mathrm{partons}} p_{\mathrm{T},i}\,,\qquad \text{with}\quad E_{\mathrm{T},V} = \sqrt{m_V^2 + p_{\mathrm{T},V}^2}\,, \tag{28}
\]
where m_V and p_{T,V} are, respectively, the invariant mass and the transverse momentum of the reconstructed off-shell vector-boson momenta as defined in Eq. (23), while the sum includes all final-state QCD and QED partons (q, g, γ), including those emitted at NLO.² Our default scale choice corresponds to ξ_R = ξ_F = 1, and theoretical QCD scale uncertainties are assessed by applying the standard 7-point variations (ξ_R, ξ_F) = (2,2), (2,1), (1,2), (1,1), (1,1/2), (1/2,1), (1/2,1/2). For the calculation of hadron-level cross sections at NLO(PS) QCD + NLO EW we employ the NNPDF31_nlo_as_0118_luxqed PDF set, which encodes QED effects via the LUXqed methodology of [72]. The same PDF set, and the related α_s value, is used throughout, i.e. also in the relevant LO and NLO ingredients used in the estimate of theoretical uncertainties. Consistently with the 5F number scheme employed in the PDFs, b-quarks are treated as massless partons, and channels with initial-state b-quarks are taken into account for all processes and production modes.
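For concreteness, the central scale of Eqs. (27)-(28) and the associated 7-point variation can be sketched as follows; all inputs are toy values:

\begin{verbatim}
import numpy as np

def ht_scale(m_V, pt_V, parton_pts):
    """Central scale mu_0 = H_T / 2 with H_T from Eq. (28)."""
    ET_V = np.sqrt(m_V**2 + pt_V**2)
    return 0.5 * (ET_V + sum(parton_pts))

# standard 7-point variation of (xi_R, xi_F), Eq. (27)
SEVEN_POINT = [(2, 2), (2, 1), (1, 2), (1, 1), (1, 0.5), (0.5, 1), (0.5, 0.5)]

mu0 = ht_scale(m_V=91.2, pt_V=300.0, parton_pts=[250.0, 120.0])
scales = [(xr * mu0, xf * mu0) for xr, xf in SEVEN_POINT]
\end{verbatim}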
In addition to fixed-order calculations including NLO QCD and EW corrections, we also match the NLO QCD corrections to the QCD mode to the parton shower. Here we set the scales according to the CKKW scale-setting algorithm of [43, 73], i.e. we interpret the given configuration using the inverse of the parton shower (using only its QCD splitting functions) to arrive at a core process and the reconstructed splitting scales t_i,
\[
\alpha_s^{n+k}(\mu_R^2) = \alpha_s^{k}(\mu_{\mathrm{core}}^2)\,\prod_{i=1}^{n} \alpha_s(t_i)\,. \tag{29}
\]
We restrict ourselves to strongly ordered hierarchies only, i.e. µ_Q > t₁ > t₂ > … > t_n, as the parton shower would produce them in its regular evolution. In consequence, depending on the phase-space point, possible core configurations are pp → V, pp → V + j, pp → V + jj, and pp → V + jjj. Further, we set both the factorisation and the shower starting scale, µ_F and µ_Q respectively, to the scale µ_core = ½ H_T defined on the reconstructed core process. In our region of interest, where the usual Sudakov factors are negligible, our NLOPS simulation is thus equivalent to the two-jet component of an inclusive NLO merged calculation in the MEPS@NLO algorithm without additional multiplicities merged on top of it.
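The coupling factor of Eq. (29) can be sketched as a product of running couplings evaluated at the core scale and at the reconstructed splitting scales. The one-loop running used below is a toy stand-in for the coupling associated with the PDF set, and the function names are ours:

\begin{verbatim}
import math

def ckkw_alphas_weight(alphas, mu_core, splitting_scales, k):
    """Coupling factor of Eq. (29): alpha_s^k at the core scale times
    one alpha_s factor per reconstructed splitting scale t_i."""
    w = alphas(mu_core) ** k
    for t in sorted(splitting_scales, reverse=True):  # mu_Q > t_1 > ... > t_n
        w *= alphas(t)
    return w

def alphas_toy(mu, alphas_mz=0.118, mz=91.1876, nf=5):
    """One-loop running coupling, a simple stand-in for the real one."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alphas_mz / (1 + alphas_mz * b0 * math.log(mu**2 / mz**2))

# e.g. a pp -> V core (k = 0) with two reconstructed splittings
w = ckkw_alphas_weight(alphas_toy, mu_core=200.0,
                       splitting_scales=[80.0, 40.0], k=0)
\end{verbatim}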
Higher-order QCD, EW and PS predictions for V + 2 jet
In this section we present LO and NLO QCD+EW predictions for pp → Z(νν) + 2 jets and pp → W±(ℓ±ν) + 2 jets, including also parton-shower effects. Each process is split into a QCD and an EW production mode as discussed in Sect. 2.
LO contributions and interference
In Fig. 3 we show LO predictions for Z + 2 jet (left) and W + 2 jet (right) production considering the QCD and EW modes together with the LO interference. In the case of W + 2 jet production we also show the LO contribution due to pp → tj with leptonic on-shell decays of the top. The final-state jet can be a light jet or a bottom-quark jet, i.e. this process comprises t-channel and s-channel single-top production at LO. The single-top processes are consistently included in the off-shell matrix elements of the EW mode of pp → W±(ℓ±ν) + 2 jets. For both Z + 2 jet and W + 2 jet production the QCD mode largely dominates over the EW mode in the bulk of the phase space; however, at large m_{j1j2} the EW mode becomes progressively more important, eventually dominating over the QCD mode above about m_{j1j2} = 4 TeV. For both considered processes the LO interference remains more or less constant with respect to the EW mode, at about 2-3% relative to it over the entire m_{j1j2} range. The pp → tj process yields around 25% of the total EW W + 2 jet process at the lower end of the considered m_{j1j2} range. At large m_{j1j2} the impact of the single-top modes is increasingly suppressed, and for m_{j1j2} > 2.5 TeV it drops below 1% of the EW W + 2 jet process.
QCD production
The NLO QCD and EW corrections to the production of V +2 jets via QCD interactions are well known in the literature. For example, Ref. [52] presents a systematic investigation of QCD and EW correction effects on high-energy observables. Here we focus on NLO corrections and correlations relevant for invisible-Higgs searches at large invariant masses of the two hardest jets. Besides fixed-order NLO corrections we also investigate the effect of parton-shower matching at NLO QCD. and pp → W ± ( ± ν) + 2 jets (right) at LO. The upper frame shows absolute predictions for the QCD (blue), EW (red), and interference (green) production modes. For pp → W ± ( ± ν) + 2 jets we also show the LO pp → tj contributions (orange), which belong to the EW production mode and include t-channel and s-channel single-top production. The relative importance of the various contributions normalised to the EW production mode is displayed in the lower frame. The bands correspond to QCD scale variations, and in the case of ratios only the numerator is varied.
presented together with the additive and multiplicative combination of NLO QCD and EW corrections. For both processes the effect of QCD, EW and shower corrections, as well as the QCD scale variations is remarkably similar. The impact of QCD corrections is negative, and below 1 TeV it remains quite small, while in the m j1j2 tail it becomes increasingly large, reaching around −20% at 2-3 TeV and −50% at 4 TeV. Parton-shower corrections are at the percent level in the m j1j2 -tail, while below 2 TeV their effect is more sizeable and negative, reaching 20-30% around 500 GeV. Also the NLO EW corrections yield an increasingly negative contribution with rising m j1j2 . Their impact, however, is rather mild and reaches only about −10% in the multi-TeV region.
In Fig. 5 we show the same m_{j1j2} distributions and theoretical predictions of Fig. 4 in the presence of the dynamic veto of Eq. (20) against a third jet. At LO QCD, where only two jets are present, the veto has no effect, while the NLO QCD and NLOPS QCD predictions are strongly reduced. The maximal effect is observed at m_{j1j2} = 500 GeV, where the veto of Eq. (20) corresponds to p_{T,cut} = 25 GeV, and the NLO QCD cross section is suppressed by a factor four. Above 500 GeV the value of p_{T,cut} grows linearly with m_{j1j2}, and the effect of the veto on the cross section becomes less important. In spite of the large NLO QCD corrections up to 1-2 TeV, the small difference between NLO and NLOPS predictions suggests the absence of large higher-order effects beyond NLO. Moreover, we observe that the pattern of suppression with respect to LO and also the NLOPS corrections with respect to NLO QCD are highly universal between Z + 2 jet and W + 2 jet production. The NLO EW corrections are almost identical to those in the inclusive selection.
EW production
Numerical results for EW V +2 jet production including QCD and EW corrections are shown in Figs. 6-11. We remind the reader that here we present the first complete computation of the QCD corrections to the EW production modes, i.e. without resorting to the VBF approximation, and also the first computation of the EW corrections to the EW modes.
In Fig. 6 differential predictions in the transverse momentum of the (reconstructed) vector bosons, p_{T,V}, are shown. We observe that the NLO QCD corrections increase the LO EW cross section by about 20%, showing hardly any p_{T,V} dependence. QCD scale uncertainties at LO are at 10% for small p_{T,V} and increase up to 20% in the tail of the p_{T,V} distribution. The QCD scale uncertainties at NLO QCD are only at the level of a few percent and decrease to negligible levels in the tail. This is consistent with the computation of NLO QCD corrections for the V + 2 jet processes in the VBF approximation, where residual scale uncertainties are at the 2% level [54]. Here we note that, given the rather large size of the NLO QCD corrections, such small scale uncertainties cannot be regarded as a reliable estimate of unknown higher-order effects. In the p_{T,V} distribution the EW corrections display a typical behaviour induced by the dominance of EW Sudakov logarithms. At 1 TeV the EW corrections reduce the NLO QCD cross section by 40-50%, with a spread of about 10% between the additive and the multiplicative combinations. Both QCD and EW corrections are highly correlated between the two considered processes, i.e. the relative impact of these corrections is almost identical. Higher-order QCD and EW corrections to the transverse momentum distribution of the hardest jet, p_{T,j1}, are shown in Fig. 7. Here the QCD corrections are largest at small p_{T,j1} and decrease in the tail. For p_{T,j1} above a few hundred GeV NLO QCD corrections drop below 10%. The NLO EW corrections increase logarithmically at large p_{T,j1} and reach −40% at 1 TeV. Due to the smallness of the higher-order QCD corrections in the tail, differences between additive and multiplicative combinations are negligible. Again a very high degree of correlation of the higher-order corrections is observed between the two processes.
In Figs. 8 and 9 we turn to the distribution in the invariant mass of the two leading jets, m_{j1j2}, defined inclusively and with an additional dynamic veto on central jet activity as introduced in Sect. 3.2, respectively. These distributions are crucial for background estimations in invisible-Higgs searches. For the jet-inclusive distributions higher-order QCD and EW corrections are highly correlated between the two considered processes, with differences at the 5% level for the QCD corrections at small m_{j1j2}. At LO QCD, scale uncertainties increase with m_{j1j2} and reach 20-30% in the multi-TeV range. At NLO QCD, scale uncertainties are reduced to the 1% level all the way up to the multi-TeV regime. Overall, the NLO QCD corrections have a marked impact on the shape of the m_{j1j2} distribution, ranging from +70% at small m_{j1j2} to about +5% above 2 TeV. At the same time, NLO EW corrections are negative and increase towards the m_{j1j2} tail. However, they remain smaller compared to the corresponding corrections in p_{T,V} or p_{T,j1}. This is due to the fact that, at very large m_{j1j2}, the Mandelstam invariants t̂ and û are much smaller as compared to ŝ ∼ m_{j1j2}². As a consequence the double Sudakov logarithms ln²(|r̂|/M_W²) with r̂ = t̂, û are significantly suppressed with respect to ln²(ŝ/M_W²). At m_{j1j2} = 5 TeV the EW corrections amount to about 20%, and differences between an additive and a multiplicative combination of QCD and EW corrections remain at the 1% level. The dynamic central jet veto has a marked impact on the NLO QCD corrections, in particular in the small m_{j1j2} region. Here, the corrections are reduced to about +20% for Z + 2 jet production, and turn negative, to about −20%, for W + 2 jet production. The jet veto has a much smaller effect in the TeV regime. Here the QCD corrections for both Z(νν) and W±(ℓ±ν) production are at the percent level only. Unsurprisingly, the EW corrections are hardly affected by the central jet veto.

In Fig. 10 we plot the differential distribution in the azimuthal separation of the two hardest jets, ∆φ_{j1j2}. In this observable EW corrections are at the 10% level with hardly any variation across the ∆φ_{j1j2} range. QCD corrections on the other hand show a mild increase towards smaller ∆φ_{j1j2}. Interestingly, in this region the QCD corrections also show a non-universality between the two considered processes at the 10% level. This non-universality can be attributed to the following two mechanisms. The first one is single-top production, which enters only pp → W±(ℓ±ν) + 2 jets in the form of s- and t-channel contributions at LO and also associated Wt production at NLO QCD (see Sect. 2). The second mechanism consists of s-channel contributions that correspond to diboson subprocesses of type qq̄ → VV, where one of the weak bosons decays into two jets. At m_{j1j2} > M_{W,Z}, such diboson channels can contribute through hard initial-state radiation, which plays the role of one of the two hardest jets. Their non-universality is due to the fact that the QCD corrections to W±Z production are much larger as compared to W⁺W⁻ and ZZ production. Both mechanisms tend to enhance W±(ℓ±ν) + 2 jets production at small ∆φ_{j1j2}, while they tend to be suppressed at larger ∆φ_{j1j2}. The impact of these mild non-universalities is discussed in more detail in Sect. 4.3.

Finally, in Fig. 11 we consider the distribution in the rapidity separation of the two hardest jets, ∆η_{j1j2}. Also in this case the EW corrections are almost constant and at the level of 10%. For the Z(νν) channel also the QCD corrections are constant and at the level of 20%. For the W±(ℓ±ν) channel the QCD corrections increase up to 30% for small rapidity separation. In actual analyses for VBF-V production and invisible-Higgs searches, often tighter requirements on ∆η_{j1j2} than the ∆η_{j1j2} > 2.5 considered here are imposed. This will further increase the level of correlation between the W±(ℓ±ν) and Z(νν) channels. Correlation uncertainties derived here and then applied with tighter ∆η_{j1j2} requirements can thus be seen as conservative.

Figure 8: Distribution in the invariant mass of the two hardest jets, m_{j1j2}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right). Curves and bands as in Fig. 6.

Figure 9: Distribution in the invariant mass of the two hardest jets, m_{j1j2}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right) subject to the dynamic third-jet veto of Eq. (20). Curves and bands as in Fig. 6.

Figure 10: Distribution in the azimuthal separation of the two hardest jets, ∆φ_{j1j2}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right). Curves and bands as in Fig. 6.

Figure 11: Distribution in the rapidity separation of the two hardest jets, ∆η_{j1j2}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right). Curves and bands as in Fig. 6.
Precise predictions and uncertainties for V + 2 jet ratios
In this section we present predictions and theoretical uncertainties for the ratios of Eq. (13) between the m_{j1j2} distributions in pp → Z(νν) + 2 jets and pp → W±(ℓ±ν) + 2 jets. Numerical predictions for these process ratios and the related uncertainties can be found at [74], where also additional ratios between pp → Z(ℓ⁺ℓ⁻) + 2 jets and pp → W±(ℓ±ν) + 2 jets distributions are available. The Z/W ratios are the key ingredients of the reweighting procedure defined in Eq. (16). As nominal theory predictions we take the fixed-order NLO QCD×EW ratios
\[
R^{Z/W,M}_{\mathrm{TH}}(x) := R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)
= \frac{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{Z,M}_{\mathrm{NLO\,QCD\times EW}}}{\frac{\mathrm{d}}{\mathrm{d}x}\sigma^{W,M}_{\mathrm{NLO\,QCD\times EW}}}\,, \tag{30}
\]
where x = m_{j1j2} is the dijet invariant mass. To describe theory uncertainties we introduce nuisance parameters that act directly on the Z/W ratios, combining the uncertainties of the individual processes and their correlations. With this approach, our complete predictions with uncertainties read
\[
R^{Z/W,M}_{\mathrm{TH}}\big(x,\vec{\varepsilon}^{\,Z/W,M}_{\mathrm{TH}}\big)
:= R^{Z/W,M}_{\mathrm{TH}}(x) + \sum_i \varepsilon^{Z/W,M}_{i,\mathrm{TH}}\,\delta R^{Z/W,M}_{i,\mathrm{TH}}(x)\,, \tag{31}
\]
where the nuisance parameters ε^{Z/W,M}_{i,TH} and the shift functions δR^{Z/W,M}_{i,TH}(x) embody the various sources of theory uncertainty, as defined in the following. Note that for the two V + 2 jet production modes (M), i.e. for QCD and EW production, we define two independent ratios and uncertainties.
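In practice, Eq. (31) is a linear nuisance model on binned ratios; a minimal numpy sketch (names and toy numbers are illustrative) is:

\begin{verbatim}
import numpy as np

def ratio_with_nuisances(r_nominal, delta_terms, eps):
    """Parametrised Z/W ratio of Eq. (31): nominal binned ratio plus a
    linear sum of nuisance shifts, each delta_terms[i] one value per bin
    and eps[i] in [-1, 1] as in Eq. (11)."""
    r = np.asarray(r_nominal, dtype=float).copy()
    for d, e in zip(delta_terms, eps):
        r += e * np.asarray(d, dtype=float)
    return r

# toy example: nominal ratio with two uncertainty sources varied up
r0 = np.array([0.24, 0.24, 0.25])
deltas = [np.array([0.005, 0.004, 0.006]),   # e.g. delta R_QCD
          np.array([0.002, 0.002, 0.003])]   # e.g. delta R_PS
r_up = ratio_with_nuisances(r0, deltas, eps=[+1.0, +1.0])
\end{verbatim}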
To account for unknown QCD corrections beyond NLO in a conservative way, we avoid using scale uncertainties and, following Ref. [19], we handle the difference between LO QCD and NLO QCD ratios as uncertainty. More precisely, we consider the effect of switching off NLO QCD corrections in our nominal NLO QCD×EW predictions,
\[
\delta R^{Z/W,M}_{\mathrm{QCD}}(x) := R^{Z/W,M}_{\mathrm{NLO\,EW}}(x) - R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\,. \tag{32}
\]
While this approach effectively downgrades the known NLO QCD corrections to an uncertainty, the bulk of the QCD corrections cancel in the ratio, and the uncertainty δR^{Z/W,M}_{QCD} remains quite small. For parton showering and matching at NLO we apply the uncertainty
\[
\delta R^{Z/W,M}_{\mathrm{PS}}(x) := R^{Z/W,M}_{\mathrm{NLOPS\,QCD\times EW}}(x) - R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\,, \tag{33}
\]
i.e. the full difference between fixed-order NLO and NLOPS predictions.
To describe the effect of unknown mixed QCD-EW uncertainties beyond NLO we introduce the uncertainty
\[
\delta R^{Z/W,M}_{\mathrm{mix}}(x) := R^{Z/W,M}_{\mathrm{NLO\,QCD+EW}}(x) - R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\,, \tag{34}
\]
which corresponds to the difference between the additive and multiplicative combination of NLO QCD and NLO EW corrections. Also this prescription can be regarded as a conservative estimate since the multiplicative combination is expected to provide a correct description of the dominant mixed QCD-EW effects beyond NLO.
In case a jet veto is applied in the experimental analysis, also the following uncertainty should be considered,
\[
\delta R^{Z/W,M}_{\mathrm{veto}}(x) := R^{Z/W,M}_{\mathrm{TH,veto}}(x) - R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\,, \tag{35}
\]
where the ratio R^{Z/W,M}_{TH,veto} is computed in the presence of the "theoretical" veto detailed in Eq. (20). Note that the effect of the veto cancels to a large extent in the ratio. Thus the prescription of Eq. (20) does not need to be identical to the veto that is employed in the experimental analysis.
In order to account for the non-negligible ∆φ_{j1j2} dependence of QCD higher-order effects in the EW production modes (see Sect. 4.3.2), we split the phase space into the three ∆φ_{j1j2} bins Φ_i defined in Eq. (21), and we define the uncertainty
\[
\delta R^{Z/W,M}_{\Delta\phi}(x) := \sqrt{\,\sum_{i=1}^{3}\Big[\,R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\big|_{\Phi_i} - R^{Z/W,M}_{\mathrm{NLO\,QCD\times EW}}(x)\Big]^2}\,, \tag{36}
\]
where the first ratio between brackets is restricted to the Φ_i bin. In other words, the variance of the ratio of Eq. (30) in ∆φ_{j1j2} space is taken as the uncertainty of the one-dimensional reweighting procedure.
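Given binned ratios computed in the various approximations, the shift functions of Eqs. (32)-(36) reduce to simple array differences; a possible numpy transcription with our own labels is:

\begin{verbatim}
import numpy as np

def theory_uncertainty_shifts(R, R_phi_bins):
    """Shift functions of Eqs. (32)-(36) from binned Z/W ratios.

    `R` maps a label to the binned ratio computed in that approximation
    (all arrays share the m_j1j2 binning); `R_phi_bins` holds the nominal
    ratio restricted to the three Delta-phi regions of Eq. (21). The keys
    are our own labels, not an official interface.
    """
    nom = R["NLO_QCDxEW"]
    return {
        "QCD":  R["NLO_EW"] - nom,            # Eq. (32)
        "PS":   R["NLOPS_QCDxEW"] - nom,      # Eq. (33)
        "mix":  R["NLO_QCD+EW"] - nom,        # Eq. (34)
        "veto": R["NLO_QCDxEW_veto"] - nom,   # Eq. (35), R_TH,veto
        # Eq. (36): spread of the nominal ratio over the Delta-phi bins
        "dphi": np.sqrt(sum((Ri - nom) ** 2 for Ri in R_phi_bins)),
    }
\end{verbatim}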
Finally, also PDF uncertainties should be considered. In this case, PDF variations in the numerator and denominator of the Z/W ratio should be correlated. In the following subsections we present predictions for the ratios defined in Eq. (30) and for the various ingredients that enter the theoretical uncertainties of Eqs. (31)-(36).
Z/W ratios for the QCD production mode
As observed in Sect. 4.2.2, the QCD and EW corrections to the QCD production modes of the individual Z + 2 jet and W + 2 jet processes are strongly correlated. This is confirmed by the smallness of the corrections in the Z/W ratios shown in Figs. 12-13. The left and right plots of Fig. 12 present the ratio of m_{j1j2} distributions with the inclusive selection cuts, defined in Eq. (18), and in the presence of the additional jet veto, defined in Eq. (20). The value of the ratio is around 0.24 and remains almost constant in the considered m_{j1j2} range from 500 to 5000 GeV. The size of the QCD and PS corrections that enter the uncertainties of Eqs. (32)-(33) is shown in the middle panels. In the inclusive selection NLO QCD corrections to the ratio remain below 4-6% in the entire m_{j1j2} range, and PS corrections remain below 6%. When the jet veto is applied, the Z/W ratio remains stable at the percent level. The jet-veto uncertainty of Eq. (35) is thus quite small. Also QCD and PS corrections to the ratio are largely insensitive to the jet veto.
As shown in the lowest frames of Fig. 12, the EW corrections to the QCD Z/W ratio are around 2% and almost independent of m_{j1j2}, both for the inclusive selection and including a jet veto. Due to the strong cancellation of QCD and EW corrections in the ratio, the difference between the additive and multiplicative NLO QCD-EW combinations, which enters the uncertainty of Eq. (34), is completely negligible.
In Fig. 13 we present the QCD Z/W ratio for the distributions in ∆φ_{j1j2} without applying a jet veto. Note that, as a result of the acceptance cuts of Eq. (18), these ∆φ_{j1j2} distributions are dominated by events with 500 GeV < m_{j1j2} < 1500 GeV. The results thus feature a very small dependence on ∆φ_{j1j2}, both for the nominal ratio and for the individual corrections. This observation supports the one-dimensional m_{j1j2} reweighting procedure proposed in Sect. 3, and the ∆φ_{j1j2} uncertainty of Eq. (36) can be neglected for the QCD production modes.
Z/W ratios for the EW production mode
Higher-order predictions for the ratios of distributions in EW Z + 2 jet and EW W + 2 jet production are presented in Figs. 14-16. The left and right plots of Fig. 14 show the ratio of m_{j1j2} distributions with inclusive selection cuts and in the presence of the additional jet veto. The EW Z/W ratio is around 0.15 and remains rather stable when m_{j1j2} grows from 500 GeV to 5 TeV.
In the absence of the jet veto, as expected from the findings of Sect. 4.2.3, the ratio is quite stable with respect to higher-order corrections. In particular, for m_{j1j2} > 1 TeV, which corresponds to the most relevant region for invisible-Higgs searches, QCD corrections are at the percent level. Below 1 TeV the QCD corrections tend to become more significant, reaching +10% at m_{j1j2} = 500 GeV. The impact of EW corrections on the inclusive m_{j1j2} ratio does not exceed 1% in the plotted m_{j1j2} range, and the mixed QCD-EW uncertainties of Eq. (34) are negligible.
In the presence of the jet veto, QCD corrections become rather sizeable below 1 TeV and reach the level of +50% at 500 GeV. As a consequence, also mixed QCD-EW uncertainties are somewhat enhanced. This non-universal behaviour of the QCD corrections leads to an enhancement of the QCD uncertainty, as defined in Eq. (32). However, we note that the non-universality of the EW production modes at m_{j1j2} < 1 TeV tends to be washed out by the dominance of the QCD production modes, where all correction effects feature a high degree of universality. Moreover, we point out that the prescription of Eq. (32) is very conservative and may be replaced by a more realistic estimate if QCD uncertainties play a critical role.
Together with their non-universal behaviour at m_{j1j2} < 1 TeV, the QCD corrections to the EW Z/W ratio also feature a nontrivial dependence on ∆φ_{j1j2}. This is illustrated in Fig. 15, where we plot the ratio of the ∆φ_{j1j2} distributions for EW Z + 2 jet and EW W + 2 jet production. The ∆φ_{j1j2} dependence of this ratio features variations at the level of 20% at LO and 15% at NLO QCD×EW. The EW corrections are very small, and their dependence on ∆φ_{j1j2} does not exceed 1%. In contrast, the impact of QCD corrections on the ratio ranges from −10% at small ∆φ_{j1j2} to +10% around ∆φ_{j1j2} = 2.5.

Figure 13: Ratios of the QCD pp → Z(νν) + 2 jets and QCD pp → W±(ℓ±ν) + 2 jets distributions in ∆φ_{j1j2}. The same higher-order predictions and conventions as in Fig. 12 are used.
In order to account for this ∆φ_{j1j2} dependence in the reweighting of the one-dimensional m_{j1j2} distribution we split the phase space into the three ∆φ_{j1j2} bins defined in Eq. (21). The ratios of m_{j1j2} distributions for EW Z + 2 jet and EW W + 2 jet production in these three ∆φ_{j1j2} bins are shown in Fig. 16. For m_{j1j2} > 2 TeV, in all three ∆φ_{j1j2} bins we observe very small QCD corrections at the one-percent level, consistently with the behaviour of the inclusive m_{j1j2} distribution in Fig. 14. This is due both to the moderate size of the QCD corrections to the individual EW Z + 2 jet and W + 2 jet cross sections (see Fig. 8) and to their strong correlation. In contrast, for 500 GeV < m_{j1j2} < 2 TeV the size of the QCD corrections and their dependence on ∆φ_{j1j2} are quite significant. With decreasing m_{j1j2} the impact of the QCD corrections can grow up to the level of +10% or −20%, depending on ∆φ_{j1j2}. Also the nominal NLO QCD×EW ratio features a non-negligible dependence on ∆φ_{j1j2}. In order to account for the uncertainties associated with this nontrivial m_{j1j2} and ∆φ_{j1j2} dependence, the higher-order QCD uncertainty for the inclusive m_{j1j2} distribution, defined in Eq. (32), is complemented by the additional uncertainty of Eq. (36), which accounts for the variation of the nominal ratio in the different ∆φ_{j1j2} bins.
Conclusions
The precise control of SM backgrounds is key in order to harness the full potential of invisible-Higgs searches in the VBF production mode at the LHC. Irreducible background contributions to the corresponding signature of missing transverse energy plus two jets with high invariant mass arise from the SM processes pp → Z(νν) + 2 jets and pp → W±(ℓ±ν) + 2 jets, where the lepton is outside of the acceptance region. Such backgrounds can be predicted with rather good theoretical accuracy in perturbation theory, while the residual theoretical uncertainties can be further reduced with a data-driven approach. In particular, the irreducible pp → Z(νν) + 2 jets background can be constrained by means of accurate data for pp → W±(ℓ±ν) + 2 jets with a visible lepton, in combination with precise theoretical predictions for the correlation between Z + 2 jet and W + 2 jet production.

Figure 14: Ratios of the EW pp → Z(νν) + 2 jets and EW pp → W±(ℓ±ν) + 2 jets distributions in m_{j1j2}, inclusive (left) and in the presence of the dynamic veto of Eq. (20) against a third jet (right). Same higher-order predictions and conventions as in Fig. 12, but without matching to the parton shower.
In this article we have presented parton-level predictions including complete NLO QCD and EW corrections for all relevant V + 2 jet processes in the SM. These reactions involve various perturbative contributions, which can be split into QCD modes, EW modes, and interference contributions. For the first time we have consistently computed all four perturbative contributions that arise at NLO QCD+EW without applying any approximations. Based on the observation that the LO interference between the QCD and EW modes is very small, the NLO contributions of O(α_s³ α²) and O(α_s² α³) can be regarded as QCD and EW corrections to the QCD production mode, while O(α_s α⁴) and O(α⁵) correspond to QCD and EW corrections to the EW production mode. In the signal region for invisible-Higgs searches, i.e. at large dijet invariant mass, m_{j1j2}, the EW V + 2 jet production mode is dominated by VBF topologies, but our calculations account for all possible V + 2 jet topologies, including contributions that correspond to diboson production with semi-leptonic decays, as well as single-top production and decay in the s-, t- and Wt-channels.
The QCD corrections to the EW modes are small at large m_{j1j2}, while the EW corrections can reach up to −20%. Both for the QCD and the EW modes, we have found a very high degree of correlation between the higher-order QCD and EW corrections to pp → Z(νν) + 2 jets and pp → W±(ℓ±ν) + 2 jets. As a result of this strong correlation, higher-order corrections and uncertainties cancel to a large extent in the ratio of pp → Z(νν) + 2 jets and pp → W±(ℓ±ν) + 2 jets cross sections. Based on this observation we have proposed to exploit precise theoretical predictions for this Z/W ratio in combination with data in order to control the V + 2 jet backgrounds to invisible-Higgs searches with few-percent precision. To this end we have provided an explicit recipe, based on the reweighting of m_{j1j2} distributions, which can be applied to the Monte Carlo samples that are used in the experimental analyses. This reweighting is implemented at the level of the QCD and EW Z/W ratios, such as to exploit the very small theoretical uncertainties in these observables.
In the phase space relevant for invisible-Higgs searches, at m_{j1j2} > 1 TeV, the correlation of higher-order corrections in Z + 2 jet and W + 2 jet production turns out to be particularly strong, and theoretical uncertainties in the Z/W ratios are as small as a few percent. Moderate decorrelation effects have been observed at smaller m_{j1j2} in the ratio of the EW production modes. Such effects can reach up to 10% in the ratio. They are driven by non-universal QCD corrections to the EW V + 2 jet production modes, and they originate from semileptonic diboson topologies and single-top contributions that are not included in the naive VBF approximation. The Z/W correlation can in principle be further enhanced by separating these non-universal contributions. We leave this to future investigation. Based on the predictions and uncertainties derived in this article, significant sensitivity improvements can be expected in searches for invisible Higgs decays. In fact, our predictions and the proposed reweighting procedure have already been applied in a recent ATLAS search [75], yielding an upper limit of 14.5% on the invisible branching ratio of the Higgs at 95% confidence level. The approach and the theoretical predictions presented in this paper can also be applied to measurements of V + 2 jet production via VBF in order to derive constraints on effective field theories beyond the Standard Model.

Figure 15: Ratios of the EW pp → Z(νν) + 2 jets and EW pp → W±(ℓ±ν) + 2 jets distributions in ∆φ_{j1j2} without jet veto. Same higher-order predictions and conventions as in Fig. 12.

Figure 16: Ratios of the EW pp → Z(νν) + 2 jets and EW pp → W±(ℓ±ν) + 2 jets distributions in m_{j1j2} without jet veto in the regions ∆φ_{j1j2} < 1 (left), 1 < ∆φ_{j1j2} < 2 (middle), and ∆φ_{j1j2} > 2 (right). Same higher-order predictions and conventions as in Fig. 12, but without matching to the parton shower.
Figure 2: Example LO diagrams at O(g_s² e²) (a,b), O(g_s e³) (c,d), and O(e⁴) (e-l). The square of the O(g_s² e²) diagrams yields the O(α_s² α²) QCD LO amplitude, while the square of the O(e⁴) diagrams yields the O(α⁴) EW LO amplitude. The O(α_s α³) perturbative contribution emerges as the square of O(g_s e³) diagrams, or due to the interference between O(g_s² e²) and O(e⁴) diagrams.
and M ∈ {QCD, EW} identifies the corresponding production mode. The NLO QCD and NLO EW corrections δσ^{V,M}_{NLO QCD} and δσ^{V,M}_{NLO EW} correspond to the perturbative contributions of O(α_s³ α²) and O(α_s² α³) for M = QCD, and of O(α_s α⁴) and O(α⁵) for M = EW. For later convenience we also define pure NLO QCD predictions without EW corrections.
Figure 3: Distribution in the invariant mass of the two hardest jets, m_{j1j2}, for pp → Z(νν) + 2 jets (left) and pp → W±(ℓ±ν) + 2 jets (right) at LO. The upper frame shows absolute predictions for the QCD (blue), EW (red), and interference (green) production modes. For pp → W±(ℓ±ν) + 2 jets we also show the LO pp → tj contributions (orange), which belong to the EW production mode and include t-channel and s-channel single-top production. The relative importance of the various contributions normalised to the EW production mode is displayed in the lower frame. The bands correspond to QCD scale variations, and in the case of ratios only the numerator is varied.
Figure 4: Distribution in the invariant mass of the two hardest jets, m_{j1j2}, for QCD pp → Z(νν) + 2 jets (left) and QCD pp → W±(ℓ±ν) + 2 jets (right). The upper frame displays absolute LO QCD (blue), NLO QCD (green), and NLO+PS QCD (magenta) predictions, and ratios with respect to NLO QCD are presented in the central panel. The bands correspond to QCD scale variations, and in the case of ratios only the numerator is varied. The lower panel shows the relative impact of NLO QCD+EW (black) and NLO QCD×EW (red) predictions normalised to NLO QCD.
Figure 5: Distribution in the invariant mass of the two hardest jets, m_{j1j2}, for QCD pp → Z(νν) + 2 jets (left) and QCD pp → W±(ℓ±ν) + 2 jets (right) subject to the dynamic veto of Eq. (20) against a third jet. Curves and bands as in Fig. 4, but without NLO QCD+EW predictions.
Figure 6: Distribution in the reconstructed transverse momentum of the off-shell vector boson, p_{T,V}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right). Absolute EW LO (blue), NLO QCD (green), NLO QCD+EW (black) and NLO QCD×EW (red) predictions are shown in the upper panel. Here NLO QCD and NLO EW corrections should be understood as O(α_s) and O(α) effects with respect to the EW LO. The same predictions normalised to NLO QCD are shown in the lower panel. The bands correspond to QCD scale variations, and in the case of ratios only the numerator is varied.
Figure 12: Ratios of the QCD pp → Z(νν) + 2 jets and QCD pp → W±(ℓ±ν) + 2 jets distributions in m_{j1j2}, inclusive (left) and in the presence of the dynamic veto of Eq. (20) against a third jet (right). The upper panels compare absolute predictions at LO (blue), NLO QCD (green), NLOPS QCD (magenta) and NLO QCD×EW (red) accuracy. The impact of QCD corrections is illustrated in the middle panel, which shows the relative variation with respect to the nominal NLO QCD×EW prediction (red) when switching on the parton shower (NLOPS QCD×EW, purple) or switching off QCD corrections (NLO EW, orange). Similarly, the lowest panel shows the relative effect of switching off EW corrections (NLO QCD, green) or replacing the multiplicative by the additive combination of QCD and EW corrections (NLO QCD+EW, black).
Figure 7: Distribution in the transverse momentum of the hardest jet, p_{T,j1}, for EW pp → Z(νν) + 2 jets (left) and EW pp → W±(ℓ±ν) + 2 jets (right). Curves and bands as in Fig. 6.
¹ This is not a necessary prerequisite, i.e. the theoretical cuts θ^V_{cuts}(y) may also be more exclusive than the experimental cuts. The crucial prerequisite is that the MC samples that are going to be reweighted with Eq. (9) and applied to the experimental analysis should extend over the full phase-space regions that are covered by the theoretical calculations and by the experimental analyses.
² This scale choice corresponds to the scale setter DH_Tp2 in Sherpa.
AcknowledgmentsWe thank Lorenzo Mai for useful discussions and cross-checks. We also thank Christian Gütschow for useful discussions and comments on the manuscript.
A. Denner, S. Dittmaier, and A. Mück, PROPHECY4F 3.0: A Monte Carlo program for Higgs-boson decays into four-fermion final states in and beyond the Standard Model, Comput. Phys. Commun. 254 (2020) 107336, [arXiv:1912.02010].
R. E. Shrock and M. Suzuki, Invisible Decays of Higgs Bosons, Phys. Lett. B 110 (1982) 250.
D. Choudhury and D. P. Roy, Signatures of an invisibly decaying Higgs particle at LHC, Phys. Lett. B 322 (1994) 368-373, [hep-ph/9312347].
D. Dominici and J. F. Gunion, Invisible Higgs Decays from Higgs Graviscalar Mixing, Phys. Rev. D 80 (2009) 115006, [arXiv:0902.1512].
G. Belanger, F. Boudjema, A. Cottrant, R. M. Godbole, and A. Semenov, The MSSM invisible Higgs in the light of dark matter and g-2, Phys. Lett. B 519 (2001) 93-102, [hep-ph/0106275].
A. Djouadi, O. Lebedev, Y. Mambrini, and J. Quevillon, Implications of LHC searches for Higgs-portal dark matter, Phys. Lett. B 709 (2012) 65-69, [arXiv:1112.3299].
S. Baek, P. Ko, W.-I. Park, and E. Senaha, Higgs Portal Vector Dark Matter: Revisited, JHEP 05 (2013) 036, [arXiv:1212.2131].
L. Calibbi, J. M. Lindert, T. Ota, and Y. Takanishi, Cornering light Neutralino Dark Matter at the LHC, JHEP 10 (2013) 132, [arXiv:1307.4119].
A. Beniwal, F. Rajec, C. Savage, P. Scott, C. Weniger, M. White, and A. G. Williams, Combined analysis of effective Higgs portal dark matter models, Phys. Rev. D 93 (2016), no. 11 115016, [arXiv:1512.06458].
A. Butter, T. Plehn, M. Rauch, D. Zerwas, S. Henrot-Versillé, and R. Lafaye, Invisible Higgs Decays to Hooperons in the NMSSM, Phys. Rev. D 93 (2016) 015011, [arXiv:1507.02288].
S. Argyropoulos, O. Brandt, and U. Haisch, Collider Searches for Dark Matter through the Higgs Lens, arXiv:2109.13597.
ATLAS Collaboration, M. Aaboud et al., Search for an invisibly decaying Higgs boson or dark matter candidates produced in association with a Z boson in pp collisions at √s = 13 TeV with the ATLAS detector, Phys. Lett. B 776 (2018) 318-337, [arXiv:1708.09624].
ATLAS Collaboration, M. Aaboud et al., Search for invisible Higgs boson decays in vector boson fusion at √s = 13 TeV with the ATLAS detector, Phys. Lett. B 793 (2019) 499-519, [arXiv:1809.06682].
ATLAS Collaboration, M. Aaboud et al., Search for dark matter in events with a hadronically decaying vector boson and missing transverse momentum in pp collisions at √s = 13 TeV with the ATLAS detector, JHEP 10 (2018) 180, [arXiv:1807.11471].
ATLAS Collaboration, M. Aaboud et al., Combination of searches for invisible Higgs boson decays with the ATLAS experiment, Phys. Rev. Lett. 122 (2019), no. 23 231801, [arXiv:1904.05105].
CMS Collaboration, S. Chatrchyan et al., Search for invisible decays of Higgs bosons in the vector boson fusion and associated ZH production modes, Eur. Phys. J. C 74 (2014) 2980, [arXiv:1404.1344].
CMS Collaboration, V. Khachatryan et al., Searches for invisible decays of the Higgs boson in pp collisions at √s = 7, 8, and 13 TeV, JHEP 02 (2017) 135, [arXiv:1610.09218].
CMS Collaboration, A. M. Sirunyan et al., Search for invisible decays of a Higgs boson produced through vector boson fusion in proton-proton collisions at √s = 13 TeV, Phys. Lett. B 793 (2019) 520-551, [arXiv:1809.05937].
J. M. Lindert et al., Precise predictions for V + jets dark matter backgrounds, Eur. Phys. J. C 77 (2017), no. 12 829, [arXiv:1705.04664].
ATLAS Collaboration, G. Aad et al., Search for new phenomena in events with an energetic jet and missing transverse momentum in pp collisions at √s = 13 TeV with the ATLAS detector, Phys. Rev. D 103 (2021), no. 11 112006, [arXiv:2102.10874].
CMS Collaboration, A. Tumasyan et al., Search for new particles in events with energetic jets and large missing transverse momentum in proton-proton collisions at √s = 13 TeV, JHEP 11 (2021) 153, [arXiv:2107.13021].
ATLAS Collaboration, G. Aad et al., Measurement of the electroweak production of dijets in association with a Z-boson and distributions sensitive to vector boson fusion in proton-proton collisions at √s = 8 TeV using the ATLAS detector, JHEP 04 (2014) 031, [arXiv:1401.7610].
CMS Collaboration, V. Khachatryan et al., Measurement of electroweak production of a W boson and two forward jets in proton-proton collisions at √s = 8 TeV, JHEP 11 (2016) 147, [arXiv:1607.06975].
ATLAS Collaboration, M. Aaboud et al., Measurement of the cross-section for electroweak production of dijets in association with a Z boson in pp collisions at √s = 13 TeV with the ATLAS detector, Phys. Lett. B 775 (2017) 206-228, [arXiv:1709.10264].
ATLAS Collaboration, M. Aaboud et al., Measurements of electroweak Wjj production and constraints on anomalous gauge couplings with the ATLAS detector, Eur. Phys. J. C 77 (2017), no. 7 474, [arXiv:1703.04362].
CMS Collaboration, A. M. Sirunyan et al., Electroweak production of two jets in association with a Z boson in proton-proton collisions at √s = 13 TeV, Eur. Phys. J. C 78 (2018), no. 7 589, [arXiv:1712.09814].
J. M. Campbell and R. K. Ellis, Next-to-Leading Order Corrections to W + 2 Jet and Z + 2 Jet Production at Hadron Colliders, Phys. Rev. D 65 (2002) 113007, [hep-ph/0202176].
F. Febres Cordero, L. Reina, and D. Wackeroth, NLO QCD corrections to W boson production with a massive b-quark jet pair at the Tevatron p anti-p collider, Phys. Rev. D 74 (2006) 034007, [hep-ph/0606102].
J. M. Campbell, R. K. Ellis, F. Febres Cordero, F. Maltoni, L. Reina, D. Wackeroth, and S. Willenbrock, Associated Production of a W Boson and One b Jet, Phys. Rev. D 79 (2009) 034023, [arXiv:0809.3003].
C. F. Berger, Z. Bern, L. J. Dixon, F. Febres Cordero, D. Forde, T. Gleisberg, H. Ita, D. A. Kosower, and D. Maitre, Precise Predictions for W + 3 Jet Production at Hadron Colliders, Phys. Rev. Lett. 102 (2009) 222001, [arXiv:0902.2760].
R. K. Ellis, K. Melnikov, and G. Zanderighi, Generalized unitarity at work: first NLO QCD results for hadronic W + 3 jet production, JHEP 04 (2009) 077, [arXiv:0901.4101].
R. K. Ellis, K. Melnikov, and G. Zanderighi, W + 3 jet production at the Tevatron, Phys. Rev. D 80 (2009) 094002, [arXiv:0906.1445].
C. F. Berger, Z. Bern, L. J. Dixon, F. Febres Cordero, D. Forde, T. Gleisberg, H. Ita, D. A. Kosower, and D. Maitre, Next-to-Leading Order QCD Predictions for W + 3-Jet Distributions at Hadron Colliders, Phys. Rev. D 80 (2009) 074036, [arXiv:0907.1984].
C. F. Berger, Z. Bern, L. J. Dixon, F. Febres Cordero, D. Forde, T. Gleisberg, H. Ita, D. A. Kosower, and D. Maitre, Precise Predictions for W + 4 Jet Production at the Large Hadron Collider, Phys. Rev. Lett. 106 (2011) 092001, [arXiv:1009.2338].
Z. Bern, L. J. Dixon, F. Febres Cordero, S. Höche, H. Ita, D. A. Kosower, D. Maître, and K. J. Ozeren, Next-to-Leading Order W + 5-Jet Production at the LHC, Phys. Rev. D 88 (2013), no. 1 014025, [arXiv:1304.1253].
F. R. Anger, F. Febres Cordero, S. Höche, and D. Maître, Weak vector boson production with many jets at the LHC √s = 13 TeV, Phys. Rev. D 97 (2018), no. 9 096010, [arXiv:1712.08621].
S. Badger, H. B. Hartanto, and S. Zoia, Two-Loop QCD Corrections to Wbb̄ Production at Hadron Colliders, Phys. Rev. Lett. 127 (2021), no. 1 012001, [arXiv:2102.02516].
S. Abreu, F. F. Cordero, H. Ita, M. Klinkert, B. Page, and V. Sotnikov, Leading-Color Two-Loop Amplitudes for Four Partons and a W Boson in QCD, arXiv:2110.07541.
NLO corrections merged with parton showers for Z+2 jets production using the POWHEG method. E Re, arXiv:1204.5433JHEP. 1031E. Re, NLO corrections merged with parton showers for Z+2 jets production using the POWHEG method, JHEP 10 (2012) 031, [arXiv:1204.5433].
The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. J Alwall, R Frederix, S Frixione, V Hirschi, F Maltoni, O Mattelaer, H S Shao, T Stelzer, P Torrielli, M Zaro, arXiv:1405.0301JHEP. 0779J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 07 (2014) 079, [arXiv:1405.0301].
Event Generation with Sherpa 2.2. E Bothmann, Sherpa CollaborationarXiv:1905.09127SciPost Phys. 73Sherpa Collaboration, E. Bothmann et al., Event Generation with Sherpa 2.2, SciPost Phys. 7 (2019), no. 3 034, [arXiv:1905.09127].
A study of multi-jet production in association with an electroweak vector boson. R Frederix, S Frixione, A Papaefstathiou, S Prestel, P Torrielli, arXiv:1511.00847JHEP. 13102R. Frederix, S. Frixione, A. Papaefstathiou, S. Prestel, and P. Torrielli, A study of multi-jet production in association with an electroweak vector boson, JHEP 02 (2016) 131, [arXiv:1511.00847].
QCD matrix elements + parton showers: The NLO case. S Höche, F Krauss, M Schönherr, F Siegert, arXiv:1207.5030JHEP. 0427S. Höche, F. Krauss, M. Schönherr, and F. Siegert, QCD matrix elements + parton showers: The NLO case, JHEP 04 (2013) 027, [arXiv:1207.5030].
NLO QCD matrix elements + parton showers in e + e − -> hadrons. T Gehrmann, S Höche, F Krauss, M Schönherr, F Siegert, arXiv:1207.5031JHEP. 01T. Gehrmann, S. Höche, F. Krauss, M. Schönherr, and F. Siegert, NLO QCD matrix elements + parton showers in e + e − -> hadrons, JHEP 01 (2013) 144, [arXiv:1207.5031].
Merging Multi-leg NLO Matrix Elements with Parton Showers. L Lönnblad, S Prestel, arXiv:1211.7278JHEP. 16603L. Lönnblad and S. Prestel, Merging Multi-leg NLO Matrix Elements with Parton Showers, JHEP 03 (2013) 166, [arXiv:1211.7278].
Merging meets matching in MC@NLO. R Frederix, S Frixione, arXiv:1209.6215JHEP. 1261R. Frederix and S. Frixione, Merging meets matching in MC@NLO, JHEP 12 (2012) 061, [arXiv:1209.6215].
W Plus Multiple Jets at the LHC with High Energy Jets. J R Andersen, T Hapola, J M Smillie, arXiv:1206.6763JHEP. 04709J. R. Andersen, T. Hapola, and J. M. Smillie, W Plus Multiple Jets at the LHC with High Energy Jets, JHEP 09 (2012) 047, [arXiv:1206.6763].
Z/γ plus multiple hard jets in high energy collisions. J R Andersen, J J Medley, J M Smillie, arXiv:1603.05460JHEP. 13605J. R. Andersen, J. J. Medley, and J. M. Smillie, Z/γ plus multiple hard jets in high energy collisions, JHEP 05 (2016) 136, [arXiv:1603.05460].
Combined subleading high-energy logarithms and NLO accuracy for W production in association with multiple jets. J R Andersen, J A Black, H M Brooks, E P Byrne, A Maier, J M Smillie, arXiv:2012.10310JHEP. 10504J. R. Andersen, J. A. Black, H. M. Brooks, E. P. Byrne, A. Maier, and J. M. Smillie, Combined subleading high-energy logarithms and NLO accuracy for W production in association with multiple jets, JHEP 04 (2021) 105, [arXiv:2012.10310].
Electroweak corrections to lepton pair production in association with two hard jets at the LHC. A Denner, L Hofer, A Scharf, S Uccirati, arXiv:1411.0916JHEP. 0194A. Denner, L. Hofer, A. Scharf, and S. Uccirati, Electroweak corrections to lepton pair production in association with two hard jets at the LHC, JHEP 01 (2015) 094, [arXiv:1411.0916].
NLO electroweak automation and precise predictions for W+multijet production at the LHC. S Kallweit, J M Lindert, P Maierhöfer, S Pozzorini, M Schönherr, arXiv:1412.5157JHEP. 0412S. Kallweit, J. M. Lindert, P. Maierhöfer, S. Pozzorini, and M. Schönherr, NLO electroweak automation and precise predictions for W+multijet production at the LHC, JHEP 04 (2015) 012, [arXiv:1412.5157].
NLO QCD+EW predictions for V + jets including off-shell vector-boson decays and multijet merging. S Kallweit, J M Lindert, P Maierhöfer, S Pozzorini, M Schönherr, arXiv:1511.08692JHEP. 0421S. Kallweit, J. M. Lindert, P. Maierhöfer, S. Pozzorini, and M. Schönherr, NLO QCD+EW predictions for V + jets including off-shell vector-boson decays and multijet merging, JHEP 04 (2016) 021, [arXiv:1511.08692].
Automation of electroweak corrections for LHC processes. M Chiesa, N Greiner, F Tramontano, arXiv:1507.08579J. Phys. G. 431M. Chiesa, N. Greiner, and F. Tramontano, Automation of electroweak corrections for LHC processes, J. Phys. G 43 (2016), no. 1 013002, [arXiv:1507.08579].
QCD corrections to electroweak nu(l) j j and l+ l-j j production. C Oleari, D Zeppenfeld, hep-ph/0310156Phys. Rev. D. 6993004C. Oleari and D. Zeppenfeld, QCD corrections to electroweak nu(l) j j and l+ l-j j production, Phys. Rev. D 69 (2004) 093004, [hep-ph/0310156].
Next-to-leading order QCD corrections to photon production via weak-boson fusion. B Jager, arXiv:1004.0825Phys. Rev. D. 81114016B. Jager, Next-to-leading order QCD corrections to photon production via weak-boson fusion, Phys. Rev. D 81 (2010) 114016, [arXiv:1004.0825].
Next-to-leading order QCD corrections to electroweak Zjj production in the POWHEG BOX. B Jager, S Schneider, G Zanderighi, arXiv:1207.2626JHEP. 08309B. Jager, S. Schneider, and G. Zanderighi, Next-to-leading order QCD corrections to electroweak Zjj production in the POWHEG BOX, JHEP 09 (2012) 083, [arXiv:1207.2626].
Parton Shower Effects on W and Z Production via Vector Boson Fusion at NLO QCD. F Schissler, D Zeppenfeld, arXiv:1302.2884JHEP. 0457F. Schissler and D. Zeppenfeld, Parton Shower Effects on W and Z Production via Vector Boson Fusion at NLO QCD, JHEP 04 (2013) 057, [arXiv:1302.2884].
The anti-k t jet clustering algorithm. M Cacciari, G P Salam, G Soyez, arXiv:0802.1189JHEP. 0463M. Cacciari, G. P. Salam, and G. Soyez, The anti-k t jet clustering algorithm, JHEP 04 (2008) 063, [arXiv:0802.1189].
An automated subtraction of NLO EW infrared divergences. M Schönherr, arXiv:1712.07975Eur. Phys. J. C. 782M. Schönherr, An automated subtraction of NLO EW infrared divergences, Eur. Phys. J. C 78 (2018), no. 2 119, [arXiv:1712.07975].
Matching NLO QCD computations and parton shower simulations. S Frixione, B R Webber, hep-ph/0204244JHEP. 0629S. Frixione and B. R. Webber, Matching NLO QCD computations and parton shower simulations, JHEP 06 (2002) 029, [hep-ph/0204244].
A critical appraisal of NLO+PS matching methods. S Höche, F Krauss, M Schönherr, F Siegert, arXiv:1111.1220JHEP. 0949S. Höche, F. Krauss, M. Schönherr, and F. Siegert, A critical appraisal of NLO+PS matching methods, JHEP 09 (2012) 049, [arXiv:1111.1220].
A Parton shower algorithm based on Catani-Seymour dipole factorisation. S Schumann, F Krauss, arXiv:0709.1027JHEP. 03803S. Schumann and F. Krauss, A Parton shower algorithm based on Catani-Seymour dipole factorisation, JHEP 03 (2008) 038, [arXiv:0709.1027].
AMEGIC++ 1.0: A Matrix element generator in C++. F Krauss, R Kuhn, G Soff, hep-ph/0109036JHEP. 0244F. Krauss, R. Kuhn, and G. Soff, AMEGIC++ 1.0: A Matrix element generator in C++, JHEP 02 (2002) 044, [hep-ph/0109036].
Automating dipole subtraction for QCD NLO calculations. T Gleisberg, F Krauss, arXiv:0709.2881Eur. Phys. J. C. 53T. Gleisberg and F. Krauss, Automating dipole subtraction for QCD NLO calculations, Eur. Phys. J. C 53 (2008) 501-523, [arXiv:0709.2881].
Event generation with SHERPA 1.1. T Gleisberg, S Höche, F Krauss, M Schönherr, S Schumann, F Siegert, J Winter, arXiv:0811.4622JHEP. 027T. Gleisberg, S. Höche, F. Krauss, M. Schönherr, S. Schumann, F. Siegert, and J. Winter, Event generation with SHERPA 1.1, JHEP 02 (2009) 007, [arXiv:0811.4622].
. F Buccioni, J.-N Lang, J M Lindert, P Maierhöfer, S Pozzorini, H Zhang, M F Zoller, arXiv:1907.13071Eur. Phys. J. C. 210F. Buccioni, J.-N. Lang, J. M. Lindert, P. Maierhöfer, S. Pozzorini, H. Zhang, and M. F. Zoller, OpenLoops 2, Eur. Phys. J. C 79 (2019), no. 10 866, [arXiv:1907.13071].
Scattering Amplitudes with Open Loops. F Cascioli, P Maierhofer, S Pozzorini, arXiv:1111.5206Phys. Rev. Lett. 108F. Cascioli, P. Maierhofer, and S. Pozzorini, Scattering Amplitudes with Open Loops, Phys. Rev. Lett. 108 (2012) 111601, [arXiv:1111.5206].
On-the-fly reduction of open loops. F Buccioni, S Pozzorini, M Zoller, arXiv:1710.11452Eur. Phys. J. C. 781F. Buccioni, S. Pozzorini, and M. Zoller, On-the-fly reduction of open loops, Eur. Phys. J. C 78 (2018), no. 1 70, [arXiv:1710.11452].
Collier: a fortran-based Complex One-Loop LIbrary in Extended Regularizations. A Denner, S Dittmaier, L Hofer, arXiv:1604.06792Comput. Phys. Commun. 212A. Denner, S. Dittmaier, and L. Hofer, Collier: a fortran-based Complex One-Loop LIbrary in Extended Regularizations, Comput. Phys. Commun. 212 (2017) 220-238, [arXiv:1604.06792].
OneLOop: For the evaluation of one-loop scalar functions. A Van Hameren, arXiv:1007.4716Comput. Phys. Commun. 182A. van Hameren, OneLOop: For the evaluation of one-loop scalar functions, Comput. Phys. Commun. 182 (2011) 2427-2438, [arXiv:1007.4716].
Electroweak corrections to charged-current e + e − → 4 fermion pro results. A Denner, hep-ph/0505042Nucl.Phys. 724A. Denner et al., Electroweak corrections to charged-current e + e − → 4 fermion pro results, Nucl.Phys. B724 (2005) 247-294, [hep-ph/0505042].
. A Manohar, P Nason, G P Salam, G Zanderighi, arXiv:1607.04266How bright is the proton? A precise determination of the photon PDFA. Manohar, P. Nason, G. P. Salam, and G. Zanderighi, How bright is the proton? A precise determination of the photon PDF, arXiv:1607.04266.
QCD matrix elements and truncated showers. S Hoeche, F Krauss, S Schumann, F Siegert, arXiv:0903.1219JHEP. 0553S. Hoeche, F. Krauss, S. Schumann, and F. Siegert, QCD matrix elements and truncated showers, JHEP 05 (2009) 053, [arXiv:0903.1219].
Public repository with NLO QCD+EW theoretical predictions and uncertainties for V +2 jet ratios. J M Lindert, J. M. Lindert, Public repository with NLO QCD+EW theoretical predictions and uncertainties for V +2 jet ratios, https: // gitlab. com/ Lindert/ vjj. git .
Search for invisible Higgs-boson decays in events with vector-boson fusion signatures using 139 fb −1 of proton-proton data recorded by the ATLAS experiment. G Aad, ATLAS CollaborationarXiv:2202.07953ATLAS Collaboration, G. Aad et al., Search for invisible Higgs-boson decays in events with vector-boson fusion signatures using 139 fb −1 of proton-proton data recorded by the ATLAS experiment, arXiv:2202.07953.
| [] |
[
"Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data",
"Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data",
"Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data",
"Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data"
] | [
"Jin-Zhu Yu \nDepartment of Civil Engineering\nUniversity of Texas at Arlington\nArlingtonTXUSA\n",
"Hiba Baroud \nDepartment of Civil and Environmental Engineering\nVanderbilt University\nNashvilleTNUSA\n",
"Jin-Zhu Yu \nDepartment of Civil Engineering\nUniversity of Texas at Arlington\nArlingtonTXUSA\n",
"Hiba Baroud \nDepartment of Civil and Environmental Engineering\nVanderbilt University\nNashvilleTNUSA\n"
] | [
"Department of Civil Engineering\nUniversity of Texas at Arlington\nArlingtonTXUSA",
"Department of Civil and Environmental Engineering\nVanderbilt University\nNashvilleTNUSA",
"Department of Civil Engineering\nUniversity of Texas at Arlington\nArlingtonTXUSA",
"Department of Civil and Environmental Engineering\nVanderbilt University\nNashvilleTNUSA"
] | [] | Hierarchical Bayesian Poisson regression models (HBPRMs) provide a flexible modeling approach of the relationship between predictors and count response variables. The applications of HBPRMs to large-scale datasets require efficient inference algorithms due to the high computational cost of inferring many model parameters based on random sampling. Although Markov Chain Monte Carlo (MCMC) algorithms have been widely used for Bayesian inference, sampling using this class of algorithms is time-consuming for applications with large-scale data and time-sensitive decision-making, partially due to the non-conjugacy of many models. To overcome this limitation, this research develops an approximate Gibbs sampler (AGS) to efficiently learn the HBPRMs while maintaining the inference accuracy. In the proposed sampler, the data likelihood is approximated with Gaussian distribution such that the conditional posterior of the coefficients has a closed-form solution. Numerical experiments using real and synthetic datasets with small and large counts demonstrate the superior performance of AGS in comparison to the state-of-the-art sampling algorithm, especially for large datasets. | 10.48550/arxiv.2211.15771 | [
"https://export.arxiv.org/pdf/2211.15771v1.pdf"
] | 254,069,482 | 2211.15771 | fec6fd46010bf06c76a8b884a889d0f13ee6714f |
Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian Models for Grouped Count Data
Jin-Zhu Yu (Department of Civil Engineering, University of Texas at Arlington, Arlington, TX, USA)
Hiba Baroud (Department of Civil and Environmental Engineering, Vanderbilt University, Nashville, TN, USA)

ARTICLE HISTORY: Compiled November 30, 2022. Total words: 6040.
KEYWORDS: Conditional conjugacy; Approximate MCMC; Gaussian approximation; Intractable likelihood

ABSTRACT
Hierarchical Bayesian Poisson regression models (HBPRMs) provide a flexible modeling approach for the relationship between predictors and count response variables. The application of HBPRMs to large-scale datasets requires efficient inference algorithms due to the high computational cost of inferring many model parameters based on random sampling. Although Markov Chain Monte Carlo (MCMC) algorithms have been widely used for Bayesian inference, sampling using this class of algorithms is time-consuming for applications with large-scale data and time-sensitive decision-making, partially due to the non-conjugacy of many models. To overcome this limitation, this research develops an approximate Gibbs sampler (AGS) to efficiently learn HBPRMs while maintaining inference accuracy. In the proposed sampler, the data likelihood is approximated with a Gaussian distribution such that the conditional posterior of the coefficients has a closed-form solution. Numerical experiments using real and synthetic datasets with small and large counts demonstrate the superior performance of AGS in comparison to a state-of-the-art sampling algorithm, especially for large datasets.
Introduction
Count data are frequently encountered in a wide range of applications, such as finance, epidemiology, sociology, and operations, among others [1]. For example, in epidemiological studies, the occurrences of a disease are often recorded as counts on a regular basis [2]. Death counts, classified by various demographic variables, are regularly recorded by government agencies [3]. In customer service centers, the service level is often measured based on the number of customers served during a given period of time. More recently, data-driven disaster management approaches have used count data to analyze the impact of disasters (e.g., the number of power outages [4] and pipe breaks [5]) and the recovery process (e.g., the recovery rate [6]). Understanding the features that can influence the occurrence of such events is critical to inform future decisions and policies. Therefore, statistical models have been developed to accommodate the complexity of count data, among which are Hierarchical Bayesian Poisson regression models (HBPRMs), which have been widely employed to analyze count data under uncertainty [7-10]. The wide applicability of this class of models is due to the fact that the hierarchical Bayesian approach offers the flexibility to capture the complex hierarchical structure of count data and predictors by estimating different parameters for different data groups, thereby improving the estimation accuracy of the parameters for each group. The data can be grouped based on geographical areas, types of experiments in clinical studies, or different hazard types and intensities in disaster studies. The hierarchical structure assumes that the parameters of the prior distribution are uncertain and characterized by their own probability distribution, with corresponding parameters referred to as hyperparameters. Therefore, this class of models can account for the individual- and group-level variations in estimating the parameters of interest and the uncertainty around the estimation of the hyperparameters [11].
The flexibility of hierarchical models in capturing the complex interactions in the data comes with a high computational expense since all the model parameters need to be estimated jointly [12]. Furthermore, large-scale data may be structured in many levels or groups [13], resulting in a large number of parameters to learn for a hierarchical model, further increasing the computational load. Given that many of the applications involving count data have recently benefited from technological advances in data collection and storage, there is a critical need to ensure the applicability of HBPRMs. As a result, efficient inference algorithms are needed to support the use of statistical learning models such as HBPRMs in risk-based decision-making, especially for time-sensitive applications such as resource allocation during emergency response and disaster recovery.
The most popular algorithms for parameter inference in hierarchical Bayesian models (and for Bayesian inference in general) are Markov Chain Monte Carlo (MCMC) algorithms. MCMC algorithms obtain samples from a target distribution by constructing a Markov chain (irreducible and aperiodic) in the parameter space that has precisely the target distribution as its stationary distribution [14]. This class of algorithms provides a powerful tool to obtain posterior samples, and thereby estimate the parameters of interest, when the exact full posterior distributions are known only up to a constant and direct sampling is not possible [14]. However, a major drawback of standard MCMC algorithms, such as the Metropolis-Hastings (MH) algorithm, is that they suffer from slow mixing, requiring a number of Monte Carlo samples that grows with the dimension and complexity of the dataset [15,16]. In some applications of Bayesian approaches (e.g., emergency response), decisions that rely on the model outcomes cannot wait days for MCMC chains to collect a sufficiently large number of posterior samples. As such, the application of standard MCMC algorithms to learn Bayesian models such as HBPRMs, or other hierarchical Bayesian models, on large datasets is significantly limited, and a fast approximate MCMC method is needed.
The key idea of approximate MCMC is to replace complex distributions that create a computational bottleneck with an approximation that is simpler or faster to sample from than the original [17,18]. Several studies have applied analytical approximation techniques that exploit conjugacy to accelerate MCMC-based inference in hierarchical Bayesian models [19-22]. More specifically, an approximate Gibbs sampling algorithm is used to enable the inference of the rate parameter in a hierarchical Poisson regression model in [19]. The conditional posterior of the rate parameter, which does not have a closed-form expression due to the non-conjugacy between the Poisson likelihood and the log-normal prior distribution, is approximated as a mixture of Gaussian and gamma distributions using the moment-matching method. The exact conditional moments are obtained by minimizing the Kullback-Leibler divergence between the original and the approximate conditional posterior distributions. Conjugacy is also employed to improve inference efficiency in large and more complex hierarchical models in [21]. It is shown that the approximation using conjugacy can be utilized even though the original hierarchical model is not fully conjugate [21]. As an example in their study, the approximate full conditional distributions are derived when the likelihood function follows a gamma distribution while the priors for the parameters are assumed to be multivariate normal and inverse-Wishart distributions. In [22], a Gaussian approximation to the conditional distribution of the normal random effects in the hierarchical Bayesian binomial model (HBBM) is derived using a Taylor series expansion, such that Gibbs sampling can be applied to infer the HBBM more efficiently. A similar approach that approximates the data likelihood with a Gaussian distribution to allow for faster parameter inference is used in a Bayesian Poisson model [20]. With regard to count data, a fast approximate Bayesian inference method is proposed to infer a negative binomial (NB) model in [23]. The non-conjugacy of the NB likelihood is addressed by Pólya-Gamma data augmentation. This technique, first developed in [24], is employed to approximate the likelihood as a Gaussian distribution. Consequently, the conditional posteriors of all but one of the parameters have a closed-form solution, and a Metropolis-within-Gibbs algorithm is thus developed for posterior inference.
While approximate MCMC algorithms have been developed for hierarchical and non-hierarchical Poisson models, as well as for hierarchical Bayesian binomial and negative binomial models, an approximate MCMC algorithm for efficient inference of HBPRMs for grouped count data is still lacking. In this paper, we propose an approximate Gibbs sampler to address this problem. To deal with the non-conjugacy between the likelihood and the prior, we approximate the conditional likelihood as a Gaussian distribution, leading to closed-form conditional posteriors for all model parameters. The contribution lies in the derivation of a closed-form approximation to the complex conditional posterior of the parameters and in the development of the Approximate Gibbs Sampling (AGS) algorithm. The proposed algorithm allows for efficient inference of the general HBPRM using the approximate Markov chain without compromising the inference accuracy, enabling the use of HBPRMs in applications with large-scale data and time-sensitive decision-making. To demonstrate the performance of the proposed AGS algorithm, we conduct multiple numerical experiments and compare the inference accuracy and computational load to a state-of-the-art sampling algorithm.
The rest of this paper is organized as follows. In Sec. 2, a general hierarchical Bayesian Poisson model for grouped count data is presented, and the closed-form solution to the approximate conditional posterior distribution of each regression coefficient is derived, followed by a description of the proposed AGS algorithm. Sec. 3 introduces the datasets used in the numerical experiments along with the comparison of the performance of the sampling algorithms. Conclusions and future work are provided in Sec. 4.
Methodology

Hierarchical Bayesian Poisson Regression Model
This section presents the Hierarchical Bayesian Poisson Regression Model (HBPRM) for count data. Without loss of generality, we consider a general HBPRM, the hierarchical version of the Poisson log-normal model [19,25-27], for grouped count data, in which the coefficient for each covariate varies across groups (Eq. (1) to Eq. (5)). This model can be applied to count datasets in which the counts can be divided into multiple groups based on the covariates. Let $\mathcal{D} = \{\mathbf{x}, \mathbf{y}\}$ be the dataset, where $\mathbf{x}$ represents the covariates and $\mathbf{y}$ represents the dependent positive counts. This HBPRM assumes that each count, $y_{ij}$, follows a Poisson distribution. The log of the mean of the Poisson distribution is a linear function of the covariates. In the hierarchical Bayesian paradigm, each of the parameters (regression coefficients) in the linear function follows a prior distribution with hyperparameter(s), which are in turn specified by a hyperprior distribution. Note that the hyperprior is shared among the parameters of the same covariate for all groups, thereby resulting in shrinkage of the parameters towards the group mean [11]. When the variance of the hyperprior is decreased to zero, the hierarchical model reduces to a non-hierarchical model. The mathematical formulation of the HBPRM is provided in Eq. (1) to Eq. (5):
$$y_{ij}\mid\lambda_{ij} \sim \mathrm{Pois}(\lambda_{ij}), \quad \forall\, i = 1,\dots,n_j,\ j = 1,\dots,J, \tag{1}$$
$$\ln \lambda_{ij} = \sum_{k=1}^{K} w_{jk}\, x_{ijk}, \quad \forall\, i = 1,\dots,n_j,\ j = 1,\dots,J, \tag{2}$$
$$w_{jk}\mid\mu_k,\sigma_k^2 \sim \mathcal{N}(\mu_k,\sigma_k^2), \quad \forall\, j = 1,\dots,J,\ k = 1,\dots,K, \tag{3}$$
$$\mu_k\mid m,\tau^2 \sim \mathcal{N}(m,\tau^2), \quad \forall\, k = 1,\dots,K, \tag{4}$$
$$\sigma_k^2\mid a,b \sim \mathrm{IG}(a,b), \quad \forall\, k = 1,\dots,K. \tag{5}$$
In the HBPRM formulation, $y_{ij}$ is the $i$-th count within group $j$ with an estimated mean of $\lambda_{ij}$, $n_j$ is the number of data points in group $j$, $w_{jk}$ is the regression coefficient of covariate $k$, and $x_{ijk}$ is the $i$-th value in group $j$ of covariate $k$. The prior for the coefficient of each covariate, $\mu_k$, is assumed to be a Gaussian distribution ($\mathcal{N}$), while the prior for the variance, $\sigma_k^2$, is assumed to be an inverse-gamma distribution ($\mathrm{IG}$). The Gaussian and inverse-gamma distributions are specified so that conditional conjugacy can be exploited for analytical and computational convenience. Alternative distributions (such as half-Cauchy and uniform distributions) for the prior of the group-level variance $\sigma_k^2$ do not have this benefit, which would significantly increase the computational load. According to Ref. [28], when the group-level variance is close to zero, the prior shape parameter $a$ must be set to a reasonable value. For our model, the estimated group variance is always much larger than zero because the shape parameter of its conditional posterior is approximately $a + J/2$ (Eq. (11)), where $J$ is the number of groups. As such, $a$ can be set to a sufficiently large value, such as 2.
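To make the generative structure of Eqs. (1)-(5) concrete, the following minimal Python sketch draws a grouped count dataset from the model; it is an illustration added here rather than part of the original study, and all sizes and hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, n_j = 4, 3, 50                 # illustrative numbers of groups, covariates, points per group
m, tau2, a, b = 0.0, 1.0, 1.0, 1.0   # illustrative hyperparameter values

mu = rng.normal(m, np.sqrt(tau2), size=K)         # Eq. (4)
sigma2 = 1.0 / rng.gamma(a, 1.0 / b, size=K)      # Eq. (5): inverse-gamma via reciprocal gamma
w = rng.normal(mu, np.sqrt(sigma2), size=(J, K))  # Eq. (3)

x = rng.uniform(0.0, 1.0, size=(J, n_j, K))       # arbitrary covariates
lam = np.exp(np.einsum("jik,jk->ji", x, w))       # Eq. (2): log-linear mean
y = rng.poisson(lam)                              # Eq. (1)
print(y.shape, y.min(), y.max())
```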
Inference
Given an observed count dataset structured in multiple groups, $\mathcal{D}$, fitting an HBPRM entails estimating the joint posterior distribution of all the parameters, which is known only up to a constant. If we denote the parameters by $\Theta = \{w_{11}, \dots, w_{jk}, \dots, w_{JK};\ \mu_1, \dots, \mu_K;\ \sigma_1^2, \dots, \sigma_K^2\}$, then the joint posterior factorizes as
$$p(\Theta\mid\mathbf{y},\mathbf{x}) \propto \prod_{j=1}^{J}\left[\prod_{i=1}^{n_j} \mathrm{Pois}\!\left(y_{ij}\,\Big|\, \exp\Big(\sum_{k=1}^{K} w_{jk} x_{ijk}\Big)\right) \prod_{k=1}^{K}\mathcal{N}(w_{jk}\mid\mu_k,\sigma_k^2)\right] \prod_{k=1}^{K}\mathcal{N}(\mu_k\mid m,\tau^2)\,\mathrm{IG}(\sigma_k^2\mid a,b). \tag{6}$$
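As a point of reference, a generic sampler such as MH or NUTS targets Eq. (6) by repeatedly evaluating its unnormalized logarithm. A minimal sketch of that evaluation is given below; the array layout and function name are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.stats import poisson, norm, invgamma

def log_joint(w, mu, sigma2, x, y, m=0.0, tau2=1.0, a=1.0, b=1.0):
    """Unnormalized log of Eq. (6); w: (J, K), mu/sigma2: (K,), x: (J, n_j, K), y: (J, n_j)."""
    lam = np.exp(np.einsum("jik,jk->ji", x, w))
    out = poisson.logpmf(y, lam).sum()                  # Poisson likelihood
    out += norm.logpdf(w, mu, np.sqrt(sigma2)).sum()    # priors on w_jk
    out += norm.logpdf(mu, m, np.sqrt(tau2)).sum()      # hyperpriors on mu_k
    out += invgamma.logpdf(sigma2, a, scale=b).sum()    # hyperpriors on sigma_k^2
    return out
```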
Sampling from the joint posterior is a challenging task because it does not admit a closed-form expression. While MCMC algorithms (e.g., MH) can be used, the need to judiciously tune the step size to achieve the desired acceptance rate often deters users from this algorithm [29,30]. In comparison, the Gibbs sampler is more efficient and does not require any tuning of the proposal distribution, and it has therefore been used for Bayesian inference in a wide range of applications [31,32]. Classical Gibbs sampling requires that one can sample directly from the conditional posterior distribution of each parameter (or block of parameters), for instance from conditionally conjugate posterior distributions. The full conditional posteriors for implementing the Gibbs sampler are
$$p(w_{jk}\mid -) \propto \prod_{i=1}^{n_j} \mathrm{Pois}\!\left(y_{ij}\,\Big|\, \exp\Big(\sum_{k=1}^{K} w_{jk} x_{ijk}\Big)\right) \mathcal{N}(w_{jk}\mid\mu_k,\sigma_k^2), \tag{7}$$
$$p(\mu_k\mid -) \propto \mathcal{N}(w_{1k},\dots,w_{Jk}\mid\mu_k,\sigma_k^2)\, \mathcal{N}(\mu_k\mid m,\tau^2), \tag{8}$$
$$p(\sigma_k^2\mid -) \propto \mathcal{N}(w_{1k},\dots,w_{Jk}\mid\mu_k,\sigma_k^2)\, \mathrm{IG}(\sigma_k^2\mid a, b), \tag{9}$$
where $p(\cdot\mid -)$ represents the conditional posterior of a parameter of interest given the remaining parameters and the data. Due to the Gaussian-Gaussian and Gaussian-inverse-gamma conjugacy, Eq. (8) and Eq. (9) can be expressed in analytical form [33]:

$$p(\mu_k\mid -) \propto \mathcal{N}\!\left(\mu_k \,\Bigg|\, \frac{\frac{m}{\tau^2} + \frac{1}{\sigma_k^2}\sum_{j=1}^{J} w_{jk}}{\frac{1}{\tau^2} + \frac{J}{\sigma_k^2}},\ \frac{1}{\frac{1}{\tau^2} + \frac{J}{\sigma_k^2}}\right), \tag{10}$$
$$p(\sigma_k^2\mid -) \propto \mathrm{IG}\!\left(\sigma_k^2 \,\Bigg|\, a + \frac{J}{2},\ b + \frac{\sum_{j=1}^{J}(w_{jk}-\mu_k)^2}{2}\right). \tag{11}$$
However, Eq. (7) does not admit an analytical solution because the Poisson likelihood is not conjugate to the Gaussian prior. Consequently, it is challenging to sample directly from this conditional posterior within a Gibbs sampler. In this case, other algorithms can be used to sample from $p(w_{jk}\mid -)$, such as adaptive rejection sampling [34] and the Metropolis-within-Gibbs algorithm [35]. However, these algorithms introduce additional computational cost because they must repeatedly evaluate the complex conditional distribution. We therefore propose a Gaussian approximation to the Poisson likelihood in Eq. (7) that yields a closed-form conditional posterior for the coefficients. With this closed-form solution, the otherwise complex inference of the regression coefficients is simplified, saving computational resources. Reducing the cost of sampling from $p(w_{jk}\mid -)$ is critical for datasets with a large number of groups because the number of regression coefficients, $J \times K$, can be significantly larger than the number of prior parameters, $2K$.
Gaussian Approximation to Log-gamma Distribution
This section introduces the Gaussian approximation to the log-gamma distribution that is used to obtain the closed-form approximate conditional posterior distribution in Section 2.4. Consider a gamma random variable z with probability density function (pdf) given by
$$p(z\mid\alpha,\beta) = \frac{z^{\alpha-1}\, e^{-z/\beta}}{\Gamma(\alpha)\,\beta^{\alpha}}, \quad \alpha > 0,\ \beta > 0, \tag{12}$$
where $\Gamma(\cdot)$ is the gamma function, and $\alpha$ and $\beta$ are the shape parameter and the scale parameter, respectively. The random variable $\ln z \in \mathbb{R}$ follows a log-gamma distribution. The mean $\mu_z$ and variance $\sigma_z^2$ of the log-gamma distribution are calculated using Eq. (13) and Eq. (14), respectively [36].
$$\mu_z = \psi_0(\alpha) + \ln\beta \tag{13a}$$
$$= -\gamma + \sum_{n=1}^{\infty}\left(\frac{1}{n} - \frac{1}{n+\alpha-1}\right) + \ln\beta \tag{13b}$$
$$= -\gamma + \sum_{n=1}^{\alpha-1}\frac{1}{n} + \ln\beta, \tag{13c}$$
$$\sigma_z^2 = \psi_1(\alpha) \tag{14a}$$
$$= \sum_{n=0}^{\infty}\frac{1}{(\alpha+n)^2} \tag{14b}$$
$$= \frac{\pi^2}{6} - \sum_{n=1}^{\alpha-1}\frac{1}{n^2}. \tag{14c}$$
In Eq. (13a) and Eq. (14a), $\psi_0(\cdot)$ and $\psi_1(\cdot)$ are the zeroth- and first-order polygamma functions [37] (i.e., the digamma and trigamma functions). In Eq. (13b) and Eq. (13c), $\gamma$ is the Euler-Mascheroni constant [38]; Eqs. (13c) and (14c) hold when $\alpha$ is a positive integer.
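The moment identities in Eqs. (13a) and (14a) are easy to verify numerically; a small sketch using SciPy's digamma and polygamma functions against Monte Carlo draws (the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(2)
alpha, beta = 7.0, 1.5
ln_z = np.log(rng.gamma(alpha, beta, size=1_000_000))  # z ~ Gamma(shape=alpha, scale=beta)

print(ln_z.mean(), digamma(alpha) + np.log(beta))  # Eq. (13a): psi_0(alpha) + ln(beta)
print(ln_z.var(), polygamma(1, alpha))             # Eq. (14a): psi_1(alpha)
```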
For large values of $\alpha$, the pdf of the log-gamma distribution can be approximated by that of a Gaussian distribution [20,39], as shown in Eq. (15):
$$\text{Log-gamma}(\ln z\mid\alpha,\beta) \approx \mathcal{N}\big(\ln z \mid \psi_0(\alpha) + \ln\beta,\ \psi_1(\alpha)\big). \tag{15}$$

To apply the approximation in the conditional posterior (Eq. (20) to Eq. (21)), we need to include $y$ in the pdf of the log-gamma distribution. Therefore, we let $\alpha = y$ and $\beta = 1$, and Eq. (15) becomes

$$\text{Log-gamma}(\ln z\mid y, 1) \approx \mathcal{N}\big(\ln z \mid \psi_0(y),\ \psi_1(y)\big). \tag{16}$$
Note that because $\alpha > 0$ and $\alpha$ is replaced by the count data $y$, the approximation can only be applied to positive counts. Similarly, plugging in $\alpha = y$, $\beta = 1$, and $\Gamma(n) = (n-1)!$ for $n \in \mathbb{Z}^{+}$, Eq. (12) becomes
$$p(z\mid y,1) = \frac{z^{y-1}\, e^{-z}}{(y-1)!}. \tag{17}$$
Next, we need to relate Eq. (16) and Eq. (17). First, using the "change of variable" method and substituting ln z with v [20] in Eq. (17), we obtain the pdf of v
$$p(v\mid y,1) = p(z = e^{v}\mid y,1)\left|\frac{\partial e^{v}}{\partial v}\right| \tag{18a}$$
$$= \frac{1}{(y-1)!}\, e^{vy}\, e^{-e^{v}}. \tag{18b}$$
Then, using Eq. (16) yields

$$\frac{1}{(y-1)!}\, e^{vy}\, e^{-e^{v}} \approx \frac{1}{\sqrt{2\pi\psi_1(y)}}\, \exp\!\left(\frac{(v-\psi_0(y))^2}{-2\psi_1(y)}\right). \tag{19}$$

The comparison between the true distribution and the approximate Gaussian distribution, i.e., the left- and right-hand sides of Eq. (19), respectively, is shown in Fig. 1. When the counts are small, such as $y \le 3$ (panels (a) to (c)), the approximation is not very close to the true distribution. We can also see that the Kolmogorov-Smirnov (KS) distance (panel (f)), defined as the largest absolute difference between the cumulative distribution functions of the approximate and true distributions, is relatively large. As the value of $y$ increases, the approximate Gaussian distribution becomes increasingly close to the true distribution. Also, the absolute error in the mean of the Gaussian approximation is relatively small when $y$ is greater than 3. Notice again that the approximation given by Eq. (19) is not directly applicable to zero counts. However, when zero counts are present in the dataset, such as in epidemiology studies, one can increase each count by a positive count (e.g., 5). This linear transformation circumvents the problem posed by zero counts in the dependent variable without affecting the model accuracy, since such a transformation does not change the distribution of the error and preserves the relation between the dependent and independent variables.
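A short sketch reproducing the kind of comparison shown in Fig. 1, evaluating the exact density of Eq. (18b) against the Gaussian of Eq. (19) on a grid; the grid bounds and the tested values of y are illustrative.

```python
import numpy as np
from scipy.special import gammaln, digamma, polygamma
from scipy.stats import norm

v = np.linspace(-5.0, 7.0, 4001)
dv = v[1] - v[0]
for y in [1, 3, 5, 10, 20]:
    exact = np.exp(v * y - np.exp(v) - gammaln(y))                # Eq. (18b)
    approx = norm.pdf(v, digamma(y), np.sqrt(polygamma(1, y)))    # Eq. (19)
    ks = np.abs(np.cumsum(exact) - np.cumsum(approx)).max() * dv  # grid-based KS distance
    print(y, round(ks, 4))
```

As in Fig. 1(f), the printed KS distance shrinks quickly as y grows.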
Closed-form Approximate Conditional Posterior Distribution
In the conditional posterior of coefficient $w_{jk}$ given by Eq. (7), the likelihood function is
$$\prod_{i=1}^{n_j} \mathrm{Pois}\!\left(y_{ij}\,\Big|\, e^{\sum_{k=1}^{K} w_{jk}x_{ijk}}\right) = \prod_{i=1}^{n_j} \frac{1}{y_{ij}(y_{ij}-1)!}\, e^{\left(\sum_{k=1}^{K} w_{jk}x_{ijk}\right) y_{ij}}\, e^{-e^{\sum_{k=1}^{K} w_{jk}x_{ijk}}}. \tag{20}$$
Applying the approximation given by Eq. (19) yields
$$\prod_{i=1}^{n_j} \mathrm{Pois}\!\left(y_{ij}\,\Big|\, e^{\sum_{k=1}^{K} w_{jk}x_{ijk}}\right) \approx \prod_{i=1}^{n_j} \frac{1}{y_{ij}}\, \frac{1}{\sqrt{2\pi\psi_1(y_{ij})}}\, \exp\!\left(\frac{\big(\sum_{k=1}^{K} w_{jk}x_{ijk} - \psi_0(y_{ij})\big)^2}{-2\psi_1(y_{ij})}\right). \tag{21}$$
Plugging Eq. (21) into Eq. (7) we get
$$p(w_{jk}\mid -) \propto \exp\!\left(\frac{(w_{jk}-\mu_k)^2}{-2\sigma_k^2}\right) \prod_{i=1}^{n_j} \exp\!\left(\frac{\big(\sum_{k=1}^{K} w_{jk}x_{ijk} - \psi_0(y_{ij})\big)^2}{-2\psi_1(y_{ij})}\right) \tag{22}$$
$$= \exp\!\left(\frac{(w_{jk}-\mu_k)^2}{-2\sigma_k^2} + \sum_{i=1}^{n_j} \frac{\big(\sum_{k=1}^{K} w_{jk}x_{ijk} - \psi_0(y_{ij})\big)^2}{-2\psi_1(y_{ij})}\right). \tag{23}$$
As the product of two Gaussian densities is proportional to another Gaussian density, the posterior can also be written as
$$p(w_{jk}\mid -) \propto \exp\!\left(\frac{(w_{jk}-\tilde{\mu}_k)^2}{-2\tilde{\sigma}_k^2}\right), \tag{24}$$

where $\tilde{\mu}_k$ and $\tilde{\sigma}_k^2$ are the mean and variance of the approximate Gaussian posterior. Completing the square (see Appendix A for more details), we get
$$\tilde{\mu}_k = \frac{\mu_k + \sigma_k^2 \sum_{i=1}^{n_j} \frac{x_{ijk}}{\psi_1(y_{ij})}\Big(\psi_0(y_{ij}) - \sum_{h=1,h\neq k}^{K} x_{ijh} w_{jh}\Big)}{\sigma_k^2 \sum_{i=1}^{n_j} \frac{x_{ijk}^2}{\psi_1(y_{ij})} + 1}, \quad \forall\, j = 1,\dots,J, \tag{25}$$
$$\tilde{\sigma}_k^2 = \frac{\sigma_k^2}{\sigma_k^2 \sum_{i=1}^{n_j} \frac{x_{ijk}^2}{\psi_1(y_{ij})} + 1}, \quad \forall\, j = 1,\dots,J. \tag{26}$$
Now that the full conditional posterior distributions can be expressed analytically, we can construct the approximate Gibbs sampler (Algorithm 1) to obtain posterior samples of the parameters in HBPRM efficiently.
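A minimal, self-contained Python sketch of Algorithm 1 is given below. It is an illustrative reimplementation (the experiments in this paper were run in R, and this code is not the authors' implementation); it cycles through the closed-form draws of Eqs. (25)-(26) for the coefficients and Eqs. (10)-(11) for the prior parameters, and it assumes all counts are positive, as required by the approximation.

```python
import numpy as np
from scipy.special import digamma, polygamma

def ags(x, y, n_warmup=5000, n_keep=5000, m=0.0, tau2=1.0, a=1.0, b=1.0, seed=0):
    """Approximate Gibbs sampler; x: (J, n_j, K) covariates, y: (J, n_j) positive counts."""
    rng = np.random.default_rng(seed)
    J, n_j, K = x.shape
    psi0, psi1 = digamma(y), polygamma(1, y)   # precomputed once; requires y >= 1
    w, mu, sigma2 = np.zeros((J, K)), np.zeros(K), np.ones(K)
    draws = []
    for it in range(n_warmup + n_keep):
        for j in range(J):
            for k in range(K):
                # psi_0(y_ij) - sum_{h != k} x_ijh w_jh, for every i in group j
                resid = psi0[j] - x[j] @ w[j] + x[j, :, k] * w[j, k]
                s = sigma2[k] * (x[j, :, k] ** 2 / psi1[j]).sum() + 1.0
                mean = (mu[k] + sigma2[k] * (x[j, :, k] / psi1[j] * resid).sum()) / s  # Eq. (25)
                w[j, k] = rng.normal(mean, np.sqrt(sigma2[k] / s))                     # Eq. (26)
        for k in range(K):
            prec = 1.0 / tau2 + J / sigma2[k]                                          # Eq. (10)
            mu[k] = rng.normal((m / tau2 + w[:, k].sum() / sigma2[k]) / prec,
                               np.sqrt(1.0 / prec))
            rate = b + 0.5 * ((w[:, k] - mu[k]) ** 2).sum()                            # Eq. (11)
            sigma2[k] = 1.0 / rng.gamma(a + J / 2.0, 1.0 / rate)
        if it >= n_warmup:
            draws.append((w.copy(), mu.copy(), sigma2.copy()))
    return draws
```

Applied to data simulated from the model, the posterior means of the stored coefficient draws can be checked against the generating coefficients.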
Experiments
We evaluate the proposed AGS algorithm by applying it to several synthetic and real datasets. The performance of AGS is evaluated in terms of accuracy, sampling efficiency, and computational time. The proposed approach is compared with the state-of-the-art MCMC algorithm, the No-U-Turn sampler (NUTS) [40], using the same datasets and performance metrics. NUTS is an extension of the Hamiltonian Monte Carlo (HMC) algorithm that exploits Hamiltonian dynamics to propose samples. NUTS frees users from tuning the proposal and has been demonstrated to provide efficient inference of complex hierarchical models [12]. The description of the datasets and the experimental setup is provided in this section. The code and non-confidential data used for the experiments are available upon request.
Data Description
Multiple synthetic and real datasets are used to evaluate the performance of AGS for different data types and sizes. This section describes the approach to generating synthetic data and the characteristics of the real datasets, which include power outages, Covid-19 positive cases, and bike rentals. A subset of each dataset is provided in Tables 2 to 5.
Synthetic data. The synthetic datasets are generated according to the model shown in Eq. (27). This model ensures that the generated datasets cover a wide range of counts and resemble data produced when responding to emergency incidents during disasters such as power outages. An example of the synthetic dataset is presented in Table 2.

$$\begin{aligned}
x_{ij1} &\sim \mathcal{U}(0.1,\,2), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27a)}\\
x_{ij2} &\sim \mathcal{U}(0.1,\,1), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27b)}\\
x_{ij3} &\sim \mathcal{U}(0.1,\,0.5), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27c)}\\
x_{ij4} &\sim \mathcal{U}(1,\,10), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27d)}\\
x_{ij5} &\sim \mathcal{U}(0.5,\,5), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27e)}\\
x_{ij6} &\sim \mathcal{U}(10,\,100), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(27f)}\\
x_{\cdot j} &\sim \mathcal{U}(10^4,\,10^6), & j &= 1,\dots,J &\text{(27g)}\\
w_{jk} &\sim \mathcal{N}(0.001,\,0.001), & j &= 1,\dots,J,\ k = 1,\dots,K &\text{(27h)}\\
y_{ij} &= \Big(e^{\sum_{k=1}^{K} w_{jk} x_{ijk}}\Big)_{\min\text{-}\max} x_{\cdot j}, & i &= 1,\dots,n_j,\ j = 1,\dots,J. &\text{(27i)}
\end{aligned}$$
In Eq. (27), the notation $(\cdot)_{\min\text{-}\max}$ represents the min-max normalizing function¹. Each count $y_{ij}$ is rounded to the nearest integer. $x_{\cdot j}$ is the group-level covariate for group $j$. We generate 15 synthetic datasets (S1, ..., S15) with varying total numbers of data points ($N_d$) per dataset, numbers of data points per group $n_j$ (for simplicity, assumed to be the same for every group within a synthetic dataset), numbers of covariates $K$ ($K \le 6$), and numbers of groups $J$, in order to analyze the effect of data size on the performance of AGS (Table 6). Power outage data. The power outage data include the number of customers without power following 11 disruptive events (denoted by P1, ..., P11). The power outage data are grouped by disruptive event, i.e., power outage counts after the same disruptive event fall into the same group. The covariates in each dataset include PS (surface pressure, Pa), TQV (precipitable water vapor, kg·m⁻²), U10M (10-meter eastward wind speed, m/s), V10M (10-meter northward wind speed, m/s), and t (time after the start of an event, hours). The outage data were collected from public utility companies during severe weather events, and the weather data were obtained from the National Oceanic and Atmospheric Administration.
Covid-19 test data. The Covid-19 test dataset is obtained from Ref. [41]; it was originally collected from seven studies (two preprints and five peer-reviewed articles) that provide data on RT-PCR (reverse transcriptase polymerase chain reaction) performance by time since the symptom onset or exposure, using samples derived from nasal or throat swabs among patients tested for Covid-19. The number of studies (groups) is 10. Each study includes multiple test cases (Table 4), each of which includes the days, $t$, after exposure to Covid-19, and the total number of samples tested, $N_s$. The response variable is the number of patients who tested positive among the samples. The total number of test cases is 380. As the proposed approximation cannot be applied to zero counts, we remove the test cases with zero positive tests among the samples tested; the total number of test cases after removing those with zero counts is 298. The test cases are grouped by study. Following Ref. [41], the exposure is assumed to have occurred five days before the symptom onset, and $\log(t)$, $\log(t)^2$, $\log(t)^3$, and $N_s$ are used as the covariates. Bike sharing data. The bike sharing data include daily bike rental counts for 729 days, and the covariates we use include normalized temperature, normalized humidity, and casual bike rentals. The dataset is obtained from the UC Irvine Machine Learning Repository [42,43]. Bike rental counts are grouped by whether the rental occurs on a working day (Table 5).
To investigate the performance of AGS for small counts, including zero counts, we also simulate datasets with small counts using the model shown in Eq. (28):

$$\begin{aligned}
x_{ij1} &\sim \mathcal{U}(0.1,\,2), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(28a)}\\
x_{ij2} &\sim \mathcal{U}(0.1,\,1), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(28b)}\\
x_{ij3} &\sim \mathcal{U}(0.1,\,0.5), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(28c)}\\
x_{ij4} &\sim \mathcal{U}(1,\,10), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(28d)}\\
x_{ij5} &\sim \mathcal{U}(0.5,\,5), & i &= 1,\dots,n_j,\ j = 1,\dots,J &\text{(28e)}\\
x_{\cdot j} &\sim \mathcal{T}_{\mathrm{EXP}}(0.7,\,1,\,y_{\max}), & j &= 1,\dots,J &\text{(28f)}\\
w_{jk} &\sim \mathcal{N}(0.1,\,0.1), & j &= 1,\dots,J,\ k = 1,\dots,K &\text{(28g)}\\
y_{ij} &= \Big\lfloor \Big(e^{\sum_{k=1}^{K} w_{jk} x_{ijk}}\Big)_{\min\text{-}\max} x_{\cdot j} \Big\rfloor, & i &= 1,\dots,n_j,\ j = 1,\dots,J. &\text{(28h)}
\end{aligned}$$

$\mathcal{T}_{\mathrm{EXP}}$ represents the truncated exponential distribution. The pdf of $\mathcal{T}_{\mathrm{EXP}}(0.7, 1, y_{\max})$, where 0.7 is the rate parameter while 1 and $y_{\max}$ are the lower and upper bounds, is given by

$$f(x) = \begin{cases} 0.7\, e^{-0.7\,(y_{\max} - x - 1)}, & x \in [1,\, y_{\max}],\\ 0, & \text{otherwise}. \end{cases} \tag{29}$$
This particular truncated exponential distribution is used instead of a uniform distribution to ensure that the generated counts do not concentrate on small values. By changing the value of the upper bound, we can generate counts in different ranges. Notice that since the floor function, $\lfloor\cdot\rfloor$, is used in Eq. (28h), the generated counts can include zeros, which are smaller than the lower bound of 1.
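A sketch of this small-count generator, with $\mathcal{T}_{\mathrm{EXP}}(0.7, 1, y_{\max})$ drawn by grid-based inverse-CDF sampling of the density in Eq. (29) (normalized numerically) and the floor step of Eq. (28h); the form of Eq. (28h) is reconstructed from the text, so treat the details as illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_texp(rate, lo, hi, size):
    """Inverse-CDF draws from the density of Eq. (29), normalized on a grid."""
    grid = np.linspace(lo, hi, 10_001)
    pdf = rate * np.exp(-rate * (hi - grid - 1.0))  # Eq. (29), up to normalization
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=size), cdf, grid)

y_max, J, n_j, K = 15, 8, 40, 5
x_group = sample_texp(0.7, 1.0, y_max, J)                         # Eq. (28f)
lows, highs = [0.1, 0.1, 0.1, 1.0, 0.5], [2.0, 1.0, 0.5, 10.0, 5.0]
x = np.stack([rng.uniform(lo, hi, size=(J, n_j)) for lo, hi in zip(lows, highs)],
             axis=-1)                                             # Eqs. (28a)-(28e)
w = rng.normal(0.1, np.sqrt(0.1), size=(J, K))                    # Eq. (28g), variance 0.1
g = np.exp(np.einsum("jik,jk->ji", x, w))
g = (g - g.min()) / (g.max() - g.min())                           # min-max normalization
y = np.floor(g * x_group[:, None]).astype(int)                    # Eq. (28h): floor allows zeros
print(y.min(), y.max(), np.mean(y <= 5))
```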
Experiment Setup
In the HBPRM for the count datasets listed above, we employ $\mathcal{N}(0, 1)$ and $\mathrm{IG}(1, 1)$ as weakly-informative priors [44] for the prior mean and variance of the coefficients, $\mu_k$ and $\sigma_k^2$, respectively. In the numerical experiments, NUTS is implemented with Stan [44]. Reported numbers are averaged over 4 runs of 10000 iterations for each algorithm, discarding the first 5000 samples as warm-up/burn-in.
We compare AGS with NUTS in terms of average sampling time in seconds per 1000 iterations ($T_s$), sampling efficiency ($E_s$), $R^2$, and root mean square error (RMSE). Sampling efficiency is quantified as the mean effective sample size over the average sampling time in seconds per 1000 iterations, i.e., $E_s = \hat{n}_{\mathrm{eff}} / T_s$, where $\hat{n}_{\mathrm{eff}}$ is the effective sample size of multiple sequences of samples [11, Chapter 11]. To make this paper self-contained, we include the details for calculating $\hat{n}_{\mathrm{eff}}$, $R^2$, and RMSE in Appendix B. All experiments are implemented in R (version 3.6.1) on a Windows 10 desktop computer with a 3.40 GHz Intel Core i7-6700 CPU and 16.0 GB RAM.
Results
The performance of NUTS and AGS on the different datasets is summarized in Table 6 (synthetic datasets) and Table 7 (real datasets). On both the synthetic and real datasets, AGS consistently outperforms NUTS in average sampling time, especially when the size of the dataset is large. Depending on the dataset, the improvement in average inference speed can be greater than one order of magnitude. This observation shows that using the Gaussian approximation to avoid evaluating the complex conditional posterior can significantly boost the sampling speed. However, for all the datasets except power outage dataset P1, the sampling efficiency of AGS is significantly lower than that of NUTS because the effective sample size obtained from AGS is much lower than that from NUTS. The relatively low sampling efficiency of AGS does not compromise the accuracy of the parameter estimates. In all the examined datasets, $R^2$ and RMSE have comparable values for both AGS and NUTS. The inference accuracy is better (higher $R^2$ and lower RMSE) for AGS on all the synthetic datasets, while in eight out of the thirteen (about 62%) real datasets, AGS has slightly higher $R^2$ and lower RMSE. In particular, the results on the Covid test data show that as long as the counts are not predominantly very small, the proposed approximate Gibbs sampler can outperform NUTS in predictive accuracy. Overall, it can be concluded that AGS significantly decreases the computational load by allowing for faster sampling without compromising the accuracy of the estimates. We also investigate the scalability of the algorithms, as it is crucial for large-scale hierarchical data. Therefore, we show the average sampling time of the two algorithms under different dataset sizes to understand their performance for larger datasets. We conduct an empirical analysis of the average sampling time (seconds) per 1000 iterations for all the synthetic and real datasets, shown in Fig. 2. The sampling time of both samplers increases as a function of the size of the dataset. However, compared to NUTS, the increase in the sampling time of AGS is significantly lower, showing a much smaller rate of increase with dataset size and suggesting improved scalability. This observation also indicates that although NUTS can generate samples effectively, it becomes inefficient for large datasets, as evaluating the gradient when proposing new samples becomes computationally expensive [45].
If the percentage of small counts is not very high (SS9 and SS10), then AGS outperforms NUTS, even when there are zero counts (SS10).
Based on the performance comparison using synthetic datasets with small counts and Covid-19 test datasets with small counts, we can conclude that when there is a large percentage of small counts, particularly zero counts, NUTS tends to outperform AGS. The specific percentage of small counts in a dataset that leads to a better performance of NUTS than AGS varies with the dataset. That is, depending on the particular dataset, AGS may still outperform NUTS when there is a large percentage of small counts.
Conclusions
This research proposes a scalable approximate Gibbs sampling algorithm for the HBPRM for grouped count data. Our algorithm builds on approximating the data likelihood with a Gaussian distribution such that the conditional posteriors of the coefficients have a closed-form solution. Empirical examples using synthetic and real datasets demonstrate that the proposed algorithm outperforms the state-of-the-art sampling algorithm, NUTS, in sampling speed. The improvement is greater for larger datasets, suggesting improved scalability. Due in part to the Gibbs updates, AGS produces slower-mixing Markov chains, leading to a much lower effective sample size and therefore lower sampling efficiency. However, when sampling time is of great concern to model users (e.g., when predicting incidents and demands to allocate resources during a disaster), AGS may be the only feasible option. As the approximation quality improves with larger counts, our algorithm works best for count datasets in which the counts are large. When a large portion of the counts in a dataset are zero or very small, NUTS tends to outperform AGS in inference accuracy. Therefore, when there are zero counts and inference accuracy is critical, NUTS is recommended over AGS.
It is worth noting that the approximate conditional distributions of the parameters in the HBPRM for grouped count data may not be compatible with each other, i.e., there may not exist an implicit joint posterior distribution [17,46] after the approximation is applied. However, despite potentially incompatible conditional distributions, the use of such approximate MCMC samplers is justified by their computational efficiency and analytical convenience [18,46], especially when the efficiency improvement outweighs the bias introduced by the approximation [17].
Future work can explore scalable inference in hierarchical Bayesian models for data with excessive zeros [47][48][49] as the Poisson regression model is not appropriate for zero-inflated count data.
Appendix A. Derivation of the approximate conditional posterior

The conditional posterior of regression coefficient $w_{jk}$ can be written as
$$p(w_{jk}\mid -) \propto \exp\!\left(\frac{(w_{jk}-\mu_k)^2}{-2\sigma_k^2} + \sum_{i=1}^{n_j} \frac{\big(x_{ijk}w_{jk} + \sum_{h=1,h\neq k}^{K} x_{ijh}w_{jh} - \psi_0(y_{ij})\big)^2}{-2\psi_1(y_{ij})}\right). \tag{A1}$$

Let $A$ be the exponent in Eq. (A1); expanding the square terms, we have
$$A = \frac{(w_{jk}-\mu_k)^2}{-2\sigma_k^2} + \sum_{i=1}^{n_j} \frac{\big(w_{jk}x_{ijk} + \sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)^2}{-2\psi_1(y_{ij})} \tag{A2}$$
$$= \frac{w_{jk}^2 - 2\mu_k w_{jk} + C_1}{-2\sigma_k^2} + \sum_{i=1}^{n_j} \frac{x_{ijk}^2 w_{jk}^2 + 2\big(\sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)x_{ijk}w_{jk} + C_2}{-2\psi_1(y_{ij})} \tag{A3}$$
$$= -\frac{1}{2}\left(\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j} \frac{x_{ijk}^2}{\psi_1(y_{ij})}\right) w_{jk}^2 + \left(\frac{\mu_k}{\sigma_k^2} + \sum_{i=1}^{n_j} \frac{\big(\sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)x_{ijk}}{-\psi_1(y_{ij})}\right) w_{jk} + C_3. \tag{A4}$$
Dividing the numerator and denominator by the coefficient of the quadratic term, we get
$$A \propto -\frac{1}{2}\left(\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j}\frac{x_{ijk}^2}{\psi_1(y_{ij})}\right)\left(w_{jk}^2 - 2\,\frac{\frac{\mu_k}{\sigma_k^2} - \sum_{i=1}^{n_j}\frac{\big(\sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)x_{ijk}}{\psi_1(y_{ij})}}{\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j}\frac{x_{ijk}^2}{\psi_1(y_{ij})}}\, w_{jk}\right) + C_4 \tag{A5}$$
$$\propto \frac{\left(w_{jk} - \frac{\frac{\mu_k}{\sigma_k^2} - \sum_{i=1}^{n_j}\frac{\big(\sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)x_{ijk}}{\psi_1(y_{ij})}}{\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j}\frac{x_{ijk}^2}{\psi_1(y_{ij})}}\right)^2}{-2\cdot\dfrac{1}{\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j}\frac{x_{ijk}^2}{\psi_1(y_{ij})}}} + C_5. \tag{A6}$$
Therefore, we obtain the mean and variance of the approximate Gaussian posterior:

$$\tilde{\mu}_k = \frac{\frac{\mu_k}{\sigma_k^2} - \sum_{i=1}^{n_j} \frac{\big(\sum_{h=1,h\neq k}^{K} w_{jh}x_{ijh} - \psi_0(y_{ij})\big)x_{ijk}}{\psi_1(y_{ij})}}{\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j} \frac{x_{ijk}^2}{\psi_1(y_{ij})}}, \quad \forall\, j = 1,\dots,J, \tag{A7}$$
$$\tilde{\sigma}_k^2 = \frac{1}{\frac{1}{\sigma_k^2} + \sum_{i=1}^{n_j} \frac{x_{ijk}^2}{\psi_1(y_{ij})}}, \quad \forall\, j = 1,\dots,J. \tag{A8}$$

Multiplying the numerator and denominator of Eqs. (A7)-(A8) by $\sigma_k^2$ recovers Eqs. (25)-(26).
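The algebra in Eqs. (A1)-(A8) can be checked numerically: the exact exponent A and the quadratic term built from the moments of Eqs. (A7)-(A8) must differ only by a constant in w_jk. A small sketch of that check, with all inputs drawn at random for illustration:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(5)
n_j, K, k = 6, 3, 1
x = rng.uniform(0.1, 2.0, size=(n_j, K))
y = rng.integers(1, 30, size=n_j)
w = rng.normal(size=K)
mu_k, s2_k = 0.3, 0.8
p0, p1 = digamma(y), polygamma(1, y)

def exponent(wk):
    """Exponent A of Eq. (A1) as a function of w_jk alone."""
    wv = w.copy(); wv[k] = wk
    return (-(wk - mu_k) ** 2 / (2.0 * s2_k)
            - ((x @ wv - p0) ** 2 / (2.0 * p1)).sum())

prec = 1.0 / s2_k + (x[:, k] ** 2 / p1).sum()           # 1 / sigma_tilde^2, Eq. (A8)
other = x @ w - x[:, k] * w[k]                          # sum over h != k of x_ijh w_jh
mean = (mu_k / s2_k - ((other - p0) * x[:, k] / p1).sum()) / prec  # Eq. (A7)

for wk in (-1.0, 0.0, 2.5):
    print(exponent(wk) + (wk - mean) ** 2 * prec / 2.0)  # identical up to rounding
```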
Appendix B. Metrics used for comparing samplers

• Effective sample size ($\hat{n}_{\mathrm{eff}}$). For each scalar estimand $\psi$, the simulations/samples are labeled as $\psi_{ij}$ ($i = 1,\dots,n$; $j = 1,\dots,m$), where $n$ is the number of samples in each chain (sequence) and $m$ is the number of chains. The between-sequence variance $B$ and the within-sequence variance $W$ are given as

$$B = \frac{n}{m-1}\sum_{j=1}^{m}(\bar{\psi}_{\cdot j} - \bar{\psi}_{\cdot\cdot})^2, \qquad W = \frac{1}{m}\sum_{j=1}^{m} s_j^2, \quad \text{where } \bar{\psi}_{\cdot j} = \frac{1}{n}\sum_{i=1}^{n}\psi_{ij},\ \bar{\psi}_{\cdot\cdot} = \frac{1}{m}\sum_{j=1}^{m}\bar{\psi}_{\cdot j},\ s_j^2 = \frac{1}{n-1}\sum_{i=1}^{n}(\psi_{ij}-\bar{\psi}_{\cdot j})^2.$$

The effective sample size is calculated according to Ref. [11] as

$$\hat{n}_{\mathrm{eff}} = \frac{mn}{1 + 2\sum_{t=1}^{T}\hat{\rho}_t}, \tag{B1}$$

where the estimated auto-correlations $\hat{\rho}_t$ are computed as

$$\hat{\rho}_t = 1 - \frac{V_t}{2\,\widehat{\mathrm{var}}^{+}}, \tag{B2}$$

and $T$ is the first odd positive integer for which $\hat{\rho}_{T+1} + \hat{\rho}_{T+2}$ is negative. In Eq. (B2), $V_t$, the variogram at each lag $t$, is given by

$$V_t = \frac{1}{m(n-t)}\sum_{j=1}^{m}\sum_{i=t+1}^{n}(\psi_{i,j} - \psi_{i-t,j})^2, \tag{B3}$$

and $\widehat{\mathrm{var}}^{+}$, the marginal posterior variance of the estimand, is estimated as

$$\widehat{\mathrm{var}}^{+} = \frac{n-1}{n}W + \frac{1}{n}B. \tag{B4}$$

• $R^2$. The $R^2$ of generic predicted values $\hat{y}_i$, $i = 1,\dots,N$, of the dependent variables $y_i$, $i = 1,\dots,N$, is expressed as

$$R^2 = 1 - \frac{\sum_{i=1}^{N}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{N}(y_i - \bar{y})^2}, \tag{B5}$$

where $\bar{y}$ is the average value of $y_i$, $i = 1,\dots,N$.

• RMSE. The RMSE of the predicted values is given by

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}{N}}.$$
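A compact sketch of the $\hat{n}_{\mathrm{eff}}$ computation in Eqs. (B1)-(B4) for m chains of length n; it is written here for illustration, following the definitions above, with the truncation rule on T applied inside the loop.

```python
import numpy as np

def n_eff(chains):
    """Effective sample size of Eq. (B1); chains: (m, n) draws of one scalar estimand."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)           # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()             # within-sequence variance
    var_plus = (n - 1) / n * W + B / n                # Eq. (B4)
    rho = []
    for t in range(1, n):
        V_t = ((chains[:, t:] - chains[:, :-t]) ** 2).sum() / (m * (n - t))  # Eq. (B3)
        rho.append(1.0 - V_t / (2.0 * var_plus))      # Eq. (B2)
        if t >= 3 and t % 2 == 1 and rho[-2] + rho[-1] < 0:
            rho = rho[:-2]                            # truncate at the first odd T
            break
    return m * n / (1.0 + 2.0 * np.sum(rho))          # Eq. (B1)
```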
Figure 1. Quality of the Gaussian approximation (dashed blue line) to the true distribution (solid red line) for different values of y: (a) y = 1; (b) y = 3; (c) y = 5; (d) y = 10; (e) y = 20. (f) Values of the KS distance (solid red line) between the approximate distribution and the true distribution, and values of the absolute error in the mean (dashed blue line) of the approximate distribution, as the value of y increases.
Algorithm 1: Approximate Gibbs sampler. Input: covariates x, counts y, number of warm-up samples $N_0$, number of desired samples $N_1$. Output: desired posterior samples of $\Theta$. Each iteration draws every $w_{jk}$ from the Gaussian approximation with moments given by Eqs. (25)-(26), and every $\mu_k$ and $\sigma_k^2$ from the conjugate conditionals in Eqs. (10)-(11); the first $N_0$ iterations are discarded as warm-up.
Figure 2. Scalability of NUTS and AGS on (a) real datasets and (b) synthetic datasets. The size of a dataset, i.e., the total number of count data points, is calculated as $\sum_{j=1}^{J} n_j$.
Table 1. Notations.

Symbol        Description
i             Index of a count data point in a group
j, k          Index of a group and index of a covariate
a, b          Shape and scale parameters of the inverse-gamma distribution
n_j           Number of data points in group j
w_jk          Coefficient for covariate k in group j
x_ijk         Value of covariate k for count data point i in group j
x_{.j}        Value of the group-level covariate for group j
y_ij          Count data point i in group j
v             Log of the gamma random variable z
z             Gamma random variable
E_s           Sampling efficiency
D             Dataset including the covariates and the counts
J, K          Number of groups and number of covariates
N_0, N_1      Number of samples as warm-ups and number of desired samples to keep
N_d           Total number of data points in a dataset
PCT_0         Percentage of zero counts in a dataset
PCT_{1,5}     Percentage of counts in [1, 5] in a dataset
RG            Range of counts in a dataset
T_s           Sampling time per 1000 iterations in seconds
α, β          Shape parameter and scale parameter of the gamma distribution
γ             Euler-Mascheroni constant
-             Index of iterations
µ, σ          Mean and variance of the Gaussian distribution
λ_ij          Mean for count data point i in group j
Θ             Set of parameters
m, τ²         Mean and variance for µ
Table 2. An example of a synthetic dataset.

x1    x2    x3    x4    x5    x6     y
0.14  0.11  0.27  1.49  0.66  12.10  42
0.61  0.15  0.27  2.22  1.52  24.76  6440
0.64  0.58  0.27  3.11  1.93  34.67  11424
0.77  0.58  0.30  5.70  2.13  38.71  13535
1.07  0.60  0.34  6.38  2.77  62.73  25064
1.29  0.75  0.42  7.78  3.69  79.38  33917
1.41  0.91  0.47  8.84  4.91  85.56  38272
1.55  0.93  0.49  8.93  4.96  92.86  41806
Table 3. A subset of the power outage data.

Event ID  PS         TQV    U10M   V10M   t    Outage count
1         99691.45   43.18  2.39   4.79   4    66807
1         100917.62  26.75  1.11   1.39   32   18379
1         101041.88  36.29  1.11   -0.18  60   12096
1         101467.45  55.72  -1.73  -0.67  116  14231
1         101155.43  37.50  -0.08  -3.01  144  10155
1         101037.79  32.13  -0.48  -1.53  172  4758
1         101194.86  40.20  -0.90  1.66   200  2699
1         101183.98  46.61  -0.54  1.20   228  2297
1         101136.76  34.68  -1.14  -1.50  256  248
1         101086.31  43.80  -2.19  -2.44  284  43
Table 4. A subset of the Covid-19 test data.

Study ID  Test case ID  log(t)  log(t)^2  log(t)^3  N_s  Positive count
1         1             1.23    1.51      1.86      35   15
1         2             1.26    1.58      1.98      23   11
1         3             1.28    1.64      2.09      20   6
1         5             1.32    1.75      2.31      20   8
1         6             1.34    1.8       2.42      11   3
1         7             1.36    1.85      2.53      11   5
1         8             1.38    1.9       2.63      9    2
1         9             1.4     1.95      2.73      6    3
1         10            1.41    2         2.83      5    2
Table 5. A subset of the bike sharing data. Workingday = 1 indicates the rental occurs on a working day, and 0 otherwise.

Workingday  Temperature  Humidity  Casual rental count  Total count
0           0.34         0.81      331                  985
0           0.36         0.70      131                  801
1           0.20         0.44      120                  1349
1           0.20         0.59      108                  1562
1           0.23         0.44      82                   1600
1           0.20         0.52      88                   1606
1           0.20         0.50      148                  1510
0           0.17         0.54      68                   959
0           0.14         0.43      54                   822
Table 6. Performance of NUTS and AGS on synthetic datasets. T_s (s), E_s, R^2, and RMSE are reported for each sampler.

Dataset  N_d  K  J   T_s NUTS  T_s AGS  E_s NUTS  E_s AGS  R^2 NUTS  R^2 AGS  RMSE NUTS  RMSE AGS
S1       200  2  10  1.51      0.96     649.28    33.46    0.9390    0.9450   6500       6191
S2       400  2  10  2.66      0.98     373.46    34.99    0.9430    0.9500   5823       5455
S3       800  2  20  5.05      1.63     195.21    18.63    0.9576    0.9626   3526       3310
S4       200  3  10  5.76      1.25     170.39    5.28     0.9614    0.9660   4235       3976
S5       400  3  10  16.63     1.29     59.78     2.03     0.9683    0.9710   3450       3305
S6       800  3  20  40.53     2.44     24.30     1.80     0.9677    0.9719   3993       3728
S7       200  4  10  7.64      1.63     129.25    1.59     0.9716    0.9760   2955       2718
S8       400  4  10  23.72     1.94     41.69     1.50     0.9639    0.9701   4406       4012
S9       800  4  20  45.00     3.31     21.99     0.49     0.9740    0.9770   2958       2790
S10      200  5  10  12.12     2.06     81.26     0.70     0.9574    0.9631   4941       4593
S11      400  5  10  24.70     2.41     39.68     0.49     0.9782    0.9826   2514       2245
S12      800  5  20  64.00     4.12     15.44     0.27     0.9767    0.9804   3446       3157
S13      200  6  10  14.66     2.44     67.49     0.23     0.9817    0.9876   2720       2721
S14      400  6  10  42.01     2.63     20.17     0.24     0.9922    0.9948   1339       1128
S15      800  6  20  93.20     4.97     8.44      0.17     0.9910    0.9940   1994       1629
Table 7. Performance of NUTS and AGS on real datasets.

Dataset | N_d | K | J | T_s(s) NUTS | T_s(s) AGS | E_s NUTS | E_s AGS | R^2 NUTS | R^2 AGS | RMSE NUTS | RMSE AGS
P1 | 3817 | 5 | 56 | 885.65 | 15.76 | 1.12 | 1.25 | 0.9730 | 0.9801 | 13558 | 11621
P2 | 2467 | 5 | 50 | 652.87 | 13.71 | 1.50 | 0.62 | 0.9873 | 0.9884 | 1446 | 1384
P3 | 1548 | 5 | 35 | 387.47 | 9.41 | 2.54 | 0.92 | 0.9850 | 0.9870 | 1974 | 1833
P4 | 632 | 5 | 26 | 165.73 | 6.67 | 5.94 | 0.46 | 0.9923 | 0.9918 | 1327 | 1373
P5 | 520 | 5 | 16 | 135.16 | 4.31 | 7.27 | 1.85 | 0.9934 | 0.9940 | 3473 | 3312
P6 | 421 | 5 | 17 | 118.73 | 4.54 | 8.25 | 2.68 | 0.9908 | 0.9918 | 2526 | 2387
P7 | 375 | 5 | 23 | 39.86 | 5.76 | 24.75 | 2.41 | 0.9459 | 0.9355 | 3574 | 3903
P8 | 247 | 5 | 10 | 8.75 | 2.78 | 111.91 | 2.38 | 0.9744 | 0.9729 | 795 | 818
P9 | 157 | 5 | 8 | 5.48 | 2.24 | 179.87 | 4.07 | 0.9964 | 0.9967 | 803 | 766
P10 | 115 | 5 | 6 | 4.49 | 1.72 | 218.17 | 6.75 | 0.9869 | 0.9915 | 7715 | 6222
P11 | 63 | 5 | 4 | 5.49 | 1.25 | 177.38 | 0.92 | 0.9356 | 0.9027 | 251 | 308
Bike share | 729 | 3 | 2 | 9.09 | 0.98 | 109.54 | 24.75 | 0.6743 | 0.6292 | 1101 | 1175
Covid test | 298 | 3 | 11 | 34.60 | 2.59 | 28.57 | 18.54 | 0.8517 | 0.8582 | 2.53 | 2.47
Table 8. Inference accuracy of NUTS and AGS on selected Covid-19 test datasets with small counts. The number of covariates K = 3 and the number of groups J = 11.

Dataset | RG | PCT_0 (%) | PCT_{1,5} (%) | N_d | R^2 NUTS | R^2 AGS | RMSE NUTS | RMSE AGS
Subset1 | [1, 5] | 0.0 | 100.0 | 168 | 0.4403 | 0.4452 | 0.9964 | 0.9920
Subset2 | [0, 5] | 32.5 | 67.5 | 249 | 0.6606 | 0.5684 | 0.9370 | 1.0570
Subset3 | [1, 38] | 0.0 | 56.4 | 298 | 0.8517 | 0.8582 | 2.5337 | 2.4685
Whole set | [0, 38] | 21.4 | 44.3 | 379 | 0.8692 | 0.8546 | 2.3492 | 2.4769
Table 9. Inference accuracy of NUTS and AGS on synthetic datasets with small counts. The number of covariates K = 5 and the number of groups J = 8.

Dataset | RG | PCT_0 (%) | PCT_{1,5} (%) | N_d | R^2 NUTS | R^2 AGS | RMSE NUTS | RMSE AGS
SS1 | [1, 5] | 0.0 | 100.0 | 252 | 0.9226 | 0.8139 | 0.3198 | 0.4959
SS2 | [0, 5] | 21.3 | 78.8 | 320 | 0.5414 | 0.4567 | 0.9460 | 1.0010
SS3 | [1, 10] | 0.0 | 71.2 | 288 | 0.9625 | 0.9421 | 0.4478 | 0.5564
SS4 | [0, 10] | 10.0 | 64.1 | 320 | 0.6461 | 0.6480 | 1.4805 | 1.4764
SS5 | [1, 15] | 0.0 | 57.3 | 307 | 0.9766 | 0.9732 | 0.5715 | 0.6113
SS6 | [0, 15] | 4.1 | 55.0 | 320 | 0.9641 | 0.9538 | 0.7225 | 0.8193
SS7 | [1, 20] | 0.0 | 47.0 | 319 | 0.9822 | 0.9819 | 0.6639 | 0.6708
SS8 | [0, 20] | 3.1 | 46.9 | 320 | 0.9691 | 0.9642 | 0.8774 | 0.9446
SS9 | [1, 30] | 0.0 | 32.2 | 314 | 0.9723 | 0.9767 | 1.3059 | 1.1979
SS10 | [0, 30] | 1.9 | 31.6 | 320 | 0.9525 | 0.9563 | 1.7246 | 1.6556
For an array of real numbers represented by a generic vector x, the min-max normalization of x is given by x_min-max = (x − x_min)/(x_max − x_min).
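A minimal sketch of this normalization (illustrative only; the sample values are taken from Table 2 above):

```python
import numpy as np

# Min-max normalization of a vector, as defined in the footnote above.
x = np.array([42.0, 6440.0, 11424.0, 41806.0])   # sample counts from Table 2
x_min_max = (x - x.min()) / (x.max() - x.min())  # rescales x into [0, 1]
print(x_min_max)  # approximately [0., 0.1532, 0.2725, 1.]
```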
Aktekin T, Polson N, Soyer R. Sequential Bayesian analysis of multivariate count data. Bayesian Analysis. 2018;13(2):385-409.
Hay JL, Pettitt AN. Bayesian analysis of a time series of counts with covariates: An application to the control of an infectious disease. Biostatistics. 2001;2(4):433-444.
De Oliveira V. Hierarchical Poisson models for spatial count data. Journal of Multivariate Analysis. 2013;122:393-408.
Han SR, Guikema SD, Quiring SM, et al. Estimating the spatial distribution of power outages during hurricanes in the Gulf coast region. Reliability Engineering & System Safety. 2009;94(2):199-210.
Yu JZ, Whitman M, Kermanshah A, et al. A hierarchical Bayesian approach for assessing infrastructure networks serviceability under uncertainty: A case study of water distribution systems. Reliability Engineering & System Safety. 2021;215:107735.
Yu JZ, Baroud H. Quantifying community resilience using hierarchical Bayesian kernel methods: A case study on recovery from power outages. Risk Analysis. 2019;39(9):1930-1948.
Ma J, Kockelman KM, Damien P. A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods. Accident Analysis & Prevention. 2008;40(3):964-975.
Baio G, Blangiardo M. Bayesian hierarchical model for the prediction of football results. Journal of Applied Statistics. 2010;37(2):253-264.
Flask T, Schneider IV WH, Lord D. A segment level analysis of multi-vehicle motorcycle crashes in Ohio using Bayesian multi-level mixed effects models. Safety Science. 2014;66:47-53.
Khazraee SH, Johnson V, Lord D. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions. Accident Analysis & Prevention. 2018;117:181-195.
Gelman A, Carlin JB, Stern HS, et al. Bayesian data analysis. Chapman and Hall/CRC; 2013.
Betancourt M, Girolami M. Hamiltonian Monte Carlo for hierarchical models. Current Trends in Bayesian Methodology with Applications. 2015;79(30):2-4.
AlJadda K, Korayem M, Ortiz C, et al. PGMHD: A scalable probabilistic graphical model for massive hierarchical data problems. In: 2014 IEEE International Conference on Big Data (Big Data); IEEE; 2014. p. 55-60.
Brooks S. Markov Chain Monte Carlo method and its application. Journal of the Royal Statistical Society: Series D (The Statistician). 1998;47(1):69-100.
Conrad PR, Marzouk YM, Pillai NS, et al. Accelerating asymptotically exact MCMC for computationally intensive models via local approximations. Journal of the American Statistical Association. 2016;111(516):1591-1607.
Robert CP, Elvira V, Tawn N, et al. Accelerating MCMC algorithms. Wiley Interdisciplinary Reviews: Computational Statistics. 2018;10(5):e1435.
Alquier P, Friel N, Everitt R, et al. Noisy Monte Carlo: Convergence of Markov Chains with approximate transition kernels. Statistics and Computing. 2016;26(1-2):29-47.
Johndrow JE, Mattingly JC, Mukherjee S, et al. Optimal approximating Markov Chains for Bayesian inference. arXiv preprint arXiv:150803387. 2015.
Streftaris G, Worton BJ. Efficient and accurate approximate Bayesian inference with an application to insurance data. Computational Statistics & Data Analysis. 2008;52(5):2604-2622.
Chan AB, Vasconcelos N. Bayesian Poisson regression for crowd counting. In: 2009 IEEE 12th International Conference on Computer Vision; IEEE; 2009. p. 545-551.
Dutta R, Blomstedt P, Kaski S. Bayesian inference in hierarchical models by combining independent posteriors. arXiv preprint arXiv:160309272. 2016.
Berman B. Asymptotic posterior approximation and efficient MCMC sampling for generalized linear mixed models [dissertation]. UC Irvine; 2019.
Bansal P, Krueger R, Graham DJ. Fast Bayesian estimation of spatial count data models. Computational Statistics & Data Analysis. 2020;107152.
Polson NG, Scott JG, Windle J. Bayesian inference for logistic models using Pólya-Gamma latent variables. Journal of the American Statistical Association. 2013;108(504):1339-1349.
Aguero-Valverde J, Jovanis PP. Bayesian multivariate Poisson lognormal models for crash severity modeling and site ranking. Transportation Research Record. 2009;2136(1):82-91.
Montesinos-López OA, Montesinos-López A, Crossa J, et al. A Bayesian Poisson-lognormal model for count data for multiple-trait multiple-environment genomic-enabled prediction. G3: Genes, Genomes, Genetics. 2017;7(5):1595-1606.
Serhiyenko V, Mamun SA, Ivan JN, et al. Fast Bayesian inference for modeling multivariate crash counts. Analytic Methods in Accident Research. 2016;9:44-53.
Gelman A. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Analysis. 2006;1(3):515-534.
Graves TL. Automatic step size selection in random walk Metropolis algorithms. arXiv preprint arXiv:11035986. 2011.
Holden L. Mixing of MCMC algorithms. Journal of Statistical Computation and Simulation. 2019;89(12):2261-2279.
Kass RE, Carlin BP, Gelman A, et al. Markov Chain Monte Carlo in practice: A roundtable discussion. The American Statistician. 1998;52(2):93-100.
Pang WK, Forster JJ, Troutt MD. Estimation of wind speed distribution using Markov Chain Monte Carlo techniques. Journal of Applied Meteorology. 2001;40(8):1476-1484.
Murphy KP. Conjugate Bayesian analysis of the Gaussian distribution. Technical report; 2007. Available from: https://www.cs.ubc.ca/~murphyk/mypapers.html.
Gilks WR, Wild P. Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society: Series C (Applied Statistics). 1992;41(2):337-348.
Geweke J, Tanizaki H. Bayesian estimation of state-space models using the Metropolis-Hastings algorithm within Gibbs sampling. Computational Statistics & Data Analysis. 2001;37(2):151-170.
Halliwell LJ. The log-gamma distribution and non-normal error. Variance: Advancing the Science of Risk. 2018.
Batir N. On some properties of digamma and polygamma functions. Journal of Mathematical Analysis and Applications. 2007;328(1):452-465.
Mačys J. On the Euler-Mascheroni constant. Mathematical Notes. 2013;94.
Prentice RL. A log-gamma model and its maximum likelihood estimation. Biometrika. 1974;61(3):539-544.
Hoffman MD, Gelman A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research. 2014;15(1):1593-1623.
Kucirka LM, Lauer SA, Laeyendecker O, et al. Variation in false-negative rate of reverse transcriptase polymerase chain reaction-based SARS-CoV-2 tests by time since exposure. Annals of Internal Medicine. 2020.
Asuncion A, Newman D. UCI Machine Learning Repository; 2007.
Fanaee-T H, Gama J. Event labeling combining ensemble detectors and background knowledge. Progress in Artificial Intelligence. 2014;2(2-3):113-127.
Stan Development Team. Stan modeling language users guide and reference manual; 2016.
Nishio M, Arakawa A. Performance of Hamiltonian Monte Carlo and No-U-Turn sampler for estimating genetic parameters and breeding values. Genetics Selection Evolution. 2019;51(1):73.
Gelman A. Parameterization and Bayesian modeling. Journal of the American Statistical Association. 2004;99(466):537-545.
Gholiabad SG, Moghimbeigi A, Faradmal J, et al. A multilevel zero-inflated Conway-Maxwell type negative binomial model for analysing clustered count data. Journal of Statistical Computation and Simulation. 2021;91(9):1762-1781.
Liu X, Winter B, Tang L, et al. Simulating comparisons of different computing algorithms fitting zero-inflated Poisson models for zero abundant counts. Journal of Statistical Computation and Simulation. 2017;87(13):2609-2621.
Brown S, Duncan A, Harris MN, et al. A zero-inflated regression model for grouped data. Oxford Bulletin of Economics and Statistics. 2015;77(6):822-831.
Appendix A. Derivation of the approximate conditional posterior

The derivation of the approximate conditional posterior distribution is presented below. Terms that do not impact w_jk are regarded as constants, i.e., C_i (i = 1, . . . , 5) in the following equations.
| [] |
[
"Separability and entanglement in superpositions of quantum states",
"Separability and entanglement in superpositions of quantum states"
] | [
"Saronath Halder \nHarish-Chandra Research Institute\nA CI of Homi Bhabha National Institute\nChhatnag Road211 019Jhunsi, AllahabadIndia\n",
"Ujjwal Sen \nHarish-Chandra Research Institute\nA CI of Homi Bhabha National Institute\nChhatnag Road211 019Jhunsi, AllahabadIndia\n"
] | [
"Harish-Chandra Research Institute\nA CI of Homi Bhabha National Institute\nChhatnag Road211 019Jhunsi, AllahabadIndia",
"Harish-Chandra Research Institute\nA CI of Homi Bhabha National Institute\nChhatnag Road211 019Jhunsi, AllahabadIndia"
] | [] | It is known that probabilistically mixing an arbitrary pair of pure quantum states, one of which is entangled and the other product, in any bipartite quantum system, one always obtains an entangled state, provided the entangled state of the pair appears with a nonzero probability. On the other hand, if we consider any superposition of the same pair, with a nonzero amplitude for the entangled state of the pair, the output state may not always be entangled. Motivated by this fact, in this work, we study the superpositions of a pure entangled state and a pure product state, when the amplitudes corresponding to the states appearing in any superposition are nonzero. We show, in particular, that all such superpositions produce only entangled states if the initial entangled state has Schmidt rank three or higher. Again, superposing a pure entangled state and a product state cannot lead to product states only, in any bipartite quantum system. These lead us to define conditional and unconditional inseparabilities of superpositions. These concepts in turn are useful in quantum communication protocols. We find that conditional inseparability of superpositions help in identifying strategies for conclusive local discrimination of shared quantum ensembles. We also find that the unconditional variety leads to systematic methods for spotting ensembles exhibiting the phenomenon of more nonlocality with less entanglement and two-element ensembles of conclusively and locally indistinguishable shared quantum states. | 10.1103/physreva.107.022413 | [
"https://export.arxiv.org/pdf/2108.02260v2.pdf"
] | 236,924,714 | 2108.02260 | c1b2efdf6c77cd9d017cbb9bac0ef66b49083a33 |
Separability and entanglement in superpositions of quantum states
23 Apr 2023
Saronath Halder
Harish-Chandra Research Institute
A CI of Homi Bhabha National Institute
Chhatnag Road211 019Jhunsi, AllahabadIndia
Ujjwal Sen
Harish-Chandra Research Institute
A CI of Homi Bhabha National Institute
Chhatnag Road211 019Jhunsi, AllahabadIndia
Separability and entanglement in superpositions of quantum states
23 Apr 2023; arXiv:2108.02260v2 [quant-ph]
It is known that probabilistically mixing an arbitrary pair of pure quantum states, one of which is entangled and the other product, in any bipartite quantum system, one always obtains an entangled state, provided the entangled state of the pair appears with a nonzero probability. On the other hand, if we consider any superposition of the same pair, with a nonzero amplitude for the entangled state of the pair, the output state may not always be entangled. Motivated by this fact, in this work, we study the superpositions of a pure entangled state and a pure product state, when the amplitudes corresponding to the states appearing in any superposition are nonzero. We show, in particular, that all such superpositions produce only entangled states if the initial entangled state has Schmidt rank three or higher. Again, superposing a pure entangled state and a product state cannot lead to product states only, in any bipartite quantum system. These lead us to define conditional and unconditional inseparabilities of superpositions. These concepts in turn are useful in quantum communication protocols. We find that conditional inseparability of superpositions help in identifying strategies for conclusive local discrimination of shared quantum ensembles. We also find that the unconditional variety leads to systematic methods for spotting ensembles exhibiting the phenomenon of more nonlocality with less entanglement and two-element ensembles of conclusively and locally indistinguishable shared quantum states.
I. INTRODUCTION
Quantum entanglement [1-3] is regarded as an important resource because it finds applications in several information processing protocols, like quantum teleportation [4,5], quantum dense coding [6][7][8][9][10][11], and quantum key distribution [12,13]. Therefore, it is important to understand the origin and characteristics of such a resource. Entanglement in quantum states of shared systems is a result of the superposition principle of quantum physics. Arbitrary superpositions, however, do not lead to entangled states of the corresponding shared physical system. For example, if we consider superpositions of a pure entangled state and a product state, then we may not always get entanglement as output. In this context, we mention that if we mix any pure entangled state with an arbitrary product state with nonzero probabilities then we always get an entangled state [14]. Motivated by this fact, in this work we consider the superpositions of a pure entangled state and a product state, when the amplitudes (i.e., superposition coefficients) corresponding to the (normalized) states appearing in any superposition are nonzero.
The question of entanglement of superposed quantum states has already been posed in the literature. The subsequent studies have, as far as we know, been always quantitative, providing important bounds on the amount of entanglement generated in different types of superpositions, considering different entanglement measures [15][16][17][18][19][20][21][22][23][24][25][26]. We, however, wish to study the problem qualitatively, answering only whether a given superposition of quantum states of a shared system is entangled or not. In particular, we ask when the superpositions of a given pair of a pure entangled state and a product state can produce entangled states, provided that the coefficients corresponding to the states appearing in any superposition are nonzero. A quantitative study of course provides the qualitative answer, but we find that in several cases where the quantitative solution is missing, one can still extract qualitative results. This is one of the reasons why we focus on a qualitative study in this work. Furthermore, we provide several applications of such a study.
In discussions about robustness of entanglement, one often considers the mixtures of an entangled state with a separable state, i.e., convex combination of the states, where the latter is deemed as "noise" [14,[27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. By considering such a mixture, one examines how robust the given entangled state is, against the chosen noise. In other words, we ask how much mixing of the noise does the entangled state tolerate, so that the newly produced state remains entangled. Sometimes, it is possible to find a separable state for a given entangled state, such that any mixture of the states produce entangled states, provided that the initial entangled state appears with nonzero probability in any of the mixtures. In such a scenario, we can say that the entangled state is unconditionally robust in the direction of that separable state [14,30,31,38,42]. As mentioned earlier, for an arbitrary pair of a pure entangled state and a product state, any mixture of the states with nonzero probabilities produces entangled states only [14]. Therefore, all pure entangled states are unconditionally robust in the direction of all product states. Following this notion of unconditional robustness, we ask whether it is possible to find product states, such that any superposition of a product state with a given pure entangled state is always entangled when the coefficients corresponding to the states appearing in any superposition are nonzero. If it is possible to produce such an instance, then we refer to the phenomenon as unconditional inseparability of superpositions.
In general, we find that for all pairs of a pure entan-gled state and a product state, arbitrary superpositions lead to entangled states only, when the initial entangled state has Schmidt rank three or above and the coefficient corresponding to it is nonzero in any superposition. Clearly, for bipartite quantum systems, none of whose constituents is a qubit, all pure entangled states are "unconditionally superposition robust" in the direction of all product states, when the initial entangled state has Schmidt rank three or above. This, therefore, provides a parallel scenario in the context of superpositions, with respect to the result stated before from Ref. [14] in the complementary context of mixtures. The opposite problem is to uncover instances where superpositions produce product states. We find that in arbitrary bipartite quantum systems, a pure entangled state and a product state cannot lead, via superpositions, to product states only. We refer to this phenomenon as conditional inseparability of superpositions. These results about superposing a pure entangled state and a product state in arbitrary bipartite quantum systems are given in Sec. II. Thereafter, we provide some detailed discussions related to the two-qubit system in Sec. III.
In Sec. IV, we discuss the applications of the above findings. For example, we show that the phenomenon of unconditional inseparability of superpositions finds application in the context of "nonlocality" associated with the problem of state discrimination under local quantum operations and classical communication (LOCC). We find that the phenomenon of conditional inseparability of superpositions can also have important applications in the context of local quantum state discrimination problems. We then identify a class of unextendible entangled bases, which we refer to as "r-UEBs", and prove that they can contain no state which is conclusively locally identifiable with nonzero probability.
Some discussions on the results, including a comparison with the quantitative results already known in the literature, are presented in Sec. V.
II. SUPERPOSING AN ENTANGLED STATE AND A PRODUCT STATE
We provide two results in this section for the general case, that is for pure states of arbitrary bipartite dimensions. Both concern the entanglement or its absence in a superposition of a pure entangled state and a product state. We will use the concept of the Schmidt rank of bipartite pure quantum states, which is defined as the number of nonzero (Schmidt) coefficients in a Schmidt decomposition of the (bipartite pure) state [1-3]. However, before we provide the theorems, we formally define the concepts of conditional and unconditional inseparabilities of superposition.
Definition 1. [Conditional and unconditional inseparabilities of superposition] If it is possible to find a product state, such that any superposition of the product state with a given pure entangled state is always entangled, when the coefficients corresponding to the states appearing in any superposition are nonzero, then we say that the initial entangled state is unconditionally robust in the direction of the product state and we refer to the phenomenon as an instance of unconditional inseparability of superpositions. Otherwise, it is an instance of conditional inseparability of superpositions.
Theorem 1. Any nontrivial superposition of an arbitrary entangled pure state and an arbitrary product state of a bipartite quantum system is entangled, provided the initial entangled state has Schmidt rank three or higher.
Remark 1. Among bipartite quantum systems, barring those for which a local dimension is two (i.e., barring C 2 ⊗ C d systems), a pure state can have Schmidt rank three or higher. Remark 2. By a "nontrivial" superposition of two pure quantum states, we imply that the initial states are not included in the discussion. I.e., if we consider any superposition of two states |e and |p as a 1 |e + a 2 |p , then the coefficients a 1 and a 2 corresponding to the states |e and |p must be nonzero. If one of them is zero then after superposition, we get back one of the initial states, i.e., |e or |p . We do not include these "trivial" cases here. Remark 3. We therefore find that for two qutrits and higher local dimensions, all pure entangled states of Schmidt rank three or higher are "unconditionally superposition robust" in the direction of all product states. The complementary situation, where a pure entangled state is mixed with a product state, always leads to an entangled state, is known from Ref. [14].
Proof. Let |e be an entangled state with Schmidt rank ≥ 3 and let |p be a product state. If possible, let a 1 |e + a 2 |p = |p ′ be a product state, where a 1 , a 2 are nonzero complex numbers. So, a 1 |e = |p ′ − a 2 |p . The vector a 1 |e is not normalized, but it is possible to define the Schmidt rank of this element, which is exactly equal to that of |e , i.e., ≥ 3. Similarly, if we define the Schmidt rank for the vector |p ′ − a 2 |p , then it can be shown that it is not greater than two, giving us a contradiction, proving that the initial assumption of |p ′ being a product state is not true. We are therefore left with proving that |p ′ − a 2 |p has Schmidt rank ≤ 2 (where |p and |p ′ are two product states and a 2 is a nonzero complex number), which we do now. This can be understood in the following way. We assume that |p = |α |β and |p ′ = |α ′ |β ′ . If the states on any of the sides of the bipartite system are linearly dependent, i.e., if either |α and |α ′ are linearly dependent or |β and |β ′ are so (or both), then |p ′ − a 2 |p can be written in tensor product form, and so will have unit Schmidt rank. Let therefore |α and |α ′ be linearly independent, and similarly, let |β and |β ′ be also so. Then, |α ′ can be written as b 0 |α + b 1 |α ⊥ , where |α and |α ⊥ are orthogonal to each other, and where b 0 and b 1 are complex numbers. Then we can rewrite |p ′ − a 2 |p as |α
(b 0 |β ′ − a 2 |β ) + b 1 |α ⊥ |β ′ .
Tracing out the first party, the reduced density matrix of the second is a convex combination (probabilistic mixture, but possibly not normalized to unit probability) of the vectors b 0 |β ′ −a 2 |β and b 1 |β ′ . This matrix cannot have more than two nonzero eigenvalues, which implies that the Schmidt rank of |p ′ − a 2 |p is ≤ 2. This completes the proof.
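As a quick numerical illustration of this proof (a minimal sketch; the particular rank-3 state, the real amplitudes, and the random sampling below are our own choices, not taken from the text):

```python
import numpy as np

def schmidt_rank(psi, d1, d2, tol=1e-10):
    # Schmidt rank = number of nonzero singular values of the d1 x d2
    # coefficient matrix of a bipartite pure state.
    sv = np.linalg.svd(psi.reshape(d1, d2), compute_uv=False)
    return int(np.sum(sv > tol))

d = 3
e = np.zeros(d * d)
e[[0, 4, 8]] = [0.8, 0.5, np.sqrt(0.11)]  # a|00> + b|11> + c|22>, Schmidt rank 3

rng = np.random.default_rng(0)
for _ in range(1000):
    # random product state |alpha>|beta> and random nonzero amplitudes
    alpha, beta = rng.normal(size=d), rng.normal(size=d)
    p = np.kron(alpha / np.linalg.norm(alpha), beta / np.linalg.norm(beta))
    a1, a2 = rng.uniform(0.05, 1.0, size=2)
    psi = a1 * e + a2 * p
    psi /= np.linalg.norm(psi)
    assert schmidt_rank(psi, d, d) >= 2  # the superposition is never a product
```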
Generalization of Theorem 1. Let us consider an entangled state |e of Schmidt rank, r ≥ 3, and also r − 2 product states, |p 1 , |p 2 , . . ., |p r−2 . Then the state |ψ = a 0 |e + a 1 |p 1 + a 2 |p 2 + . . . + a r−2 |p r−2 is always entangled, where a i are complex numbers such that ⟨ψ|ψ⟩ = 1 and a 0 ≠ 0.
Proof. Consider the element |e ′ 1 := a 0 |e + a 1 |p 1 . We claim that |e ′ 1 cannot have Schmidt rank less than r − 1. This follows from the following contradiction. Suppose,
|e ′ 1 has Schmidt rank ≤ r−2. So, |e ′ 1 = |0 |0 ′ +|1 |1 ′ + · · ·+|(l − 1) |(l − 1) ′ ,
where l ≤ r−2, and where we have ignored the normalization of the constituent Schmidt kets and the overall superposition. Now, a 0 |e = |e ′ 1 −a 1 |p 1 . We can take
|p 1 = |α |β , where |α = c 0 |0 + c 1 |1 + c 2 |2 +· · ·+c l−1 |l − 1 +c l |l ⊥ with |l ⊥ being orthogonal to the mutually orthogonal kets, |0 , |1 , |2 , . . . , |l − 1 . So, |e ′ 1 − a 1 |p 1 can be written as |0 (|0 ′ − a 1 c 0 |β ) + |1 (|1 ′ −a 1 c 1 |β )+· · ·+|(l − 1) (|(l − 1) ′ −a 1 c l−1 |β )− a 1 c l |l ⊥ |β .
This cannot have Schmidt rank > r − 1 because tracing out the first party lands us in a state that has support on a space spanned by ≤ r−1 kets. But a 0 |e has Schmidt rank r. Clearly, the Schmidt rank of a 0 |e cannot be equal to that of |e ′ 1 −a 1 |p 1 . Thus, |e ′ 1 cannot have Schmidt rank less than r − 1. In a similar fashion, we can prove that the element a 0 |e + a 1 |p 1 + a 2 |p 2 cannot have Schmidt rank less than r − 2, and finally, |ψ cannot have Schmidt rank less than r − r + 2 = 2. Note that in the above argument, if we take c l = 0 then anyway, the contradiction will occur. This completes the proof of the generalization of Theorem 1. The next result looks at a scenario that is complementary to the one answered in the foregoing theorem.
Theorem 2. There does not exist a pair of a pure entangled state and a product state such that any superposition of the states produce a product state, when the coefficients corresponding to the initial states in the superpositions are nonzero.
Proof. Theorem 1 implies that the current proof needs to be done only for entangled states of Schmidt rank two. However, we provide a general proof here. Let |e be an entangled state and |p a product state of an arbitrary bipartite quantum system. Consider now the superposition |ψ = ǫ|e + √(1 − |ǫ|²) |p , for an arbitrary nonzero and non-unit complex number ǫ, such that ⟨ψ|ψ⟩ = 1. Let us consider the metric d(|φ 1 , |φ 2 ) = 1 − |⟨φ 1 |φ 2 ⟩|² on the tensor-product Hilbert space corresponding to the bipartite quantum system under consideration. The set of states |ψ generated by varying ǫ, now considered to be real and ∈ (0, 1), can be understood as a continuous "line segment" connecting the "points" |e and |p on the joint Hilbert space. Now since product states form a closed set in this Hilbert space with respect to the metric d, there is always an ǫ 1 > 0 such that |ψ is entangled for all ǫ ∈ [ǫ 1 , 1].
The above results, and especially the remark after Theorem 1, clearly underline the importance of considering bipartite systems of low dimensions for analyzing entanglement in superpositions. And we consider the twoqubit case in detail in the following section.
III. TWO-QUBIT SYSTEMS: UNCONDITIONAL INSEPARABILITY OF SUPERPOSITIONS
Theorem 1 is void for two-qubit systems (actually for all C 2 ⊗ C d systems). There are no pure states of two qubits that are of Schmidt rank three or higher. The landscape is richer here than in the higher dimensions considered in Theorem 1, and a pure entangled state, when superposed with a product state, can lead to entangled as well as product states, in the case of two-qubit systems.
Let us first discuss the cases when the output (i.e., the superposed state) is a product state. An example is obtained by superposing (|00 + |11 )/ √ 2 and |11 with suitable coefficients, so that the output is |00 . If we consider a 1 (|00 + |11 )/ √ 2 + a 2 |11 , then we can take a 1 = √ 2 and a 2 = −1 to get |00 . In this example, the input entangled and product states are nonorthogonal. This however is not a necessity, and we now give an example of a superposition of a pure entangled state and an orthogonal product state, such that the output is a product. This can be performed systematically by the usual method of solving the eigenvalue equation of a local density matrix of the two-qubit pure state. As we will see in the succeeding section, this exercise can be of crucial importance in identifying ensembles of locally indistinguishable shared states.
Example 1. Consider the two-qubit entangled state,
|e 1 = 2/ √ 5 |00 + 1/ √ 5 |11
, written in Schmidt decomposition. We ask the following question: Is there a product state that is orthogonal to this entangled state, and for which a superposition of the product state with the given entangled state can produce a product state? To answer this question, we write down an arbitrary superposition of the two states, apply the condition of orthogonality, and solve the eigenvalue equation of a local density matrix of the superposed state. Some algebra leads us to the product state |p 1 = (2/√5 |0 − 1/√5 |1 )(1/√17 |0 + 4/√17 |1 ), for which the superposition 3/√26 |e 1 + √(17/26) |p 1 is a product state. We therefore see that a two-qubit pure entangled state can superpose with a two-qubit product state to create a product state. The same pairs will always lead to at least some entangled states when other superposition coefficients are considered, as guaranteed by Theorem 2. Such pairs are what we can refer to as leading to conditional inseparability of superpositions. However, we are now interested in discussing unconditional inseparability of superpositions, which refers to a set of pure quantum states, any superposition of which can only produce an entangled state. We provide now two examples of such unconditional inseparability of superpositions, where we focus only on sets which consist of a two-qubit pure entangled state and a two-qubit product state.
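The coefficients in Example 1 can be verified directly; the small numerical sketch below relies on the standard fact that a two-qubit pure state is a product state exactly when the determinant of its 2x2 coefficient matrix vanishes:

```python
import numpy as np

# States of Example 1, as 4-component coefficient vectors in C^2 x C^2.
e1 = (2 * np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(5)
p1 = np.kron(np.array([2, -1]) / np.sqrt(5), np.array([1, 4]) / np.sqrt(17))

print(np.dot(e1, p1))  # ~0: |e1> and |p1> are orthogonal

psi = 3 / np.sqrt(26) * e1 + np.sqrt(17 / 26) * p1
# Product state iff det of the 2x2 coefficient matrix is zero.
print(np.linalg.det(psi.reshape(2, 2)))  # ~0: |psi> is a product state
```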
Example 2. An arbitrary two-qubit pure entangled state can be written, in Schmidt form, as a 1 |00 + a 2 |11 , where a 1 , a 2 are (nonzero) positive real numbers such that a_1^2 + a_2^2 = 1. For varying a 1 and a 2 , a two-dimensional subspace is spanned, whose orthogonal complement contains the product states |01 and |10 . One can check that any superposition of a 1 |00 + a 2 |11 with |01 or |10 is always entangled: an instance of unconditional inseparability of superpositions.
This fact can be generalized to higher dimensions, but Theorem 1 reports an even better generalization there.
Example 3. Consider now the maximally entangled state, (|00 − |11 )/ √ 2, and the (orthogonal) product state, | + + , where |+ = (|0 + |1 )/ √ 2. Superpositions of these two states produce entangled states only. Therefore, this pair provides another example of unconditional inseparability of superpositions.
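As a short worked check of Example 3 (a verification we spell out here for convenience): the superposition a_1 (|00 − |11 )/√2 + a_2 |++ has, in the computational basis, the 2x2 coefficient matrix [[a_1/√2 + a_2/2, a_2/2], [a_2/2, −a_1/√2 + a_2/2]], whose determinant is (a_2^2/4 − a_1^2/2) − a_2^2/4 = −a_1^2/2. This vanishes only when a_1 = 0, so every superposition with a nonzero amplitude on the entangled state is entangled, as claimed.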
Unconditional inseparability and nonorthogonality. We will see in the following section that superpositions of a pure entangled state and an orthogonal product state are of importance in local state discrimination tasks. However, it is possible to find examples of unconditional inseparability of superpositions even in a pair consisting of an entangled state and a product state which are nonorthogonal. Consider the entangled state, |e = a_1 |00 + a_2 |11 , where a_1, a_2 are nonzero Schmidt coefficients and a_1^2 + a_2^2 = 1. We try to figure out a possible structure of a product state |p , not necessarily orthogonal to |e , such that |ψ = e |e + p |p is entangled for all nonzero values of e and p, with ⟨ψ|ψ⟩ = 1. Obviously, |p has the structure, α |e + β |e⊥ , with |α|^2 + |β|^2 = 1, and where |e⊥ is orthogonal to |e . We assume that |e⊥ is an entangled state which has the structure, |e⊥ = a_3 |01 + a_4 |10 , where a_3, a_4 are nonzero Schmidt coefficients and a_3^2 + a_4^2 = 1. Based on the values of a_1, a_2, a_3, a_4, the values of α, β will be fixed for which |p will be a product state. We now prove that α, β must be real and that they form a unique pair, for α |e + β |e⊥ to be a product state. To prove this, we consider the superposition cos(θ/2) (a_1 |00 + a_2 |11 ) + e^{iφ} sin(θ/2) (a_3 |01 + a_4 |10 ). This state is a product state iff |cos^2(θ/2) a_1 a_2 − e^{2iφ} sin^2(θ/2) a_3 a_4| = 0. This implies that cos^2(θ/2) a_1 a_2 − cos(2φ) sin^2(θ/2) a_3 a_4 = 0 and sin(2φ) sin^2(θ/2) a_3 a_4 = 0. The second equation implies that sin(2φ) = 0, i.e., φ = nπ/2, with n being an integer. Putting this value of φ in the equation cos^2(θ/2) a_1 a_2 − cos(2φ) sin^2(θ/2) a_3 a_4 = 0, we get tan^2(θ/2) = (−1)^n a_1 a_2/(a_3 a_4). But tan^2(θ/2) cannot be negative, and thus we have to take (−1)^n = 1, i.e., n must be an even number. We now set a_3 a_4/(a_1 a_2) = k, a constant. Thus, we get α = cos(θ/2) = √(k/(k+1)) and β = sin(θ/2) = √(1/(k+1)). Clearly therefore, α, β are real, and they form a unique pair.
Next, we substitute |p = α |e + β |e⊥ in the expression of |ψ = e |e + p |p , to get |ψ = (e + pα) |e + pβ |e⊥ . We now compare this expression with the expression of |p = α |e + β |e⊥ . We have just seen that there is a unique combination of coefficients for a linear combination of |e and |e⊥ to be a product state, and that is the state |p . So, all other linear combinations in |ψ = (e + pα) |e + pβ |e⊥ = e |e + p |p form entangled states. Moreover, ⟨e|p⟩ = α = √(k/(k+1)) ≠ 0, so that |e and |p are not orthogonal. We therefore have identified a class of pairs, each consisting of an entangled state and a nonorthogonal product state, that provide unconditional inseparability of superpositions.
IV. APPLICATIONS
In this section, we demonstrate that the concepts and corresponding examples that we discussed in the preceding sections can be useful in certain quantum communication tasks.
A. Unextendible entangled basis and conclusive local discrimination of quantum states
We begin with the definition of an unextendible entangled basis (UEB).
Definition 2. [Unextendible entangled basis] An unextendible entangled basis is a set of mutually orthonormal entangled states of a composite Hilbert space such that there are no entangled states in the orthogonal complement of their span.
We note here that the span of the entangled states within a UEB must not be equivalent to the whole given Hilbert space, for the concept of UEB to be nontrivial. While the definition has been generalized to the multiparty case, we focus on the bipartite case only. We consider a special type of unextendible entangled basis in C d1 ⊗ C d2 , d 1 , d 2 > 2, as given by the succeeding definition.
Definition 3. [r-UEB] Let r be a positive integer and r ≥ 3. We call an unextendible entangled basis as an r-UEB if all elements of that UEB has a Schmidt rank r or higher, and there is at least one element of Schmidt rank r.
For r-UEBs, we can present Theorem 3, given below. We note here that "conclusive identification" of each state, in a given set of states, with some nonzero probability is required for "conclusive distinguishability" of a given set. And to identify a quantum state, drawn from the given set of states, conclusively under LOCC, it is necessary and also sufficient to find a product state which has nonzero overlap with the considered state but the product state must have zero overlap with the other states of the set [109].
Theorem 3. An r-UEB contains no state which is conclusively locally identifiable.
Proof. If we draw any state from the given UEB, it is not possible to produce a product state by taking any superposition of the drawn state and the product states from the complementary subspace. This follows from Theorem 1. Suppose that the drawn state is |e , and let {|p 1 , |p 2 , . . . , |p n } be a (product) basis for the complementary subspace. Let |φ = a 0 |e + a 1 |p 1 + . . . + a n |p n , where a i are nonzero complex numbers such that ⟨φ|φ⟩ = 1. We can rewrite |φ as a 0 |e + a′ |p′ , where a′ = √(|a 1 |^2 + |a 2 |^2 + · · · + |a n |^2) and where |p′ = (1/a′)(a 1 |p 1 + · · · + a n |p n ) is another product state (by the definition of UEB). Therefore, |φ is an entangled state (by Theorem 1). So, it is not possible to find any product state which is nonorthogonal to the drawn state but orthogonal to the rest of the states of the UEB.
It is also important to mention the following: The state |φ = a ′ 0 |e + a ′ 1 |e ′ , |a ′ 0 | 2 + |a ′ 1 | 2 = 1, can be a product state but it will be nonorthogonal to |e ′ also, where |e , |e ′ are different states drawn from the UEB. This is not desired because to identify the state |e unambiguously with some nonzero probability, we need a |φ which must be nonorthogonal to |e but orthogonal to |e ′ . This is the criterion we get from [109].
These complete the proof.
The proof of Theorem 3 also implies that given a complete orthonormal basis, whose states are all entangled, no state of that basis can be conclusively locally identified. This is because there is no room to find a product state which is nonorthogonal to the drawn state from the given basis. See Ref. [52] in this regard. Note also that when an incomplete entangled basis is given, the conclusive local identifiability is not that obvious.
To provide an example of an r-UEB, considered in Theorem 3, we construct a 3-UEB in C 4 ⊗ C 4 . We first identify nine entangled states of Schmidt rank 3, viz., the states of the set, {|ψ 1 , |ψ 2 , |ψ 3 }, belonging to the subspace spanned by the states in {|00 , |11 , |22 }, the states in {|ψ 4 , |ψ 5 , |ψ 6 }, belonging to the subspace spanned by {|01 , |12 , |20 }, and the states in {|ψ 7 , |ψ 8 , |ψ 9 }, belonging to the subspace spanned by {|02 , |10 , |21 }. In fact, these nine states can be orthogonal to each other. Next, we consider the product states |03 , |13 , |23 , |30 , |31 , |32 , |33 . These product states are also orthogonal to the previously mentioned nine entangled states. We now present the final basis in C 4 ⊗ C 4 . It consists of the states, (1/√2)(|ψ 1 ± |03 ), (1/√2)(|ψ 2 ± |13 ), (1/√2)(|ψ 3 ± |23 ), (1/√2)(|ψ 4 ± |30 ), |ψ 5 , |ψ 6 , |ψ 7 , |ψ 8 , |ψ 9 , |31 , |32 , |33 . Clearly, the first thirteen states form a UEB in C 4 ⊗ C 4 with the property that the entangled states have Schmidt rank three. This UEB constitutes an example of an r-UEB (for r = 3) as used in Theorem 3.
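The construction can be checked numerically. In the sketch below, the Fourier-type choice of |ψ 1 ,...,|ψ 9 is one concrete possibility consistent with the description above, not the only one; it is an assumption made for illustration:

```python
import numpy as np

d = 4
w = np.exp(2j * np.pi / 3)

def ket(i, j):
    # |ij> in C^4 x C^4 as a 16-component vector
    v = np.zeros(d * d, dtype=complex)
    v[d * i + j] = 1.0
    return v

# One concrete (assumed) choice of |psi_1>,...,|psi_9>: Fourier combinations
# within the three 3x3 subspaces listed in the text.
psis = [sum(w ** (k * m) * ket(m, (m + a) % 3) for m in range(3)) / np.sqrt(3)
        for a in range(3) for k in range(3)]

basis = []
for s, (i, j) in zip(psis[:4], [(0, 3), (1, 3), (2, 3), (3, 0)]):
    basis += [(s + ket(i, j)) / np.sqrt(2), (s - ket(i, j)) / np.sqrt(2)]
basis += psis[4:] + [ket(3, 1), ket(3, 2), ket(3, 3)]

gram = np.array([[np.vdot(u, v) for v in basis] for u in basis])
print(np.allclose(gram, np.eye(16)))  # True: orthonormal basis of C^4 x C^4

ranks = [int(np.sum(np.linalg.svd(v.reshape(d, d), compute_uv=False) > 1e-10))
         for v in basis]
print(ranks)  # first 13 entries are 3 (Schmidt rank 3); the last three are 1
```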
The UEBs of Theorem 3 exhibit a stronger form of nonlocality, compared to those sets of states which are not perfectly locally distinguishable but are conclusively locally distinguishable. These UEBs are also more nonlocal compared to those sets of states in which a few states are conclusively locally identifiable but not all of them are.
We should note here that the above type of basis is not possible in C 2 ⊗ C d . In that case, Schmidt rank ≥ 3 is not possible for any pure state. Particularly, for two qubits, it is possible to construct UEBs of cardinality three only [108], at least one state of which is conclusively locally identifiable [110].
Partially entangled subspaces. A "partially entangled subspace" contains both entangled and product states but it is deficient in product states, so that it is not possible to find a product basis for the subspace. Such a partially entangled subspace can be obtained by using an r-UEB. Consider any (entangled) state drawn from an r-UEB. The drawn state, along with the product basis for the complementary subspace of the r-UEB, produces a partially entangled subspace. We note that the product state deficit is due to the fact that the drawn state will superpose with all other states to form only entangled states (by Theorem 1), thus always blocking at least one dimension in which there is only an entangled state, so that no basis for the subspace can be formed using only product states. An interesting feature of a partially entangled subspace is that all full-rank states associated with the subspace are entangled, as they violate the range criterion. This is directly due to the product state deficit in the subspace.
Generalization of Theorem 3. Following the generalization of Theorem 1 along with Theorem 3, we can consider another type of UEB in a bipartite system C d1 ⊗C d2 , where the entangled states have Schmidt rank at least r and the complementary subspace contains states of Schmidt rank ≤ (r − 2). Then, no state of the UEB can be conclusively identified by LOCC.
B. Strategies for conclusive local discrimination
Suppose that we are given a set of three two-qubit pure mutually-orthogonal entangled states |Ψ 1 , |Ψ 2 , and |Ψ 3 , such that the (unique) state which is orthogonal to these states, is a product state. Such a set cannot be perfectly distinguished by separable measurements [108,111], which is a strict superset of the set of LOCC-based measurements [43]. Nevertheless, it is possible to identify, by LOCC, at least one state of the set conclusively with some nonzero probability [110]. We remember here the definitions of conclusive distinguishability and identification, given in Sec. IV A. And we reiterate that to identify a quantum state, drawn from a given set of states, conclusively under LOCC-based measurement strategies, it is necessary and sufficient to find a product state which has nonzero overlap with the considered state but the product state must have zero overlap with the other states of the set [109]. In a practical scenario, it will be important to know the form of such product states to prepare a measurement setup for the conclusive identification. Now, we go back to the given set of three states. Suppose, the state |Ψ 1 = 2/ √ 5 |00 + 1/ √ 5 |11 , and we want to identify this state conclusively, by LOCC, with some nonzero probability. We also assume that the product state, which is orthogonal to the states |Ψ i , ∀i = 1, 2, 3, is given by |Φ = (2/
√ 5 |0 − 1/ √ 5 |1 )(1/ √ 17 |0 + 4/ √ 17 |1 )
. Clearly, to identify the state |Ψ 1 conclusively under LOCC with some nonzero probability, we have to find out a product state by taking a superposition of |Ψ 1 and |Φ . This can be found from Example 1 in the preceding section.
C. More nonlocality with less entanglement

Consider a pair consisting of a pure entangled normalized state |Ψ 1 and an orthogonal product normalized state |Φ in a two-qubit system, such that a nontrivial superposition of the pair is always entangled. So for arbitrary complex numbers, a 1 and a 2 , with a 1 , a 2 ≠ 0 and |a 1 |^2 + |a 2 |^2 = 1, a 1 |Ψ 1 + a 2 |Φ is entangled. The existence of such pairs is exactly the content of the concept of unconditional inseparability of superpositions, considered in the preceding section. It is always possible to find other states |Ψ 2 and |Ψ 3 such that the states |Ψ 1 , |Ψ 2 , |Ψ 3 , and |Φ form a two-qubit orthonormal basis. See Refs. [108,112] for such bases.
The states |Ψ 1 , |Ψ 2 , and |Ψ 3 cannot be conclusively distinguished by LOCC. In particular, the state |Ψ 1 cannot be conclusively identified by LOCC. This follows from the fact that if |Ψ 1 can be conclusively identified by LOCC, then it is necessary and sufficient to find a product state which is nonorthogonal to |Ψ 1 but orthogonal to |Ψ 2 and |Ψ 3 [109,110]. Obviously, this product state must belong to the subspace spanned by |Ψ 1 and |Φ . But according to our assumption about the {|Ψ 1 , |Φ } pair, this subspace contains only one product state, viz. |Φ , because all others are of the form, a 1 |Ψ 1 + a 2 |Φ , with a 1 , a 2 ≠ 0, and are entangled. And, |Ψ 1 and |Φ are orthogonal to each other. So, it is not possible to find a product state which is nonorthogonal to |Ψ 1 but orthogonal to |Ψ 2 and |Ψ 3 . It is important to mention here that any set of three orthogonal two-qubit maximally entangled states can always be conclusively distinguished by LOCC [112]. Therefore, the set consisting of the states |Ψ 1 , |Ψ 2 , and |Ψ 3 is "more nonlocal" than any set of three orthogonal two-qubit maximally entangled states, where the "nonlocality" is in the sense of conclusive local indistinguishability of a set of orthogonal shared quantum states. This also implies that the states |Ψ 1 , |Ψ 2 , and |Ψ 3 cannot all be maximally entangled. This also follows from Ref. [85], because we have assumed that the state |Φ is a product state and therefore, all of the states |Ψ 1 , |Ψ 2 , and |Ψ 3 cannot be maximally entangled states.
It is clear that the average entanglement of the states of the set {|Ψ 1 , |Ψ 2 , |Ψ 3 } is lower than that of any set of maximally entangled two-qubit states. However, the states of the latter set are conclusively distinguishable by LOCC, while those of the former are not. We are therefore led to the phenomenon of more nonlocality with less entanglement [52]. See Ref. [112] in this regard. Locally indistinguishable mutually orthogonal ensembles of shared states can be used by a "boss" to send a secret to her "subordinates" in such a way that they can recover the secret only if they cooperate.
D. Two-element ensembles and conclusive local indistinguishability
Two orthogonal states of two qubits form the most elementary class of quantum ensembles. Such ensembles, if the constituent elements are pure, can never lead to local indistinguishability, perfect or conclusive [44], leading to their uselessness from this perspective. To search for two-element ensembles of two qubits that are conclusively indistinguishable under LOCC, at least one of the two elements must be a mixed state. If we further require that the sum of the dimensions of the supports of the states within a two-element ensemble must not be equal to the total dimension of the Hilbert space, then in case of two qubits, the only option is to consider ensembles consisting of a mixed state of rank-2 and a pure state. Consider now the pure state |Ψ 1 and the mixed state ̺ = p 1 |Ψ 2 ⟩⟨Ψ 2 | + p 2 |Ψ 3 ⟩⟨Ψ 3 |, where p 1 , p 2 > 0 and p 1 + p 2 = 1. The states |Ψ 1 , |Ψ 2 , and |Ψ 3 are exactly as in the preceding subsection. As long as |Ψ 1 cannot be conclusively identified by LOCC for the set, {|Ψ 1 , |Ψ 2 , |Ψ 3 }, the two-element ensemble, {|Ψ 1 ⟩⟨Ψ 1 |, ̺}, is also not conclusively distinguishable by LOCC. See Ref. [108] in this regard.
V. CONCLUSION
Before we summarize our findings, we mention that the quantitative bounds on entanglement of superpositions that are already known in the literature might not always be useful to identify if the state after a superposition is separable or entangled. An example will be helpful to understand this. Let us consider an entangled state |φ 1 = a |00 + b |11 + c |22 , 0 < a, b, c < 1, a^2 + b^2 + c^2 = 1, and a product state |φ 2 = |01 . These two states are orthogonal to each other. And, no nontrivial superposition of these two states can produce a product state, i.e., the state |ψ = α 1 |φ 1 + α 2 |φ 2 is always entangled for 0 < α 1 , α 2 < 1, α_1^2 + α_2^2 = 1.¹ Notice that for this example, to detect entanglement, we need an inequality E(|ψ ) > 0 and this must hold for all values of a, b, c, α 1 , α 2 as defined above. It is actually difficult to find such a strict inequality. For example, one can try to use the lower bounds on the entanglement of superpositions reported so far (such as Theorem 5 of [16], Theorem 2 of [17], Theorem 3 of [19], or Theorem 3 of [24]). It is easy to check that using these bounds, it is not possible to prove that the entanglement of the state |ψ is nonzero for all values of 0 < a, b, c < 1, a^2 + b^2 + c^2 = 1 and 0 < α 1 , α 2 < 1, α_1^2 + α_2^2 = 1. But Theorem 1 of this paper can serve the purpose of detecting entanglement in |ψ .
Usually, the quantitative bounds which are reported in the previous papers contain a term related to the entanglement of the output state. Now, if we calculate the amount of entanglement of the output state, then obviously we can tell whether the output state is separable or entangled. But in our case, it is not required to calculate the entanglement of the output state for concluding that the output state is an entangled state, as for instance seen in Theorem 1 of the present paper. Of course, the quantitative results in the literature are very useful, and moreover, the qualitative results presented here are also not conclusive in all cases. But what we wish to argue is that both quantitative and qualitative analyses could be of value in unknotting the status of entanglement and separability in superpositions of quantum states.
To conclude, we have explored separability and inseparability of superpositions of quantum states of shared systems. We have introduced the notions of conditional and unconditional inseparability of superpositions. Specifically, we have shown that all nontrivial superpositions of all pairs of an entangled state and a product state are entangled for bipartite systems, when the initial entangled state is of Schmidt rank three or higher. Obviously, the two-qubit system is left out in this result; it is subsequently analyzed in detail, and we provide several specific cases there. We then found that these considerations are useful in several quantum communication tasks. In particular, we identified a class of unextendible entangled bases, which we referred to as r-UEBs, and proved that they can contain no state which is conclusively locally identifiable. Moreover, the notion of unconditional inseparability of superpositions is found useful to exhibit the phenomenon of more nonlocality with less entanglement in a two-qubit system. Furthermore, we have established a one-to-one correspondence between the phenomenon of more nonlocality with less entanglement and a class of two-element ensembles which cannot be conclusively distinguished by local quantum operations and classical communication.

ACKNOWLEDGMENT

We acknowledge support from the Department of Science and Technology, Government of India through the QuEST grant (Grant No. DST/ICPS/QUST/Theme-3/2019/120).
¹ Although α_1 and α_2 are chosen here as real, picking complex values for them does not change the conclusion. This can be seen as follows. Clearly, it is enough to have an extra phase attached to only one (any one) among α_1, α_2, and let us choose it to be α_2. Now this phase can be pulled into the definition of the |0⟩ of the first party. However, then an extra phase appears in the term α_1 a|00⟩, which can then be absorbed in the |0⟩ of the second party. These re-definitions of the local bases are local unitaries, which do not affect the entanglement content of |ψ⟩.

[1] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009).
[2] O. Gühne and G. Tóth, Entanglement detection, Phys. Rep. 474, 1 (2009).
[3] S. Das, T. Chanda, M. Lewenstein, A. Sanpera, A. Sen(De), and U. Sen, The separability versus entanglement problem.
[4] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, Phys. Rev. Lett. 70, 1895 (1993).
[5] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Experimental quantum teleportation, Nature 390, 575 (1997).
[6] C. H. Bennett and S. J. Wiesner, Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states, Phys. Rev. Lett. 69, 2881 (1992).
[7] K. Mattle, H. Weinfurter, P. G. Kwiat, and A. Zeilinger, Dense coding in experimental quantum communication, Phys. Rev. Lett. 76, 4656 (1996).
[8] D. Bruß, G. M. D'Ariano, M. Lewenstein, C. Macchiavello, A. Sen(De), and U. Sen, Distributed quantum dense coding, Phys. Rev. Lett. 93, 210501 (2004).
[9] R. Prabhu, A. K. Pati, A. Sen(De), and U. Sen, Exclusion principle for quantum dense coding, Phys. Rev. A 87, 052319 (2013).
[10] R. Prabhu, A. Sen(De), and U. Sen, Genuine multiparty quantum entanglement suppresses multiport classical information transmission, Phys. Rev. A 88, 042329 (2013).
[11] T. Das, R. Prabhu, A. Sen(De), and U. Sen, Distributed quantum dense coding with two receivers in noisy environments, Phys. Rev. A 92, 052330 (2015).
[12] C. H. Bennett and G. Brassard, Quantum cryptography: Public key distribution and coin tossing, Theor. Comput. Sci. 560, 7 (2014).
[13] A. K. Ekert, Quantum cryptography based on Bell's theorem, Phys. Rev. Lett. 67, 661 (1991).
[14] P. Horodecki, J. A. Smolin, B. M. Terhal, and A. V. Thapliyal, Rank two bipartite bound entangled states do not exist, Theor. Comput. Sci. 292, 589 (2003).
[15] N. Linden, S. Popescu, and J. A. Smolin, Entanglement of superpositions, Phys. Rev. Lett. 97, 100502 (2006).
[16] G. Gour, Reexamination of entanglement of superpositions, Phys. Rev. A 76, 052320 (2007).
[17] C.-S. Yu, X. X. Yi, and H.-S. Song, Concurrence of superpositions, Phys. Rev. A 75, 022332 (2007).
[18] Y.-C. Ou and H. Fan, Bounds on negativity of superpositions, Phys. Rev. A 76, 022320 (2007).
[19] J. Niset and N. J. Cerf, Tight bounds on the concurrence of quantum superpositions, Phys. Rev. A 76, 042328 (2007).
[20] D. Cavalcanti, M. O. Terra Cunha, and A. Acín, Multipartite entanglement of superpositions, Phys. Rev. A 76, 042329 (2007).
[21] W. Song, N.-L. Liu, and Z.-B. Chen, Bounds on the multipartite entanglement of superpositions, Phys. Rev. A 76, 054303 (2007).
[22] G. Gour and A. Roy, Entanglement of subspaces in terms of entanglement of superpositions, Phys. Rev. A 77, 012336 (2008).
[23] A. Osterloh, J. Siewert, and A. Uhlmann, Tangles of superpositions and the convex-roof extension, Phys. Rev. A 77, 032310 (2008).
[24] S. J. Akhtarshenas, Concurrence of superpositions of many states, Phys. Rev. A 83, 042306 (2011).
[25] P. Parashar and S. Rana, Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states, Phys. Rev. A 83, 032301 (2011).
[26] Z. Ma, Z. Chen, and S.-M. Fei, Genuine multipartite entanglement of superpositions, Phys. Rev. A 90, 032307 (2014).
[27] M. Lewenstein and A. Sanpera, Separability and entanglement of composite quantum systems, Phys. Rev. Lett. 80, 2261 (1998).
[28] G. Vidal and R. Tarrach, Robustness of entanglement, Phys. Rev. A 59, 141 (1999).
[29] J. Du, M. Shi, X. Zhou, and R. Han, Geometrical interpretation for robustness of entanglement, Phys. Lett. A 267, 244 (2000).
[30] S. Ishizaka and T. Hiroshima, Maximally entangled mixed states under nonlocal unitary operations in two qubits, Phys. Rev. A 62, 022310 (2000).
[31] F. Verstraete, K. Audenaert, J. Dehaene, and B. D. Moor, A comparison of the entanglement measures negativity and concurrence, J. Phys. A: Math. Gen. 34, 10327 (2001).
[32] C. Simon and J. Kempe, Robustness of multiparty entanglement, Phys. Rev. A 65, 052327 (2002).
[33] A. W. Harrow and M. A. Nielsen, Robustness of quantum gates in the presence of noise, Phys. Rev. A 68, 012308 (2003).
[34] M. Steiner, Generalized robustness of entanglement, Phys. Rev. A 67, 054305 (2003).
[35] F. G. S. L. Brandão, Quantifying entanglement with witness operators, Phys. Rev. A 72, 022310 (2005).
[36] S. Bandyopadhyay and D. A. Lidar, Robustness of multiqubit entanglement in the independent decoherence model, Phys. Rev. A 72, 042339 (2005).
[37] D. Cavalcanti, Connecting the generalized robustness and the geometric measure of entanglement, Phys. Rev. A 73, 044302 (2006).
[38] S. Bandyopadhyay, S. Ghosh, and V. Roychowdhury, Robustness of entangled states that are positive under partial transposition, Phys. Rev. A 77, 032318 (2008).
[39] O. Gühne, F. Bodoky, and M. Blaauboer, Multiparticle entanglement under the influence of decoherence, Phys. Rev. A 78, 060301(R) (2008).
[40] R. Chaves and L. Davidovich, Robustness of entanglement as a resource, Phys. Rev. A 82, 052308 (2010).
[41] F. Benatti, R. Floreanini, and U. Marzolino, Entanglement robustness and geometry in systems of identical particles, Phys. Rev. A 85, 042329 (2012).
[42] S. Halder and R. Sengupta, Construction of noisy bound entangled states and the range criterion, Phys. Lett. A 383, 2004 (2019).
[43] C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. A. Smolin, and W. K. Wootters, Quantum nonlocality without entanglement, Phys. Rev. A 59, 1070 (1999).
[44] J. Walgate, A. J. Short, L. Hardy, and V. Vedral, Local distinguishability of multipartite orthogonal quantum states, Phys. Rev. Lett. 85, 4972 (2000).
[45] S. Virmani, M. F. Sacchi, M. B. Plenio, and D. Markham, Optimal local discrimination of two multipartite pure states, Phys. Lett. A 288, 62 (2001).
[46] S. Ghosh, G. Kar, A. Roy, A. Sen(De), and U. Sen, Distinguishability of Bell states, Phys. Rev. Lett. 87, 277902 (2001).
[47] B. Groisman and L. Vaidman, Nonlocal variables with product-state eigenstates, J. Phys. A: Math. Gen. 34, 6881 (2001).
[48] B. M. Terhal, D. P. DiVincenzo, and D. W. Leung, Hiding bits in Bell states, Phys. Rev. Lett. 86, 5807 (2001).
[49] T. Eggeling and R. F. Werner, Hiding classical data in multipartite quantum states, Phys. Rev. Lett. 89, 097905 (2002).
[50] J. Walgate and L. Hardy, Nonlocality, asymmetry, and distinguishing bipartite states, Phys. Rev. Lett. 89, 147901 (2002).
[51] S. Ghosh, G. Kar, A. Roy, D. Sarkar, A. Sen(De), and U. Sen, Local indistinguishability of orthogonal pure states by using a bound on distillable entanglement, Phys. Rev. A 65, 062307 (2002).
[52] M. Horodecki, A. Sen(De), U. Sen, and K. Horodecki, Local indistinguishability: More nonlocality with less entanglement, Phys. Rev. Lett. 90, 047902 (2003).
[53] P.-X. Chen and C.-Z. Li, Orthogonality and distinguishability: Criterion for local distinguishability of arbitrary orthogonal states, Phys. Rev. A 68, 062107 (2003).
[54] P.-X. Chen and C.-Z. Li, Local distinguishability of quantum states and the distillation of entanglement, Quantum Inf. Comput. 3, 203 (2003).
[55] P. Badziąg, M. Horodecki, A. Sen(De), and U. Sen, Locally accessible information: How much can the parties gain by cooperating?, Phys. Rev. Lett. 91, 117901 (2003).
[56] M. Horodecki, J. Oppenheim, A. Sen(De), and U. Sen, Distillation protocols: Output entanglement and local mutual information, Phys. Rev. Lett. 93, 170503 (2004).
[57] S. Ghosh, G. Kar, A. Roy, and D. Sarkar, Distinguishability of maximally entangled states, Phys. Rev. A 70, 022304 (2004).
[58] S. De Rinaldis, Distinguishability of complete and unextendible product bases, Phys. Rev. A 70, 022309 (2004).
[59] H. Fan, Distinguishability and indistinguishability by local operations and classical communication, Phys. Rev. Lett. 92, 177905 (2004).
[60] S. Ghosh, P. Joag, G. Kar, S. Kunkri, and A. Roy, Locally accessible information and distillation of entanglement, Phys. Rev. A 71, 012321 (2005).
[61] M. Nathanson, Distinguishing bipartite orthogonal states by LOCC: best and worst cases, J. Math. Phys. 46, 062103 (2005).
[62] J. Watrous, Bipartite subspaces having no bases distinguishable by local operations and classical communication, Phys. Rev. Lett. 95, 080505 (2005).
[63] J. Niset and N. J. Cerf, Multipartite nonlocality without entanglement in many dimensions, Phys. Rev. A 74, 052103 (2006).
[64] M. Hayashi, D. Markham, M. Murao, M. Owari, and S. Virmani, Bounds on multipartite entangled orthogonal state discrimination using local operations and classical communication, Phys. Rev. Lett. 96, 040501 (2006).
[65] A. Sen(De), U. Sen, and M. Lewenstein, Distillation protocols that involve local distinguishing: Composing upper and lower bounds on locally accessible information, Phys. Rev. A 74, 052332 (2006).
[66] M. Horodecki, A. Sen(De), and U. Sen, Quantification of quantum correlation of ensembles of states, Phys. Rev. A 75, 062329 (2007).
[67] R. Duan, Y. Feng, Z. Ji, and M. Ying, Distinguishing arbitrary multipartite basis unambiguously using local operations and classical communication, Phys. Rev. Lett. 98, 230502 (2007).
[68] Y. Feng and Y. Shi, Characterizing locally indistinguishable orthogonal product states, IEEE Trans. Inf. Theory 55, 2799 (2009).
[69] W. Matthews, S. Wehner, and A. Winter, Distinguishability of quantum states under restricted families of measurements with an application to quantum data hiding, Commun. Math. Phys. 291, 813 (2009).
[70] S. Bandyopadhyay, Entanglement and perfect discrimination of a class of multiqubit states by local operations and classical communication, Phys. Rev. A 81, 022327 (2010).
[71] S. Bandyopadhyay, More nonlocality with less purity, Phys. Rev. Lett. 106, 210402 (2011).
[72] N. Yu, R. Duan, and M. Ying, Four locally indistinguishable ququad-ququad orthogonal maximally entangled states, Phys. Rev. Lett. 109, 020506 (2012).
[73] S. Bandyopadhyay, Entanglement, mixedness, and perfect local discrimination of orthogonal quantum states, Phys. Rev. A 85, 042319 (2012).
[74] Y.-H. Yang, F. Gao, G.-J. Tian, T.-Q. Cao, and Q.-Y. Wen, Local distinguishability of orthogonal quantum states in a 2 ⊗ 2 ⊗ 2 system, Phys. Rev. A 88, 024301 (2013).
[75] A. M. Childs, D. Leung, L. Mančinska, and M. Ozols, A framework for bounding nonlocality of state discrimination, Commun. Math. Phys. 323, 1121 (2013).
[76] Z.-C. Zhang, F. Gao, G.-J. Tian, T.-Q. Cao, and Q.-Y. Wen, Nonlocality of orthogonal product basis quantum states, Phys. Rev. A 90, 022313 (2014).
[77] Z.-C. Zhang, F. Gao, S.-J. Qin, Y.-H. Yang, and Q.-Y. Wen, Nonlocality of orthogonal product states, Phys. Rev. A 92, 012332 (2015).
[78] G.-B. Xu, Q.-Y. Wen, S.-J. Qin, Y.-H. Yang, and F. Gao, Quantum nonlocality of multipartite orthogonal product states, Phys. Rev. A 93, 032341 (2016).
[79] S. Croke and S. M. Barnett, Difficulty of distinguishing product states locally, Phys. Rev. A 95, 012337 (2017).
[80] L. Lami, C. Palazuelos, and A. Winter, Ultimate data hiding in quantum mechanics and beyond, Commun. Math. Phys. 361, 661 (2018).
[81] S. Halder, Several nonlocal sets of multipartite pure orthogonal product states, Phys. Rev. A 98, 022303 (2018).
[82] S. Halder, M. Banik, S. Agrawal, and S. Bandyopadhyay, Strong quantum nonlocality without entanglement, Phys. Rev. Lett. 122, 040403 (2019).
[83] S. Halder and C. Srivastava, Locally distinguishing quantum states with limited classical communication, Phys. Rev. A 101, 052313 (2020).
[84] L. Lami, Quantum data hiding with continuous-variable systems, Phys. Rev. A 104, 052428 (2021).
[85] S. Bravyi and J. A. Smolin, Unextendible maximally entangled bases, Phys. Rev. A 84, 042306 (2011).
[86] B. Chen and S.-M. Fei, Unextendible maximally entangled bases and mutually unbiased bases, Phys. Rev. A 88, 034301 (2013).
[87] M.-S. Li, Y.-L. Wang, and Z.-J. Zheng, Unextendible maximally entangled bases in C^d ⊗ C^{d′}, Phys. Rev. A 89, 062313 (2014).
[88] Y.-L. Wang, M.-S. Li, and S.-M. Fei, Unextendible maximally entangled bases in C^d ⊗ C^d, Phys. Rev. A 90, 034301 (2014).
[89] H. Nan, Y.-H. Tao, L.-S. Li, and J. Zhang, Unextendible maximally entangled bases and mutually unbiased bases in C^d ⊗ C^{d′}, Int. J. Theor. Phys. 54, 927 (2015).
[90] H. Nizamidin, T. Ma, and S.-M. Fei, A note on mutually unbiased unextendible maximally entangled bases in C^2 ⊗ C^3, Int. J. Theor. Phys. 54, 326 (2015).
[91] Y. Guo, Constructing the unextendible maximally entangled basis from the maximally entangled basis, Phys. Rev. A 94, 052302 (2016).
[92] J. Zhang, H. Nan, Y.-H. Tao, and S.-M. Fei, Mutually unbiasedness between maximally entangled bases and unextendible maximally entangled systems in C^2 ⊗ C^{2k}, Int. J. Theor. Phys. 55, 886 (2016).
[93] Y.-L. Wang, M.-S. Li, S.-M. Fei, and Z.-J. Zheng, Connecting unextendible maximally entangled base with partial Hadamard matrices, Quant. Info. Proc. 16, 84 (2017).
[94] G.-J. Zhang, Y.-H. Tao, Y.-F. Han, X.-L. Yong, and S.-M. Fei, Unextendible maximally entangled bases in C^{pd} ⊗ C^{qd}, Quant. Info. Proc. 17, 318 (2018).
[95] G.-J. Zhang, Y.-H. Tao, Y.-F. Han, X.-L. Yong, and S.-M. Fei, Constructions of unextendible maximally entangled bases in C^d ⊗ C^{d′}, Sci. Rep. 8, 3193 (2018).
[96] F. Liu, A proof for the existence of nonsquare unextendible maximally entangled bases, Int. J. Theor. Phys. 57, 2496 (2018).
[97] Y.-Y. Song, G.-J. Zhang, L.-S. Xu, and Y.-H. Tao, Mutually unbiased unextendible maximally entangled bases in C^d ⊗ C^{d+1}, Int. J. Theor. Phys. 57, 3785 (2018).
[98] H. Zhao, L. Zhang, S.-M. Fei, and N. Jing, Constructing mutually unbiased bases from unextendible maximally entangled bases, Rep. Math. Phys. 85, 105 (2020).
[99] Y. Guo and S. Wu, Unextendible entangled bases with fixed Schmidt number, Phys. Rev. A 90, 054303 (2014).
[100] Y.-F. Han, G.-J. Zhang, X.-L. Yong, L.-S. Xu, and Y.-H. Tao, Mutually unbiased special entangled bases with Schmidt number 2 in C^3 ⊗ C^{4k}, Quant. Info. Proc. 17, 58 (2018).
[101] F. Shi, X. Zhang, and Y. Guo, Constructions of unextendible entangled bases, Quant. Info. Proc. 18, 324 (2019).
[102] X. Yong, Y. Song, and Y. Tao, New constructions of unextendible entangled bases with fixed Schmidt number, Quant. Info. Proc. 18, 337 (2019).
[103] Y.-L. Wang, Special unextendible entangled bases with continuous integer cardinality, arXiv:1909.10043 [quant-ph] (2019).
[104] I. Chakrabarty, P. Agrawal, and A. K. Pati, Locally unextendible non-maximally entangled basis, Quantum Inf. Comput. 12, 0271 (2012).
[105] B. Chen, H. Nizamidin, and S.-M. Fei, A note on locally unextendible non-maximally entangled basis, Quantum Inf. Comput. 13, 1077 (2013).
[106] Y. Guo, Y. Jia, and X. Li, Multipartite unextendible entangled basis, Quant. Info. Proc. 14, 3553 (2015).
[107] Y.-J. Zhang, H. Zhao, N. Jing, and S.-M. Fei, Unextendible maximally entangled bases and mutually unbiased bases in multipartite systems, Int. J. Theor. Phys. 56, 3425 (2017).
[108] S. Halder and U. Sen, Local indistinguishability and incompleteness of entangled orthogonal bases: Method to generate two-element locally indistinguishable ensembles, Ann. Phys. (N. Y.) 431, 168550 (2021).
[109] A. Chefles, Condition for unambiguous state discrimination using local operations and classical communication, Phys. Rev. A 69, 050307(R) (2004).
[110] S. Bandyopadhyay and J. Walgate, Local distinguishability of any three quantum states, J. Phys. A: Math. Theor. 42, 072002 (2009).
[111] R. Duan, Y. Feng, Y. Xin, and M. Ying, Distinguishability of quantum states by separable operations, IEEE Trans. Inf. Theory 55, 1320 (2009).
[112] S. Halder and U. Sen, Unextendible entangled bases and more nonlocality with less entanglement, arXiv:2103.09140 [quant-ph] (2021).
| [] |
[
"PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal Vessel Segmentation",
"PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal Vessel Segmentation"
] | [
"Zhuojie Wu [email protected] \nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Zijian Wang [email protected] \nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Wenxuan Zou [email protected] \nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Fan Ji \nInstitute of Automation\nChinese Academy of Sciences\nBeijingChina\n",
"Hao Dang \nHenan University of Chinese Medicine\nHenanChina\n",
"Wanting Zhou [email protected] \nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Muyi Sun [email protected] \nInstitute of Automation\nChinese Academy of Sciences\nBeijingChina\n"
] | [
"Beijing University of Posts and Telecommunications\nBeijingChina",
"Beijing University of Posts and Telecommunications\nBeijingChina",
"Beijing University of Posts and Telecommunications\nBeijingChina",
"Institute of Automation\nChinese Academy of Sciences\nBeijingChina",
"Henan University of Chinese Medicine\nHenanChina",
"Beijing University of Posts and Telecommunications\nBeijingChina",
"Institute of Automation\nChinese Academy of Sciences\nBeijingChina"
] | [] | 3D to 2D retinal vessel segmentation is a challenging problem in Optical Coherence Tomography Angiography (OCTA) images. Accurate retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. However, making full use of the 3D data of OCTA volumes is a vital factor for obtaining satisfactory segmentation results. In this paper, we propose a Progressive Attention-Enhanced Network (PAENet) based on attention mechanisms to extract rich feature representation. Specifically, the framework consists of two main parts, the three-dimensional feature learning path and the two-dimensional segmentation path. In the three-dimensional feature learning path, we design a novel Adaptive Pooling Module (APM) and propose a new Quadruple Attention Module (QAM). The APM captures dependencies along the projection direction of volumes and learns a series of pooling coefficients for feature fusion, which efficiently reduces feature dimension. In addition, the QAM reweights the features by capturing fourgroup cross-dimension dependencies, which makes maximum use of 4D feature tensors. In the two-dimensional segmentation path, to acquire more detailed information, we propose a Feature Fusion Module (FFM) to inject 3D information into the 2D path. Meanwhile, we adopt the Polarized Self-Attention (PSA) block to model the semantic interdependencies in spatial and channel dimensions respectively. Experimentally, our extensive experiments on the OCTA-500 dataset show that our proposed algorithm achieves state-of-the-art performance compared with previous methods. | 10.1109/bibm52615.2021.9669490 | [
"https://arxiv.org/pdf/2108.11695v5.pdf"
] | 237,304,161 | 2108.11695 | e52ad3d37172911b4d8d4b47b148d896b1d056a4 |
PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal Vessel Segmentation
Zhuojie Wu [email protected]
Beijing University of Posts and Telecommunications
BeijingChina
Zijian Wang [email protected]
Beijing University of Posts and Telecommunications
BeijingChina
Wenxuan Zou [email protected]
Beijing University of Posts and Telecommunications
BeijingChina
Fan Ji
Institute of Automation
Chinese Academy of Sciences
BeijingChina
Hao Dang
Henan University of Chinese Medicine
HenanChina
Wanting Zhou [email protected]
Beijing University of Posts and Telecommunications
BeijingChina
Muyi Sun [email protected]
Institute of Automation
Chinese Academy of Sciences
BeijingChina
PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal Vessel Segmentation
Index Terms-retinal vessel, optical coherence tomography angiography, 3D, attention mechanism
3D to 2D retinal vessel segmentation is a challenging problem in Optical Coherence Tomography Angiography (OCTA) images. Accurate retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. However, making full use of the 3D data of OCTA volumes is a vital factor for obtaining satisfactory segmentation results. In this paper, we propose a Progressive Attention-Enhanced Network (PAENet) based on attention mechanisms to extract rich feature representation. Specifically, the framework consists of two main parts, the three-dimensional feature learning path and the two-dimensional segmentation path. In the three-dimensional feature learning path, we design a novel Adaptive Pooling Module (APM) and propose a new Quadruple Attention Module (QAM). The APM captures dependencies along the projection direction of volumes and learns a series of pooling coefficients for feature fusion, which efficiently reduces feature dimension. In addition, the QAM reweights the features by capturing fourgroup cross-dimension dependencies, which makes maximum use of 4D feature tensors. In the two-dimensional segmentation path, to acquire more detailed information, we propose a Feature Fusion Module (FFM) to inject 3D information into the 2D path. Meanwhile, we adopt the Polarized Self-Attention (PSA) block to model the semantic interdependencies in spatial and channel dimensions respectively. Experimentally, our extensive experiments on the OCTA-500 dataset show that our proposed algorithm achieves state-of-the-art performance compared with previous methods.
I. INTRODUCTION
Retinal vessel segmentation is of great significance in the diagnosis of various ophthalmic diseases, such as diabetic retinopathy and glaucoma, which can lead to blindness [1]. In clinical practice, accurate retinal vessel segmentation can help doctors diagnose diseases precisely and improve diagnostic efficiency.
Optical Coherence Tomography (OCT) is a novel and noninvasive optical imaging modality, which uses coherent light to capture 3D structural data of the retina with micrometer-level resolution [2], as shown in Fig. 1(a). Compared with color fundus images, OCT volumes can provide detailed information about the structure of the retina. Meanwhile, OCTA volumes can provide rich information about retinal blood flow; they are generated from the OCT volumes by the Split-Spectrum Amplitude-Decorrelation Angiography (SSADA) algorithm [3], as shown in Fig. 1(c). OCTA solves the problem of OCT's inability to provide specific details about blood flow. Therefore, OCTA has become an important imaging method for the clinical diagnosis of retinal diseases and has attracted considerable attention from researchers.
Most of the previous work focused on color fundus images for retinal vessel segmentation. Traditional methods of retinal vessel segmentation mostly rely on manually designed features [4]-[6]. Recently, with the development of deep learning, various Convolutional Neural Networks (CNNs) have been used for retinal vessel segmentation [7]-[13]. Before Fully Convolutional Networks were widely used, segmentation was regarded as a pixel-by-pixel classification task, using fully connected networks to predict the label of the center pixel of each patch [7]. Subsequently, methods based on fully convolutional neural networks solved the structured prediction problem and accomplished end-to-end segmentation [8], [9]. With further development, the encoder-decoder architecture, especially the U-Net architecture, has become the most popular segmentation framework for fundus images due to its excellent feature extraction capability and prominent practical performance [10]-[12]. Various sophisticated models based on the encoder-decoder architecture have been designed to solve the problems of retinal blood vessel segmentation [10]-[12]. To improve performance on capillaries, a multi-label architecture was proposed that adds additional supervision to treat thin and thick vessels respectively [10]. Meanwhile, a coarse-to-fine network consisting of two U-shaped networks was designed for vessel segmentation: the coarse network produces a preliminary prediction map and the fine network refines the results [11]. In addition, a cross-connected multi-scale network addresses the segmentation of fine vessels [12].
In recent years, attention mechanisms have also been widely used in retinal vessel segmentation because of their powerful feature-dependency modeling capabilities [13]-[16]. RNA-Net employs residual non-local attention to address the problem that local fixed receptive fields are unable to collect global information [13]. In addition, a spatial-attention lightweight network, named SA-UNet, enables efficient use of samples when only a few labeled samples are available [14]. Besides, CGA-Net designs a context guided attention for the imbalance of retinal vessel thickness distribution [15]. For the multi-scale vessel structure, the Fully Attention-based Network (FANet) introduces dual-direction attention into retinal vessel segmentation [16].
Following the above methods, retinal vessel segmentation of OCTA volumes first requires obtaining the corresponding projection image (Fig. 1(b) and Fig. 1(d)) by retinal layer segmentation. However, some retinal diseases destroy the retinal structure and affect the retinal layer segmentation, in turn causing the segmentation to fail. Moreover, segmentation on projection images cannot make full use of 3D information or bring the superiority of OCTA volumes into full play. After a long period of unremitting effort, some progress has been made in directly using OCTA volumes to obtain 2D vessel segmentation results. An Image Projection Network (IPN) achieves 3D to 2D image segmentation by employing unidirectional pooling along the projection direction of the volumes [17]. In addition, Image Projection Network V2 (IPN-V2) and IPN-V2+ were proposed to enhance the horizontal perception ability and overcome the "checkerboard effect" respectively [18]. These 3D to 2D retinal vessel segmentation methods provide a novel and promising research direction. However, the unidirectional pooling layer cannot capture 3D information well for image segmentation. Meanwhile, the plane network does not use volumetric data and lacks spatial-wise and channel-wise dependencies.
Inspired by the methods mentioned above, in this paper, we propose a Progressive Attention-Enhanced Network (PAENet) for 3D to 2D retinal vessel segmentation. The framework consists of two main parts, the three-dimensional feature learning path and the two-dimensional segmentation path. To fuse 3D features more effectively, we design a novel Adaptive Pooling Module (APM), which learns a series of pooling coefficients along the projection direction of the volumes to fuse features adaptively. Because the two-dimensional segmentation path lacks volumetric data reuse, we propose a Feature Fusion Module (FFM) to inject 3D information into the 2D path. Furthermore, in order to learn rich feature representations, in the three-dimensional feature learning path we propose a new Quadruple Attention Module (QAM) that captures cross-dimension dependencies. In the two-dimensional segmentation path, we adopt the Polarized Self-Attention (PSA) block to model the semantic interdependencies in the spatial and channel dimensions respectively.
The main contributions are summarized as follows:
• We propose a novel Progressive Attention-Enhanced Network (PAENet) to accomplish 3D to 2D segmentation and a Feature Fusion Module (FFM) which accomplishes volumetric data reuse.
• We propose a new Quadruple Attention Module (QAM) to capture cross-dimension dependencies, and utilize the Polarized Self-Attention (PSA) block to model the semantic interdependencies in spatial and channel dimensions respectively.
• Sufficient experiments are conducted on the OCTA-500 dataset, and the results demonstrate that our proposed method achieves state-of-the-art performance.
II. RELATED WORK
A. Retinal Vessel Segmentation
Retinal vessel segmentation can help doctors diagnose various ophthalmic diseases. Previous work mostly uses 2D color fundus images. Most of the early methods rely on manually designed features for retinal vessel segmentation [4]-[6]. Staal et al. [19] propose a method based on image ridges to solve automatic blood vessel segmentation. However, these prior methods have relatively poor robustness. With the rapid development of deep learning in medical image segmentation, methods based on deep learning have shown strong robustness and powerful feature extraction ability [16], [20], [21]. Fu et al. [20] propose DeepVessel for accurate segmentation of capillaries and vessel junctions. This method uses a multi-scale and multi-level network to learn a rich hierarchical representation and employs a Conditional Random Field (CRF) to model the long-range interactions between pixels. Nevertheless, the network's ability to recognize fine blood vessels is still limited. Guo et al. [21] embed channel attention into U-Net to enhance the discrimination ability of the network. However, the segmentation results are coarse. Li et al. [16] propose FANet based on attention mechanisms, which designs a dual-direction attention block to model global dependencies, making the segmentation results more delicate.
Compared with color fundus images, OCT volumes can provide detailed information about the structure of the retina, and researchers have begun to study the use of OCT volumes for retinal vessel segmentation. Li et al. [17] propose an image projection network that accomplishes 3D to 2D image segmentation by unidirectional pooling along the projection direction of volumes. In addition, Image Projection Network V2 [18] is proposed to enhance the horizontal perception ability. However, the unidirectional pooling layer cannot adaptively capture 3D information for image segmentation. Meanwhile, the plane perceptron does not use volumetric data and lacks spatial-wise and channel-wise dependencies. In this paper, we extend the idea of the above methods and propose the APM to efficiently fuse volumetric data and reduce the feature dimension. The FFM is also proposed for volumetric data reuse in the two-dimensional segmentation path.
B. Attention Mechanism
Attention mechanisms can effectively help a model refine perceptual information and have proven to be helpful in computer vision tasks [22]-[24]. Squeeze-and-Excitation Networks (SENet) [22] enhance the representational ability of CNNs by modeling the correlation between feature channels, but SENet lacks spatial attention. Woo et al. [23] propose a Convolutional Block Attention Module (CBAM) which cascades channel-wise attention and spatial-wise attention. However, the dependencies captured by CBAM are local. Wang et al. [24] propose the Non-local Neural Network, which computes the response at a position as a weighted sum of the features at all positions, so as to model global dependence. Fu et al. [25] introduce DANet, which captures the semantic interdependencies in the spatial and channel dimensions respectively. However, the computational cost of DANet is very large because it computes the relationship between every pair of locations and every pair of channels. Misra et al. [26] propose a lightweight but effective triplet attention network that captures cross-dimensional dependencies through three branches. However, triplet attention can only be used in 2D networks and cannot effectively capture the cross-dimensional dependencies of 3D networks. In this paper, we propose a Quadruple Attention Module (QAM) to capture the cross-dimension dependencies of 4D feature tensors. Meanwhile, we apply the Polarized Self-Attention (PSA) block [27] to effectively model global spatial-wise and channel-wise dependencies.
III. METHOD
In this work, we develop a Progressive Attention-Enhanced Network (PAENet) for 3D to 2D retinal vessel segmentation. The framework consists of two main parts, the three-dimensional feature learning path and the two-dimensional segmentation path. Specifically, the three-dimensional feature learning path mainly uses the APM for 3D volume dimension reduction and the QAM for capturing cross-dimensional feature representation. In addition, the two-dimensional segmentation path is a progressive U-Net integrated with the PSA block and injected with 3D information. The framework of PAENet is shown in Fig. 2. Since we input the OCT volume and the OCTA volume together as two channels, assuming that the volume size is (L × W × H), the input of PAENet is X ∈ R^{C×L×W×H} and the output is Y ∈ R^{L×W}. Here C equals 2.
A. Adaptive Pooling Module
In the three-dimensional feature learning path, it is very important to reduce the feature dimension effectively while retaining as much information as possible. Therefore, we propose the Adaptive Pooling Module (APM) to achieve this. The structure of the APM is shown in Fig. 3.
The input and output of the APM are I ∈ R^{C×L×W×H} and y ∈ R^{C×L×W×P} respectively, where P is the pooling size. To begin with, we divide the input into two groups and extract multi-scale features through two branches. The feature of each branch is I_i ∈ R^{(C/2)×L×W×H}, i = 1, 2. K_1 and K_2 are two filters with kernel sizes of 3×3 and 5×5 respectively. Next, the whole multi-scale feature map M is obtained by concatenation. The process is shown in the following formula:
M = Concat(K_1(I_1), K_2(I_2))    (1)

where M ∈ R^{C×L×W×H}.
In the APM, we not only focus on spatial information but also capture the correlation between channels. The channel descriptor is obtained from the spatial information of the multi-scale feature M using global average pooling. Next, the channel-wise dependencies are captured by a convolutional operation. The channel attention Z is defined as:
Z = σ(Conv_2(δ(Conv_1(AvgPool(M)))))    (2)

where Z ∈ R^{C×1×1×1}, σ denotes the sigmoid function, δ represents the ReLU function, and Conv_1 and Conv_2 are 1×1 convolutions.
To facilitate the subsequent feature fusion, the multi-scale feature map M is reshaped to M′ ∈ R^{(H/P)×C×L×W×P}. In addition, the same operation is performed on the input I to obtain I′ ∈ R^{(H/P)×C×L×W×P}. After reweighting I′, summation is conducted along the projection direction of the features for feature fusion. The final output of the APM can be written as follows:
y = F(I′ · (Softmax(M′) ⊙ Softmax(Z)))    (3)

where Softmax is used to obtain the attention weights in the projection direction and the channel dimension, ⊙ denotes broadcast element-wise multiplication, · refers to element-wise multiplication, and F refers to the feature fusion along the projection direction.
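To make the data flow concrete, the following PyTorch sketch implements one plausible reading of Eqs. (1)-(3). It is an illustration rather than the exact implementation: a batch dimension is added, the channel-reduction ratio of Conv_1/Conv_2, the 3D kernels used for K_1/K_2 and the softmax axes are our assumptions, and C is taken even with P dividing H.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class APM(nn.Module):
    """Sketch of the Adaptive Pooling Module: (B, C, L, W, H) -> (B, C, L, W, P)."""
    def __init__(self, channels, pool_size, reduction=4):
        super().__init__()
        self.P = pool_size
        half = channels // 2
        self.k1 = nn.Conv3d(half, half, kernel_size=3, padding=1)   # K_1 branch
        self.k2 = nn.Conv3d(half, half, kernel_size=5, padding=2)   # K_2 branch
        self.conv1 = nn.Conv3d(channels, max(channels // reduction, 1), 1)
        self.conv2 = nn.Conv3d(max(channels // reduction, 1), channels, 1)

    def forward(self, x):                     # x = I, shape (B, C, L, W, H)
        b, c, l, w, h = x.shape
        x1, x2 = torch.chunk(x, 2, dim=1)
        m = torch.cat([self.k1(x1), self.k2(x2)], dim=1)             # Eq. (1)
        z = torch.sigmoid(self.conv2(F.relu(self.conv1(
            F.adaptive_avg_pool3d(m, 1)))))                          # Eq. (2)
        # Split the projection axis H into H/P chunks of length P (M', I').
        m = m.reshape(b, c, l, w, h // self.P, self.P).permute(0, 4, 1, 2, 3, 5)
        xp = x.reshape(b, c, l, w, h // self.P, self.P).permute(0, 4, 1, 2, 3, 5)
        att = torch.softmax(m, dim=1) * torch.softmax(z, dim=1).unsqueeze(1)
        return (xp * att).sum(dim=1)                                 # Eq. (3)
```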
B. Quadruple Attention Module
To capture cross-dimensional dependencies on the 4D tensor, we propose a lightweight and effective Quadruple Attention Module (QAM). The QAM does not change the dimension size of the input. Suppose the input is X ∈ R^{C×L×W×H} and the output is X′ ∈ R^{C×L×W×H}, as shown in Fig. 4.
To begin with, the input X is divided into four branches by permuting. Next, the pooling layer preserves a rich representation of the features through max pooling and average pooling, and reduces the first dimension of the input to two. For example, the input X ∈ R^{C×L×W×H} is permuted to R^{L×C×W×H}, and the final result in R^{2×C×W×H} is obtained by concatenating the max-pooled and average-pooled features along the first dimension. This process can be represented by the following equation:
Pool_d(X̃) = Concat(MaxPool_d(X̃), AvgPool_d(X̃))    (4)

where X̃ refers to the permuted input and d represents the first dimension of the permuted tensor. Behind the pooling layer is a standard convolutional layer with kernel size 7 × 7 × 7, followed by a batch normalization layer and ReLU. An attention weight map is obtained through the sigmoid activation function.
Concretely, in the first branch, the relationship between the three dimensions (L, W, H) is constructed. The attention weight map X′_1 ∈ R^{1×L×W×H} obtained through the pooling layer and convolution layer is multiplied by the input X_1 ∈ R^{C×L×W×H}. The output of this branch is y_1 ∈ R^{C×L×W×H}. Meanwhile, the second branch constructs the relationship of the three dimensions (C, W, H) by permuting. The permuted tensor X_2 ∈ R^{L×C×W×H} passes through the pooling layer and convolutional layer to obtain the attention weight map X′_2 ∈ R^{1×C×W×H}. The attention weight map is then multiplied by the permuted input to obtain the output y_2 ∈ R^{L×C×W×H}. For the remaining branches, the relationships of the (L, C, H) and (L, W, C) dimensions are constructed respectively. It is worth noting that, before adding the results of the last three branches to the result of the first branch, they are permuted back to the original input shape. Therefore, the final output of the QAM is mathematically expressed as:
X′ = (1/4)(y_1 + ȳ_2 + ȳ_3 + ȳ_4)    (5)

where ¯ denotes the permute operation that restores a branch output to the original input shape.
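A compact PyTorch sketch of the QAM under this description is given below. The branch permutations and the (MaxPool, AvgPool) concatenation follow Eqs. (4)-(5), while details such as padding are our choices.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate max- and average-pooled features along dim 1 (Eq. (4))."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    """ZPool -> 7x7x7 conv -> BN -> ReLU -> sigmoid, as described in the text."""
    def __init__(self):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Sequential(nn.Conv3d(2, 1, kernel_size=7, padding=3),
                                  nn.BatchNorm3d(1),
                                  nn.ReLU(inplace=True))
    def forward(self, x):                  # x: (B, D1, D2, D3, D4)
        return x * torch.sigmoid(self.conv(self.pool(x)))

class QAM(nn.Module):
    """Four permuted branches, each capturing one group of
    cross-dimension dependencies; the outputs are averaged (Eq. (5))."""
    def __init__(self):
        super().__init__()
        self.gates = nn.ModuleList(AttentionGate() for _ in range(4))
    def forward(self, x):                  # x: (B, C, L, W, H)
        y1 = self.gates[0](x)                                                 # (L, W, H)
        y2 = self.gates[1](x.permute(0, 2, 1, 3, 4)).permute(0, 2, 1, 3, 4)   # (C, W, H)
        y3 = self.gates[2](x.permute(0, 3, 2, 1, 4)).permute(0, 3, 2, 1, 4)   # (L, C, H)
        y4 = self.gates[3](x.permute(0, 4, 2, 3, 1)).permute(0, 4, 2, 3, 1)   # (L, W, C)
        return (y1 + y2 + y3 + y4) / 4.0
```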
C. Feature Fusion Module

To utilize the information of the 3D feature learning path in the 2D segmentation path, the Feature Fusion Module (FFM) is proposed to inject the feature maps of the 3D feature learning path into the 2D segmentation path, improving the performance of the network and accomplishing volumetric data reuse. Concretely, as shown in Fig. 5, the 4D tensor X ∈ R^{C×L×W×H} is compressed to the same shape as the 3D tensor X̃ ∈ R^{C×L×W} by pooling layers. Next, the pooled features are concatenated with the features of the 2D segmentation path. The MaxPool preserves more texture features, while the AvgPool preserves more overall features. Mathematically, it can be represented by the following equation:

f(X, X̃) = Concat(X̃, AvgPool(X), MaxPool(X))    (6)
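Eq. (6) amounts to pooling the volume features along the projection axis and concatenating them with the plane features; a minimal sketch (a batch dimension is added, and the last axis is assumed to be the projection direction H):

```python
import torch

def feature_fusion(x3d, x2d):
    """Sketch of the FFM (Eq. (6)): inject 3D features into the 2D path.
    x3d: (B, C, L, W, H) volume features; x2d: (B, C', L, W) plane features."""
    avg = x3d.mean(dim=-1)                   # AvgPool over H keeps overall features
    mx = x3d.max(dim=-1).values              # MaxPool over H keeps texture cues
    return torch.cat([x2d, avg, mx], dim=1)  # (B, C' + 2C, L, W)
```

The concatenated tensor is then processed by the next stage of the 2D segmentation path.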
D. Lightweight Polarized Self-Attention

In the 2D segmentation path, we employ the lightweight Polarized Self-Attention (PSA) block, which captures long-range contextual information in the spatial and channel dimensions, to boost the performance of the plain U-Net. The PSA block is shown in Fig. 6. The PSA block has two branches: Spatial-only Self-Attention and Channel-only Self-Attention.
1) Spatial-only Self-Attention: The branch weights the input by generating a spatial attention matrix. Mathematically, it can be expressed as the following formula:
f_sp(X) = Γ(Softmax(Γ(Avg(W_1X)) × Γ(W_2X))) · X    (7)

where W_1 and W_2 are 1 × 1 convolutional layers, Γ is the reshape operator, and × and · represent matrix multiplication and broadcast element-wise multiplication, respectively.
2) Channel-only Self-Attention: The channel-wise weighting is given by the following formula:
f_ch(X) = LN(W_3(Softmax(Γ(W_1X)) × Γ(W_2X))) · X    (8)

where W_1, W_2 and W_3 are 1 × 1 convolutional layers and LN is the LayerNorm layer. Therefore, the final output of the PSA block is as follows:
PSA(X) = f_sp(X) + f_ch(X)    (9)
where + is element-wise sum.
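The following PyTorch sketch mirrors Eqs. (7)-(9) for a 2D feature map. The internal channel reduction to C/2 and the final sigmoid that bounds the attention weights are our assumptions, in the spirit of common PSA implementations, rather than details fixed by the equations above.

```python
import torch
import torch.nn as nn

class PSABlock(nn.Module):
    """Sketch of the Polarized Self-Attention block (Eqs. (7)-(9)), parallel layout."""
    def __init__(self, c):
        super().__init__()
        h = c // 2
        # channel-only branch (W_1, W_2, W_3 of Eq. (8))
        self.ch_wv = nn.Conv2d(c, h, 1)
        self.ch_wq = nn.Conv2d(c, 1, 1)
        self.ch_wz = nn.Conv2d(h, c, 1)
        self.ln = nn.LayerNorm(c)
        # spatial-only branch (W_1, W_2 of Eq. (7))
        self.sp_wv = nn.Conv2d(c, h, 1)
        self.sp_wq = nn.Conv2d(c, h, 1)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, hh, ww = x.shape
        # --- channel-only self-attention, Eq. (8) ---
        v = self.ch_wv(x).reshape(b, c // 2, -1)               # (B, C/2, HW)
        q = torch.softmax(self.ch_wq(x).reshape(b, -1, 1), 1)  # (B, HW, 1)
        z = torch.matmul(v, q).unsqueeze(-1)                   # (B, C/2, 1, 1)
        w_ch = self.ln(self.ch_wz(z).squeeze(-1).squeeze(-1))  # LN over channels
        f_ch = torch.sigmoid(w_ch).reshape(b, c, 1, 1) * x
        # --- spatial-only self-attention, Eq. (7) ---
        q = self.sp_wq(x).mean(dim=(2, 3), keepdim=True)       # Avg: (B, C/2, 1, 1)
        q = torch.softmax(q.reshape(b, 1, c // 2), -1)         # (B, 1, C/2)
        v = self.sp_wv(x).reshape(b, c // 2, -1)               # (B, C/2, HW)
        w_sp = torch.sigmoid(torch.matmul(q, v).reshape(b, 1, hh, ww))
        f_sp = w_sp * x
        return f_sp + f_ch                                     # Eq. (9)
```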
IV. EXPERIMENTS
A. Datasets
We select data with a field of view of 6mm × 6mm from the public OCTA-500 dataset to evaluate PAENet [18]. This subset contains 300 subjects (NO.10001-NO.10300) with a volume size of 400px × 400px × 640px. Each sample contains a pair of OCT and OCTA volumes. Following the previous work [18], the dataset is divided into a training set (NO.10001-NO.10180), a validation set (NO.10181-NO.10200), and a test set (NO.10201-NO.10300).
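The subject-level split can be expressed directly in code; the mapping of the NO.xxxxx identifiers to plain integer folder names is an assumption:

```python
# Subject-level split of the 6mm x 6mm OCTA-500 subset, following [18].
train_ids = [str(i) for i in range(10001, 10181)]   # 180 training subjects
val_ids   = [str(i) for i in range(10181, 10201)]   # 20 validation subjects
test_ids  = [str(i) for i in range(10201, 10301)]   # 100 test subjects
assert len(train_ids) == 180 and len(val_ids) == 20 and len(test_ids) == 100
```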
B. Implementation Details
The proposed network is implemented on the PyTorch platform with two TITAN Xp GPUs. We employ Adaptive Moment Estimation (Adam) optimization with momentum 0.9. Meanwhile, we utilize the poly learning rate policy [28], [29]. The initial learning rate is set to 0.0003, and the batch size is set to 4. Additionally, we crop each volume into patches for training. The patch size is 100px × 100px × 160px, and the total number of training iterations is 25000.
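A minimal sketch of this optimization setup in PyTorch is shown below; the poly exponent (power = 0.9) and the placeholder model are assumptions, since they are not stated above:

```python
import torch

# Poly learning-rate policy: lr = base_lr * (1 - iter / max_iter) ** power.
base_lr, max_iter, power = 3e-4, 25000, 0.9
model = torch.nn.Conv2d(3, 1, 3)                       # placeholder model
opt = torch.optim.Adam(model.parameters(), lr=base_lr, betas=(0.9, 0.999))
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda it: (1.0 - it / max_iter) ** power)
for it in range(max_iter):
    # ... forward / backward on 100x100x160 patches (batch size 4) elided ...
    opt.step()
    sched.step()
```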
C. Evaluation Metrics
To assess the performance of our proposed network, we choose five metrics for evaluation: Dice coefficient (DICE), Jaccard index (JAC), balanced accuracy (BACC), precision (PRE) and recall (REC).
DICE = 2TP / (2TP + FP + FN)    (10)
JAC = TP / (TP + FP + FN)    (11)
BACC = (TPR + TNR) / 2    (12)
PRE = TP / (TP + FP)    (13)
REC = TP / (TP + FN)    (14)
where TP is true positives, FP is false positives, TN is true negatives and FN is false negatives. Since the proportion of foreground to background is severely unbalanced in retinal vessel segmentation, plain accuracy is not a reasonable measure of segmentation quality; we therefore use balance-accuracy to account for the imbalance between positive and negative samples. In (12), TPR = TP/(TP + FN) is the true positive rate and TNR = TN/(TN + FP) is the true negative rate. Reported results are the average and standard deviation over the test set.
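The five metrics of Eqs. (10)-(14) can be computed from a pair of binary masks as in the following sketch (function name ours):

```python
import numpy as np

def seg_metrics(pred, gt):
    """DICE, JAC, BACC, PRE, REC from two same-shape 0/1 masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    tpr, tnr = tp / (tp + fn), tn / (tn + fp)
    return {
        "DICE": 2 * tp / (2 * tp + fp + fn),
        "JAC": tp / (tp + fp + fn),
        "BACC": (tpr + tnr) / 2,
        "PRE": tp / (tp + fp),
        "REC": tpr,
    }
```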
D. Ablation Study

1) Ablation Study on Modules:
We conduct extensive experiments to verify the effectiveness of our proposed modules. We set up five groups of ablation experiments, designed as follows: (1) the baseline uses unidirectional pooling [18] to reduce the feature dimension, while the subsequent experiments use the APM instead; (2) only the APM is integrated into the baseline architecture; (3) the APM and QAM are jointly integrated into the baseline architecture; (4) the APM, QAM and FFM are jointly integrated into the baseline architecture; (5) all modules are jointly integrated into the baseline architecture. The experimental results are shown in Table I. In the second set of experiments, we observe that adaptive pooling improves the baseline by 0.13% (DICE) / 0.22% (JAC) / 0.38% (PRE) and shows the highest PRE result, which demonstrates that the APM can effectively fuse volume information. Compared with unidirectional pooling, the APM has a stronger feature-fusion ability that enables the network to recognize unlabeled capillaries, which leads to a slight decrease in BACC and REC; this issue is alleviated by modeling cross-dimensional dependencies in the subsequent modules. When the APM and QAM are added to the baseline architecture, the improvements are 0.17% (DICE) / 0.28% (JAC) / 0.06% (BACC) / 0.26% (PRE) / 0.10% (REC), which confirms the effectiveness of the QAM in capturing cross-dimensional dependencies. In the fourth group of experiments, with the APM, QAM and FFM, the FFM injects volume information into the 2D segmentation network and improves it by 0.21% (DICE) / 0.34% (JAC) / 0.13% (BACC) / 0.20% (PRE) / 0.23% (REC), showing that volumetric data reuse provides more detailed information to the network. In the last group of experiments, with all modules, the improvements are 0.26% (DICE) / 0.42% (JAC) / 0.19% (BACC) / 0.16% (PRE) / 0.37% (REC); the PSA block is thus shown to be effective through modeling global spatial and channel dependence. Table I also shows that the modules coordinate well with each other. The qualitative results of the module ablation are shown in Fig. 7, where the retinal segmentation improves progressively and fine blood vessels are segmented with increasingly better performance.
2) Comparison with Existing Methods: The comparison between our proposed method and existing methods is shown in Table II. Our proposed PAENet achieves state-of-the-art performance. Compared with IPN, our method improves by 0.72% (DICE) / 1.14% (JAC) / 0.97% (BACC); it also surpasses IPN V2 by 0.28% (DICE) / 0.46% (JAC) / 0.52% (BACC). This demonstrates that our proposed method better handles 3D-to-2D retinal vessel segmentation in OCTA images. Following previous work [18], we also introduce a global retraining process and conduct the corresponding experiments, reported in the second part of Table II. Our method outperforms IPN V2+ by 0.28% (DICE) / 0.47% (JAC) / 0.22% (BACC), reaching 89.69% (DICE) / 81.42% (JAC) / 93.68% (BACC). These experiments show that our method makes full use of the 3D volume data and models dependencies across dimensions to accomplish 3D-to-2D retinal segmentation.
E. Results
We perform an ablation analysis on each module to prove its effectiveness; the results are shown in Table I, with the corresponding qualitative results in Fig. 7. In Table II, our method outperforms existing methods. Extensive experimental results demonstrate that our method effectively accomplishes 3D-to-2D retinal segmentation in OCTA images and achieves state-of-the-art performance compared with several existing methods. The qualitative analysis in Fig. 7 further shows that our method not only segments fine vessels accurately, but also remains effective when the input image quality is poor.
V. CONCLUSION
In this paper, we propose a novel Progressive Attention-Enhanced Network (PAENet) for 3D-to-2D retinal vessel segmentation in OCTA images. Specifically, we propose a novel Adaptive Pooling Module that captures dependencies along the projection direction of volumes for feature fusion, and we design a Quadruple Attention Module to model the cross-dimension relationships in the 4D tensor. To make full use of the volume information, the FFM injects 3D information into the 2D segmentation network to accomplish volumetric data reuse. In addition, Polarized Self-Attention blocks are integrated into the network to model global spatial and channel attention. Extensive experiments demonstrate that PAENet is an effective 3D-to-2D segmentation network and achieves state-of-the-art performance compared with previous methods. In the future, we will explore the PAENet structure further to achieve better performance and to address artifacts and capillaries in OCTA images.
Fig. 1. (a) OCT volume. (b) Projection map of OCT. (c) OCTA volume. (d) Projection map of OCTA. (e) Ground truth.
• The Adaptive Pooling Module (APM) is proposed to capture dependencies along the projection direction of volumes and to reduce the feature dimension effectively in an adaptive way.
• We propose a new Quadruple Attention Module (QAM)
Fig. 2. The framework of the Progressive Attention-Enhanced Network (PAENet).
Fig. 3. The structure of the Adaptive Pooling Module.
Fig. 4. Schematic illustration of the proposed Quadruple Attention Module. The subscript of each pooling operation indicates the dimension over which pooling is performed; broadcast element-wise multiplication and averaged broadcast element-wise addition combine the branches.
Fig. 5. Illustration of the Feature Fusion Module. Since the 4D tensor cannot be drawn directly, the channel dimension is represented in gray font.
Fig. 6. Illustration of the Polarized Self-Attention block.
Fig. 7. Qualitative results of our proposed method. (a) Projection map of OCT. (b) Projection map of OCTA. (c) Corresponding ground truth. (d) Baseline result. (e) Result with APM. (f) Result with APM and QAM. (g) Result with APM, QAM and FFM. (h) Result of the proposed PAENet. The red boxes highlight segmentation details; from (d) to (h), the segmentation results improve progressively.
TABLE I
EFFECT OF DIFFERENT MODULES IN PAENET (MEAN ± SD). APM: ADAPTIVE POOLING MODULE. QAM: QUADRUPLE ATTENTION MODULE. FFM: FEATURE FUSION MODULE. PSA: POLARIZED SELF-ATTENTION.

APM | QAM | FFM | PSA | DICE(%)      | JAC(%)       | BACC(%)      | PRE(%)       | REC(%)
 -  |  -  |  -  |  -  | 89.10 ± 2.82 | 80.45 ± 4.36 | 93.85 ± 1.87 | 89.57 ± 3.69 | 88.77 ± 3.69
 ✓  |  -  |  -  |  -  | 89.23 ± 2.79 | 80.67 ± 4.37 | 93.83 ± 2.09 | 89.95 ± 3.22 | 88.67 ± 4.19
 ✓  |  ✓  |  -  |  -  | 89.27 ± 2.82 | 80.73 ± 4.40 | 93.91 ± 1.98 | 89.83 ± 3.56 | 88.87 ± 3.97
 ✓  |  ✓  |  ✓  |  -  | 89.31 ± 2.58 | 80.79 ± 4.09 | 93.98 ± 1.98 | 89.77 ± 2.96 | 89.00 ± 4.02
 ✓  |  ✓  |  ✓  |  ✓  | 89.36 ± 2.70 | 80.87 ± 4.25 | 94.04 ± 1.95 | 89.73 ± 3.32 | 89.14 ± 3.91

TABLE II
COMPARISON RESULTS WITH EXISTING METHODS (MEAN ± SD). NO. 1 DENOTES THAT THE GLOBAL RETRAINING PROCESS IS NOT INTRODUCED; NO. 2 DENOTES THAT IT IS INTRODUCED.

No. | Methods      | DICE(%)      | JAC(%)       | BACC(%)      | PRE(%)       | REC(%)
 1  | IPN [17]     | 88.64 ± 3.21 | 79.73 ± 4.92 | 93.07 ± 2.42 | -            | -
    | IPN V2 [18]  | 89.08 ± 2.73 | 80.41 ± 4.29 | 93.52 ± 2.13 | -            | -
    | PAENet       | 89.36 ± 2.70 | 80.87 ± 4.25 | 94.04 ± 1.95 | 89.73 ± 3.32 | 89.14 ± 3.91
 2  | IPN V2+ [18] | 89.41 ± 2.74 | 80.95 ± 4.32 | 93.46 ± 2.12 | -            | -
    | PAENet+      | 89.69 ± 2.77 | 81.42 ± 4.39 | 93.68 ± 2.08 | 91.37 ± 3.23 | 88.22 ± 4.19
ACKNOWLEDGMENT

This work was completed while Zhuojie Wu was an intern under the guidance of Dr. Muyi Sun. This work is supported by NSFC (Grant No. 62006227).
[1] M. D. Abràmoff, M. K. Garvin, and M. Sonka, "Retinal imaging and image analysis," IEEE Reviews in Biomedical Engineering, vol. 3, pp. 169-208, 2010.
[2] M. Adhi and J. S. Duker, "Optical coherence tomography-current and future applications," Current Opinion in Ophthalmology, vol. 24, no. 3, p. 213, 2013.
[3] Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger et al., "Split-spectrum amplitude-decorrelation angiography with optical coherence tomography," Optics Express, vol. 20, no. 4, pp. 4710-4725, 2012.
[4] K. Sreejini and V. Govindan, "Improved multiscale matched filter for retina vessel segmentation using PSO algorithm," Egyptian Informatics Journal, vol. 16, no. 3, pp. 253-260, 2015.
[5] J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214-1222, 2006.
[6] S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, "Blood vessel segmentation of fundus images by major vessel extraction and subimage classification," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 3, pp. 1118-1128, 2014.
[7] A. F. Khalaf, I. A. Yassine, and A. S. Fahmy, "Convolutional neural networks for deep feature learning in retinal vessel segmentation," in 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016, pp. 385-388.
[8] A. Dasgupta and S. Singh, "A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation," in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). IEEE, 2017, pp. 248-251.
[9] K. Hu, Z. Zhang, X. Niu, Y. Zhang, C. Cao, F. Xiao, and X. Gao, "Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function," Neurocomputing, vol. 309, pp. 179-191, 2018.
[10] Y. Zhang and A. C. Chung, "Deep supervision with additional labels for retinal vessel segmentation task," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 83-91.
[11] K. Wang, X. Zhang, S. Huang, Q. Wang, and F. Chen, "CTF-Net: Retinal vessel segmentation via deep coarse-to-fine supervision network," in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, 2020, pp. 1237-1241.
[12] S. Feng, Z. Zhuo, D. Pan, and Q. Tian, "CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features," Neurocomputing, vol. 392, pp. 268-276, 2020.
[13] Y. Chen, Y. Dong, Y. Zhang, and K. Zhang, "RNA-Net: Residual nonlocal attention network for retinal vessel segmentation," in 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2020, pp. 1560-1565.
[14] C. Guo, M. Szemenyei, Y. Yi, W. Wang, B. Chen, and C. Fan, "SA-UNet: Spatial attention U-Net for retinal vessel segmentation," in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021, pp. 1236-1242.
[15] C. Wang, R. Xu, Y. Zhang, S. Xu, and X. Zhang, "Retinal vessel segmentation via context guide attention net with joint hard sample mining strategy," in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021, pp. 1319-1323.
[16] K. Li, X. Qi, Y. Luo, Z. Yao, X. Zhou, and M. Sun, "Accurate retinal vessel segmentation in color fundus images via fully attention-based networks," IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 6, pp. 2071-2081, 2020.
[17] M. Li, Y. Chen, Z. Ji, K. Xie, S. Yuan, Q. Chen, and S. Li, "Image projection network: 3D to 2D image segmentation in OCTA images," IEEE Transactions on Medical Imaging, vol. 39, no. 11, pp. 3343-3354, 2020.
[18] M. Li, Y. Zhang, Z. Ji, K. Xie, S. Yuan, Q. Liu, and Q. Chen, "IPN-V2 and OCTA-500: Methodology and dataset for retinal image segmentation," arXiv preprint arXiv:2012.07261, 2020.
[19] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501-509, 2004.
[20] H. Fu, Y. Xu, S. Lin, D. W. K. Wong, and J. Liu, "DeepVessel: Retinal vessel segmentation via deep learning and conditional random field," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 132-139.
[21] C. Guo, M. Szemenyei, Y. Hu, W. Wang, W. Zhou, and Y. Yi, "Channel attention residual U-Net for retinal vessel segmentation," in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 1185-1189.
[22] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141.
[23] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, "CBAM: Convolutional block attention module," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
[24] X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7794-7803.
[25] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, "Dual attention network for scene segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3146-3154.
[26] D. Misra, T. Nalamada, A. U. Arasanipalai, and Q. Hou, "Rotate to attend: Convolutional triplet attention module," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 3139-3148.
[27] H. Liu, F. Liu, X. Fan, and D. Huang, "Polarized self-attention: Towards high-quality pixel-wise regression," arXiv preprint arXiv:2107.00782, 2021.
[28] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-848, 2017.
[29] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, 2017.
[
"GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps",
"GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps"
] | [
"Oren Barkan ",
"Omri Armstrong ",
"Amir Hertz ",
"Microsoft Israel ",
"Avi Caciularu ",
"Ori Katz ",
"Itzik Malkiel ",
"Noam Koenigstein ",
"\nThe Open University & Microsoft\nIsrael\n",
"\nTel-Aviv University\nIsrael\n",
"\nIlan University\nIsrael\n",
"\nTechnion & Microsoft\nIsrael\n",
"\nMicrosoft & Tel Aviv University\nIsrael\n",
"\nMicrosoft & Tel-Aviv University\nIsrael\n"
] | [
"The Open University & Microsoft\nIsrael",
"Tel-Aviv University\nIsrael",
"Ilan University\nIsrael",
"Technion & Microsoft\nIsrael",
"Microsoft & Tel Aviv University\nIsrael",
"Microsoft & Tel-Aviv University\nIsrael"
] | [] | We present Gradient Activation Maps (GAM) -a machinery for explaining predictions made by visual similarity and classification models. By gleaning localized gradient and activation information from multiple network layers, GAM offers improved visual explanations, when compared to existing alternatives. The algorithmic advantages of GAM are explained in detail, and validated empirically, where it is shown that GAM outperforms its alternatives across various tasks and datasets. | 10.1145/3459637.3482430 | [
"https://arxiv.org/pdf/2109.00951v1.pdf"
] | 237,385,359 | 2109.00951 | de5c1f93e8bd10c75e4d7b878ee16873aca634d8 |
GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps
Oren Barkan
Omri Armstrong
Amir Hertz
Microsoft Israel
Avi Caciularu
Ori Katz
Itzik Malkiel
Noam Koenigstein
The Open University & Microsoft
Israel
Tel-Aviv University
Israel
Ilan University
Israel
Technion & Microsoft
Israel
Microsoft & Tel Aviv University
Israel
Microsoft & Tel-Aviv University
Israel
GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps
Explainable & Interpretable AIDeep LearningSaliency MapsEx- plainable & Transparent MLComputer VisionVisual ExplanationsClass Activation Maps
We present Gradient Activation Maps (GAM) -a machinery for explaining predictions made by visual similarity and classification models. By gleaning localized gradient and activation information from multiple network layers, GAM offers improved visual explanations, when compared to existing alternatives. The algorithmic advantages of GAM are explained in detail, and validated empirically, where it is shown that GAM outperforms its alternatives across various tasks and datasets.
1 INTRODUCTION
As the AI revolution disrupts industries and penetrates all walks of life, a growing need arises to intuitively explain machine-based decisions [6,25]. As a result, an emerging research area revolves around the need to make machine learning models more explainable. This work joins this common effort and presents Gradient Activation Maps (GAM) -a novel method for explaining visual similarity and classification networks.
A saliency map is an image depicting the relative contribution of each pixel in the input image w.r.t. the model's prediction. For example, Fig. 1 presents saliency maps produced by GAM for a classification task (a-c) and a similarity task (d-e). According to [19], a 'good' visual explanation technique should be (1) class discriminative i.e., localize the object in the image, and (2) high-resolution i.e., capture fine-grained details. However, comparing different visual explanation approaches is hard: A real methodological challenge stems from the lack of a ground-truth or a principled evaluation procedure. Hence, different works employed different evaluation procedures, often resorting to subjective visual assessments [20,22].
Actionable testing procedures for assessing the validity of saliency maps were recently proposed by Adebayo et al. [1]. Their work revealed that, despite producing quality-looking visualizations, most state-of-the-art methods produce saliency maps that are independent of either the model or the input-label relation, rendering them inadequate for producing explanations. An exception was the Grad-CAM (GC) method of [19], which stood out among all others in its ability to produce fine-grained saliency maps while successfully passing all sanity tests [1]. Following the success of GC, an improved extension called Grad-CAM++ (GC++) was introduced and shown to outperform its predecessor on various visual explanation tasks [4].

* Authors contributed equally to this work.
GAM improves significantly upon GC and GC++ through several algorithmic features: gradient localization, multi-layer analysis, and negative gradients suppression. These unique features lead to better saliency maps in terms of resolution, class discrimination, and object localization. In Sec. 3.3, we elaborate on the relation of GAM to GC and GC++ and explain the algorithmic advantages behind GAM's superior results.
Our contributions are as follows:
• We introduce GAM -a state-of-the-art method for extracting accurate saliency maps in terms of resolution and class discrimination. GAM is shown to outperform its alternatives on various objective and subjective evaluations, across all metrics, and especially in the case of small objects. • We present a unified formulation for visual similarity and classification that enables the utilization of GAM, GC and GC++ for explaining visual similarity models (a task that was overlooked in [4,19]). • We identify and demonstrate the limitations of GC and GC++, and explain how GAM averts these problems.
2 RELATED WORK

2.1 Explaining Visual Classification Models
The early methods proposed by [2,13,20,21,[29][30][31][32] are seminal works in visualization and understanding deep NNs. Guided Backpropagation (GBP) [21] visualizes the output prediction by propagating the gradients through the model and suppressing all negative gradients along the backward pass. However, GBP was shown to produce saliency maps that are not class discriminative [19]. Another approach [20], uses the gradients of predicted class scores w.r.t. to the input image to generate saliency maps.
Recently, Grad-CAM (GC) [19] created saliency maps based on the activations and gradients from the last convolution layer. In GC, the gradients of each channel are pooled to scalars. Then, these scalars weigh their corresponding activation maps that are summed together to produce the final saliency map. More recently, Grad-CAM++ (GC++) [4] was introduced as an improved version of GC. GC++ uses a weighted average of the pixel-wise gradients in order to create the weights for the activation maps.
Both GC and GC++ operate on the last convolutional layer and employ gradient pooling that leads to the loss of gradient localization. In contrast, GAM utilizes the raw gradients from multiple layers in the network, enabling gradient localization with improved resolution and class discrimination.
Explaining Visual Similarity Models
Previous works attempted to visually explain the decisions made by similarity networks [9, 14, 18, 23, 26, 28]. These networks are optimized to cluster images that are considered similar in a learned vector space. Other methods [15, 24] determined areas that contributed to image similarity by comparing filter responses of image patches. In [5], the authors utilized GC for explaining embedding networks that were trained on similarity tasks. However, their method is independent of the similarity score itself; hence, it cannot be considered a "true" explanation of similarity.
Recently, VDSN [22] was introduced as a method for visual explanation of similarity networks. VDSN produces saliency maps for image pairs by combining the activations of the last convolutional layer before and after average / max pooling. However, unlike GAM, which utilizes the gradients of the similarity w.r.t. the activations from multiple layers, VDSN does not use the gradients and hence is independent of the similarity score. Moreover, VDSN is limited to the last convolutional layer in architectures that employ average / max pooling, and is applicable to similarity networks only (thus unable to visually explain classification models).
2.3 Evaluating Saliency Maps
Evaluating saliency maps is challenging, as no real "ground truth" exists and the quality of an explanation is often subjective. In [19, 20], evaluations were conducted using a weakly supervised object localization task, where the output saliency map is used to specify the region of the image in which the classified object appears. We further extend this approach to the image similarity task by using the saliency maps to specify the regions in which similar objects appear in both images.
In [4], the authors suggested the Average Drop Percentage (ADP) and Percentage of Increase in Confidence (PIC) metrics to measure the change in model confidence when using explanation maps (the Hadamard product of the saliency map with the original image) instead of the original image. We follow these tests and further extend them to the image similarity task.
In [1], the authors suggest sanity tests for saliency map methods: the parameter randomization and data randomization procedures test whether the produced saliency map is sensitive to a randomization of the model's parameters and data labels, respectively. If it is not, the method fails to faithfully explain the model's prediction. Despite producing quality-looking visualizations, the tests from [1] reveal that many popular saliency methods do not pass and are therefore inadequate for providing satisfactory model explanations. In Sec. 4.2, we show that GAM passes these tests.
3 GRADIENT ACTIVATION MAPS (GAM)

3.1 A Unified Formulation for Visual Similarity and Classification
We begin by defining notations for the network's input and (internal) building blocks. The network's input is an image, denoted by x ∈ R^(d_0 × e_0 × c_0). The 3D activation produced by the l-th convolutional layer (for the image x) is denoted by h_l ∈ R^(d_l × e_l × c_l), where 1 ≤ l ≤ L. Note that h_l is not necessarily produced by a plain convolutional layer, but can be the output of a more complex function such as a residual [8] or DenseNet [10] block. We further denote h_l^k ≜ h_l[k] (∈ R^(d_l × e_l)) as the k-th activation map in h_l. Let q : R^(d_L × e_L × c_L) → R^n be a function that maps 3D tensors to an n-dimensional vector representation. We denote the mapping of the last activation maps h_L by v ≜ q(h_L) ∈ R^n. Note that q may vary between different network architectures. Usually, it consists of a (channel-wise) global average pooling layer that is optionally followed by subsequent fully connected (FC) layers.

Finally, let s : R^n × R^n → R be a scoring function that receives two vectors and outputs a score. The use of s varies between tasks: classification and similarity. In classification tasks, v represents the last hidden layer of the network. The logit score for the class y is computed by s(v, w_y), where w_y ∈ R^n is the weights vector associated with the class y. In multiclass (multilabel) classification, s is usually set to the dot-product, optionally with bias correction, or the cosine similarity. Then, either a softmax (sigmoid) function, with some temperature, transfers values to the range [0, 1].

For similarity tasks, we consider two images a, b ∈ R^(d_0 × e_0 × c_0), their representations v_a and v_b, and a similarity score s(v_a, v_b). A common practice is to set s to the dot-product or cosine similarity. Further note that in the specific case of similarity, the representation produced by q is not necessarily taken from the last hidden layer of the network; q can be set to the output of any FC layer. For the sake of brevity, from here onward, we abbreviate both s(v, w_y) and s(v_a, v_b) with s. Disambiguation will be clear from the context.
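As an illustration, the sketch below instantiates q as channel-wise global average pooling and s as either the dot-product or the cosine similarity; this is one common choice consistent with the text, not the only one.

```python
import torch
import torch.nn.functional as F

def embed(h_last):
    """q(h_L): channel-wise global average pooling.
    h_last: (B, c_L, d_L, e_L) -> v: (B, n) with n = c_L."""
    return h_last.mean(dim=(2, 3))

def score(v, w, kind="dot"):
    """s(v, w): w is a class weight vector (classification) or another
    image's embedding (similarity)."""
    if kind == "dot":
        return (v * w).sum(dim=-1)
    return F.cosine_similarity(v, w, dim=-1)
```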
3.2 The GAM Method
Given an image x, we denote the l-th saliency map M_l ∈ R^(d_0 × e_0) by M_l ≜ f(h_l, g_l), which is a function of the activation maps h_l and their gradients g_l ≜ ∂s/∂h_l. We denote g_l^k ≜ g_l[k] (similarly to the notation h_l^k). Then, we implement f(h_l, g_l) as:

f(h_l, g_l) = NRM(RSZ(Σ_{k=1}^{c_l} φ(h_l^k) • φ(g_l^k)))    (1)

where φ is the ReLU activation function and • is the Hadamard product. RSZ denotes the operation of resizing to a matrix of size d_0 × e_0 (the height and width of the original image x), and NRM denotes min-max normalization.
The motivation behind Eq. 1 is as follows: each filter in the l-th convolutional layer captures a specific pattern. Therefore, we expect h_l^k to have high (low) values in regions that do (not) correlate with the k-th filter. In addition, regions in g_l^k that receive positive (negative) values indicate that increasing the value of the same regions in h_l^k will increase (decrease) the value of s.
GAM highlights pixels that are both positively activated and associated with positive gradients. To this end, we first truncate all negative gradients (using ReLU). Then, we truncate negative values in the activation map h_l^k, and multiply it (element-wise) by the truncated gradient map. This ensures that only pixels associated with both positive activations and positive gradients are preserved. We then sum the resulting maps across the channel (filter) axis to aggregate, per pixel, the contributions from all channels in the l-th layer. The l-th saliency map is obtained by resizing (via bi-cubic interpolation) to the original image's spatial dimensions, followed by min-max normalization. This process produces a set of saliency maps S = {M_l}_{l=1}^{L}.
The final saliency map M ≜ A(S, m) is computed by a function A that aggregates the information from the saliency maps produced by the last m layers. In this work, we implement A as follows:

A(S, m) = (1/m) Σ_{l=L−m+1}^{L} M_l    (2)

Note that in our experiments, we found that other implementations of A, such as max-pooling, the Hadamard product, or various weighted combinations of the M_l, perform worse than Eq. 2. Yet, in Sec. 4, we do investigate the effect of different values of m on the final saliency map M.
3.3 GAM's Unique Features
GAM presents several advantages over GC and GC++.

Gradient Localization: GC computes the saliency map based on a linear combination of the activation maps in the last convolutional layer as follows:
f_GC(h_L, g_L) = NRM(RSZ(φ(Σ_{k=1}^{c_L} α_k h_L^k)))    (3)

where α_k = (1/(d_L e_L)) Σ_{i=1}^{d_L} Σ_{j=1}^{e_L} g_L^k[i, j].
When compared to GAM, the computation in Eq. 3 has two major drawbacks. First, the coefficients α_k are the pooled gradients; hence, in GC (and GC++), the gradient spatial information is lost. This is in contrast to our GAM approach (Eq. 1), which preserves (positive) gradient localization via the element-wise multiplication by φ(g_l^k). The significance of this property is well expressed in the 'Positive gradients' row in Fig. 2.
Multi-layer Analysis: GC produces saliency maps based on the last convolutional layer only. GAM, on the other hand, gleans information extracted from multiple layers (or blocks) that vary by their resolution and sensitivity (Eq. 2). Earlier blocks in the network are characterized with higher resolution. For example, in DenseNet, the last convolutional layer produces low-resolution activation maps of size 7 × 7 whereas the preceding convolutional layer produces activation maps of 14 × 14.
Our findings show that extracting information from earlier blocks is critical in certain architectures. In Sec. 4, we show that incorporating information from earlier blocks (i.e., setting m > 1) enables GAM to produce fine-grained saliency maps that are more focused on the relevant objects. However, applying the same feature to GC / GC++ hurts their performance (Fig. 8 and Tabs. 1, 2).
Negative Gradients Suppression: A subtle, yet highly important drawback of Eq. 3 stems from the way in which φ (ReLU) is applied. In GC, the weighted combination of the activations h_L^k is summed, where each activation is weighted by its pooled gradient α_k. In architectures like ResNet or DenseNet, the h_L^k are always non-negative (due to the ReLU activation at the end of each block). However, the pooled gradients can still take negative values. As a result, GC might become insensitive to important regions (pixels) that should be intensified. The justification for this claim is as follows: consider a pixel (i, j) in a region that contributes to the final score s. Ideally, we wish this pixel to be intensified in the final saliency map. By its nature, such a pixel in an "important" region is expected to have positive (pooled) gradient values and positive activation values across several filters. However, it is also possible that some other filters that respond with a small, yet positive, activation will be associated with negative (pooled) gradient values. Mathematically, this is expressed by the following decomposition:
[Σ_{k=1}^{c_L} α_k h_L^k]_{ij} = [Σ_{k : α_k < 0} α_k h_L^k]_{ij} + [Σ_{k : α_k ≥ 0} α_k h_L^k]_{ij} ≜ n_{ij} + p_{ij}    (4)
If |n_{ij}| ≥ p_{ij}, then the pixel (i, j) will have an intensity ≤ 0. In this case, the pixel (i, j), as well as other pixels in the region, are zeroed and masked by the subsequent application of φ (ReLU) in Eq. 3. This might further lead to a relative intensification of other, less "informative" pixels (i', j') (associated with much smaller contributions than that of (i, j)), but for which n_{i'j'} + p_{i'j'} > 0.
GAM, on the other hand, applies φ to the gradients before the multiplication by the activations h_l^k (Eq. 1). This ensures that negative gradients are zeroed and hence do not (negatively) affect the region's intensity through other channels or layers. Thus, regions with positive gradients are never masked by φ and are "correctly" intensified according to the magnitudes of the positive gradients and activations only.
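The following toy example illustrates the masking effect of Eq. (4) and how GAM's early ReLU on the gradients averts it; the numbers are illustrative only.

```python
import torch

# Two channels with non-negative activations over a 1x2 map; channel 2
# has a negative pooled gradient that masks an "important" pixel under
# GC-style weighting, but not under GAM.
h = torch.tensor([[[1.0, 0.0]],      # channel 1 activation
                  [[0.5, 0.0]]])     # channel 2 activation
alpha = torch.tensor([1.0, -3.0])    # pooled gradients per channel

gc_map = torch.relu((alpha[:, None, None] * h).sum(dim=0))
print(gc_map)   # tensor([[0., 0.]]) -- pixel (0,0) is masked

g = torch.tensor([[[1.0, 0.0]],      # per-pixel gradients, channel 1
                  [[-3.0, 0.0]]])    # channel 2 (negative at (0,0))
gam_map = (torch.relu(h) * torch.relu(g)).sum(dim=0)
print(gam_map)  # tensor([[1., 0.]]) -- pixel (0,0) survives
```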
In GC, the negative gradients problem becomes noticeable when using the cosine similarity. Fig. 2 exemplifies this effect, presenting a comparison between GC and GAM (using DenseNet201). We used the 'last layer' version of GAM (Eq. 2, m = 1), ensuring that the improvement by GAM is indeed due to the way it computes the saliency maps, neutralizing the contribution from earlier layers. Each pair of columns in Fig. 2 presents saliency maps computed w.r.t. the cosine similarity. We see that GC (third row, marked red) produces saliency maps that intensify wrong regions (left image of each pair). Empirically, this is explained by the accumulated activation maps and positive gradient maps (shown in rows 4-5 after ReLU), and the negative gradient maps (shown in row 6 after negation and ReLU). In both examples (dog and chair), we observe the high magnitude of the negative gradients and their adverse effect: the final intensity in regions of interest is significantly attenuated compared to the background, resulting in poor-quality saliency maps. However, as explained above, by suppressing negative gradients in advance, GAM averts this problem and successfully produces adequate saliency maps.
The poor performance of GC when using the cosine similarity (instead of the dot-product) can be further explained mathematically. In the case of the dot-product similarity, s(v_a, v_b) = v_a^T v_b, and the gradients g_{v_a} = v_b and g_{v_b} = v_a are guaranteed to be non-negative. This stems from the fact that in DenseNet (and many other architectures) the global average pooling operation is applied after the application of ReLU, hence both v_a and v_b are entry-wise non-negative, and so are the corresponding gradients w.r.t. the activation maps (as the gradient of the average pooling function is a positive constant). This implies n_{ij} = 0 for all (i, j) in Eq. 4, thus negative gradients do not exist at all. However, in the case of the cosine similarity, s(v_a, v_b) = v_a^T v_b / (‖v_a‖‖v_b‖), and since both v_a and v_b are entry-wise non-negative we have

g_{v_a} = v_b / (‖v_a‖‖v_b‖) − (s / ‖v_a‖²) v_a    (5)
Eq. 5 shows that g_{v_a} (and hence the gradient w.r.t. the activations) is the difference between two positive vectors, and may therefore contain negative entries. Thus, in the case of the cosine similarity, negative gradients are possible and might mask "important" regions in the image that should be intensified in the saliency map.
Finally, when using the dot-product similarity, it is GC++ that completely fails. Fig. 3 compares GC++ with the 'last layer' GAM (m = 1). GC++ weighs the pixel-wise gradients (before pooling) with the coefficients:
α_k^{ij} = exp(s)(g_L^k[i,j])² / (2 exp(s)(g_L^k[i,j])² + Σ_{a,b} h_L^k[a,b] exp(s)(g_L^k[i,j])³)    (6)
Note that during the computation of α_k^{ij}, GC++ passes s through the exponential function. When s is the dot-product, this may lead to an "explosion" of the saliency map values, as observed in Fig. 3. GAM, however, produces adequate saliency maps.
4 EXPERIMENTAL RESULTS
4.1 Subjective Evaluation
First, we demonstrate GAM's ability to explain visual similarity models. To this end, we set q to the (channel-wise) global average pooling layer of an ImageNet-pretrained DenseNet201 model (discarding the classifier head). To determine the similarity of two images a and b, the images are passed through the model to generate the embeddings v_a and v_b. Then, the similarity score is computed by s(v_a, v_b), where s is either the dot-product or the cosine similarity.

Figure 6: Saliency maps produced by GAM, GC and GC++ w.r.t. the classes (top to bottom) "sunglasses", "oboe", "soccer ball", "coffeepot", "matchstick" and "anemone fish".
In Fig. 4, row-pairs present saliency maps for pairs of image representations w.r.t. the cosine similarity. The saliency maps by GAM were produced using two layers (setting m = 2 in Eq. 2). We see that GAM produces quality saliency maps, while GC (column 3) consistently fails. Compared with GC++, GAM exhibits saliency maps that are more focused on the source of the similarity. Results w.r.t. the dot-product appear in Fig. 5; in this case, we see that GC++ completely fails.
Next, we turn to demonstrate GAM's ability to visually explain classification models. In this case, the saliency maps are computed w.r.t. the logit scores produced by DenseNet201. Specifically, we compute s(v, w_y), where s is the dot-product, v is the image representation, and w_y is the weights vector associated with the class y. Fig. 6 presents examples of saliency maps produced by GAM (m = 2), GC and GC++. It is visible that GAM produces saliency maps that are more class discriminative than those produced by GC and GC++. These results further support the analysis from Sec. 3.3, demonstrate the advantages of GAM over GC and GC++, and show that GAM generates adequate saliency maps in all settings.
4.2 Sanity Checks for Saliency Maps
As explained in Sec. 2.3, visually appealing saliency maps can be misleading. To assess the validity of GAM for explanations, we conduct the parameter randomization and the data randomization sanity tests from [1]; GAM passes both. Figure 7 presents examples from the sanity checks. The first row shows two saliency maps produced by GAM w.r.t. the "tabby cat" class: when GAM utilizes an ImageNet-pretrained ResNet50 model, it produces a saliency map focused on the cat, but when applied to the same network with randomly initialized weights, it fails to detect the cat in the image. We thus conclude that GAM is sensitive to the model parameters and passes the parameter randomization test. The second row shows that GAM produces an adequate saliency map when the model (LeNet-5 [11]) is trained with the true MNIST labels, but fails when the model is trained with random labels. We thus conclude that GAM is sensitive to the data labels and passes the data randomization test.
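A sketch of the parameter randomization procedure we follow is given below, assuming a gam(model, image) routine that returns a saliency map; the helper names are hypothetical.

```python
import torch

def randomize_(model):
    """Re-initialize every layer that exposes reset_parameters()."""
    for m in model.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

# usage (assuming `model`, `image` and a `gam(model, image)` routine):
# sal_trained = gam(model, image)
# randomize_(model)
# sal_random = gam(model, image)  # should no longer localize the object
```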
4.3 Layer Ablation Study
In this section, we test whether GAM, GC, and GC++ benefit from the use of multiple layers. On one hand, earlier layers are associated with smaller receptive fields, giving better localization. On the other hand, these layers usually account for less semantic features. Fig. 8 presents a comparison of GAM, GC and GC++ when using multiple layers (m ∈ {1, 2, 3}). We see that GAM benefits from the use of multiple layers, while GC and GC++ do not. Figures 9 and 10 demonstrate the advantage of multi-layer GAM compared to single-layer GAM. Three images are presented, each with a small object (dog, airplane, and cat). We see that GAM based on earlier layers (l = L−1, L−2) produces more focused saliency maps due to higher-resolution analysis. This leads to better localization in the final saliency map, as seen in 'GAM(sum)' (m = 3). Figure 11 presents another layer-wise analysis, where it is observed that the last two layers (second row, last two columns), corresponding to m = 2, best balance localization with the extraction of semantic features, yielding optimal results. In addition, gradient localization is observed in the 'Gradients' columns, which is a unique property of GAM (in contrast to GC, which performs gradient pooling). For further explanations, see Sec. 3.3 (Gradient Localization).
Indeed, in our experiments, we noticed that GAM with m = 2 best balances localization with the extraction of semantic features, whereas setting m > 1 for GC and GC++ degrades their performance. As we shall see, these trends repeat in the quantitative evaluations in Secs. 4.4 and 4.5 as well.
4.4 Objective Evaluation
Next, we present an objective evaluation, following the measures suggested in [4] (we refer to [4] for the full details):
Average Drop Percentage (ADP): ADP is computed as

ADP = (100/N) Σ_{i=1}^{N} max(0, y_i − o_i) / y_i
where N is the total number of images in the evaluated dataset, y_i is the model's output score (confidence) for the correct class w.r.t. the original image i, and o_i is the same model's score, this time w.r.t. the 'explanation map', a masked version of the original image (produced by the Hadamard product of the original image with the saliency map). The lower the ADP, the better the result.
Percentage of Increase in Confidence (PIC): PIC is computed as

PIC = (100/N) Σ_{i=1}^{N} 1(y_i < o_i)
PIC reports the percentage of cases in which the model's output score increases as a result of replacing the original image with the explanation map. The higher the PIC, the better the result.
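Both measures reduce to simple statistics over the per-image scores; a sketch (function name ours):

```python
import numpy as np

def adp_pic(y, o):
    """ADP and PIC from arrays of per-image scores: `y` on the original
    images, `o` on the explanation maps (saliency * image)."""
    y, o = np.asarray(y, dtype=float), np.asarray(o, dtype=float)
    adp = 100.0 * np.mean(np.maximum(0.0, y - o) / y)
    pic = 100.0 * np.mean(y < o)
    return adp, pic
```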
We further extended the evaluation from [4] to similarity tasks by reporting ADP and PIC w.r.t. image-pair similarity scores (instead of class-specific scores). To this end, we created a similarity subset (which will be made public) by randomly sampling image pairs from the ILSVRC-15-val dataset [17] (which does not overlap with the training set used to train the models), with the restriction that each pair contains images labeled with the same ground-truth class. The similarity subset contains 3000 pairs in total, 3 for each class.
In addition, we tested the ability of GAM to benefit from using several layers when applied to images with small objects. We compare the localization capability on small objects by narrowing the ILSVRC-15-val dataset to the subsets of images for which the ground-truth box area is below the 25% / 10% area percentile. For the similarity experiment, we randomly sampled another 3000 pairs from the 25% / 10% narrowed sets.
The results are reported in Tab. 1 (ResNet101). For each method, we report results for both m = 1 and m = 2 (note that m = 3 performs on par with m = 2 and is hence omitted). Recall that for ADP (PIC), lower (higher) values indicate better performance, and Impr. reports the relative improvement obtained by using m = 2 (over m = 1). We see that GAM outperforms GC and GC++ in the majority of the scenarios. Moreover, GC (GC++) completely fails when using the cosine (dot-product) similarity. This is further empirical evidence of the limitations of GC and GC++ (Sec. 3.3), and of the fact that GAM benefits from multiple layers whereas GC and GC++ do not (and even degrade). The results for DenseNet201 exhibit the same trends but are excluded due to space limitations.
4.5 Object Localization and Segmentation
In this section, we compare the localization capability of GAM, GC and GC++ via an extensive set of experiments across various tasks, datasets, models, and settings. We measure the quality of the produced saliency maps by Intersection over Union (IoU%) w.r.t. the ground-truth bounding boxes (BBox) or segmented areas. To this end, each saliency map is binarized with a fixed threshold before drawing the predicted BBox or segmented area; the threshold was chosen for each test and method separately on a hold-out set (which will be made public). Table 2 presents the obtained localization accuracy (IoU%) for each combination of task, dataset, model, and method, both for m = 1 and m = 2, including the improvement obtained when using m = 2. Again, we observe that GAM outperforms the other methods. Moreover, it is evident that GAM significantly benefits from using multiple layers (especially in the case of small objects), whereas GC and GC++ suffer a significant degradation in accuracy when utilizing more than a single layer. In what follows, we discuss the results per task.

Localization by Classification: We followed the test protocol from GC [19], where the saliency maps of a classification model are used to draw a BBox around the classified objects. We apply the two-layer GAM (Eq. 2, m = 2), GC and GC++ on top of pretrained DenseNet201 and ResNet101 models. Figures 12, 13 and 14 (rows 1-2) present examples of the generated saliency maps and BBoxes (marked orange). Tab. 2 (row 1) presents the localization accuracy (IoU%) between the predicted and ground-truth (ILSVRC-15-val) boxes. In all cases, GAM outperforms both GC and GC++.

Localization by Similarity: We adjusted the protocol from the localization-by-classification experiment to support localization by similarity: we replace the classification score with the similarity score computed for image pairs, using the same image pairs from the similarity subset (Sec. 4.4). We then drew a BBox for each image in the pair and computed the IoU% w.r.t. the ground truth. Results w.r.t. the different similarity scores are reported in Tab. 2 (rows 5, 8) and demonstrated in Fig. 15. Again, we observe that GC (GC++) fails when using the cosine (dot-product) similarity and significantly degrades when utilizing multiple layers, while GAM performs best and clearly benefits from multiresolution analysis.
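The binarize-and-box evaluation described at the beginning of this section can be sketched as follows; the helper names are ours, and the sketch assumes at least one pixel survives the threshold.

```python
import numpy as np

def bbox_from_saliency(sal, thr):
    """Binarize a [0,1] saliency map with the per-test threshold `thr`
    and return the tight bounding box (x0, y0, x1, y1)."""
    ys, xs = np.where(sal >= thr)
    return xs.min(), ys.min(), xs.max(), ys.max()

def bbox_iou(a, b):
    """IoU between two boxes given as (x0, y0, x1, y1), inclusive."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)
```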
Localization of Small Objects: We used the 25% and 10% partitions from Sec. 4.4 to test the localization capability of GAM on small objects. In addition, we conducted a localization experiment on the medical imaging dataset ChestX-ray8 [27], where classification decisions are usually driven by small details in the images. In this experiment, we used the CheXNet model from [16], which was trained on ChestX-ray8 to classify common thorax diseases. The results for the ILSVRC-15-val 25% / 10% subsets and for ChestX-ray8 appear in the classification and similarity sections of Tab. 2 and are demonstrated in Fig. 14 (rows 3-4). We see that GAM significantly outperforms both GC and GC++. These findings support the observation from Sec. 4.3 that multi-layer GAM (m > 1) produces better saliency maps for small objects.
Object Segmentation: Finally, we tested the utilization of GAM, GC and GC++ for object segmentation. To this end, we applied the methods on top of two pretrained multi-label classification TResNet [3] models, trained on the MS-COCO [12] and Pascal VOC [7] datasets. For each image, we computed the saliency maps w.r.t. each of the ground-truth labels. Then, for each ground-truth label, we computed the IoU% of the binarized saliency map w.r.t. the ground-truth segmentation (in pixels). The results appear in Tab. 2 (Segmentation) and are exemplified in Fig. 14 (rows 5-6) and in Figs. 16 and 17. Overall, GAM produces the most accurate segmentation.
5 CONCLUSION
This work joins a growing effort to make machine learning models more transparent and explainable. To this end, we present GAM, a state-of-the-art method for explaining visual similarity and classification models in a unified manner. Extensive subjective and objective evaluations show that GAM outperforms its alternatives across various tasks and datasets, and especially on small objects.
Figure 1: Visual explanations produced by GAM for similarity and classification tasks (using a pretrained DenseNet201). (b-c) w.r.t. the scores for the 'cat' and 'dog' classes. (f-g) w.r.t. the cosine similarity between the latent representations of the images in (d-e).
Figure 2: GAM and GC saliency maps w.r.t. the cosine similarity for two pairs of images, dogs and chairs (DenseNet201). GC's failures are marked red. Rows 4-6 present the activation maps, the ReLU-ed positive gradient maps (summed across channels), and the negative gradient maps (summed across channels after negation and ReLU), respectively. See Sec. 3.3 for details.
Figure 3: GAM and GC++ saliency maps w.r.t. the dot-product similarity for two pairs of images (DenseNet201). GC++'s failures are marked red. See Sec. 3.3 for details.
Figure 4: Each pair of rows presents saliency maps produced by GAM, GC and GC++ w.r.t. the cosine similarity.
Figure 5: Saliency maps produced by GAM, GC and GC++ (DenseNet201) w.r.t. the dot-product similarity. Each pair of columns corresponds to a pair of images for which the similarity score was computed.
Figure 7: Sanity checks. Rows 1 and 2 present GAM results for the parameter randomization and data randomization tests w.r.t. the "tabby cat" (ImageNet) and "one" (MNIST) classes, using ResNet50 and LeNet-5, respectively. Left to right, row 1: original image, GAM computed with a trained model, GAM computed with an untrained model (random weights). Row 2: original image, GAM computed with a model trained on the ground-truth labels, GAM computed with a model trained on random labels.
Figure 8: Layer ablation study (DenseNet201). Saliency maps computed by GAM, GC and GC++, for m = 1, 2, 3 (Eq. 2), w.r.t. the class "basketball". GAM performs best. See Sec. 4.3.
Figure 9: GAM for small objects (DenseNet201). Saliency maps computed w.r.t. the classes "golden retriever" (row 1) and "airliner" (row 2), for each layer l = L, L−1, L−2 and their sum (Eq. 2, m = 3).
Figure 10: GAM for small objects (DenseNet201). Saliency maps computed w.r.t. the class "tabby cat", for each layer l = L, L−1, L−2 and their sum (Eq. 2, m = 3). The last column presents results produced by GC.
Figure 11: GAM for visual similarity using an ImageNet-pretrained DenseNet201: layer ablation study. Columns 1-2, 3-4 and 5-6 present the saliency maps (Eq. 1), activation maps h_l (summed over the channel axis) and gradient maps φ(g_l) (summed over channels), respectively, for l = L, L−1, L−2, L−3 (top to bottom). The last two columns present saliency maps computed based on Eq. 2, with m = 1, 2, 3, 4 (top to bottom), respectively.
Table 1: Objective evaluation, including a layer ablation study using the m = 1, 2 (Eq. 2) last layers of ResNet101. For ADP (PIC), lower (higher) is better. VRC stands for ILSVRC-15-val; 25% and 10% denote the small-object subsets of VRC described in Sec. 4.4.

Table 2: Object localization and segmentation results for different combinations of task, dataset, model, and method. For each method, we report the accuracy (IoU%) achieved using the m = 1, 2 (Eq. 2) last layers. VRC, XRAY, COCO, and VOC stand for ILSVRC-15, ChestX-ray8, MS-COCO, and Pascal VOC, respectively. Entries marked "> 1" denote runs in which the saliency values explode (the GC++ failure mode discussed in Sec. 3.3). See Sec. 4.5 for details.

Task              | Dataset   | Model    | GAM (1 / 2 / Impr.)   | GC++ (1 / 2 / Impr.)  | GC (1 / 2 / Impr.)
Classification    | VRC       | DenseNet | 54.9 / 56.9 / 3.6%    | 54.9 / 47.7 / -13.1%  | 52.4 / 50.3 / -4.0%
                  | VRC(25%)  | DenseNet | 39 / 43.8 / 12.3%     | 39.6 / 20.8 / -47.5%  | 33.5 / 26 / -22.4%
                  | VRC(10%)  | DenseNet | 23.4 / 33 / 41.0%     | 22.6 / 11.5 / -49.1%  | 21.3 / 17.4 / -18.3%
                  | VRC       | ResNet   | 55.9 / 57.1 / 2.1%    | 55 / 53.8 / -4.1%     | 47.8 / 47.2 / -1.3%
                  | VRC(25%)  | ResNet   | 40.8 / 43.1 / 8.3%    | 40.6 / 38.9 / -4.2%   | 33.6 / 33.5 / -0.3%
                  | VRC(10%)  | ResNet   | 26.1 / 33.4 / 29.5%   | 26.2 / 23.9 / -8.4%   | 23.2 / 22.7 / -2.2%
                  | XRAY      | CheXNet  | 25.8 / 28.4 / 10.1%   | 26.2 / 20.2 / -22.9%  | 24.9 / 21.6 / -13.3%
Similarity (cos)  | VRC       | DenseNet | 57.4 / 60.7 / 5.7%    | 57.1 / 52.3 / -10.0%  | 52.8 / 53.5 / 1.3%
                  | VRC(25%)  | DenseNet | 38.2 / 41.9 / 9.7%    | 37.4 / 21.9 / -44.4%  | 25.5 / 22.6 / -11.4%
                  | VRC(10%)  | DenseNet | 31.2 / 35.7 / 14.4%   | 29.7 / 15.4 / -51.4%  | 18.4 / 16 / -13.0%
                  | VRC       | ResNet   | 57.1 / 58.6 / 2.6%    | 56 / 49.8 / -11.1%    | 39.1 / 38.2 / -2.3%
                  | VRC(25%)  | ResNet   | 38.3 / 39.3 / 2.6%    | 36.1 / 28.5 / -21.1%  | 27.2 / 24.9 / -8.5%
                  | VRC(10%)  | ResNet   | 31.3 / 34.9 / 11.5%   | 29.4 / 22.2 / -24.5%  | 21.3 / 20.4 / -4.2%
Similarity (dot)  | VRC       | DenseNet | 59.2 / 62.4 / 5.4%    | > 1 / > 1 / -         | 58.9 / 54 / -9.8%
                  | VRC(25%)  | DenseNet | 39.6 / 43.8 / 10.6%   | > 1 / > 1 / -         | 38.2 / 25.5 / -36.6%
                  | VRC(10%)  | DenseNet | 32 / 36.9 / 15.3%     | > 1 / > 1 / -         | 31.2 / 19.6 / -38.6%
                  | VRC       | ResNet   | 57.9 / 61.9 / 6.9%    | > 1 / > 1 / -         | 57.3 / 57.3 / 0%
                  | VRC(25%)  | ResNet   | 39.3 / 43.1 / 9.7%    | > 1 / > 1 / -         | 38.6 / 38.1 / -1.3%
                  | VRC(10%)  | ResNet   | 31.9 / 36.6 / 14.7%   | > 1 / > 1 / -         | 31.2 / 30.5 / -2.3%
Segmentation      | COCO      | TResNet  | 28.3 / 30.7 / 8.5%    | 27.8 / 27.3 / -0.8%   | 27.2 / 27.5 / 1.1%
                  | COCO(25%) | TResNet  | 22.7 / 25.8 / 13.7%   | 21.4 / 21.2 / -0.9%   | 21.5 / 21.1 / -1.9%
                  | COCO(10%) | TResNet  | 21.4 / 24.9 / 16.4%   | 20.7 / 20.5 / -1.0%   | 21.1 / 20.3 / -3.8%
                  | VOC       | TResNet  | 36.2 / 38.7 / 6.9%    | 35.5 / 34.8 / -2%     | 35.5 / 34.2 / -3.7%
                  | VOC(25%)  | TResNet  | 34.1 / 37.2 / 9.1%    | 32.1 / 31.5 / -1.9%   | 33.5 / 31.1 / -7.2%
                  | VOC(10%)  | TResNet  | 27.1 / 32.7 / 20.7%   | 26.8 / 25.3 / -5.6%   | 26.2 / 24.9 / -5%
[1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems. 9505-9515.
[2] Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, and Noam Koenigstein. 2021. Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps. In Proceedings of the ACM International Conference on Information & Knowledge Management (CIKM).
[3] Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. 2020. Asymmetric Loss For Multi-Label Classification. arXiv:2009.14119 [cs.CV].
[4] Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 839-847.
[5] Lei Chen, Jianhui Chen, Hossein Hajimirsadeghi, and Greg Mori. 2020. Adapting Grad-CAM for embedding networks. In The IEEE Winter Conference on Applications of Computer Vision. 2794-2803.
[6] Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Stuart Schieber, James Waldo, David Weinberger, and Alexandra Wood. 2017. Accountability of AI Under the Law: The Role of Explanation. CoRR abs/1711.01134 (2017).
[7] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. 2010. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision 88, 2 (2010), 303-338.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770-778.

Figure 12: Object localization via saliency maps (columns: ground truth, GAM, GC++, GC) using DenseNet201 over the ILSVRC-15 dataset, w.r.t. the labels barbell, anemone fish, volleyball, bell cote, obelisk, ox and water tower.
Figure 12: Object localization via saliency maps using DenseNet201 over ILSVRC-15 dataset, w.r.t. labels: barbell, anemone fish, volleyball, bell cote, obelisk, ox and water tower.
Figure 13: Object localization via saliency maps using CheXNet over ChestX-ray8 dataset, w.r.t. pathology: Atelectasis, Effusion, Mass, Pneumothorax, Nodule and Pneumonia. GAM yields saliency maps that are more accurate, hence leading to better localization.
Figure 14: Object localization and segmentation via saliency maps. Rows 1-2, 3-4, 5-6 present BBox generation (orange) using DenseNet201 (w.r.t. labels: mongoose, gondola), CheXNet (w.r.t. label: Atelectasis), and segmentation (orange) using TResNet (w.r.t. labels: boat, frisbee), respectively.
Figure 15: Object localization w.r.t. similarity score (cosine). The saliency maps are drawn using DenseNet201 over image-pairs from ILSVRC-15-val (validation set). The labels for the image-pairs are (top to bottom): hammerhead shark, weevil, lesser panda, analog clock and stupa.
Figure 16: Segmentation results based on saliency maps produced by GAM, GC, and GC++ (TResNet) on examples from Pascal VOC (validation) dataset, w.r.t. labels (top to bottom): aeroplane, bird, boat, car, cow, horse, motorbike, person, sheep and sofa.
Figure 17: Segmentation results based on saliency maps produced by GAM, GC, and GC++ (TResNet) on examples from MS-COCO (validation) dataset, w.r.t. labels (top to bottom): bird, kite, boat, traffic light and sink.
| [] |
[
"R elic G ravitationalW aves and C osm ology 1",
"R elic G ravitationalW aves and C osm ology 1"
] | [
"L P G Ri Shchuk [email protected] "
] | [] | [] | School of Physics and Astronomy, Cardiff University, United Kingdom and Sternberg Astronomical Institute, Moscow State University, Russia. Abstract: The paper begins with a brief recollection of interactions of the author with Ya. B. Zeldovich in the context of the study of relic gravitational waves. The principles and early results on the quantum-mechanical generation of cosmological perturbations are then summarized. The expected amplitudes of relic gravitational waves are different in different frequency windows, and therefore the techniques and prospects of their detection are distinct. One section of the paper describes the present state of efforts in direct detection of relic gravitational waves. Another section is devoted to indirect detection via the anisotropy and polarisation measurements of the cosmic microwave background radiation (CMB). It is emphasized throughout the paper that the conclusions on the existence and expected amount of relic gravitational waves are based on a solid theoretical foundation and the best available cosmological observations. I also explain in great detail what went wrong with the so-called 'inflationary gravitational waves', whose amount is predicted by inflationary theorists to be negligibly small, thus depriving them of any observational significance. (1: A contribution to the international conference on cosmology and high-energy astrophysics "Zeldovich-90" held in Moscow, 20-24 December, 2004; http://hea.iki.rssi.ru/Z-90/) | 10.1070/pu2005v048n12abeh005795 | [
"https://export.arxiv.org/pdf/gr-qc/0504018v4.pdf"
] | 11,957,123 | gr-qc/0504018 | fbb2cdb59f8696f3e8b8641f9390cf5c81156839 |
Relic Gravitational Waves and Cosmology 1
L. P. Grishchuk [email protected]
Relic Gravitational Waves and Cosmology 1
School of Physics and Astronomy, Cardiff University, United Kingdom, and Sternberg Astronomical Institute, Moscow State University, Russia

Abstract

The paper begins with a brief recollection of interactions of the author with Ya. B. Zeldovich in the context of the study of relic gravitational waves. The principles and early results on the quantum-mechanical generation of cosmological perturbations are then summarized. The expected amplitudes of relic gravitational waves are different in different frequency windows, and therefore the techniques and prospects of their detection are distinct. One section of the paper describes the present state of efforts in direct detection of relic gravitational waves. Another section is devoted to indirect detection via the anisotropy and polarisation measurements of the cosmic microwave background radiation (CMB). It is emphasized throughout the paper that the conclusions on the existence and expected amount of relic gravitational waves are based on a solid theoretical foundation and the best available cosmological observations. I also explain in great detail what went wrong with the so-called 'inflationary gravitational waves', whose amount is predicted by inflationary theorists to be negligibly small, thus depriving them of any observational significance.

1 A contribution to the international conference on cosmology and high-energy astrophysics "Zeldovich-90" held in Moscow, 20-24 December, 2004; http://hea.iki.rssi.ru/Z-90/
Introduction
The story of relic gravitational waves has revealed the character of Ya. B. Zeldovich not only as a great scientist but also as a great personality. One should remember that the beginning of the 1970's was dominated by the belief that massless particles, such as photons, neutrinos, gravitons, cannot be created by the gravitational field of a homogeneous isotropic universe. Zeldovich shared this view and was publishing papers supporting this picture. He was enthusiastic about cosmological particle creation [1] and contributed a lot (together with coauthors) to this subject. However, he thought that something interesting and important could only happen if the early universe was highly anisotropic.
When I showed [2,3] that massless gravitons (gravitational waves) could, in fact, be created by the gravitational field of a homogeneous isotropic universe, a considerable debate arose around this work. I argued that the coupling of gravitons to the 'external' gravitational field follows unambiguously from the equations of general relativity, and it differs from the coupling of other known massless particles to gravity. In contrast to other massless fields, this specific coupling of gravitational waves allows their superadiabatic (parametric) amplification by the 'pumping' gravitational field of a nonstationary universe. (A similar coupling to gravity can be postulated for the still hypothetical massless scalar field.) If classical gravitational waves were present before the era of amplification, they would have been amplified. But their presence is not of necessity: even if the waves are initially in their quantum-mechanical vacuum (ground) state, the state will inevitably evolve into a multi-particle state. In phenomenological language, gravitational waves are being generated from their zero-point quantum oscillations.
The intense debate has finished in a surprising and very flattering way for me. It is common knowledge that it was virtually impossible to win a scientific bet against Zeldovich: he knew practically everything about physics and had tremendous physical intuition. But sometimes he would find a cute way of admitting that his previous thinking was not quite right, and that he also learned something from a debate. On this occasion it happened in the following manner.
After one of his rare trips to Eastern Europe (as far as I remember, it was Poland) Ya. B. gave me a gift. This was a poster showing a sophisticated, impressionist-style, lady. The fact that this was a poster with a sophisticated lady was not really surprising; you could expect this from Yakov Borisovich. What was surprising and flattering for me was his hand-written note at the bottom of the poster. In my translation from the Russian, it said "Thank you for your goal in my net". Ya. B. was hinting at my passion for football, and he knew that this comparison would be appreciated much better than any other. So, this is how a great man admits a clarification of an error; he simply says "thank you for your goal in my net".
It was clear from the very beginning of the study of relic gravitational waves that the result of amplification of a wave-field should depend on the strength and time evolution of the gravitational pump field. We know little about the very early universe these days; even less was known at the beginning of the 70's.
The best thing you can do is to consider plausible models. The simplest option is to assume [2] that the cosmological scale factor a(η) in the expression
ds^2 = a^2(\eta)\left[-d\eta^2 + (\delta_{ij} + h_{ij})\, dx^i dx^j\right]    (1)
consists of pieces of power-law evolution:

a(\eta) = l_o |\eta|^{1+\beta},    (2)
where ' ≡ d/dη = (a/c) d/dt. Using Eq. (2) and the unperturbed Einstein equations one can also find the effective equation of state for the 'matter', whatever it is, which drives the intervals of a(η):

p = w\epsilon, \qquad w = \frac{1-\beta}{3(1+\beta)}.    (4)
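As a quick plausibility check of Eq. (4) (a minimal sketch of my own, not part of the original calculations), the familiar cosmological eras are recovered from the corresponding values of β:

```python
# Effective equation of state from Eq. (4): w = (1 - beta) / (3 (1 + beta)).
def w_of_beta(beta: float) -> float:
    return (1.0 - beta) / (3.0 * (1.0 + beta))

assert abs(w_of_beta(0.0) - 1.0 / 3.0) < 1e-12   # a ~ eta:       radiation, w = 1/3
assert abs(w_of_beta(1.0)) < 1e-12               # a ~ eta^2:     matter,    w = 0
assert abs(w_of_beta(-2.0) + 1.0) < 1e-12        # a ~ |eta|^-1:  de Sitter, w = -1
```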
The somewhat strange form of the index 1+β in Eq. (2) was motivated by a serious concern of that time: it was necessary to prove that even a small deviation from the exceptional law of evolution a(η) ∝ η guarantees the effect of g.w. amplification. It is only in this exceptional case that the effective potential a''/a vanishes, and therefore the superadiabatic coupling of gravitational waves to the nonstationary pump field a(η) also vanishes. (The analogous effective potential is absent in equations for photons, massless neutrinos, and some massless scalar particles.)
The convenience of the notation utilized in Eq. (2) is that it parameterises the exceptional case by β = 0 and deviations from this case by a small β. Indeed, it was shown [2] that the amplitude of the generated g.w. mode is proportional to small β; but it is not zero if β ≠ 0. At the same time, if β is not especially small, the amplitude of the gravitational wave h_p(n), soon after the beginning of the superadiabatic regime and while the wave is still in this regime, i.e. before any further processing of the amplitude, evaluates to
h_p(n) \approx \frac{l_{Pl}}{l_o} \left(\frac{n}{n_H}\right)^{2+\beta}.    (5)
Estimate (5) is approximate (we will be discussing more accurate formulas below) but it contains all the necessary physics. The underlying concepts of generation and detection of primordial gravitational waves have not changed since the first calculations [2,3], and it is important for our further discussion to recall them again.
To begin with, we note that Eq. (5) is formulated for the dimensionless amplitude h of a given g.w. mode characterised by a constant dimensionless wavenumber n. (The h(η) and μ(η) mode-functions are related by h = μ/a.) The wavelength λ, measured in units of laboratory standards (as Zeldovich used to say, measured in centimeters), is related to n by λ(η) = 2πa(η)/n. It is convenient to use (and we will always do this) such a parameterisation of a(η) that the present-day scale factor is a(η_R) = 2l_H, where l_H = c/H(η_R) is the present-day value of the Hubble radius. Then, n_H = 4π is the wavenumber of the waves whose wavelength today is equal to the present-day Hubble radius. Longer waves have smaller n's, and shorter waves have larger n's.
Expression (5) is essentially a consequence of the following two assumptions. First, it is assumed that the mode under consideration has entered the superadiabatic regime in the past, and is still in this regime. This means that the mode's frequency, instead of being much larger than the characteristic frequency of the pump field, became comparable with it at some time in the past. Or, in cosmological context, the wavelength λ(η) of the mode n, instead of being much shorter than the instantaneous Hubble radius c/H(η) = a^2/a', became equal to it at some moment of time η_i, i.e. λ_i = c/H_i. For the scale factors of Eq. (2), this condition leads to (n/n_H)|η_i| ≈ 1.
Second, we assume that by the beginning of the superadiabatic regime at η = η_i, the mode has still been in its vacuum state, rather than, say, in a strongly excited (multi-particle) state. That is, in the language of classical physics, the mode's amplitude near η_i was not much larger than h_i(n) ≈ l_{Pl}/λ_i, where l_{Pl} is the Planck length, l_{Pl} = \sqrt{G\hbar/c^3}. This condition on the amplitude follows from the requirement that initially there were only the zero-point quantum oscillations of the g.w. field, and the initial energy of the mode was (1/2)ħω_i. Because of the condition λ_i = c/H_i, we can also write h_i(n) as h_i(n) ≈ H_i l_{Pl}/c.
The amplitude of the mode, after the mode's entrance to the amplifying superadiabatic regime, and as long as this regime lasts, remains at the constant level h_i(n), i.e. h_p(n) ≈ h_i(n). This holds true instead of the adiabatic decrease of the amplitude ∝ 1/a(η) that would be true in the adiabatic regime. In general, the quantity H_i is different for different n's:

H_i \approx \frac{c}{l_o}\, |\eta_i|^{-(2+\beta)} \approx \frac{c}{l_o} \left(\frac{n}{n_H}\right)^{2+\beta}.
Therefore, a specific dependence on n arises in the function h_i(n), and this is how one arrives at Eq. (5) in a simple qualitative manner.
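The qualitative pipeline just described can be condensed into a few lines; this is only an illustration of the scaling in Eq. (5), with an illustrative placeholder value of l_{Pl}/l_o:

```python
# h_i(n) ~ H_i l_Pl / c, with H_i ~ (c / l_o) (n / n_H)^(2 + beta), gives the
# primordial amplitude h_p(n) ~ (l_Pl / l_o) (n / n_H)^(2 + beta) of Eq. (5).
l_pl_over_l_o = 1e-5   # illustrative value only
beta = -1.9

def h_p(n_over_nH: float) -> float:
    return l_pl_over_l_o * n_over_nH ** (2.0 + beta)

for x in (1.0, 1e3, 1e6):   # n / n_H
    print(f"n/n_H = {x:.0e}  ->  h_p ~ {h_p(x):.1e}")
```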
Formula (5) gives the evaluation of the primordial (before further processing) g.w. spectrum h_p(n). Roughly speaking, the initial vacuum spectrum h_v(n) ∝ n has been transformed into the primordial spectrum h_p(n) ≈ h_v(n) n^{1+β_i}, where β_i characterizes the scale factor of the era when the transition from the adiabatic to superadiabatic regime has taken place for the given interval of wavenumbers n. However, the same mode n can sooner or later leave the amplifying regime and start oscillating again. Obviously, this reverse transition from superadiabatic to adiabatic regime is being described by the same theory.
The final amplitudes at some fixed moment of time (for example, today's amplitudes) h_f(n) are related to the h_p(n)-amplitudes by

h_f(n) \approx h_p(n)\, n^{-(1+\beta_f)},

where β_f characterizes the era when the opposite transition from the superadiabatic to adiabatic regime has taken place (this is why the minus sign arises in front of 1+β_f in the exponent).
The discussed amplitudes h(n) are in fact the root-mean-square (rms) amplitudes of the multi-mode field; they determine the mean-square value of the wave field h according to the general formula

\langle h^2 \rangle = \int h_{rms}^2(n)\, \frac{dn}{n}.
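For orientation, a worked example of this integral (my own illustration): for a constant rms amplitude h_0 spanning one decade of wavenumbers, the band-integrated mean square is h_0^2 ln 10:

```python
import math

h0 = 1e-5                              # constant rms amplitude over the band (illustrative)
mean_square = h0**2 * math.log(10.0)   # integral of h0^2 dn/n over one decade = h0^2 ln(10)
print(f"<h^2> = {mean_square:.2e}")    # ~ 2.3e-10
```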
It is necessary to say that in the beginning of the 80's, the inflationary cosmological scenario governed by a scalar field [4] was gaining popularity. Its central element is the interval of deSitter expansion, which corresponds to β = -2 in Eq. (2) (η grows from -∞, 1+β < 0) and w = -1 in Eq. (4). By the time of publication of the inflationary scenario, unusual equations of state for 'matter' driving the very early Universe, including such exotic ones as p = -ε, w = -1, had already been the subject of cosmological research, most notably in the work of A. D. Sakharov [5].
The g.w. calculations for the special case β = -2 were performed in a number of papers (see for example [6,7,8,9]). If β = -2, the dependence on n vanishes in the general Eq. (5), and the primordial (unprocessed) spectrum h_p(n) becomes 'flat', that is, n-independent. Ironically, the prospects of direct detection of the stochastic g.w. background characterised by the corresponding processed (today's) spectrum had already been explored by that time [3]; the processed spectral index for this model equals 1 in the notations of that paper. Ref. [3] also suggested the use of cross-correlated data from two detectors and touched upon the technique of 'drag-free satellites' that was later developed in the Laser Interferometer Space Antenna (LISA).
The generality of inflationary, quasi-deSitter, solutions was a serious concern for Zeldovich during a long time. He kept wondering about the sensitivity of inflationary solutions to the choice of initial conditions. Nobody would take the inflationary scenario seriously if it were a very contrived or unstable solution. However, it was shown [10] that the inflationary-type evolutions are, in fact, attractors in the space of all possible solutions of the corresponding dynamical system. This decisive property made inflationary evolutions more plausible and appealing.
Direct detection of relic gravitational waves
The spectrum of h_rms(ν) expected today is depicted in Fig. 1 (for more details, see [11,12]). Almost everything in this graph is the result of the processing of the primordial spectrum during the matter-dominated and radiation-dominated stages. The postulated 'Zeldovich's epoch', governed by a very stiff effective equation of state, is also present in the graph, as shown by some relative increase of power at very high frequencies. The primordial part of the spectrum survives only at frequencies below the present-day Hubble frequency ν_H ≈ 2 × 10^{-18} Hz. The available CMB observations determine the amplitude and spectral slope of the g.w. spectrum at frequencies around ν_H, and this defines the spectrum at higher frequencies.

In other words, the final theoretical results do not contain any dimensionless parameter which could be regulated in such a manner as to make the contribution of, say, density perturbations to the quadrupole anisotropy several orders of magnitude larger than the contribution of gravitational waves. These contributions must be roughly equal, but the theory cannot exclude that one of them will turn out to be a numerical factor 2-3 larger than another. Assuming that relic gravitational waves provide a half of the signal, one can find from the observed ΔT/T ≈ 10^{-5} that h_rms(ν_H) ≈ 10^{-5} and, hence, it follows from Eq. (5) that l_{Pl}/l_o ≈ 10^{-5}.

The slope of the primordial g.w. spectrum is also taken from CMB observations. The commonly used spectral index n (we denote it by a Roman letter n in order to distinguish it from the wavenumber n) is related to the parameter β appearing in Eq. (5) by the relationship n = 2β + 5. The same relationship is valid for density perturbations, to be discussed later. The current observations [13,14] give evidence for n ≈ 1, which corresponds to β ≈ -2. The particular graph in Fig. 1 is plotted for β = -1.9, n = 1.2, which tallies with the COBE data [15,16]. (This spectral index n > 1 implies that w < -1, according to Eq. (4). It is not difficult to imagine that such an effective equation of state could hold in the very early Universe, if the recent supernovae observations hint at the validity of w < -1 even in the present-day Universe!) In simple words, the position and orientation of the entire piece-wise function h(ν) is defined by the known value of the function at the point ν = ν_H and the known slope of the function in the vicinity of that point.
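A minimal arithmetic check of the relations just quoted (my own sketch; the parameter values are those used for the graph in Fig. 1):

```python
beta = -1.9
n_index = 2 * beta + 5                 # spectral index: n = 2*beta + 5
w = (1 - beta) / (3 * (1 + beta))      # Eq. (4)
print(f"n = {n_index:.1f}")            # 1.2, the value used for Fig. 1
print(f"w = {w:.3f}")                  # -1.074, i.e. w < -1 for n > 1, as stated
```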
Incidentally, the initial quantum vacuum conditions for gravitational waves, at all frequencies shown in the graph, are formulated at the 'initial' moments of time, when each wavelength of interest was appreciably longer than the Planck length. Therefore, the shown results are immune to the short scale ambiguities of the so-called 'trans-Planckian' physics (see for example [17]). It is a different matter that at some frequencies the initial state is allowed to be a somewhat excited state, rather than the pure vacuum state, without running into a conflict with the adopted approximation of small perturbations. This exotic possibility and the corresponding modifications of the spectrum were discussed long time ago [18] (see also a related work [19]).
The graph in Fig. 1 shows the piece-wise envelope of the today's spectrum. The displayed result is quite approximate. In particular, it completely ignores the inevitable oscillations in the spectrum, whose origin goes back to the gradual diminishing (squeezing) of quantum-mechanical uncertainties in the phases of the emerging waves and the macroscopic manifestation of this effect in the form of the standing-wave pattern of the generated field. (This is also related to the concept of 'particle pair creation'.) We will discuss these spectral oscillations below.

At the frequencies accessible to ground-based interferometers, the predicted level of the signal is h_rms ≈ 10^{-25}; it corresponds to Ω_gw ≈ 10^{-10} at frequency ν = 10^2 Hz and in its vicinity. Where do we stand now in the attempt of direct detection of relic gravitational waves? The sensitivity of the presently operating ground-based interferometers is not good enough to reach the predicted level, but the experimenters are making a lot of progress. The data from the recently completed S3 run of LIGO [20] will probably allow one to reach the astrophysically interesting level of Ω_gw ≈ 10^{-4}, as shown in Fig. 2 (courtesy of J. Romano and the stochastic backgrounds group of LSC). Fortunately, the projected sensitivity of the advanced LIGO (~2011) will be sufficient to reach the required level of h_rms ≈ 10^{-25}, Ω_gw ≈ 10^{-10}, when a month-long stretch of cross-correlated data from the two detectors is available.
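The consistency of the pair of numbers just quoted can be checked with a short sketch. It assumes the common convention Ω_gw(ν) = (2π²/3)(ν/H₀)² h²_rms(ν); other normalisations exist in the literature, and the Hubble rate value below is approximate:

```python
import math

H0 = 2.3e-18        # s^-1, roughly 70 km/s/Mpc
nu = 1.0e2          # Hz
h_rms = 1.0e-25

omega_gw = (2.0 * math.pi**2 / 3.0) * (nu / H0) ** 2 * h_rms**2
print(f"Omega_gw(100 Hz) ~ {omega_gw:.1e}")   # ~ 1.2e-10, consistent with the text
```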
Figure 3: Various LISA sources including relic gravitational waves

The ESA-NASA space-based mission LISA (~2013) will have a better chance to discover relic gravitational waves. Since the expected spectrum has larger amplitudes at lower frequencies, the detectability condition is potentially improving at lower frequencies. In Fig. 3 we show the LISA sensitivity in frequency bins of Δf = 3 × 10^{-8} Hz, which corresponds to an observation time of 1 year. This observation time should make it possible to resolve the g.w. lines from thousands of white dwarf binaries in our Galaxy, radiating at frequencies larger than ≈ 2 × 10^{-3} Hz. By removing the contribution of the binaries from the observed records, or by using sophisticated data analysis techniques without actually removing the contaminating signals from the data, one can effectively clean up the window of instrumental sensitivity at frequencies above ≈ 2 × 10^{-3} Hz. This window in the area of maximal sensitivity of LISA is shown in the graph together with the expected level of relic gravitational waves in that window.

The displayed spectrum was obtained under the assumption that β = -2 (n = 1), i.e. for a flat primordial spectrum. The survived part of the primordial flat spectrum is seen on the graph as a horizontal part of the curve in the region of very small wavenumbers n. The normalisation of the spectrum is chosen in such a way that the induced quadrupole anisotropy of the CMB today is at the level of the actually observed quadrupole [15,13]. Specifically, the temperature function l(l+1)C_l in Fig. 4, calculated from the spectrum in Fig. 5, gives the required value of ≈ 960 (μK)^2 at l = 2. The distribution of other induced multipoles is also shown in Fig. 4.
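The bin width quoted above follows directly from the observation time (a one-line check of my own):

```python
T_obs = 365.25 * 24 * 3600                           # one year, in seconds
print(f"Delta f = 1/T_obs = {1.0 / T_obs:.1e} Hz")   # ~ 3.2e-8 Hz, as quoted
```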
Figures 4 and 5 are placed one under another on purpose. This placement gives a better visual description of the fact noticed and explained previously [23]. Namely, the oscillations in the metric (gravitational field) power spectrum are entirely responsible for the oscillations in the angular power spectrum of the CMB temperature, with almost universal correspondence between extrema in the wavenumber space n and extrema in the multipole moment space l. If there is much/little power in the gravitational field perturbations of a given interval of wavelengths, one should expect much/little power in the temperature fluctuations at the corresponding angular scale.
It is the oscillations in the metric power spectrum that are responsible for the oscillations in the l-space, and not the mysterious explanations often repeated in the literature, which claim that the peaks in the function l(l+1)C_l arise because of some waves being caught (at the moment of decoupling) in their maxima or minima, while others are not. To illustrate the role of standing gravitational waves and the associated power spectrum oscillations, versus traveling gravitational waves with no power spectrum oscillations, it was explicitly shown [23] that the latter hypothesis does not produce oscillations in the l-space.
Incidentally, it was argued [23] that in the case of density perturbations, the main contribution to the peaks in the temperature function l(l+1)C_l can also be provided by oscillations in the metric power spectrum, rather than by the temperature variations accompanying sound waves in the photon-electron-baryon plasma at the last scattering surface. In the case of density perturbations, the metric power spectrum is mostly associated with the gravitational field of the dark matter, which dominates other matter components in terms of the gravitational field.
Oscillations in the metric power spectrum in the early Universe are inevitable, and for the same reason as in the g.w. case, namely, because of the standing-wave pattern of the metric perturbations that is related to their quantum-mechanical origin. Therefore, the often-discussed "acoustic" peaks in the l-space may well turn out to be "gravitational" peaks. It remains to be seen how this circumstance can change inferences about cosmological parameters.
We shall now turn to the CMB polarisation. (For some important papers on CMB polarisation, see for example [24,25,26,27,28,29,30].) It follows from the radiation transfer equations that the polarisation of CMB is mainly determined by the first time-derivative of the metric perturbations in the interval of time when the polarisation is mainly produced. Therefore, it is the power spectrum of the function h'_{ij}(η, x) that is of a primary importance. Since the g.w. field itself, including its normalisation, has been fully determined, the quantity of our interest is directly calculable. In Fig. 7 we show [22] the power spectrum (h'/n)^2_rms(n), calculated at the time of decoupling. The induced E and B components of polarisation are shown in Fig. 6. This graph was derived under the usual assumptions about the recombination history, which means, in particular, that the polarisation was primarily accumulated during a relatively short interval of time around z_dec.
Similarly to the case of temperature anisotropies, the extrema in the metric power spectrum are projected onto the extrema of the polarisation angular power spectrum.

Concluding this section I would like to say as a witness that Zeldovich suggested using the CMB polarisation as a g.w. discriminator, as early as in the very beginning of the 80's. This was clearly stated in private conversations, but I am not aware of any written records.

The "standard inflationary result" states that the final (second crossing, f) amplitudes of quantities describing density perturbations are related to the initial (first crossing, i) values of φ and other quantities, according to the evaluation:
\left(\frac{\delta\rho}{\rho}\right)_f \approx (h_S)_f \approx \left(\frac{\delta\rho}{\rho}\right)_i \left(\frac{H^2}{\dot\varphi}\right)_i \approx \left(\frac{V^{3/2}}{V_{,\varphi}}\right)_i \approx \frac{H_i}{\sqrt{1-n}}.    (6)

Since the common factor of quantum-mechanically generated perturbations is H_i, one could have expected that the amplitudes of density perturbations, like those of gravitational waves, are determined by H_i and are therefore roughly equal to them. However, according to the "standard inflationary result", this is very far from being the case. The denominator of the last term of Eq. (6) contains a new factor: \sqrt{1-n}. This factor goes to zero in the limit of the most interesting and observationally preferred possibility of the flat (Harrison-Zeldovich-Peebles) primordial spectrum n = 1. Correspondingly, the amplitudes of the generated density perturbations go to infinity, according to the prediction of inflationary theorists, in the limit of the flat spectrum. (By now, the "standard inflationary result" (6) has been cited, used, praised, reformulated, popularised, etc. in hundreds of inflationary publications, so it has become 'accepted by way of repetition'.) As will be demonstrated below, the divergence in Eq. (6) is not a violation, suddenly descending upon us from the 'blue sky', of the adopted approximation of small linear perturbations. This is a manifestation of the incorrect theory. Even if the spectral index n is not very close to 1, and you combine n with a reasonable H_i in order to obtain, for example, a small number 10^{-5} for the r.h.s. of Eq. (6), this will not make your theory correct. This will be just an acceptable number accidentally following from the wrong formula. You will have to pay a heavy price in some other places.

An attempt to derive physical conclusions from this formula can only lead to mistakes. The current literature is full of incorrect far-reaching physical conclusions derived from this wrong theory. This is a kind of situation which L. D. Landau used to describe sarcastically in the following words: "If you assume that the derivative of the function sin x is ln x, rather than cos x, you can make many wonderful discoveries...".

In inflationary literature, the 'zero in the denominator' factor \sqrt{1-n} appears in many different dresses. It is often written in equivalent forms, such as (\dot\varphi/H)_i, (V_{,\varphi}/V)_i, (H_{,\varphi}/H)_i, \sqrt{1+w_i}, etc.
In other words, i nstead ofbei ng horri ed by the fact that thei r theory predi ctsarbi trari l y l arge am pl i tudesofdensi ty perurbati ons(and,hence,the theory i s i n com pl ete di sagreem ent w i th observati ons,because the anal ysi s ofthe data show s no catastrophi c i ncrease i n the am pl i tude w hen the tested spectrali ndex approaches n = 1),supporters ofthe i n ati onary approach to sci ence system ati cal l y cl ai m that thei r theory i s i n ' spectacul ar agreem ent' w i th observati ons,and i t i s gravi tati onalwaves that shoul d vani sh.
Ifthi s were true,there woul d not be m uch sense i n attem pti ng to detect pri m ordi algravi tati onalwaves,asthe observati onspersi stentl y poi nttoward n 1,i ncl udi ng n = 1. It i s qui te com m on to hear these days enthusi asti c prom i sesofi n ati onary bel i eversto detect\i n ati onary gravi tati onalwaves" i n the \not-so-di stant future" vi a the m easurem ent ofB -m ode pol ari sati on ofC M B.Butfrom otherpapersofthesam eauthorsi tfol l ow sthattherei sno reason even to try. Ifyou trust and ci te i n ati onary form ul as,the expected am ount of\i n ati onary gravi tati onalwaves" shoul d be very sm al lor zero. You can onl y hope to be extrem el y l ucky i fyou suggestto detectthem ,even i n the qui te di stantfuture,forexam pl e w i th the proposed m i ssi on cal l ed Bi g Bang O bserver. A nd nobody shoul d be surpri sed i fyou have found nothi ng, because n = 1 i s i n the heart of al lcl ai m s, theoreti cal and observati onal . M oreover,m ostl oyali n ati oni stswoul d say thatthi swasexactl y w hatthey had al ways been predi cti ng.
To dem onstratethei ncorrectnessofi n ati onary concl usi ons,weshal lnow concentrateon the' zero i n thedenom i nator'factor.W ew i l lhaveto recal lthe quanti sati on procedure forgravi tati onalwaves and densi ty perturbati ons. It i snecessary to rem i nd the readerthatsom e i n ati oni stsand thei rsupporters i nsi sted form any yearson the cl ai m thatthe dram ati c di erence i n the nal num eri calval ues of h T and h S ari ses not because of the i ni ti alcondi ti ons, but because ofthe subsequent evol uti on. Speci cal l y,they cl ai m ed thatthe cl assi call ong-wavel ength ' scal ar'm etri c perturbati ons are capabl e of experi enci ng, i n contrast to gravi tati onal waves,a \bi g am pl i cati on duri ng reheati ng". (For a cri ti caldi scussi on,see [ 31] . ) But i t now l ooks as i fthe fal l acy ofthi s proposi ti on has becom e cl ear even to i ts m ost ardent proponents. T herefore, we shal lnow focus on the i ssue ofquantum m echani cs and i ni ti alcondi ti ons.
T he perturbed gravi tati onal el d for al lthree sorts ofcosm ol ogi calperturbati ons (scal ar,vector,tensor) i s descri bed by Eq. (1). For si m pl i ci ty,we are consi deri ng spati al l y at cosm ol ogi es,w hose spati alcurvature radi us i s i n ni te. H owever, i f the spati alcurvature radi us i s ni te but, say, onl y a factor of10 l onger than l H ,very l i ttl e w i l lchange i n our anal ysi s.
T he m etri c perturbati ons h ij ( ;x) can be expanded over spati alFouri er harm oni cs l abel ed by the wavevector n:
h ij ( ;x)= (7) C (2 ) 3=2 Z 1 1 d 3 n
In the distant past, at times near η_0, and before η_i when a given mode entered the superadiabatic regime, the g.w. amplitude behaved according to the law

h(\eta) \propto \frac{1}{a(\eta)}\, e^{-in(\eta - \eta_0)}.
The time-derivative of a(η) can be neglected, i.e. a'/a ≪ n. Then, we promote q and p to the status of quantum-mechanical operators, denote them by bold-face letters, and write down their asymptotic expressions:
q = \sqrt{\frac{\hbar}{2}} \sqrt{\frac{a_0}{a}} \left[ c\, e^{-in(\eta-\eta_0)} + c^\dagger e^{in(\eta-\eta_0)} \right],    (15)

p = i \sqrt{\frac{\hbar}{2}} \sqrt{\frac{a}{a_0}} \left[ -c\, e^{-in(\eta-\eta_0)} + c^\dagger e^{in(\eta-\eta_0)} \right].    (16)
The commutation relationships for the q, p operators, and for the annihilation and creation operators c, c†, are [q, p] = iħ and [c, c†] = 1. The mean-square values in the vacuum state are

\langle 0| q^2 |0\rangle = \langle 0| p^2 |0\rangle = \frac{\hbar}{2}, \qquad \Delta q\, \Delta p = \frac{\hbar}{2}.
The root-mean-square value of q in the vacuum state is q_rms = \sqrt{\hbar/2}. Combining this number with the definition (12) we derive
h_{rms} = \langle 0| h^2 |0\rangle^{1/2} = \sqrt{2}\,(2\pi)^{3/2}\, \frac{l_{Pl}}{\lambda_0}.    (17)
Extrapolating the initial time η_0 up to the boundary between the adiabatic and superadiabatic regimes at η = η_i, we derive the estimate h_rms ≈ l_{Pl}/λ_i. It is this evaluation that was used in [2] and in Sec. 1. More accurate calculations along these lines produce C = \sqrt{16\pi}\, l_{Pl} in expression (7) for gravitational waves.
A consistent formal derivation of the total Hamiltonian, including the terms describing interaction of the oscillator with the external field, is presented in Ref. [34]; for S-perturbations the dynamical variable is ζ, which satisfies Eq. (19). To make contact with earlier work, it should be mentioned that the previously introduced quantity

\zeta_{BST} = \frac{2}{3}\, \frac{(a/a')\,\Phi' + \Phi}{1+w} + \Phi,

where Φ is Bardeen's potential and BST stands for Bardeen, Steinhardt, Turner [35], can be reduced to our variable (20) up to the numerical coefficient (1/2). Our quantity for density perturbations can also be related to the variable u_CLMS, where CLMS stands for Chibisov, Lukash, Mukhanov, Sasaki [36,37,38].

In preparation for quantisation, we should first identify the inflationary 'zero in the denominator' factor. The unperturbed Einstein equations for the coupled system of gravitational and scalar fields require [33]

\kappa\, (\varphi_0')^2 = 2\gamma \left(\frac{a'}{a}\right)^2,

which can be regarded as the definition of the function γ(η). Therefore, the 'zero in the denominator' factor is

\left(\frac{\dot\varphi_0}{H}\right)_i = \sqrt{\frac{2}{\kappa}}\, \left(\sqrt{\gamma}\right)_i.
The application of the substitutions a → ã = a\sqrt{\gamma}, h → ζ to the g.w. Lagrangian (13) gives rise to the Lagrangian L_dp for the single dynamical degree of freedom describing S-perturbations:
L_{dp} = \frac{1}{2n} \left(\frac{a\sqrt{\gamma}}{a_0\sqrt{\gamma_0}}\right)^2 \left( \zeta'^2 - n^2 \zeta^2 \right).    (22)
Obviously, the Euler-Lagrange equation

\zeta'' + 2\, \frac{(a\sqrt{\gamma})'}{a\sqrt{\gamma}}\, \zeta' + n^2 \zeta = 0,    (23)

derivable from the lagrangian (22), coincides with Eq. (19).

We shall start with the analysis of the paper [39] which, together with Ref. [40], is sometimes referred to as the most recent work that contains a rigorous mathematical derivation of the "standard inflationary result". The author of these papers uses slightly different notations, such as a^2 = e^{2\rho} and φ = ϕ. In his notation, the quantity \dot\varphi_0/H is \dot\phi/\dot\rho, so that the 'zero in the denominator' factor appears as (\dot\phi/\dot\rho)_*, where the asterisk means "the time of horizon crossing".
As a "useful example to keep in mind" for quantisation of density perturbations, the author suggests the artificial model of a test massless scalar field f in deSitter space. But the Lagrangian, classical solutions, and quantisation procedure for f are identical to the g.w. case that we recalled above, so that his variable f is our h for gravitational waves. His Lagrangian (2.12) for density perturbations coincides in structure with our Lagrangian (22), and we discuss one and the same observable quantity ζ.
It is worthwhile to quote explicitly the attempted rigorous proof [39]: "Since the action (2.12) also contains a factor \dot\phi/\dot\rho we also have to set its value to the value at horizon crossing, this factor only appears in normalizing the classical solution. In other words, near horizon crossing we set

f = \frac{\dot\phi}{\dot\rho}\, \zeta,

where f is a canonically normalized field in de-Sitter space. This produces the well known result...". And the author immediately writes down the square of the "standard inflationary result", with the square of the factor \dot\phi/\dot\rho in the denominator of the final expression.

Let us try to traverse in practice the path to the "well known result". (To be fair to the author, the derivation of the "standard inflationary result" does not appear to be the main purpose of his paper [39], so my criticism does not imply anything about other statements of that paper.) The factor \dot\phi/\dot\rho in (2.12) of the cited paper is our factor \sqrt{\gamma} in Eq. (22). It is recommended [39] to combine the results for the g.w. variable h with the prescription \zeta = \frac{1}{\sqrt{\gamma}}\, h. So, instead of Eq. (15), we would have to write
q = \zeta = \sqrt{\frac{\hbar}{2}} \sqrt{\frac{\tilde a_0}{\tilde a}}\, \frac{1}{\sqrt{\gamma}} \left[ b\, e^{-in(\eta-\eta_0)} + b^\dagger e^{in(\eta-\eta_0)} \right].    (24)
The canonically conjugate momentum seems to be
p = \frac{\partial L}{\partial \zeta'} = \frac{1}{n} \left(\frac{\tilde a}{\tilde a_0}\right)^2 \zeta'.    (25)
The time derivative of γ should be neglected, as γ is either a constant or a slowly changing function at times near η_0. Therefore, we would have to write, instead of Eq. (16), the following relationship:

p = i \sqrt{\frac{\hbar}{2}} \sqrt{\frac{\tilde a}{\tilde a_0}}\, \sqrt{\gamma} \left[ -b\, e^{-in(\eta-\eta_0)} + b^\dagger e^{in(\eta-\eta_0)} \right],    (26)

in which the 'zero in the denominator' factor \sqrt{\gamma} is manifestly present and squared, as the "well known result" prescribes.
In the limit of very small \sqrt{\gamma} one obtains the divergence of initial amplitudes, which is in the heart of all inflationary predictions. (In the published version [40] of the e-paper [39], the road to the "well known result" recommends, possibly due to a misprint, the diametrically opposite prescription \zeta = \frac{\dot\phi}{\dot\rho}\, f, which would send the factor γ to the numerator of the above calculation. It looks as though the 'rigorous' inflationary predictions fluctuate between zero and infinity.) In inflationary literature, the power spectrum P_R(k) of curvature perturbations is usually written in the form of the same divergent combination, P_R^{1/2}(k) \sim \left(H^2/2\pi\dot\varphi\right)_{k=aH}.    (27)

The calculation of the mean-square value of q in the state |0_s⟩ annihilated by b gives ⟨0_s| q^2 |0_s⟩ = (ħ/2)(1/γ_0), while the mean-square value of the canonically conjugate momentum p gives

\langle 0_s| p^2 |0_s\rangle = \frac{\hbar}{2}\, \gamma_0,

so that the factor γ cancels out in the uncertainty relation

\Delta q\, \Delta p = \frac{\hbar}{2}.
To show how the states j 0i and j 0 s i are rel ated,we shal l rst transform the operators. Letusi ntroduce the anni hi l ati on and creati on operatorsc;c y accordi ng to the Bogol i ubov transform ati on
b = uc + vc y ; b y = u c y + v c;(28)
w here u = cosh r; v = e i2 si nh r:
T he param eters r and are cal l ed squeeze param eters. Let us assi gn the fol l ow i ng val ues to r and : e 2r = ; = n( 0 ) or e 2r = ; = n( 0 )+ 2 :
W e shal lnow use the substi tuti on (28),togetherw i th Eqns (29)and (30),to Eqns (24) and (26). T he factor 1= p cancel s out i n Eq. (24) and the factor p cancel s outi n Eq. (26). In term s ofc;c y ,the operators q;p w i l ltake the nalform :
q = s h 2ã 0 a h ce in( 0 ) + c y e in( 0 ) i ;(31)p = i s h 2ã a 0 h ce in( 0 ) + c y e in( 0 ) i :(32)
The genuine vacuum state for the variable ζ (i.e. the ground state of the corresponding Hamiltonian) is defined by the condition c|0⟩ = 0. Calculating the mean square values of q and its canonically conjugate momentum p, we find

\langle 0| q^2 |0\rangle = \langle 0| p^2 |0\rangle = \frac{\hbar}{2}, \qquad \Delta q\, \Delta p = \frac{\hbar}{2},

as it should be.
Taki ng i nto account the de ni ti on (21),we nal l y deri ve the i ni ti alrm s val ue ofthe vari abl e = =a p : rm s = h0j 2 j 0i T hi s eval uati on,pl us the constancy ofthe quanti ty =a p throughout the l ong-wavel ength regi m e,i sthefoundati on oftheresul taccordi ng to w hi ch the nal(at the end ofthe l ong-wavel ength regi m e) am pl i tudes ofgravi tati onal waves and densi ty perturbati ons shoul d be roughl y equalto each other [ 33] .
T here i s no di m ensi onal param eter w hi ch coul d be regul ated i n such a way as to m ake one of the am pl i tudes several orders of m agni tude l arger than another. In term s of the ' scal ar'and ' tensor'm etri c am pl i tudes thi s m eansthath T =h S 1 foral l ' s. M ore accurate cal cul ati onsal ong the sam e l i nes produce C = p 24 l P l i n expressi on (7) for densi ty perturbati ons. C ertai nl y, the correct quanti sati on procedure (31), (32), as opposed to the i ncorrect (i n ati onary) procedure (24), (26),coul d be form ul ated from the outset ofquanti sati on. M athem ati cal l y,the Lagrangi ans (13) and (22) are al i ke,i fi n (13) one m eansã by a,and repl aces h w i th .
The derivation of the Hamiltonian for S-perturbations repeats the derivation for gravitational waves. Using the canonical pair $q = \zeta$, $p = \partial L/\partial \zeta'$ for S-perturbations, we arrive at the Hamiltonian (compare with Eq. (98) in Ref. [33])

$$H(\eta) = n\,c^{\dagger}c + \sigma^{*}(\eta)\,c^{\dagger 2} + \sigma(\eta)\,c^{2},$$

where the coupling to the external field is now given by the function $\sigma(\eta) = (i/2)(\tilde{a}'/\tilde{a})$.

The Heisenberg equations of motion for the Heisenberg operators $c(\eta)$, $c^{\dagger}(\eta)$ lead to classical equation (19). The asymptotic expressions for the Heisenberg operators are the same as in the g.w. case. In terms of the classical mode functions, it is the function $\mu/(a\sqrt{\gamma})$ that should satisfy the classical version of the initial conditions (31), and not the function $\mu/a$, which is postulated by the inflationary requirement (27). They both are so-called 'gauge-invariant' variables, but their physical meaning is drastically different. The original derivations of the "well-known result" were guided simply by the visual analogy between the function $u$ in the theory of density perturbations and the function $\mu$ in the theory of gravitational waves already developed by that time.

The assumption of arbitrarily large initial amplitudes of curvature perturbations or, technically speaking, the choice of the initial multi-particle squeezed vacuum state $|0_s\rangle$ for $\zeta$, instead of the ordinary vacuum state $|0\rangle$, is the origin of the absurd "standard inflationary result". Certainly, this wrong assumption cannot be the basis of observational predictions for cosmology.
Conclusions
The grossly incorrect predictions of inflationary theorists should not be the reason for doubts about the existence and expected amount of relic gravitational waves. The generation of relic gravitational waves is based on the validity of general relativity and quantum mechanics in a safe cosmological regime where quantisation of the background gravitational field is not necessary.

In our numerical evaluations, we also assumed that the observed large angular-scale anisotropies of CMB are caused by cosmological perturbations of quantum-mechanical origin. This is not necessarily true, but it would be quite disastrous if it proved to be untrue.

It is quite a challenge to imagine that the natural and unavoidable quantum-mechanical generation of cosmological perturbations is less effective than anything else. In any case, if relic gravitational waves are not discovered at the (relatively high) level described in this contribution, the implications will be much more serious than the rejection of one inflationary model or another. The reality of our time is such that if the proposal is not properly 'sexed-up', it is not very likely to be funded. But the ultimate truth lies in the fact that the real physics of the very early Universe is much more exciting than the artificial hullaballoo over popular words such as 'inflation' or 'inflationary gravitational waves'.

Hopefully, relic gravitational waves will be discovered in experiments which are already in the well-developed stage. I personally would think that this is likely to happen first in dedicated ground-based observations, such as the recently approved Cardiff-Cambridge-Oxford collaboration CLOVER [41]. Let us hope this will indeed be the case.
Acknowledgments. I am grateful to D. Baskaran, J. Romano and especially M. Mensky for fruitful discussions and help, and to P. Steinhardt for calling my attention to the paper [40] and for the accompanying intense and useful correspondence.
where $l_o$ and $\beta$ are constants. Then, the perturbed Einstein equations for $h_{ij}(\eta, x)$ simplify and can be solved in elementary functions. In particular, the intervals of power-law evolution (2) make tractable the effective 'potential barrier' $a''/a$ in the gravitational wave (g.w.) equation.

Figure 1: Envelope of the $h_{rms}(\nu)$ spectrum.

The numerical value of $h_{rms}$ at frequencies around $\nu_H$ is determined by the numerical value of the observed quadrupole anisotropy of CMB. As will be shown in great detail in Sec. 4, it follows from the theory of cosmological perturbations that relic gravitational waves should provide a significant fraction of the observed CMB signal at very large angular scales (barring the logical possibility that the observed anisotropies have nothing to do at all with cosmological perturbations of quantum-mechanical origin).

Figure 2: S3 LIGO noise curves and the expected sensitivity.

Nevertheless, the graph in Fig. 1 is convenient in that it gives simple answers to the most general questions on the amplitudes and spectral slopes of relic gravitational waves in various frequency intervals. For example, it shows the expected amplitude $h_{rms} = 10^{-25}$ at $\nu = 10^{2}$ Hz. This is the level of the signal that we shall be dealing with in experimental programs. In terms of the parameter $\Omega_{gw}(\nu)$, ...

3. Indirect detection of relic gravitational waves via CMB anisotropies and polarisation

The expected amplitudes of relic gravitational waves reach their highest level in the frequency interval of $10^{-18} - 10^{-16}$ Hz. This is why one has very good prospects for indirect detection of relic gravitational waves through the measurements of anisotropies in the distribution over the sky of the CMB temperature and polarisation. (For an introduction to the theoretical tools of CMB physics, see for example [21].) The accurately calculated power spectrum $h^{2}_{rms}(n)$ is shown in Fig. 5 [22]. The spectrum is calculated at the moment of decoupling (recombination) of the CMB, with the redshift of decoupling at $z_{dec} = 1100$. The derivation of the spectrum takes into account the quantum-mechanical squeezing of the waves' phases, which manifests itself macroscopically in the standing-wave character of the generated gravitational waves. From the viewpoint of the underlying physics, it is this inevitable quantum-mechanical squeezing that is responsible for the oscillations in the power spectrum.

Figure 4 / Figure 7: Power spectrum of the first derivative of the g.w. metric.

Fig. 7 and Fig. 6 are in a good correspondence with each other. If there is not much power in the first time-derivative of the metric, you should not expect much power in the polarisation at the corresponding angular scale. On the other hand, the region of wavenumbers $n \approx 90$, where there is the first pronounced peak in Fig. 7, is fully responsible for the first pronounced peak in Fig. 6 at the corresponding angular scales $l \approx 90$.

Figure 8: Expected numerical level of anisotropy and polarisation induced by relic gravitational waves ($l(l+1)C_l^{TT}$, $l(l+1)C_l^{EE}$, $l(l+1)C_l^{BB}$, $l(l+1)C_l^{TE}$).

In Fig. 8 we combine together some of the expected signals from relic gravitational waves. They are encoded in the CMB anisotropies and polarisation. This figure includes also a possible polarisation bump, discussed previously by other authors, at very small $l$'s. This feature arises because of the extended reionisation period in the relatively late universe, around $z_{rei} \approx 17$. In agreement with the explanations given above, the amplitude and position of this bump in the $l$-space are determined by the amplitude and position of the first maximum in the power spectrum $(h'/n)^2$ of the function $h'_{ij}(\eta, x)$, calculated at $z_{rei}$. The resulting graphs in Fig. 6 and Fig. 8 are qualitatively similar to the graphs derived by other authors before us. However, we take the responsibility of claiming that the numerical level of, say, the B component of polarisation shown in our graphs is what the observers should expect to see on the sky. Of course, this statement assumes that the observed large-scale anisotropies of CMB are caused by cosmological perturbations of quantum-mechanical origin, and not by something else. The true level of the B signal can be somewhat higher or somewhat lower than the theoretical level shown in our figures. But the signal cannot be, say, several orders of magnitude lower than the one shown on our graphs. In contrast, the inflationary literature claims that the amount of "inflationary gravitational waves" vanishes in the limit of the flat primordial spectrum $\beta = -2$ ($n = 1$). Therefore, the most likely level of the B mode signal produced by "inflationary gravitational waves" is close to zero. This would make the detection impossible in any foreseeable future. It is a pity that many of our experimental colleagues, being guided by the wrong theory, are accepting their defeat even before having started to build instruments aimed at detecting relic gravitational waves via the B component of polarisation. Their logic seems to be the following: 'we would love to discover the fundamentally important relic gravitational waves, but we were told by inflationists many times that this is very unlikely to happen, so we agreed to feel satisfied even if we succeed only in putting some limits on, say, polarisation properties of dust in the surrounding cosmos'. The author of this contribution fears that in a complex experiment like the B-mode detection, this kind of logic can only lead to overlooking the important signal that the experiment originally targeted.
4. The false "standard inflationary result". How to correctly quantise a cosmological harmonic oscillator

Why bother about relic gravitational waves if inflationists claim that the amount of relic gravitational waves (inflationists and followers call them "inflationary gravitational waves") should be zero or almost zero? This claim is a direct consequence of the so-called "standard inflationary result", which is the main contribution of inflationary theorists to the subject of practical, rather than imaginary, cosmology.

In the inflationary scenario, the 'initial' era of the universe expansion is driven by a scalar field $\varphi$ with the scalar field potential $V(\varphi)$. It is in this era that the initial quantum vacuum conditions for cosmological perturbations are being formulated. The inflationary solutions for the scale factor $a(\eta)$ are close to the de Sitter evolution characterised by $\beta = -2$ in Eq. (2). The effective equation of state for the scalar field is always $\epsilon + p \geq 0$, so that for the power-law intervals of expansion driven by the scalar field, the parameter $\beta$ can only be $\beta \leq -2$, see Eq. (4). Therefore, one expects the primordial spectrum of the generated metric perturbations to be almost flat, i.e. the primordial spectral index $n$ should be close to $n = 1$, with $n \leq 1$. The beginning of the amplifying superadiabatic regime for the given mode of perturbations is often called the 'first Hubble radius crossing', while the end of this regime for the given mode is often called the 'second Hubble radius crossing'. The "standard inflationary result" is formulated for cosmological perturbations called density perturbations (scalar, S, perturbations) as opposed to the gravitational waves (tensor, T, perturbations) considered in Sec. 1.

The numerator of the last term on the r.h.s. of Eq. (6) is the value of the Hubble parameter taken at the moment of time when the given mode enters the superadiabatic regime. This is the same quantity $H_i$ which defines the g.w. ('tensor') metric amplitude, as described in Sec. 1. Since we are supposed to start with the initial vacuum quantum state for all cosmological perturbations, one would expect that the results for density perturbations should be similar to the results for gravitational waves. One would expect that the amplitude $h_S$ of the generated 'scalar' metric perturbations should be finite and small, and of the same order of magnitude as the amplitude $h_T$ of 'tensor' metric perturbations.

The three sorts of cosmological perturbations are different in that they have three different sorts of polarisation tensors $\,^{s}p_{ij}(n)$, and each of them has two different polarisation states $s = 1, 2$. The 'scalar' and 'vector' metric perturbations are always accompanied by perturbations in density and/or velocity of matter. The normalisation constant $C$ is determined by quantum mechanics, and the derivation of its value is one of the aims of our discussion.

Let us recall the procedure of quantisation of gravitational waves. Let us consider an individual g.w. mode $n$. The time-dependent mode functions $\,^{s}h_n(\eta)$ can be written as ... For each $s$ and $n$, the g.w. mode functions $\mu(\eta)$ satisfy the familiar equation (3). The action for each mode has the form ..., where the g.w. Lagrangian $L$ is given by the expression [32] ... The dimensionless g.w. variable $h = \mu/a$ brings us to the equation ..., which is equivalent to Eq. (3). In order to move from 3-dimensional Fourier components to the usual description in terms of an individual oscillator with frequency $n$, we will be working with the quantity $h$ introduced according to the definition ..., where $a_0$ is a constant. This constant $a_0$ is the value of the scale factor $a(\eta)$ at some instant of time $\eta = \eta_0$ where the initial conditions are being formulated, and $\lambda_0 = 2\pi a_0/n$. In terms of $h$, the Lagrangian (10) takes the form ... The quantity $h = q$ is the 'position' variable, while the canonically conjugate 'momentum' variable $p$ ...

The initial vacuum state $|0\rangle$ is defined by the condition $c\,|0\rangle = 0$. This is indeed a genuine vacuum state of a simple harmonic oscillator, which gives at $\eta = \eta_0$ the following relationships ...

... by equations (19)-(24) there. Technically, the derivation is based on the canonical pair $q = h$, $p = \partial L/\partial h'$. The Hamiltonian associated with the Lagrangian (13) has the form

$$H(\eta) = n\,c^{\dagger}c + \sigma^{*}(\eta)\,c^{\dagger 2} + \sigma(\eta)\,c^{2}, \qquad (18)$$

where the coupling to the external field is given by the function $\sigma(\eta) = (i/2)(a'/a)$. In the same Ref. [34] one can also find the Heisenberg equations of motion for the Heisenberg operators $c(\eta)$, $c^{\dagger}(\eta)$, and their connection to classical equation (3). The asymptotic expressions for the Heisenberg operators,

$$c(\eta) = c\,e^{-in(\eta-\eta_0)}, \qquad c^{\dagger}(\eta) = c^{\dagger}e^{in(\eta-\eta_0)},$$

enter into formulas (15), (16). Clearly, the vacuum state $|0\rangle$, defined as $c(\eta)|0\rangle = 0$, minimises the oscillator's energy (18). A rigorous quantum-mechanical Schrodinger evolution of the initial vacuum state of cosmological perturbations transforms this state into a strongly squeezed (multi-particle) vacuum state [32], but we focus here only on the initial quantum state, which defines the quantum-mechanical normalisation of our classical mode functions.

We shall now switch to density perturbations. For each mode $n$ of density perturbations (S-perturbations), the mode's metric components $h_{ij}$ entering Eq. (1) can be written as

$$h_{ij} = h(\eta)\,Q\,\delta_{ij} + h_l(\eta)\,n^{-2}Q_{,ij},$$

where the spatial eigen-functions $Q$ are $Q = e^{inx}$. Therefore, the metric components associated with density perturbations are characterised by two polarisation amplitudes: $h(\eta)$ and $h_l(\eta)$. If the initial era is driven by an arbitrary scalar field $\varphi$, there appears a third unknown function: the amplitude $\varphi_1(\eta)$ of the scalar field perturbation, $\varphi = \varphi_0(\eta) + \varphi_1(\eta)\,Q$. One often considers the so-called minimally coupled to gravity scalar field $\varphi$, with the energy-momentum tensor

$$T_{\mu\nu} = \varphi_{,\mu}\varphi_{,\nu} - g_{\mu\nu}\left[\tfrac{1}{2}g^{\alpha\beta}\varphi_{,\alpha}\varphi_{,\beta} + V(\varphi)\right].$$

The coupling of scalar fields to gravity is still a matter of ambiguity, and the very possibility of quantum-mechanical generation of density perturbations relies on an extra hypothesis, but we suppose that we were lucky and the coupling was such as we need. The three unknown functions $h(\eta)$, $h_l(\eta)$, $\varphi_1(\eta)$ should be found from the perturbed Einstein equations augmented by the appropriate initial conditions dictated by quantum mechanics. (It is important to note that inflationary theorists are still struggling with integration of the above-given equation for $h_l'$.)

The function $\mu$ does not depend on this remaining coordinate freedom, and the constant $C_i$ cancels out in the expression defining $\mu(\eta)$ in terms of $h(\eta)$:

$$\frac{\mu}{a\sqrt{\gamma}} = h - \frac{H}{H'}\,h'.$$

The function $\mu/(a\sqrt{\gamma})$ is that part of the scalar metric amplitude $h(\eta)$ which does not depend on the remaining coordinate freedom ('gauge-invariant' metric perturbation). In the short-wavelength regime, the function $\mu$ describing density perturbations behaves as $\mu \propto e^{\pm in\eta}$. This is the same behaviour as in the case of the function $\mu$ describing gravitational waves. This similarity between the respective functions ($\mu_T$ and $\mu_S$) is valid only in the sense of their asymptotic $\eta$-time dependence, but not in the sense of their overall numerical normalisation (see below). In the long-wavelength regime, the dominant solution to Eq. (19) is $\mu \propto a\sqrt{\gamma}$. The quantity which remains constant in this regime is $\mu/(a\sqrt{\gamma})$. It is this physically relevant variable that takes over from the analogous variable $h = \mu/a$ in the g.w. problem. We introduce the notation

$$\frac{\mu}{a\sqrt{\gamma}} = \zeta.$$

The difficulty is expressed in the form of very small values of the dimensionless function $\sqrt{\gamma}$. Within the approximation of power-law scale factors (2), the function $\gamma$ reduces to a set of constants. The constant degenerates to zero in the limit of the evolution law with $\beta = -2$; that is, in the limit of the gravitational pump field which is responsible for the generation of primordial cosmological perturbations with flat spectrum $n = 1$. So, we are especially interested in the very small values of $\sqrt{\gamma}$.

It was shown [33] that the dynamical problem for the scalar-field-driven S-perturbations can be obtained from the dynamical problem for gravitational waves by simple substitutions: $a(\eta) \to \tilde{a}(\eta) \equiv a(\eta)\sqrt{\gamma(\eta)}$, $\mu_T(\eta) \to \mu_S(\eta)$. (This is not a conjecture, but this is a rule whose validity was established after a thorough analysis of these two problems separately.) Each of these substitutions is valid up to an arbitrary constant factor. Using these substitutions, one obtains the S-equation (19) from the T-equation (3), and one obtains the physically relevant variable $\zeta = \mu_S/(a\sqrt{\gamma})$ for S-perturbations from the g.w. variable $h = \mu_T/a$. Moving from the 3-dimensional Fourier components of the field $\zeta$ to an individual oscillator with frequency $n$, we introduce the quantity $\zeta$ according to the same rule (12) that was used when we introduced $h$. Namely ...

This equation, in terms of the independent variable $\zeta$, is equivalent to Eq. (19), which is the Euler-Lagrange equation derivable from the Lagrangian (22) in terms of the independent variable $\mu_S$. The Lagrangian (22) should be used for quantisation. The Lagrangian itself, as well as the action and the Hamiltonian, does not degenerate in the limit $\gamma \to 0$, i.e., in the limit of the most interesting background gravitational field in the form of the de-Sitter metric, $\gamma = 0$.

One is encouraged and tempted to think that the quantum state $|0_s\rangle$, annihilated by $b$, namely $b\,|0_s\rangle = 0$, is the vacuum state of the field $\zeta$, i.e., the ground state of the Hamiltonian associated with the Lagrangian (22). The calculation of the mean square value of $\zeta$ at $\eta = \eta_0$ produces the result $\langle 0_s|\,q^2\,|0_s\rangle = (\hbar/2)(1/\gamma)$.

Evolving from the initial time $\eta_0$ up to the boundary between the adiabatic and superadiabatic regimes at $\eta = \eta_i$, we arrive at the estimate

$$P_R(k) = \ldots$$

Let us now turn to the basic equations for density perturbations. In inflationary papers, you will often see equations containing complicated combinations of metric perturbations mixed up with the unperturbed and/or perturbed functions of the scalar field $\varphi$ and $V(\varphi)$. Inflationists are still engaged in endless discussions on the shape of the scalar field potential $V(\varphi)$ and what it could mean for countless inflationary models. However, this state of affairs is simply a reflection of the fact that the equations have not been properly transformed and simplified. Since the underlying physics is the interaction of a cosmological harmonic oscillator with the gravitational pump field, mathematically the equations should reveal this themselves. And indeed they do. It was shown in paper [33] that, for any potential $V(\varphi)$, there exists only one second-order differential equation to be solved:

$$\mu'' + \mu\left[n^2 - \frac{(a\sqrt{\gamma})''}{a\sqrt{\gamma}}\right] = 0, \qquad (19)$$

where the function $\mu(\eta)$ represents the single dynamical degree of freedom describing S-perturbations. The effective potential barrier $(a\sqrt{\gamma})''/(a\sqrt{\gamma})$ depends only on $a(\eta)$ and its derivatives, in full analogy with the g.w. oscillator, Eq. (3). The time-dependent function $\gamma$ ($\gamma(\eta)$ or $\gamma(t)$) is defined by ... As soon as the appropriate solution for $\mu(\eta)$ is found, all three functions describing density perturbations are easily calculable. The constant $C_i$ reflects the remaining coordinate freedom within the class of synchronous coordinate systems. (Another constant comes out from the $k^3$ normalisation; $u_k$ are the mode-functions ($u_k = \mu_n$ in our notations) satisfying Eq. (19) with the initial conditions (34).)

Since at times near $\eta_0$ the coefficients $a/a_0$ and $\tilde{a}/\tilde{a}_0$ are close to 1, the equality of the initial values for $h_{rms}$ and $\zeta_{rms}$ follows already from the simple comparison of the Lagrangians (13) and (22).

The relationship between the above-mentioned genuine vacuum state $|0\rangle$ and the squeezed vacuum state $|0_s\rangle$ is determined by the action of the squeeze operator $S(r, \theta)$ on $|0\rangle$: $|0_s\rangle = S(r, \theta)\,|0\rangle$, where

$$S(r, \theta) = \exp\left[\tfrac{1}{2}r\left(e^{-i2\theta}c^{2} - e^{i2\theta}c^{\dagger 2}\right)\right].$$

The mean number of quanta in the squeezed vacuum state is given by

$$\langle 0_s|\,c^{\dagger}c\,|0_s\rangle = \sinh^{2}r = \frac{(1-\sqrt{\gamma})^{2}}{4\sqrt{\gamma}}.$$

This is a huge and divergent number when the 'zero in the denominator' factor $\sqrt{\gamma}$ goes to zero. Therefore, the "standard inflationary result" for S-perturbations is based on the wrong initial conditions, according to which the initial amplitude of the $\zeta$-perturbations can be arbitrarily large from the very beginning of their evolution. Moreover, the initial amplitude is assumed to go to infinity in the most interesting limit of $\sqrt{\gamma} \to 0$ and $n \to 1$. If $\sqrt{\gamma}$ does not deviate from 1 too much, then the mean number of quanta in the squeezed vacuum state is acceptably small, and the wrong initial conditions give results sufficiently close to the correct ones. However, as in the Landau example mentioned above, if the wrong formula gives acceptable answers for some range of $x$, this does not make the wrong theory a correct one. (Finally, if $\sqrt{\gamma} = 1$, then $a(t) \propto t$, $a(\eta) \propto e^{\eta}$, $w = -1/3$. From this model of cosmological evolution the study of relic gravitational waves has begun in the first paper of Ref. [2].)
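The divergence just described is easy to check numerically. The short sketch below (not part of the original paper) assumes the identification $e^{2r} = 1/\sqrt{\gamma}$ used in the reconstruction above; it verifies that the Bogoliubov coefficients (29) satisfy $|u|^2 - |v|^2 = 1$ and that the mean quantum number $\sinh^2 r$ reproduces $(1-\sqrt{\gamma})^2/(4\sqrt{\gamma})$, blowing up as $\sqrt{\gamma} \to 0$:

```python
# Numerical check of the squeezed-vacuum bookkeeping (a sketch; the mapping
# e^{2r} = 1/sqrt(gamma) is the reconstruction assumed above, and theta = 0
# suffices since sinh^2 r is phase-independent).
import numpy as np

for gamma in (0.5, 0.1, 1e-2, 1e-4):
    r = -0.25 * np.log(gamma)                 # e^{2r} = gamma^{-1/2}
    u, v = np.cosh(r), np.sinh(r)             # Bogoliubov coefficients (29)
    closed_form = (1.0 - np.sqrt(gamma)) ** 2 / (4.0 * np.sqrt(gamma))
    print(f"gamma={gamma:8.1e}  |u|^2-|v|^2={u**2 - v**2:6.3f}  "
          f"<n>={v**2:12.4f}  formula={closed_form:12.4f}")
```

Either branch of (30) gives the same mean quantum number, since $\sinh^2(-r) = \sinh^2 r$.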
References

[1] Ya. B. Zeldovich, in Magic without Magic: John Archibald Wheeler, ed. J. Klauder (W. H. Freeman, San Francisco, 1972).
[2] L. P. Grishchuk, Zh. Eksp. Teor. Fiz. 67, 825 (1974) [Sov. Phys. JETP 40, 409 (1975)]; Ann. NY Acad. Sci. 302, 439 (1977).
[3] L. P. Grishchuk, Pis'ma Zh. Eksp. Teor. Fiz. 23, 326 (1976) [JETP Lett. 23, 293 (1976)].
[4] A. H. Guth, Phys. Rev. D 23, 347 (1981).
[5] A. D. Sakharov, Zh. Eksp. Teor. Fiz. 49, 345 (1965) [Sov. Phys. JETP 22, 241 (1966)].
[6] A. A. Starobinsky, Pis'ma Zh. Eksp. Teor. Fiz. 30, 719 (1979) [JETP Lett. 30, 719 (1979)].
[7] V. A. Rubakov, M. V. Sazhin, and A. V. Veryaskin, Phys. Lett. B 115, 189 (1982).
[8] R. Fabbri and M. D. Pollock, Phys. Lett. B 125, 445 (1983).
[9] L. F. Abbot and M. B. Wise, Nucl. Phys. B 244, 541 (1984).
[10] V. A. Belinsky, L. P. Grishchuk, I. M. Khalatnikov, and Ya. B. Zeldovich, Phys. Lett. B 155, 232 (1985); Sov. Phys. JETP 62, 427 (1985).
[11] L. P. Grishchuk, in Gyros, Clocks, Interferometers: Testing Relativistic Gravity in Space, eds. C. Lammerzahl, C. W. Everitt, and F. W. Hehl, Lecture Notes in Physics 562 (Springer, 2001), p. 167 (gr-qc/0002035).
[12] L. P. Grishchuk, in Astrophysics Update, ed. J. W. Mason (Springer-Praxis, 2004), p. 281 (gr-qc/0305051).
[13] C. L. Bennett et al., Astroph. J. Suppl. Ser. 148, 1 (2003).
[14] L. Page et al., Astroph. J. Suppl. Ser. 148, 223 (2003).
[15] G. F. Smoot et al., Astroph. J. Lett. 396, L1 (1992); C. L. Bennett et al., Astroph. J. Lett. 464, L1 (1996).
[16] D. Maino et al., Mon. Not. Roy. Ast. Soc. 344, 562 (2003).
[17] B. Greene et al., Extracting new physics from the CMB (astro-ph/0503458).
[18] L. P. Grishchuk and Yu. V. Sidorov, Class. Quant. Grav. 6, L155 (1989).
[19] T. Creighton, Gravitational waves and the cosmological equation of state (gr-qc/9907045).
[20] LIGO website: http://www.ligo.caltech.edu
[21] M. Giovannini, Theoretical tools for CMB physics (astro-ph/0412601).
[22] D. Baskaran, L. P. Grishchuk, and A. G. Polnarev, in preparation.
[23] S. Bose and L. P. Grishchuk, Phys. Rev. D 66, 043529 (2002).
[24] M. J. Rees, Astroph. J. 153, L1 (1968).
[25] M. M. Basko and A. G. Polnarev, Mon. Not. Roy. Astr. Soc. 191, 207 (1980).
[26] A. G. Polnarev, Astron. Zh. 62, 1041 (1985) [Sov. Astron. 29, 607 (1985)].
[27] J. R. Bond and G. Efstathiou, Astroph. J. 285, L45 (1984).
[28] M. Zaldarriaga and U. Seljak, Phys. Rev. D 55, 1830 (1997).
[29] M. Kamionkowski, A. Kosowsky, and A. Stebbins, Phys. Rev. D 55, 7368 (1997).
[30] W. Hu and M. White, New Astronomy 2, 323 (1997) (astro-ph/9706147).
[31] L. P. Grishchuk, Comment on the "Influence of Cosmological Transitions on the Evolution of Density Perturbations" (gr-qc/9801011).
[32] L. P. Grishchuk and Yu. V. Sidorov, Phys. Rev. D 42, 3413 (1990).
[33] L. P. Grishchuk, Phys. Rev. D 50, 7154 (1994).
[34] L. P. Grishchuk, Class. Quant. Gravity 10, 2449 (1993).
[35] J. M. Bardeen, P. J. Steinhardt, and M. S. Turner, Phys. Rev. D 28, 679 (1983).
[36] V. N. Lukash, Pis'ma Zh. Eksp. Teor. Fiz. 31, 631 (1980) [JETP Lett. 31, 596 (1980)].
[37] G. Chibisov and V. Mukhanov, Mon. Not. R. Astr. Soc. 200, 535 (1982).
[38] M. Sasaki, Prog. Theor. Phys. 76, 1036 (1986).
[39] J. Maldacena, Non-gaussian features of primordial fluctuations in single field inflationary models (astro-ph/0210603 v4).
[40] J. Maldacena, JHEP 05 (2003) 013.
[41] A. Taylor et al., in Proceedings of 39th Rencontres de Moriond (Frontiers, 2004).
| [] |
[
"TEA-PSE 3.0: TENCENT-ETHEREAL-AUDIO-LAB PERSONALIZED SPEECH ENHANCEMENT SYSTEM FOR ICASSP 2023 DNS-CHALLENGE",
"TEA-PSE 3.0: TENCENT-ETHEREAL-AUDIO-LAB PERSONALIZED SPEECH ENHANCEMENT SYSTEM FOR ICASSP 2023 DNS-CHALLENGE"
] | [
"Yukai Ju \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Jun Chen \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Shimin Zhang \nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Shulin He \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Wei Rao \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Weixin Zhu \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Yannan Wang \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Tao Yu \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n",
"Shidong Shang \nTencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina\n"
] | [
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Audio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Audio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina",
"Tencent Ethereal Audio Lab\nTencent Corporation\nShenzhenChina"
] | [] | This paper introduces the Unbeatable Team's submission to the ICASSP 2023 Deep Noise Suppression (DNS) Challenge. We expand our previous work, TEA-PSE, to its upgraded version -TEA-PSE 3.0. Specifically, TEA-PSE 3.0 incorporates a residual LSTM after squeezed temporal convolution network (S-TCN) to enhance sequence modeling capabilities. Additionally, the localglobal representation (LGR) structure is introduced to boost speaker information extraction, and multi-STFT resolution loss is used to effectively capture the time-frequency characteristics of the speech signals. Moreover, retraining methods are employed based on the freeze training strategy to fine-tune the system. According to the official results, TEA-PSE 3.0 ranks 1st in both ICASSP 2023 DNS-Challenge track 1 and track 2.Index Terms-Personalized speech enhancement, TEA-PSE, multi-resolution 1. INTRODUCTION Our previous work, the Tencent-Ethereal-Audio-Lab personalized speech enhancement (TEA-PSE) [1], ranked 1st in the ICASSP 2022 Deep Noise Suppression (DNS) Challenge. Building upon this success, we advance the previous model and propose our upgraded system, TEA-PSE 3.0, for this year's DNS Challenge. First, inspired by the derivative operator module in TaylorEnhancer [2], we introduce a residual LSTM after every squeezed temporal convolution network (S-TCN) layer to enhance the sequence modeling capability. Second, we utilize the local-global representation (LGR)[3]structure to boost better speaker information extraction. Third, we adopt the multi-STFT resolution loss function [4] to effectively capture the time-frequency characteristics of the speech signals. Finally, we leverage a more effective three-step training strategy. Specifically, we first train the stage-one model and freeze this model to train the stage-two model. Then we load this pre-trained two-stage model and fine-tune all trainable parameters with the second stage's loss function. Based on the final results, our model achieves 1st place in both headset and non-headset tracks [5].2. PROPOSED METHOD 2.1. TEA-PSE 3.0 network | 10.1109/icassp49357.2023.10096838 | [
"https://export.arxiv.org/pdf/2303.07704v1.pdf"
] | 257,505,356 | 2303.07704 | d13329597c2daad6d1696b01bc0738faf4ffd979 |
TEA-PSE 3.0: TENCENT-ETHEREAL-AUDIO-LAB PERSONALIZED SPEECH ENHANCEMENT SYSTEM FOR ICASSP 2023 DNS-CHALLENGE
14 Mar 2023
Yukai Ju
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
Jun Chen
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Shimin Zhang
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
Shulin He
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Wei Rao
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Weixin Zhu
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Yannan Wang
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Tao Yu
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
Shidong Shang
Tencent Ethereal Audio Lab
Tencent Corporation
ShenzhenChina
TEA-PSE 3.0: TENCENT-ETHEREAL-AUDIO-LAB PERSONALIZED SPEECH ENHANCEMENT SYSTEM FOR ICASSP 2023 DNS-CHALLENGE
14 Mar 2023
This paper introduces the Unbeatable Team's submission to the ICASSP 2023 Deep Noise Suppression (DNS) Challenge. We expand our previous work, TEA-PSE, to its upgraded version -TEA-PSE 3.0. Specifically, TEA-PSE 3.0 incorporates a residual LSTM after squeezed temporal convolution network (S-TCN) to enhance sequence modeling capabilities. Additionally, the localglobal representation (LGR) structure is introduced to boost speaker information extraction, and multi-STFT resolution loss is used to effectively capture the time-frequency characteristics of the speech signals. Moreover, retraining methods are employed based on the freeze training strategy to fine-tune the system. According to the official results, TEA-PSE 3.0 ranks 1st in both ICASSP 2023 DNS-Challenge track 1 and track 2.Index Terms-Personalized speech enhancement, TEA-PSE, multi-resolution 1. INTRODUCTION Our previous work, the Tencent-Ethereal-Audio-Lab personalized speech enhancement (TEA-PSE) [1], ranked 1st in the ICASSP 2022 Deep Noise Suppression (DNS) Challenge. Building upon this success, we advance the previous model and propose our upgraded system, TEA-PSE 3.0, for this year's DNS Challenge. First, inspired by the derivative operator module in TaylorEnhancer [2], we introduce a residual LSTM after every squeezed temporal convolution network (S-TCN) layer to enhance the sequence modeling capability. Second, we utilize the local-global representation (LGR)[3]structure to boost better speaker information extraction. Third, we adopt the multi-STFT resolution loss function [4] to effectively capture the time-frequency characteristics of the speech signals. Finally, we leverage a more effective three-step training strategy. Specifically, we first train the stage-one model and freeze this model to train the stage-two model. Then we load this pre-trained two-stage model and fine-tune all trainable parameters with the second stage's loss function. Based on the final results, our model achieves 1st place in both headset and non-headset tracks [5].2. PROPOSED METHOD 2.1. TEA-PSE 3.0 network
The proposed model maintains the two-stage framework of TEA-PSE [1], consisting of MAG-Net and COM-Net, to handle magnitude and complex-valued features, respectively. Fig. 1(a) depicts the MAG-Net in detail, where E represents the speaker embedding obtained from the pre-trained ECAPA-TDNN network.
Encoder and decoder. The encoder is comprised of multiple frequency down-sampling (FD) layers, while the decoder is stacked by several frequency up-sampling (FU) layers. Each FD layer starts with a gated convolutional layer (GConv) to down-sample the input spectrum, followed by a cumulative layer norm (cLN) and PReLU (see the sketch below). The FU layer is almost the same as the FD layer, except that the GConv is replaced with a transposed gated convolutional layer (TrGConv) to perform up-sampling.

Sequence modeling structure. S-TCN consists of multiple squeezed temporal convolution modules (S-TCMs), as shown in Fig. 1(c). To further enhance the model's sequence modeling capabilities, we add a residual LSTM after every S-TCN module (termed S-TCN&L), as inspired by [2]. Fig. 1(b) shows the modified S-TCN&L structure. The speaker embedding is combined with the latent feature only in the first S-TCM layer of the S-TCN module using the multiply operation.
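As an illustration, here is a minimal PyTorch sketch of one FD layer as described above. It is not the authors' implementation: the cumulative layer norm is approximated by a single-group GroupNorm, and the padding choices (causal in time, symmetric in frequency) are assumptions; the kernel size (2, 3), stride (1, 2), and 64 channels follow Section 3.2.

```python
# A minimal sketch of one frequency down-sampling (FD) layer: gated 2-D
# convolution that halves the frequency axis, then normalization and PReLU.
import torch
import torch.nn as nn

class FDLayer(nn.Module):
    def __init__(self, in_ch=64, out_ch=64):
        super().__init__()
        # Two parallel convolutions: one for features, one for the sigmoid gate.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(2, 3), stride=(1, 2))
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size=(2, 3), stride=(1, 2))
        self.norm = nn.GroupNorm(1, out_ch)  # stand-in for cumulative layer norm
        self.act = nn.PReLU(out_ch)

    def forward(self, x):  # x: (batch, channels, time, freq)
        # Pad 1 frame causally in time (kernel 2) and 1 bin on each side in freq.
        x = nn.functional.pad(x, (1, 1, 1, 0))
        y = self.conv(x) * torch.sigmoid(self.gate(x))  # gated convolution
        return self.act(self.norm(y))

x = torch.randn(1, 64, 100, 256)   # toy latent spectrum
print(FDLayer()(x).shape)          # frequency axis halved: (1, 64, 100, 128)
```

Stacking six such layers, each halving the frequency axis, yields the encoder described in Section 3.2.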
Local-global representation. As local and global features of the speaker's enrollment speech (i.e. anchor) are both essential for target speaker extraction, we particularly incorporate the LGR structure [3] into our model, as shown in Fig. 1(a). The speaker encoder consists of a bidirectional LSTM (BLSTM) and several FD layers, with the enrollment speech's magnitude as input. Note that there is an additional dense layer after the BLSTM to keep its dimension consistent with the input, and an average pooling operation is applied along the time dimension. The output of the speaker encoder is concatenated with the output of the previous FD layers in the encoder, corresponding to the further fusion of speaker information.
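The BLSTM-plus-pooling front of the speaker encoder can be sketched as follows. The hidden size of 512 matches Section 3.2, but the input feature dimension 257 is an illustrative assumption, and the FD layers that follow in the real speaker encoder are omitted:

```python
# Hedged sketch of the speaker-encoder pooling: BLSTM, a dense layer that maps
# back to the input dimension, then average pooling along time.
import torch
import torch.nn as nn

blstm = nn.LSTM(input_size=257, hidden_size=512, bidirectional=True, batch_first=True)
dense = nn.Linear(2 * 512, 257)          # keep the dimension consistent with input
mag = torch.randn(1, 200, 257)           # enrollment magnitude: (batch, time, freq)
h, _ = blstm(mag)
emb = dense(h).mean(dim=1)               # average pooling along time -> (1, 257)
print(emb.shape)
```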
Loss function
We employ several loss functions to train our model. Specifically, the scale-invariant signal-to-noise ratio (SI-SNR) loss L_si-snr and the power-law compressed phase-aware loss (magnitude loss L_mag and phase loss L_pha) are used. Additionally, we use the asymmetric loss L_asym to constrain the estimated spectrum and avoid over-suppression. These loss functions are defined in the same way as in our previous work [6]. First, we only train MAG-Net with L_1. Following that, the pre-trained parameters of MAG-Net are frozen and only the COM-Net is optimized by L_2.
$$L_1 = L_{\text{si-snr}} + \frac{1}{M}\sum_{m=1}^{M}\left(L_{\text{mag}} + L_{\text{asym}}\right), \qquad L_2 = L_{\text{si-snr}} + \frac{1}{M}\sum_{m=1}^{M}\left(L_{\text{mag}} + L_{\text{pha}} + L_{\text{asym}}\right). \qquad (1)$$
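For reference, the SI-SNR term in Eq. (1) can be sketched as below. This is the common zero-mean, scale-invariant definition, which may differ in minor details from the exact variant of [6]:

```python
# Hedged sketch of a negative SI-SNR loss; est/ref are (batch, samples) tensors.
import torch

def si_snr_loss(est, ref, eps=1e-8):
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference (scale-invariant target).
    proj = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    noise = est - proj
    ratio = proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps)
    return -10.0 * torch.log10(ratio + eps).mean()
```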
Furthermore, for all frequency domain loss functions, we explore multi-STFT resolution [4], where m indicates the scale corresponding to different STFT configurations. We train MAG-Net and COM-Net sequentially as described above and then load these pre-trained models to retrain the entire system using L2.
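A hedged sketch of a multi-STFT-resolution magnitude term over the three configurations listed in Section 3.2 follows; the power-law compression exponent 0.3 is an assumption standing in for the compressed spectral loss of [6]:

```python
# Multi-resolution STFT magnitude loss over the paper's three configurations.
import torch

CONFIGS = [(512, 480, 240), (1024, 960, 480), (2048, 1920, 960)]  # (n_fft, win, hop)

def multi_stft_mag_loss(est, ref, p=0.3):
    loss = 0.0
    for n_fft, win, hop in CONFIGS:
        w = torch.hann_window(win, device=est.device)
        E = torch.stft(est, n_fft, hop, win, w, return_complex=True).abs().clamp_min(1e-8) ** p
        R = torch.stft(ref, n_fft, hop, win, w, return_complex=True).abs().clamp_min(1e-8) ** p
        loss = loss + torch.nn.functional.mse_loss(E, R)
    return loss / len(CONFIGS)

est, ref = torch.randn(1, 48000), torch.randn(1, 48000)  # toy 1 s of 48 kHz audio
print(multi_stft_mag_loss(est, ref))
```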
3. EXPERIMENTS

3.1. Dataset
We use the ICASSP 2022 DNS-challenge full-band dataset [7] for experiments. The noise data originates from DEMAND, Freesound, and AudioSet. We generate 100,000 room impulse responses (RIRs) based on the image method [8] with RT60 ∈ [0.1, 1.0]s.
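The paper does not name its RIR generator; as an illustration, an image-method [8] RIR with a target RT60 in the stated range can be produced with the pyroomacoustics ShoeBox model. This is a sketch, not the authors' tool, and the room and source/microphone geometry below are made up:

```python
# Image-method RIR sketch with pyroomacoustics (an assumption: any image-method
# implementation would do; geometry and sampling rate are illustrative).
import numpy as np
import pyroomacoustics as pra

rt60 = np.random.uniform(0.1, 1.0)            # target RT60 in the paper's range
room_dim = [6.0, 4.0, 3.0]                    # hypothetical room size in meters
absorption, max_order = pra.inverse_sabine(rt60, room_dim)
room = pra.ShoeBox(room_dim, fs=48000, materials=pra.Material(absorption),
                   max_order=max_order)
room.add_source([2.0, 1.5, 1.6])
room.add_microphone([4.0, 2.5, 1.2])
room.compute_rir()
rir = room.rir[0][0]                          # impulse response: mic 0, source 0
```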
Training setup
The window length and frameshift are 20ms and 10ms, respectively. For multi-STFT resolution loss, we use 3 different groups with FFT length ∈ {512, 1024, 2048}, window length ∈ {480, 960, 1920}, and frameshift ∈ {240, 480, 960}. We use FFT length 1024, window length 960, and frameshift 480 for single-STFT resolution loss. The Adam optimizer is used to optimize our models, and the initial learning rate is 1e-3. The learning rate will be halved if the validation loss has no decrease for 2 epochs. We use on-the-fly data generation to increase the diversity of generated data and save storage space, which keeps the same settings as TEA-PSE.
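This optimization recipe maps directly onto standard PyTorch utilities; a sketch with a dummy model standing in for TEA-PSE 3.0:

```python
# Adam at 1e-3 with learning rate halved after 2 epochs without improvement.
import torch

model = torch.nn.Linear(1, 1)                       # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)  # halve LR after 2 flat epochs

for epoch in range(10):
    val_loss = 1.0                                  # stand-in for validation loss
    scheduler.step(val_loss)
```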
The encoder and decoder consist of 6 FD layers and 6 FU layers, respectively. The GConv and TrGConv in both the encoder and decoder have a kernel size and stride of (2, 3) and (1, 2) in the time and frequency axis, respectively. All GConv and TrGConv layers' channels are set to 64. The S-TCN&L module has 4 S-TCM layers with a kernel size of 5 for dilated Conv (DConv) and a dilation rate of {1, 2, 5, 9}, respectively, and a hidden size of 512 for the LSTM. All convolution channels in S-TCN&L are set to 64 except for the last pointwise Conv (PConv) layer. We stack 4 S-TCN&L groups to establish long-term relationships between consecutive frames and combine speaker embeddings. For the speaker encoder, we use a BLSTM with a hidden size of 512 and 5 FD layers, and all GConv layers' channels in the speaker encoder are set to 1.
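A minimal sketch of one S-TCM block consistent with Fig. 1(c) and the hyperparameters above; the gating and normalization details are simplified assumptions:

```python
# One S-TCM block: pointwise conv in, causal dilated conv, pointwise conv out,
# with a residual connection (kernel 5 and dilations {1, 2, 5, 9} per the paper).
import torch
import torch.nn as nn

class STCM(nn.Module):
    def __init__(self, ch=64, hidden=64, dilation=1, kernel=5):
        super().__init__()
        self.pconv_in = nn.Conv1d(ch, hidden, 1)
        pad = (kernel - 1) * dilation            # enough padding for causal trim
        self.dconv = nn.Conv1d(hidden, hidden, kernel, dilation=dilation, padding=pad)
        self.pconv_out = nn.Conv1d(hidden, ch, 1)
        self.act = nn.PReLU()

    def forward(self, x):                        # x: (batch, channels, time)
        y = self.act(self.pconv_in(x))
        y = self.act(self.dconv(y)[..., :x.shape[-1]])  # keep only causal frames
        return x + self.pconv_out(y)             # residual connection

x = torch.randn(1, 64, 100)
print(STCM(dilation=5)(x).shape)                 # (1, 64, 100)
```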
Results and analysis
According to the blind test set results in Table 1, several observations can be made. First, adding a residual LSTM after every S-TCN module improves performance. Second, the LGR structure has proven to be effective in boosting speaker information extraction. Third, by using the multi-STFT resolution loss function, the proposed method achieves a significant improvement of 0.015 and 0.042 in OVRL for track 1 and track 2, respectively. Finally, retraining the dual-stage network with the pre-trained model provides an additional gain in performance.
CONCLUSIONS
The proposed TEA-PSE 3.0 utilizes the S-TCN&L module, which provides enhanced sequence modeling capabilities. With the LGR structure, our method can make better use of speaker information. Additionally, we investigate the effectiveness of the multi-STFT resolution loss function, comparing it with the single-STFT resolution. Based on the freeze training strategy, we explore the effect of model retraining. According to the official challenge results, TEA-PSE 3.0 ranks 1st in both tracks.
Fig. 1. Details of (a) MAG-Net structure; (b) S-TCN&L structure; (c) S-TCM structure.
Table 2 shows the mean opinion score (MOS) and word accuracy (WAcc) results on the DNS 2023 blind test set. TEA-PSE 3.0 has the highest BAK and OVRL. Besides, compared with unprocessed speech, the SIG and WAcc of the submission model are decreased, which is reasonable since the model introduces slight distortion to the extracted speech. TEA-PSE 3.0 has a total of 22.24 million trainable parameters. The number of multiply-accumulate operations (MAC) of TEA-PSE 3.0 is 19.66G per second. The average real-time factor (RTF) per frame for the submission method exported by ONNX is 0.46 on an Intel(R) Xeon(R) CPU E5-2678 v3 clocked at 2.4 GHz.

Table 1. PDNSMOS P.835 results on the DNS 2023 blind test set.

ID  Method        |  Track 1: SIG   BAK   OVRL  |  Track 2: SIG   BAK   OVRL
1   Noisy         |  4.152  2.369  2.709        |  4.046  2.159  2.497
2   TEA-PSE       |  3.996  4.010  3.528        |  3.850  3.884  3.342
3   + LSTM        |  4.044  4.032  3.562        |  3.910  3.915  3.384
4   + LGR         |  4.083  4.050  3.603        |  3.949  3.932  3.429
5   + multi-STFT  |  4.086  4.058  3.618        |  3.986  3.949  3.471
6   + retrain     |  4.108  4.053  3.645        |  3.993  3.951  3.493

Table 2. MOS and WAcc results on the DNS 2023 blind test set.

System        |  Track 1: SIG   BAK   OVRL   WAcc  |  Track 2: SIG   BAK   OVRL   WAcc
Noisy         |  3.76   1.22   1.22   0.843        |  3.83   1.22   1.24   0.857
DNS Baseline  |  3.20   2.67   2.34   0.687        |  3.22   2.68   2.38   0.727
TEA-PSE 3.0   |  3.52   2.88   2.71   0.761        |  3.64   2.92   2.72   0.768
[1] Y. Ju, W. Rao, X. Yan, Y. Fu, S. Lv, L. Cheng, Y. Wang, L. Xie, and S. Shang, "TEA-PSE: Tencent-Ethereal-Audio-Lab personalized speech enhancement system for ICASSP 2022 DNS CHALLENGE," in ICASSP. IEEE, 2022, pp. 9291-9295.
[2] A. Li, G. Yu, C. Zheng, W. Liu, and X. Li, "A general deep learning speech enhancement framework motivated by Taylor's theorem," arXiv preprint arXiv:2211.16764, 2022.
[3] S. He, W. Rao, K. Zhang, Y. Ju, Y. Yang, X. Zhang, Y. Wang, and S. Shang, "Local-global speaker representation for target speaker extraction," arXiv preprint arXiv:2210.15849, 2022.
[4] R. Yamamoto, E. Song, and J. Kim, "Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram," in ICASSP. IEEE, 2020, pp. 6199-6203.
[5] H. Dubey, A. Aazami, V. Gopal, B. Naderi, S. Braun, R. Cutler, H. Gamper, M. Golestaneh, and R. Aichner, "Deep speech enhancement challenge at ICASSP 2023," in ICASSP, 2023.
[6] Y. Ju, S. Zhang, W. Rao, Y. Wang, T. Yu, L. Xie, and S. Shang, "TEA-PSE 2.0: Sub-band network for real-time personalized speech enhancement," in SLT. IEEE, 2023, pp. 472-479.
[7] H. Dubey, V. Gopal, R. Cutler, A. Aazami, S. Matusevych, S. Braun, S. E. Eskimez, M. Thakker, T. Yoshioka, H. Gamper, et al., "ICASSP 2022 deep noise suppression challenge," in ICASSP. IEEE, 2022, pp. 9271-9275.
[8] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," The Journal of the Acoustical Society of America, vol. 65, no. 4, pp. 943-950, 1979.
| [] |
[
"Brain Chains as Topological Signatures for Alzheimer's Disease",
"Brain Chains as Topological Signatures for Alzheimer's Disease"
] | [
"Christian Goodbrake \nMathematical Institute\nUniversity of Oxford\n\n\nWillerson Center for Cardiovascular Modeling and Simulation\nOden Institute\nThe University of Texas at Austin\n\n",
"David Beers \nMathematical Institute\nUniversity of Oxford\n\n",
"Travis B Thompson \nMathematical Institute\nUniversity of Oxford\n\n\nDepartment of Mathematics\nTexas Tech University\n\n",
"Heather A Harrington \nMathematical Institute\nUniversity of Oxford\n\n\nWellcome Centre for Human Genetics\nUniversity of Oxford\n\n",
"Alain Goriely \nMathematical Institute\nUniversity of Oxford\n\n"
] | [
"Mathematical Institute\nUniversity of Oxford\n",
"Willerson Center for Cardiovascular Modeling and Simulation\nOden Institute\nThe University of Texas at Austin\n",
"Mathematical Institute\nUniversity of Oxford\n",
"Mathematical Institute\nUniversity of Oxford\n",
"Department of Mathematics\nTexas Tech University\n",
"Mathematical Institute\nUniversity of Oxford\n",
"Wellcome Centre for Human Genetics\nUniversity of Oxford\n",
"Mathematical Institute\nUniversity of Oxford\n"
] | [] | Topology is providing new insights for neuroscience. For instance, graphs, simplicial complexes, directed graphs, flag complexes, persistent homology and convex covers have been used to study functional brain networks, synaptic connectivity, and hippocampal place cell codes. We propose a topological framework to study the evolution of Alzheimer's disease, the most common neurodegenerative disease. The modeling of this disease starts with the representation of the brain connectivity as a graph and the seeding of a toxic protein in a specific region represented by a vertex. Over time, the accumulation of toxic proteins at vertices and their propagation along edges are modeled by a dynamical system on this graph. These dynamics provide an order on the edges of the graph according to the damage created by high concentrations of proteins. This sequence of edges defines a filtration of the graph. We consider different filtrations given by different disease seeding locations. To study this filtration we propose a new combinatorial and topological method. A filtration defines a maximal chain in the partially ordered set of spanning subgraphs ordered by inclusion. To identify similar graphs, and define a topological signature, we quotient this poset by graph homotopy equivalence, which gives maximal chains in a smaller poset. We provide an algorithm to compute this direct quotient without computing all subgraphs and then propose bounds on the total number of graphs up to homotopy equivalence. To compare the maximal chains generated by this method, we extend Kendall's dK metric for permutations to more general graded posets and establish bounds for this metric. We then demonstrate the utility of this framework on actual brain graphs by studying the dynamics of tau proteins on the structural connectome. We show that the proposed topological brain chain equivalence classes distinguish different simulated subtypes of Alzheimer's disease. | null | [
"https://export.arxiv.org/pdf/2208.12748v1.pdf"
] | 251,881,370 | 2208.12748 | e282df4a13fdb3a688cd06eab74607443ce5a17f |
Brain Chains as Topological Signatures for Alzheimer's Disease
Christian Goodbrake
Mathematical Institute
University of Oxford
Willerson Center for Cardiovascular Modeling and Simulation
Oden Institute
The University of Texas at Austin
David Beers
Mathematical Institute
University of Oxford
Travis B Thompson
Mathematical Institute
University of Oxford
Department of Mathematics
Texas Tech University
Heather A Harrington
Mathematical Institute
University of Oxford
Wellcome Centre for Human Genetics
University of Oxford
Alain Goriely
Mathematical Institute
University of Oxford
Brain Chains as Topological Signatures for Alzheimer's Disease
Topology is providing new insights for neuroscience. For instance, graphs, simplicial complexes, directed graphs, flag complexes, persistent homology and convex covers have been used to study functional brain networks, synaptic connectivity, and hippocampal place cell codes. We propose a topological framework to study the evolution of Alzheimer's disease, the most common neurodegenerative disease. The modeling of this disease starts with the representation of the brain connectivity as a graph and the seeding of a toxic protein in a specific region represented by a vertex. Over time, the accumulation of toxic proteins at vertices and their propagation along edges are modeled by a dynamical system on this graph. These dynamics provide an order on the edges of the graph according to the damage created by high concentrations of proteins. This sequence of edges defines a filtration of the graph. We consider different filtrations given by different disease seeding locations. To study this filtration we propose a new combinatorial and topological method. A filtration defines a maximal chain in the partially ordered set of spanning subgraphs ordered by inclusion. To identify similar graphs, and define a topological signature, we quotient this poset by graph homotopy equivalence, which gives maximal chains in a smaller poset. We provide an algorithm to compute this direct quotient without computing all subgraphs and then propose bounds on the total number of graphs up to homotopy equivalence. To compare the maximal chains generated by this method, we extend Kendall's dK metric for permutations to more general graded posets and establish bounds for this metric. We then demonstrate the utility of this framework on actual brain graphs by studying the dynamics of tau proteins on the structural connectome. We show that the proposed topological brain chain equivalence classes distinguish different simulated subtypes of Alzheimer's disease.
Introduction
Brain networks are an important research topic for understanding both healthy brain function and pathology [6]. In particular, network theory has been used to model the evolution of neurodegenerative diseases, a family of conditions characterised by progressive dysfunction of neurons until they ultimately die. The central idea of the 'prion-like' hypothesis is that some neurodegenerative diseases, such as Alzheimer's or Parkinson's, are governed by the accumulation and propagation of toxic proteins on the connectome, the brain's structural network [15].
Indeed, it is well appreciated that the most common neurodegenerative disease, Alzheimer's disease (AD), is characterized by the propagation of misfolded tau proteins over the connectome. The disease presentation differs across patient subpopulations, which has led to so-called AD subtypes. For example, in the clinic, the limbic subtype is more strongly associated with memory decline whereas the middle temporal lobe sparing (MTL) subtype is associated with changes in language proficiency [30]. Vogel et al. recently postulated that disease subtypes are characterized by misfolded protein at different starting locations in the brain called seeding regions or epicenters [30]. Over many decades, toxic proteins will propagate from an epicenter to the rest of the network, and different epicenter locations can give rise to different toxic protein patterns. An open problem is the early detection of a patient's AD subtype, which has implications for clinical care and treatment. The current gold standard for determining a patient's AD subtype is the postmortem study of the distribution of tau neurofibrillary tangles within the brain. An open question is how to develop effective, non-invasive techniques to discern the progression of AD and a patient-specific subtype from that progression. This work proposes a topological approach for studying AD subtypes by combining a mechanistic model of AD progression, a recent hypothesis on AD subtypes, and the examination of filtration images under a topological equivalence relation.
Studying the brain with algebraic topology was first proposed in 1962 by Zeeman [33]. Since then, neurotopology has flourished into a well-established field. For example, grid cells or place cells can be studied with neural codes [8], and single neuron shape can be analysed as trees with the topological morphology descriptor [16,4]. Brain networks ranging from vascular and functional to structural have benefited from topological data analysis [26,28,10,5]. The most prominent algorithm in topological data analysis, persistent homology, takes in a filtration, a nested sequence of spaces built on data, and outputs a persistence module. Persistent homology has been applied to functional networks (from functional magnetic resonance imaging data) to study schizophrenia, epileptic seizures, and Alzheimer's disease [7,32,29].
Here, we start with a filtration, corresponding to toxic protein propagation or progression on a brain graph from a seeding site. The fundamental principle guiding this approach is to study aspects of progressions defined on a (fixed) graph by considering the spanning subgraphs, i.e., subgraphs containing the full vertex set V, arising from filtrations generated by that progression. Since we are interested in comparing topological characteristics of one or more network progressions, given by different seeding sites, we will consider the set of spanning subgraphs modulo a topologically significant equivalence relation. We would like to distinguish the number of loops in each connected component to discern disease subtypes; therefore, we will require a finer invariant than homology.
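To make this concrete, the sketch below (a toy illustration, not the paper's algorithm) orders the edges of a small graph by a stand-in progression score, builds the nested spanning subgraphs of the resulting filtration, and records for each one the invariant that fixes its homotopy type: the multiset of cycle ranks of its connected components, since each component of a graph is homotopy equivalent to a wedge of b_1 = |E| - |V| + 1 circles.

```python
# Toy edge filtration of a graph and its homotopy-equivalence signature per step.
import networkx as nx

G = nx.cycle_graph(4)                  # toy "brain graph" on 4 vertices
ordered_edges = sorted(G.edges())      # stand-in for a disease progression order

def homotopy_signature(H):
    # Multiset of cycle ranks b1 = |E| - |V| + 1 over connected components.
    return sorted(C.number_of_edges() - C.number_of_nodes() + 1
                  for C in (H.subgraph(c) for c in nx.connected_components(H)))

H = nx.Graph()
H.add_nodes_from(G)                    # spanning subgraphs keep all vertices
chain = []
for e in ordered_edges:
    H.add_edge(*e)
    chain.append(homotopy_signature(H))
print(chain)   # [[0, 0, 0], [0, 0], [0], [1]] -- components merge, then a loop closes
```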
The topological analysis of graphs via their geometric representations is a well-established method [1, 13] that gives rise to many interesting problems [20, 21]. The most prominent example of translating a topological equivalence relation to a graph-theoretic one is graph homeomorphism, where two graphs are considered equivalent if their geometric realisations are homeomorphic. Here, we apply a similar approach to study the filtrations associated with different protein propagations; however, we choose a looser topological equivalence relation, namely homotopy equivalence [13]. We establish a unique representation for the induced equivalence classes, and examine the relationships between these classes induced by the spanning subgraphs partially ordered by inclusion.
In standard topology, homotopies are continuous curves whose points are continuous functions. Over the past couple of decades, specialised combinatorial analogues of homotopy theories have been developed [2, 3, 12]. These combinatorial homotopies are path graphs whose vertices are graph homomorphisms. While the homotopy equivalence for graphs we utilise is induced by the standard topological definition applied to the graphs' geometric realisations, we introduce a notion of discrete homotopy for maximal chains in posets, leading to the homotopy poset. Further, the discrete nature of this homotopy allows us to define a discrete metric between these chains, which gives us a rigorous basis on which different neurodegenerative progressions can be compared. This discrete homotopy for chains in posets can likely be formulated in terms of the aforementioned combinatorial homotopies applied to maximal chains in graded posets.
We define a metric between maximal chains which is bounded from below by a metric defined on the top-dimensional simplices of the order complex. While we do not construct the order complex associated with the chains appearing in our work, earlier work has explored the topological structure inherent in the combinatorial objects appearing in order theory [31, 22, 19]. The metric we define distinguishes quotiented filtrations based on the connectivity of the affected regions of the brain graph at each stage of disease progression. Thus, the proposed view yields a quantitative metric on the set of calculated brain chains, with some chains representing protein progressions that propagate across the entire graph quickly, whereas other brain chains have topological signatures that form local cycles close to the seeding site before spreading to the full brain graph.
Organization
This work sits at the intersection of algebraic topology, combinatorics, dynamical systems, graph theory, number theory, network science, neuroscience and order theory. Throughout, we provide necessary definitions and direct readers to fuller treatment elsewhere. In Section 2, we introduce the edge filtration of a graph, the connection between these filtrations, and the partially ordered set of spanning subgraphs SS (G). We introduce the graph homotopy polynomial, and the homotopy poset, the quotient of SS (G) by graph homotopy equivalence. This polynomial uniquely encodes the graph's homotopy equivalence class. Therefore, this polynomial can be used to compute any topological quantity that is invariant under homotopy equivalence. In Section 3, we present an algorithm using these polynomials to directly compute the homotopy poset (without having to construct the entire set of spanning subgraphs). In Section 4, we establish complexity bounds on the number of elements that are in the poset as the number of vertices grows to provide a qualitative view of how the previous algorithm scales, and discuss connections to number theory. In Section 5 we generalise Kendall's d K metric on permutations to graded posets, and use this to define the discrete homotopy metric on graded poset chains. In Section 6 we introduce left and right-covering conditions, sufficient structures that guarantee finite discrete homotopy distances between arbitrary maximal chains. We establish upper and lower bounds on this metric, and in Section 7, we apply these ideas to the problem of neurodegenerative diseases to distinguish and topologically describe subtypes of Alzheimer's disease.
Graph Edge Filtrations
We consider a simple graph G with vertex set V = {v_i}, |V| = N, and edge set E = {e_ij = {v_i, v_j}}, |E| = M. For the purposes of this section and the following section, we will consider G to be the complete graph on V. A similar analysis can be applied to non-complete graphs, but the direct construction presented in Section 3 only applies to complete graphs. We examine the poset of spanning subgraphs of G, SS(G), i.e. those subgraphs S ⊆ G containing all of the vertices in V, partially ordered by subgraph inclusion. This poset is isomorphic to the power set of E, P(E). This spanning subgraph poset is graded by Euler characteristic L = N − |E(S)|, or equivalently, since V is fixed, by the number of edges |E(S)|, where we have denoted the edge set of a subgraph S as E(S). We denote the strict partial ordering relation by ≺ and the covering relation by ⋖: a "is covered by" b, written a ⋖ b, if and only if a ≺ b and there does not exist c such that a ≺ c ≺ b; in our case, a ⋖ b is equivalent to a ⊂ b and |E(b) \ E(a)| = 1.
We seek to understand and quantitatively compare the possible routes of propagation through G; thus we define edge filtrations.

Definition 2.1 (Edge Filtration). An edge filtration F = S_0 ⊂ S_1 ⊂ ... ⊂ S_M of G is a sequence of spanning subgraphs of G such that E(S_0) = {}, S_M = G, and |E(S_i) \ E(S_{i−1})| = 1 for all i ∈ {1, ..., M}.
Equivalently, edge filtrations are maximal chains in SS(G), i.e. they are totally ordered subsets of SS(G) such that no element can be added without F ceasing to be totally ordered. Any edge filtration can be represented by a permutation Ξ of the edge set, since the edge sets of the subgraphs in this filtration are partial unions of this permutation. Given F, Ξ(F) can be recovered by taking successive set differences:

Ξ(F) = {e_1, e_2, ..., e_M}  with  {e_i} = E(S_i) \ E(S_{i−1}).    (1)
We seek to rigorously compare the similarity of two edge filtrations F_1 and F_2, which we can do indirectly through their associated edge set permutations. These permutations can be compared through Kendall's d_K metric [17]:

Definition 2.2 (Kendall's d_K). Let Ξ_1, Ξ_2 be permutations of E, and let σ be the element of the permutation group S_M satisfying

Ξ_2 = σ(Ξ_1).    (2)

Kendall's d_K(Ξ_1, Ξ_2) is defined to be the length of the minimal representation of σ in terms of adjacent transpositions.
The distance d_K(Ξ_1, Ξ_2) is equivalently given as the number of operations needed to transform Ξ_1 into Ξ_2 using bubble sort, and is equal to the number of discordant pairs in Ξ_1 and Ξ_2. Kendall's d_K satisfies all the axioms of a metric, hence we can use it as a means of comparing the similarity of two permutations of a set. We therefore extend the definition of d_K to compare edge filtrations:

d_K(F_1, F_2) = d_K(Ξ(F_1), Ξ(F_2)).    (3)
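For concreteness, the following is a minimal Python sketch of this computation, counting discordant pairs directly; it assumes both inputs are permutations of the same edge set, with edges represented by hashable labels (the function name is ours).

```python
from itertools import combinations

def kendall_dK(xi1, xi2):
    """Kendall's d_K between two permutations of the same edge set,
    computed as the number of discordant pairs."""
    pos2 = {e: i for i, e in enumerate(xi2)}   # position of each edge in xi2
    return sum(1 for i, j in combinations(range(len(xi1)), 2)
               if pos2[xi1[i]] > pos2[xi1[j]])  # pair ordered oppositely in xi2

# Total inversion of three edges gives the maximum value 3*(3-1)/2 = 3:
assert kendall_dK(["e1", "e2", "e3"], ["e3", "e2", "e1"]) == 3
```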
Example 2.1 (K_3). Consider the complete graph on 3 vertices, K_3. Denoting the three edges of this graph as {e_1, e_2, e_3}, the eight distinct spanning subgraphs of this graph, partially ordered by inclusion, can be depicted by the face poset shown in Figure 1.

The size of SS(K_N) grows quite quickly as N, the number of vertices in V, increases (|SS(K_N)| = 2^{N(N−1)/2}, since SS(K_N) is isomorphic to the power set of E), which makes constructing the spanning subgraph poset computationally expensive even for modest N. Additionally, we want to focus our attention on the topology of the underlying spanning subgraphs, rather than merely considering their edge sets as combinatorial subsets. We therefore seek a quotienting of SS(K_N) under some equivalence relation ∼ that preserves the graded structure, and identifies spanning subgraphs that are topologically equivalent in some sense. We further want to construct this quotiented poset H(K_N) = SS(K_N)/∼ directly, rather than computing the entire SS(K_N) and then computing all of the equivalence classes.
We will identify two spanning subgraphs if their geometric realisations are homotopy equivalent, and examine our edge filtrations under this equivalence relation.

Definition 2.3 (Homotopy Equivalence). Two topological spaces X and Y are homotopy equivalent if there exist continuous maps f : X → Y and g : Y → X such that the compositions g ∘ f and f ∘ g are homotopic to the identity maps on X and Y respectively, i.e. there exist two continuous one-parameter families of maps η_t : X → X and µ_t : Y → Y for t ∈ [0, 1] such that η_0 = id_X, η_1 = g ∘ f, µ_0 = id_Y, and µ_1 = f ∘ g.
This quotienting is analogous to persistent homology, where a sequence of homology groups is obtained from a filtration. The underlying objects we are quotienting are subgraphs; therefore, reducing to homology groups destroys too much information, since graphs only have two nontrivial homology groups: H_0(G) ≅ Z^Q, where Q is the number of connected components, and H_1(G) ≅ Z^{M−N+Q}, counting the number of loops. We use homotopy equivalence instead, as this preserves the number of loops in each connected component, rather than merely the number of loops in total.
In the general case, determining whether two spaces are homotopy equivalent is difficult. Since the spaces we study are graphs, they have a simple enough structure for us to determine homotopy equivalence by explicitly constructing the homotopy equivalence class from a representative member. Every connected simple graph is homotopy equivalent to a wedge sum of circles [13]; hence every general (possibly disconnected) simple graph is homotopy equivalent to a disjoint union of wedge sums of circles.
We record this data through a graph homotopy polynomial h_G(x).

Definition 2.4 (Graph Homotopy Polynomial). Let G be a graph. For each connected component of G, associate the term x^k, where k is the number of loops in this component. The graph homotopy polynomial h_G(x) is the sum of these terms.

Because the graph homotopy polynomial h_G(x) is uniquely determined by the homotopy equivalence class of G, to determine if two graphs are homotopy equivalent, one must simply compute the graph homotopy polynomial for each and check whether these polynomials match. Further, because the homotopy polynomial encodes a graph's homotopy equivalence class, any topological quantity of a graph G that is preserved under homotopy equivalence can be computed from h_G(x).
Example 2.2 (Topological computations from h_G).

• h_G(1) is the number of connected components of G, the 0-th Betti number; therefore H_0(G) ≅ Z^{h_G(1)}.
• The number of loops present in G is h'_G(1) = dh_G/dx (1); hence the Euler characteristic of G is L = N − |E| = h_G(1) − h'_G(1).
• H_1(G) ≅ Z^{h'_G(1)}.
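These computations are mechanical once h_G is stored as a map from exponents to coefficients, the dictionary encoding also used in Section 3. A minimal Python sketch (our own naming) might read:

```python
from collections import Counter

def homotopy_polynomial(loops_per_component):
    """h_G as a dict {exponent: coefficient}, built from the list of
    loop counts of the connected components of G."""
    return dict(Counter(loops_per_component))

def betti_0(h):
    return sum(h.values())                     # h_G(1): number of components

def loop_count(h):
    return sum(k * a for k, a in h.items())    # h_G'(1): number of loops

def euler_characteristic(h):
    return betti_0(h) - loop_count(h)          # L = h_G(1) - h_G'(1)

# Two disjoint triangles plus an isolated vertex: h_G(x) = 2x + 1.
h = homotopy_polynomial([1, 1, 0])
assert (betti_0(h), loop_count(h), euler_characteristic(h)) == (3, 2, 1)
```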
Therefore, given an edge filtration F of K_N, we can compute a sequence of graph homotopy polynomials. Because the original filtration was a maximal chain in SS(K_N) graded by the number of edges, or equivalently by Euler characteristic, and homotopy equivalence preserves the Euler characteristic, the resulting sequence of graph homotopy polynomials remains a maximal chain in the quotient, which we abbreviate H(N) = SS(K_N)/∼. We therefore only have to compute H(N).
Direct Construction of the Homotopy Poset
We seek to directly construct H (N ), the spanning subgraph poset of the complete graph on N vertices quotiented by graph homotopy equivalence. We do this utilising the graph homotopy polynomials defined in the previous section, exploiting the graded structure of the quotiented poset by Euler characteristic. We note that the unique lowest spanning subgraph in this poset is the subgraph S 0 containing no edges, which of necessity is simply the collection of N vertices, and its corresponding graph homotopy polynomial is simply h S0 (x) = N .
Next, we consider the effect that adding an edge to a subgraph has on that subgraph's homotopy polynomial. Since any subgraph is a disjoint union of connected components, at the graph level, the endpoints of a newly added edge either connect two previously disconnected components, which creates no new loops, and reduces the number of connected components by one, or the endpoints lie in the same previously connected component, increasing the number of loops in that component by one. To see the effects of this at the level of the graph homotopy polynomial, let us denote the homotopy polynomial of the original subgraph as h Sr (x), and the homotopy polynomial of the augmented subgraph as h Sr+1 (x). The polynomial h Sr (x) takes the form
h_{S_r}(x) = Σ_{k=0}^{∞} a_k x^k,    (4)
where the a_k are non-negative integers, and we have employed a direct sum to indicate that these polynomials can in principle be of arbitrarily high degree, with all but finitely many terms being 0. In practice, these homotopy polynomials can be implemented as an integer-keyed, integer-valued dictionary, where keys are exponents and values are coefficients.
In the case where the endpoints of the added edge lie in originally disjoint components of the subgraph, possessing i and j loops respectively, the numbers of components in the augmented subgraph possessing i and j loops each decrease by 1, while the number of components possessing i + j loops increases by 1. The transformation on the homotopy polynomial is thus

x^i + x^j → x^{i+j},    (5)

hence the polynomial transforms as follows:

h_{S_{r+1}}(x) = h_{S_r}(x) − x^i − x^j + x^{i+j}.    (6)
Note that this is true even in the event that i = j, or i = 0, or j = 0, provided that the incrementing/decrementing is done for each component, i.e. when i = j we require a_i ≥ 2, and when i ≠ j we require a_i, a_j ≥ 1.
The alternative is that the added edge's endpoints reside in the same originally disjoint component, increasing the number of loops appearing in that component by 1, say from i to i + 1. The transformation is then clearly

x^i → x^{i+1},    (7)

and the polynomial transformation is

h_{S_{r+1}}(x) = h_{S_r}(x) − x^i + x^{i+1}.    (8)
These are the only potential transformations possible, hence any cover of h S (x) in H (N ) must arise from h S (x) by one of these transformations.
Because there are only finitely many such transformations (by virtue of the finite number of nonzero terms in the direct sum), all the possible covers of any element of the poset can be exhaustively considered.
Algorithm 1 Possible successors of h

procedure Succs(h)
    α ← dict(h)    ▷ Initialise a dictionary with key-value pairs representing h
    S ← {}    ▷ Initialise an empty set for successors
    for y ∈ keys(α) do
        g ← α
        if g(y) > 0 then
            g(y) −= 1
            for z ∈ keys(g) do
                f ← g
                if f(z) > 0 then
                    f(z) −= 1
                    f(y + z) += 1
                    S ← push f    ▷ Add the successor generated by merging at y and z
                end if
            end for
            g(y + 1) += 1
            S ← push g    ▷ Add the successor generated by promoting at y
        end if
    end for
    return S    ▷ Return the set of successors
end procedure

Further, we note that the homotopy polynomial of any graph G with no cycles is h_G = k for some positive integer k. Since removing edges cannot create a cycle, any spanning subgraph lower than G must have a homotopy polynomial h = k + r for some positive integer r. Therefore, any chain with lowest homotopy polynomial equal to some integer can be uniquely extended backwards by simply incrementing the integer by 1 at each stage. Similarly, any connected subgraph has a homotopy polynomial of the form h = x^k for some non-negative integer k. Since adding an edge cannot disconnect the graph, any spanning subgraph higher than this has a homotopy polynomial of the form h = x^{k+r} for some positive integer r. Therefore, any chain with highest homotopy polynomial equal to x^k can be uniquely extended forward by incrementing the exponent k. We use this to extend the chains in Section 7 to maximal chains for comparison in the posets directly constructed in this section.
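Under the same dictionary encoding, a Python rendering of the Succs procedure of Algorithm 1 is straightforward; the sketch below uses our own naming, prunes zero coefficients, and removes the duplicates that arise from merging at (y, z) and (z, y).

```python
def succs(h):
    """Candidate covers of a homotopy polynomial h, given as a dict
    {exponent: coefficient}, following Algorithm 1."""
    out = []
    for y, ay in h.items():
        if ay <= 0:
            continue
        g = dict(h)
        g[y] -= 1                            # remove a component with y loops
        for z, az in list(g.items()):        # merge it with a z-loop component
            if az <= 0:
                continue
            f = dict(g)
            f[z] -= 1
            f[y + z] = f.get(y + z, 0) + 1
            out.append({k: v for k, v in f.items() if v > 0})
        g[y + 1] = g.get(y + 1, 0) + 1       # or promote it: one more loop
        out.append({k: v for k, v in g.items() if v > 0})
    uniq, seen = [], set()
    for p in out:                            # deduplicate by canonical key
        key = tuple(sorted(p.items()))
        if key not in seen:
            seen.add(key)
            uniq.append(p)
    return uniq

# Succs(3) = {2, x + 2}: the integer polynomial 3 is encoded as {0: 3}.
assert succs({0: 3}) == [{0: 2}, {0: 2, 1: 1}]
```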
Because H (N ) possesses a unique minimal element, any element in H (N ) can be obtained through the repeated application of these transformations starting with this minimal element. However, not every element constructed in this way is the image of some subgraph in the original poset under the quotienting operation, so we next present a method for filtering out the constructed covers that cannot be obtained by quotienting a subgraph in the original poset.
We note that we can compute the Euler characteristic of a graph G in two different ways:

L = |V| − |E|  or  L = h_G(1) − h'_G(1).

Since all spanning subgraphs S have the same number of vertices, N, the Euler characteristic of h_S(x) gives the number of edges that must have been present in any spanning subgraph that quotients to h_S(x). Given h_S(x), we can also compute the minimum number of edges present in any simple graph (not necessarily on N vertices) that quotients to h_S(x). We denote this number C(h_S(x)), the edge cost of the homotopy polynomial. A key property of this cost function is linearity:
C(h + g) = C(h) + C(g).    (9)
The addition of graph homotopy polynomials corresponds to the disjoint union of their preimages, and the disjoint union of two graphs has the same number of edges as the sum of the edges in the two separate graphs. We therefore only have to know C(x^k) for all k, with C(h_S(x)) then determined by linearity. Let us consider a few examples for small k, from which the general pattern will become clear. Clearly, C(x^0) = 0, since the graph consisting of a single vertex with no edges has homotopy polynomial x^0 and possesses no edges. Indeed, the cost of any graph with no edges must be zero, hence the cost of any integer is zero. Next, considering k = 1, the minimal simple graph with a single loop is the complete graph on 3 vertices, which has 3 edges, hence C(x) = 3.
As one final example, how many edges are required to construct a connected graph with two loops? If we consider K_3, we have a minimal graph with one loop. To add another loop, we at least have to add another edge; however, because K_3 is a complete graph, it has no room for an additional edge. We therefore need to add an additional vertex first, and then add two additional edges: one to connect the new vertex to K_3, and one to form the additional loop. Therefore C(x^2) = 5, and a more general pattern emerges: if a minimal representative of the graph polynomial x^k is not a complete graph, then C(x^{k+1}) = C(x^k) + 1; if the minimal representative of the graph polynomial x^k is a complete graph, then C(x^{k+1}) = C(x^k) + 2. Denoting C(x^k) as C_k, we obtain the sequence

C_k = 0, 3, 5, 6, 8, 9, ...    (10)
This sequence is the natural numbers with a set of numbers skipped, these skipped numbers B k = 1, 2, 4, 7, ... being one more than the number of edges in the complete graphs K k . To avoid having to compute these skipped numbers and manually extract them from the naturals, we provide the following direct expression for C k :
C_0 = 0,    (11)
C_{k>0} = k + ⌊(3 + √(8k − 7))/2⌋.    (12)
The expression for C_{k>0} is obtained by inverting the expression for the number of edges in a complete graph on k vertices, with some additional modifications to ensure that the numbers one above these complete-graph edge counts are skipped. The general algorithm for constructing H(N), Algorithm 2, is then as follows. Note that this algorithm is self-terminating: once we reach layer N(N−1)/2 + 1, all candidate polynomials will violate the cost constraint and we will be left with no further terms to examine.
Algorithm 2 Direct construction of H(N)

procedure BuildPoset(N)
    A ← {N}    ▷ Polynomials awaiting processing, seeded with the minimal element
    V ← {}, E ← {}
    while A ≠ {} do
        h ← pop A
        if h ∉ V then    ▷ Check to see if we have already processed h
            V ← push h
            m ← dh/dx(1) − h(1) + N    ▷ m is the edge allowance of h
            B ← Succs(h)
            for g in B do
                if C(g) ≤ m + 1 then
                    A ← push g
                    E ← push {h, g}
                end if
            end for
        end if
    end while
    return V, E    ▷ Return the vertices and edges in the face poset of H(N)
end procedure
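Assuming the succs function sketched after Algorithm 1, a minimal Python version of the whole construction might read as follows; it implements the edge cost via the closed form (11)-(12), extended by linearity (Equation 9), and the names are ours.

```python
from collections import deque
from math import isqrt

def C_single(k):
    """C(x^k) via Equations (11)-(12); isqrt gives the floor of the root."""
    return 0 if k == 0 else k + (3 + isqrt(8 * k - 7)) // 2

def cost(h):
    """C(h) by linearity (Equation 9); integer terms (k = 0) cost nothing."""
    return sum(a * C_single(k) for k, a in h.items())

def build_homotopy_poset(N):
    """Direct construction of H(N) following Algorithm 2."""
    key = lambda p: tuple(sorted(p.items()))
    A = deque([{0: N}])                  # seed with the minimal element h = N
    V, E = set(), set()
    while A:
        h = A.popleft()
        if key(h) in V:                  # already processed h
            continue
        V.add(key(h))
        # edge allowance m = dh/dx(1) - h(1) + N
        m = sum(k * a for k, a in h.items()) - sum(h.values()) + N
        for g in succs(h):
            if cost(g) <= m + 1:
                A.append(g)
                E.add((key(h), key(g)))
    return V, E

assert [C_single(k) for k in range(6)] == [0, 3, 5, 6, 8, 9]
V, E = build_homotopy_poset(4)
assert len(V) == 8 and len(E) == 8       # matches the poset of Figure 6
```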
Example 3.1 (H(3)). We compute the graded poset H(3) in two different ways. First, we explicitly compute the quotienting of the poset depicted in Figure 1. Note that the graphs appearing at each layer in this figure are all isomorphic as graphs, so by counting connected components and loops, we easily deduce that the resulting quotiented poset will be represented by the path graph shown in Figure 2. Secondly, we compute this poset directly:
1. Begin with A ← {3}.
2. Pop 3 from A; add 3 to V.
3. m ← 0; B ← Succs(3) = {2, x + 2}.
4. C(2) = 0 ≤ m + 1 and C(x + 2) = 3 > m + 1, so we add 2 to A, add the edge {3, 2} to E, and discard x + 2.
5. Pop 2 from A; add 2 to V.
6. m ← 1; B ← Succs(2) = {1, x + 1}.
7. C(1) = 0 ≤ m + 1 and C(x + 1) = 3 > m + 1, so we add 1 to A, {2, 1} to E, and discard x + 1.
8. Pop 1 from A; add 1 to V.
9. m ← 2; B ← Succs(1) = {x}.
10. C(x) = 3 ≤ m + 1, so we add x to A and {1, x} to E.
11. Pop x from A; add x to V.
12. m ← 3; B ← Succs(x) = {x^2}.
13. C(x^2) = 5 > m + 1, so we discard x^2, and the algorithm terminates.
While in this trivial case computing the quotienting directly is relatively straightforward (the spanning subgraphs at each layer are obviously isomorphic to each other), and perhaps easier than directly computing H(3) using homotopy polynomials, the advantage of homotopy polynomials is quickly seen for even slightly larger graphs.
Example 3.2 (H(5)). We directly construct the quotiented poset for K_5 by the above algorithm.

1. We begin with A = {5}.
2. As in the previous example, no promoted polynomial satisfies the edge cost criterion until layer 3, so we pick up the algorithm at that state with V = {5, 4, 3}, E = {{5, 4}, {4, 3}, {3, 2}, {3, x + 2}}, and A = {2, x + 2}.
3. Succs(2) = {1, x + 1}, both of which are retained, so we have V = {5, 4, 3, 2}, add edges {2, 1} and {2, x + 1}, and A = {x + 2, 1, x + 1}.
4. Succs(x + 2) = {x + 1, 2x + 1, x^2 + 2}, but only x + 1 is retained, so we have V = {5, 4, 3, 2, x + 2}, add edge {x + 2, x + 1}, and A = {1, x + 1}.
5. Succs(1) = {x}, which is retained, so we have V = {5, 4, 3, 2, x + 2, 1}, add edge {1, x}, and A = {x + 1, x}.
6. Succs(x + 1) = {x, 2x, x^2 + 1}, of which only x and x^2 + 1 are retained, so we have V = {5, 4, 3, 2, x + 2, 1, x + 1}, add edges {x + 1, x} and {x + 1, x^2 + 1}, and A = {x, x^2 + 1}.
7. Succs(x) = {x^2}, which is retained, so we have V = {5, 4, 3, 2, x + 2, 1, x + 1, x}, add edge {x, x^2}, and A = {x^2 + 1, x^2}.
8. Succs(x^2 + 1) = {x^2, x^3 + 1, x^2 + x}, but only x^2 and x^3 + 1 are retained, so we have V = {5, 4, 3, 2, x + 2, 1, x + 1, x, x^2 + 1}, add edges {x^2 + 1, x^2} and {x^2 + 1, x^3 + 1}, and A = {x^2, x^3 + 1}.
9. Succs(x^2) = {x^3} and Succs(x^3 + 1) = {x^3, x^3 + x, x^4}, but only x^3 is retained from either of these, so after two passes through the while loop we add edges {x^2, x^3} and {x^3 + 1, x^3}, and A = {x^3}.
10. Taking successors causes the exponent in x^3 simply to increment one by one until we reach x^7, which violates the edge cost constraint, and the algorithm terminates.

There are 2^10 spanning subgraphs of K_5; these ten steps are much simpler than explicitly computing and comparing 1024 subgraphs, deriving their homotopy polynomials, and grouping the results according to all the covering relations induced by subgraph inclusion.
Complexity Bounds and Connections to Number Theory
A natural question with any new algorithm is its time complexity. We do not answer the time complexity of Algorithm 2 directly, but instead examine the poset H(N) as N grows. In this section, we establish rough bounds for the size of this poset by asking an easier question: how many elements are in H(N) at Euler characteristic L = N − M? Equivalently: how many graphs are there, up to homotopy equivalence, with M edges and Euler characteristic L? We call the quantity answering this question H(M, L). Let Pol(M, L) denote the set of polynomials with nonnegative integer coefficients, cost less than or equal to M, and Euler characteristic equal to L. Asking for |Pol(M, L)| is equivalent to asking for H(M, L), with one notable exception. The number of graphs with M edges and Euler characteristic L, up to homotopy, is exactly the number of elements of Pol(M, L) that can be attained by the edge contraction construction from a graph with M edges and L + M vertices; for most values of M and L, every element of Pol(M, L) can be attained by this construction. However, Pol(M, 0) contains the zero polynomial, which for M > 0 cannot be obtained as the homotopy polynomial of an actual graph, since such a graph has N = M + L > 0 vertices, while the zero polynomial is the homotopy polynomial of the empty graph. Hence in this case H(M, L) = |Pol(M, L)| − 1.

Hence we may study H(M, L) via the size of the set Pol(M, L). The following notion will be extremely useful to us.

Definition 4.1. Let S ⊆ Z^d and n ∈ Z^d, and let N := {0, 1, 2, 3, ...}. The set of generalized partitions of n into elements of S, denoted Par_S(n), is the set of all functions f : S → N that are zero with finitely many exceptions and satisfy Σ_{s∈S} s f(s) = n.
For convenience, we also define p S (n) := |Par S (n)| ∈ N ∪ {∞}. When d = 1 we allow ourselves to omit the word generalized, calling Par S (n) the partitions of n into elements of S.
The most famous example of this notion comes not from topology, but number theory. Let N ≥1 := {1, 2, 3, . . .}. The number p N ≥1 (n), often denoted simply by p(n), is called the number of partitions of n, the number of ways n can be written as a sum of positive natural numbers, up to commutativity.
For our purposes we will need to study more complicated kinds of generalized partitions. Let

Q := {(C_k, 1 − k)}_{k∈N}.    (15)
Hence Q is the set of paired costs and Euler characteristics of x^k, for k varying across the nonnegative integers.
There is a bijection between the polynomials with Euler characteristic L and cost exactly M and Par_Q(M, L), namely

Σ_{k=0}^{∞} a_k x^k  ⟼  f : (C_k, 1 − k) ↦ a_k.    (16)
Of course, what we are actually interested in is the set Pol(M, L), the polynomials with Euler characteristic L and cost less than or equal to M. With a slight modification of Q we can relate Pol(M, L) to a set of generalized partitions. Let

R := Q ∪ {(1, 0)}.    (17)
There is a bijection between Pol(M, L) and Par_R(M, L) given by

Σ_{k=0}^{∞} a_k x^k  ⟼  f : (C_k, 1 − k) ↦ a_k, (1, 0) ↦ M − Σ_{k=0}^{∞} C_k a_k.    (18)
Hence |Pol(M, L)| = p_R(M, L). Using this in the relation between H(M, L) and |Pol(M, L)| established above, we have

H(M, L) = p_R(M, L) − 1   if L = 0 and M > 0,
H(M, L) = p_R(M, L)       otherwise.    (19)
We now deduce properties of p_R(M, L) in order to do the same for H(M, L). We have an injection from Par_R(M, L) to Par_R(M + 1, L), given by f ↦ f̃, where f̃ = f except on (1, 0), where f̃(1, 0) = f(1, 0) + 1. Similarly, we have an injection from Par_R(M, L) to Par_R(M, L + 1) given by f ↦ f̃, where f̃ = f except on (0, 1), where f̃(0, 1) = f(0, 1) + 1. This shows:

Proposition 1. p_R(M, L) is weakly increasing in both coordinates.

From this we can deduce the following about H(M, L).

Corollary 1. H(M, L) is weakly increasing in L. H(M, L) is also weakly increasing in M, with the exception that H(0, 0) = 1, while H(1, 0) = H(2, 0) = 0.

Proof. For the first statement, by Equation 19 and the previous proposition, we only need to check that H(M, −1) ≤ H(M, 0) when M > 0, or equivalently, that p_R(M, −1) < p_R(M, 0) when M > 0. This is true since the map f ↦ f̃ from Par_R(M, −1) to Par_R(M, 0) is not a surjection. Indeed, the function g ∈ Par_R(M, 0) which sends (1, 0) to M and everything else to zero cannot be in the image of this map. For the second statement, we always have H(M, L) ≤ H(M + 1, L) by Equation 19 and the previous proposition unless M = L = 0. Here, direct computation shows p_R(0, 0) = p_R(1, 0) = p_R(2, 0) = 1, but p_R(3, 0) = 2. Applying Equation 19 gives the exception.
Next, we want to show that H(M, L) stabilises as a function of L when L is large enough. Intuitively, once we have more vertices than edges, adding more vertices cannot increase the number of permissible loops in any connected component. Instead, each homotopy equivalence class will merely have one more connected component for each vertex we added. As usual we first prove an analogous result for p R .
Proposition 2. p_R(M, L_1) = p_R(M, L_2) whenever L_1, L_2 ≥ 0.

Proof. It suffices to show that the map f ↦ f̃ from Par_R(M, L) to Par_R(M, L + 1) is a surjection, and hence a bijection, when L ≥ 0. If the map f ↦ f̃ is not surjective, that means there is a map g ∈ Par_R(M, L + 1) such that g(0, 1) = 0. Letting W = R − {(0, 1)}, we see

Σ_{s∈W} s g(s) = Σ_{s∈R} s g(s) = (M, L + 1).

On the left we have a sum of elements of Z^2 with nonpositive second coordinate. On the right we have an element of Z^2 with positive second coordinate. This is a contradiction, and we are done.
Immediate is the following result about H(M, L).

Corollary 2. H(M, L_1) = H(M, L_2) whenever L_1, L_2 > 0.

In light of this result, it makes sense to define H(M), the limit of H(M, L) as L approaches infinity. The fact that H(M) depends on only one variable suggests that we might be able to write H(M) as the number of partitions of M into elements of some subset of N_{≥1}. To this end, we define the set

T := {1} ∪ {C_k}_{k∈N_{≥1}} = {1, 3, 5, 6, 8, 9, ...}.    (21)

It turns out H(M) is the number of partitions of M into elements of T:

Proposition 3. H(M) = p_T(M).

Proof. By the previous corollary it suffices to show that H(M, 1) = p_T(M). Then by Equation 19 and the last proposition it suffices to show that p_R(M, 0) = p_T(M). We have a map from Par_R(M, 0) to Par_T(M):

f : (C_k, 1 − k) ↦ a_k, (1, 0) ↦ b  ⟼  g : 1 ↦ b, C_k ↦ a_k for k ≥ 1,    (22)

and this map has inverse

g : 1 ↦ b, C_k ↦ a_k for k ≥ 1  ⟼  f : (1, 0) ↦ b, (C_k, 1 − k) ↦ a_k for k ≥ 1, (0, 1) ↦ −Σ_{k=1}^{∞} a_k (1 − k),    (23)

completing the proof.
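Proposition 3 makes H(M) directly computable. A short Python sketch (our own naming) counts the partitions of M into elements of T with the standard coin-change dynamic program; the numbers skipped from T, namely 2, 4, 7, 11, ..., are the values k(k + 1)/2 + 1 for k ≥ 1.

```python
def p_T(M):
    """Number of partitions of M into elements of T = {1, 3, 5, 6, 8, 9, ...},
    which equals H(M) by Proposition 3."""
    skipped, k = set(), 1
    while k * (k + 1) // 2 + 1 <= M:
        skipped.add(k * (k + 1) // 2 + 1)    # 2, 4, 7, 11, ...
        k += 1
    parts = [t for t in range(1, M + 1) if t not in skipped]
    count = [1] + [0] * M                    # count[n] = partitions of n so far
    for t in parts:
        for n in range(t, M + 1):
            count[n] += count[n - t]
    return count[M]

# e.g. H(5) = 3: a forest, a forest plus one triangle, or one two-loop component.
assert [p_T(n) for n in range(1, 6)] == [1, 1, 2, 2, 3]
```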
At this point we make a definition that will be useful momentarily.
Definition 4.2. Let A ⊆ N_{≥1}, and let A_{≤n} denote the subset of A of numbers less than or equal to n. The natural density of A is the limit

lim_{n→∞} |A_{≤n}| / n,    (24)

should it exist.
This notion is helpful since [24,Theorem 4] gives us the following result.
Theorem 1. Let A ⊆ N_{≥1} have natural density α > 0 and satisfy gcd(A) = 1. Then

log p_A(n) ∼ π √(2αn/3).    (25)
Using this result, we have

log p_T(n) ∼ π √(2n/3),    (26)
since, letting Y = {n(n+1)/2 + 1 : n ∈ N}, we have

|T_{≤n}| = n − |Y_{≤n}| + 1 ≥ n − 2√n + 1,    (27)
so T has natural density one. Therefore we have the asymptotic scaling

log H(M, L) ≤ log H(M) ∼ π √(2M/3).    (28)
Now we return to the original problem of obtaining rough bounds on the size of the poset H(N). The size of the poset is the sum of the number of elements in each grading, giving

H(N) = Σ_{M=0}^{N(N−1)/2} H(M, N − M).    (29)
We use Equation 29 to construct crude bounds on the size of the poset. Bounding this expression below, we have

Σ_{M=0}^{N(N−1)/2} H(M, N − M) ≥ H(N, 0) = H(N) − 1.    (30)
Bounding this expression above, we have

Σ_{M=0}^{N(N−1)/2} H(M, N − M) ≤ Σ_{M=0}^{N(N−1)/2} H(M) ≤ (N(N−1)/2) · H(N(N−1)/2) ≤ (N^2/2) · H(N^2/2).    (31)
These calculations show that the size of the poset H(N) grows quickly in N:

Theorem 2. There exist functions φ, ψ : N → R such that

π √(2N/3) ∼ φ(N) ≤ log H(N) ≤ ψ(N) ∼ (π/√3) N.    (32)
Chain Comparisons
We have established the space of possible spanning subgraphs, partially ordered by inclusion, graded by Euler characteristic, and quotiented by homotopy equivalence. We now must construct a metric to compare chains in these posets, analogous to our comparison in Section 2 using Kendall's d_K. To do this, we take inspiration from the unquotiented poset and generalise to the quotiented poset. After quotienting by homotopy equivalence, the underlying symmetric group structure is destroyed: the unquotiented posets are symmetric with respect to reversing the order relation (with this transformation identified with replacing each element by its complement), but the quotiented poset is not. Therefore, we seek to generalise Kendall's d_K to chains in more general graded posets. We want to examine the impact of an adjacent transposition in Ξ on its associated edge filtration, which we denote F(Ξ). Suppose Ξ = {e_1, e_2, ..., e_k, e_{k+1}, ..., e_{M−1}, e_M}, and let σ be the transposition of elements k and k + 1. Examining F(Ξ) and F(σ(Ξ)), we see that for i < k, S_i(Ξ) = S_i(σ(Ξ)), since Ξ and σ(Ξ) only differ at positions k and k + 1. At position k, S_k(Ξ) contains e_k, while S_k(σ(Ξ)) contains e_{k+1}, hence these two chains differ at position k. However, at position k + 1, both of these chains contain the elements e_k and e_{k+1}, and hence are equal at position k + 1 and all following positions. Hence an adjacent transposition in Ξ results in a change in F at exactly one position. This motivates the definition of a discrete homotopy, through adjacent chains.

Definition 5.1 (Adjacent Chains). Two maximal chains F_1 and F_2 are adjacent if they differ in exactly one position.

Definition 5.2 (Discrete Homotopy). Let {F_i}_{i=0}^{d} be a sequence of maximal chains in a graded poset with the property that F_k and F_{k+1} are adjacent for all k ∈ {0, ..., d − 1}. The sequence {F_k} is a discrete homotopy of length d.

Note that because the posets we are working in possess unique lowest and highest elements, all the discrete homotopies we will encounter can be considered relative discrete homotopies, i.e. those with fixed endpoints. With this, we can generalise Kendall's d_K metric:

Definition 5.3 (Discrete Homotopy Metric). Let F_I and F_F be maximal chains in a graded poset. The discrete homotopy distance d_H(F_I, F_F) between F_I and F_F is the length of the shortest discrete homotopy {F_i}_{i=0}^{d} with F_0 = F_I and F_d = F_F.

This clearly reduces to Kendall's d_K when the graded poset is the power set of some set partially ordered by inclusion; however, the definition applies to more general graded posets. This metric can be formulated as the standard metric on a graph whose vertices are maximal chains and whose edges connect maximal chains differing at exactly one position. The metric is clearly bounded from below by 0; more sharply, for given F_I and F_F, it is bounded from below by the number of positions at which F_I differs from F_F.

Example 5.1 (Complete Construction for K_4). In order to fully convey the entire analysis pipeline, we examine the smallest complete graph that generates a nontrivial filtration space after quotienting, i.e. the smallest complete graph generating a poset with more than one maximal chain after quotienting by homotopy equivalence: K_4.
We begin with the complete graph on 4 vertices, with vertices and edges labelled as in Figure 4. Suppose we have two edge filtrations F 1 and F 2 , given by the respective sequences Ξ 1 = {e 1 , e 2 , e 3 , e 4 , e 5 , e 6 } and Ξ 2 = {e 6 , e 5 , e 4 , e 3 , e 2 , e 1 }. We obtain the spanning subgraph poset depicted in Figure 5, where the filtration F 1 is coloured red, and F 2 is coloured blue.
If we want to compare these two filtrations, we can use Kendall's d_K: d_K(Ξ_1, Ξ_2) = 15, and as shown by these paths in Figure 5, the two chains are as far apart as possible when measured according to Kendall's d_K. This diagram, however, is quite large even for the small graph K_4, so we quotient it by homotopy equivalence. Applying Algorithm 2, we obtain the poset depicted in Figure 6, which is significantly smaller than the unquotiented poset. Examining the two filtrations, we obtain the two sequences of homotopy polynomials F_1/∼ = {4, 3, 2, 1, x, x^2, x^3} and F_2/∼ = {4, 3, 2, x + 1, x, x^2, x^3}. These two chains are adjacent (differing only at position 3), hence they have discrete homotopy distance 1, and they are precisely the 2 distinct maximal chains in Figure 6.
Sufficient Conditions for Bounded d H
For the unquotiented poset, we clearly have a finite discrete homotopy distance between arbitrary maximal chains, since the symmetric group on the edge set is a finite group generated by adjacent transpositions. Discrete homotopy distances then correspond to d_K, taking a maximum value of M(M−1)/2 for a total order inversion. Therefore, the images of any two actual chains under quotienting will also have a finite discrete homotopy distance between them: we can take the discrete homotopy connecting them in the unquotiented poset and map that sequence under the quotient map to obtain a sequence of chains in the quotiented poset. This sequence is not necessarily a discrete homotopy in the quotiented poset, since chains that initially differ may get mapped to the same quotiented chain, but the number of differences between successive chains cannot increase. Therefore, successive elements in the quotiented sequence of chains will differ in at most 1 position, and removing duplicated chains yields a discrete homotopy in the quotiented space. Hence any two actual chains in SS(K_N) have a finite discrete homotopy connecting their images in H(N), bounded from above by d_K applied to the unquotiented chains. However, not all chains in a general quotiented poset are the images of chains in the corresponding unquotiented poset. We seek to define a more general structure that is sufficient for guaranteeing the existence of finite discrete homotopies between arbitrary maximal chains, since we may be interested in comparing general chains in quotiented graded posets.
To prove that this metric is finite for a more general graded poset, we have to compute an upper bound for arbitrary maximal chains. We introduce a condition which is sufficient for the existence of arbitrary discrete homotopies.

Definition 6.1 (Left-Covering Condition). Let P be a finite graded poset such that for every z_1, z_2 ∈ P satisfying z_0 ⋖ z_1 and z_0 ⋖ z_2 for some z_0 ∈ P, there exists z_3 satisfying z_1 ⋖ z_3 and z_2 ⋖ z_3. Such a P satisfies the left-covering condition.

This condition is expressed in the following diagram, where the existence of z_0, z_1, z_2, and the solid arrows implies the existence of z_3 and the dashed arrows:
[Diagram: solid arrows z_0 ⋖ z_1 and z_0 ⋖ z_2; dashed arrows z_1 ⋖ z_3 and z_2 ⋖ z_3.]
The right-covering condition is obtained by reversing the order relationship in the above definition, and is similarly expressed by the following diagram:
[Diagram: solid arrows z_1 ⋖ z_0 and z_2 ⋖ z_0; dashed arrows z_3 ⋖ z_1 and z_3 ⋖ z_2.]
We have the following useful result:
Theorem 3. If a poset satisfies the left-covering condition, and has a unique lowest element, then arbitrary maximal chains can be connected by a finite discrete homotopy.
Proof. Let F I and F F be the two arbitrary maximal chains in a poset satisfying the left-covering condition that we want to connect through a discrete homotopy. We construct a recursive algorithm for building a discrete homotopy between these chains.
Because F_I and F_F are maximal, they are identical at position 0, with their value at position 0 being the unique lowest element. Next, suppose we have two chains F_1 and F_2, identical up to position k, that we want to connect with a discrete homotopy. We construct two adjacent intermediate chains F_{M1} and F_{M2}, both identical with the two chains up to position k, with F_{M1} matching F_1 up to position k + 1 and F_{M2} matching F_2 up to position k + 1.
The common elements of F_{M1} and F_{M2} up to and including position k are determined by the common elements of F_1 and F_2. The value of F_{M1} at position k + 1, z_1, is the value of F_1 at k + 1, and the value of F_{M2} at position k + 1, z_2, is the value of F_2 at k + 1.
All that remains is to extend F_{M1} and F_{M2} into maximal chains, and because F_{M1} and F_{M2} are adjacent, differing only at position k + 1, these extensions must be identical. Because the poset satisfies the left-covering condition, there exists z_3 at position k + 2 that covers both z_1 and z_2, since z_1 and z_2 both cover the element at position k in all four of the chains considered here; hence we require F_{M1} and F_{M2} to both take the value z_3 at position k + 2.
If z 3 is contained in either F 1 or F 2 , then this extension can be taken to be simply the remainder of this chain. Otherwise, since the distinct values of F 1 at position k + 2 and z 3 both cover z 1 (similarly the value of F 2 and z 3 both cover z 2 ), there must exist some other element that covers z 3 by the left-covering condition; we use this element to extend F M1 and F M2 . This same argument shows that there must be an element we can use to extend these chains at position k + 3, and so on, until we reach the end of the poset, and hence have the maximal chains we seek.
Assume that there exist discrete homotopies {F_j}_1 between F_1 and F_{M1}, and {F_j}_2 between F_{M2} and F_2. A discrete homotopy between F_1 and F_2 can then be obtained by concatenating these discrete homotopies, since {F_j}_1 ends with the chain F_{M1}, {F_j}_2 begins with the chain F_{M2}, and the chains F_{M1} and F_{M2} are adjacent.
We have therefore shown that if maximal chains in a poset satisfying the left-covering condition that differ only in the last k positions can be connected with a discrete homotopy, then maximal chains differing only in the last k + 1 positions can be connected too. Obviously, maximal chains differing in the last 0 positions (i.e. identical chains) are connected by a discrete homotopy of length 0, so we can recursively connect chains differing in all but position 0. Therefore, we can recursively construct a discrete homotopy between F_I and F_F. Dually, arbitrary maximal chains in posets satisfying the right-covering condition with unique highest elements can be connected with finite discrete homotopies; the proof is obtained by dualising the above proof. The quotiented posets we are working with satisfy both the left- and right-covering conditions, with unique lowest and highest elements, hence both of these constructions can be used to establish upper bounds on the discrete homotopy metric between two arbitrary maximal chains we encounter.
This proof does not construct the optimal discrete homotopy, though it does bound its length from above. The exact computation of d H (F 1 , F 2 ) could be accomplished through a bidirectional search beginning from F 1 and F 2 , and moving alternatively through adjacent chains until a discrete homotopy connecting F 1 and F 2 is found. By construction this discrete homotopy will have minimal length.
In the many query setting, i.e. when we want to compute many distances between maximal chains in the same H (N ), it may be more efficient to construct all maximal chains, taking this set as the vertex set of a graph. The edges of this graph can then be taken to be the set of pairs of adjacent chains. This graph must only be constructed once; the computation of the discrete homotopy metric then becomes the problem of finding a shortest path through this unweighted graph.
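A minimal Python sketch of this shortest-path computation is given below; it assumes the full list of maximal chains has already been enumerated, with each chain stored as a hashable tuple of poset elements (the naming is ours, and a breadth-first search stands in for the unweighted shortest-path computation).

```python
from collections import deque

def discrete_homotopy_distance(chains, F1, F2):
    """d_H(F1, F2) as a shortest path in the graph whose vertices are the
    maximal chains in `chains` and whose edges join adjacent chains."""
    def adjacent(a, b):
        # maximal chains in a graded poset have equal length; they are
        # adjacent iff they differ at exactly one position
        return sum(x != y for x, y in zip(a, b)) == 1

    dist, queue = {F1: 0}, deque([F1])
    while queue:
        f = queue.popleft()
        if f == F2:
            return dist[f]
        for g in chains:
            if g not in dist and adjacent(f, g):
                dist[g] = dist[f] + 1
                queue.append(g)
    return None   # F1 and F2 lie in different components of the chain graph
```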
Application to Alzheimer's disease
Alzheimer's disease (AD) is the main progressive form of dementia and is characterized by the accumulation of misfolded toxic tau proteins (τP), which are related to a number of dysfunctions, such as hypometabolism, inflammation and deficiencies in the axonal transport of necessary proteins, implicated in the death of axons. Neurofibrillary tangles, which are tangled masses of misfolded τP deposited in neuronal cell bodies or their proximal dendrites, follow a structured progression pattern in AD, often referred to as Braak stages [27]. However, τP deposition presents variations in patterns among individual patients, and these variations form groupings referred to as AD subtypes [23, 30]. It has been postulated [30] that differences in τP epicenter locations, the initial region of τP pathology, might account for different subtypes of AD. Each subtype corresponds to a different pattern of damage, hence a change in the structure of the brain connectome. Here, we ask the simple but fundamental questions: are these different AD subtypes topologically different? Can we find a way, a signature, to distinguish them? Within our framework, we translate these questions to ask whether mathematical models of brain network neurodegeneration can be used to produce distinct poset chains that reflect relevant characteristics of τP progression in AD.
Subtype name   Cortical seeding location
Limbic         Entorhinal cortex
MTL sparing    Middle temporal gyrus
Posterior      Fusiform gyrus
Temporal       Inferior temporal gyrus

Table 1: Alzheimer's disease subtypes of [30] and their associated cortical τP seeding locations.

7.1 Generating filtrations from graph neurodegeneration modelling
The structural connectome
First, we describe the basics of brain graphs. Here, we will use the so-called structural connectomes. Mathematically, a connectome is a connected undirected weighted graph G = (V, E) whose set of vertices V describes the regions of the brain and the set of edges E represents axonal connections between these regions. Connectomes are obtained from diffusion tensor imaging and many different sets of connectomes are publicly available. Here, we will use connectomes provided by the Human Connectome Project [9,18], which provides connectomes of size ranging from |V | = N = 83 to N = 1015 vertices, as shown in Fig. 7. Different weights can be considered depending on the application. Since we are interested in transport along edges, it is natural to use diffusive weights that define the adjacency matrix:
ω_ij = n_ij / ℓ_ij²,  i, j = 1, ..., N,    (33)
where n_ij and ℓ_ij are, respectively, the number of axonal bundles and the length of the connection between vertices i and j. Since these connectomes are too large for our study, we further coarse-grain them to graphs of size N = 4 to N = 18 by regrouping the nodes into larger regions. We assume that the epicenters of Table 1, corresponding to the various subtypes, are bilateral. Thus, we first consider the subgraph G_H = (V_H, E_H) of G consisting of only those vertices in the left hemisphere of G. As τP pathology develops on a time scale of a decade or more, we also restrict our attention to early tau pathology near the entorhinal cortex (limbic subtype, ṽ_1), middle temporal gyrus (MTL sparing subtype, ṽ_2), fusiform gyrus (posterior subtype, ṽ_3) and inferior temporal gyrus (temporal subtype, ṽ_4). A nested set of computationally tractable small networks G̃_1 ⊂ G̃_2 ⊂ ... ⊂ G̃_6 was extracted from G_H with N = 4, 6, 8, 12, 15 and 18. The extraction was carried out by: starting with the vertex set {ṽ_1, ṽ_2, ṽ_3, ṽ_4} corresponding to the left entorhinal cortex, left middle temporal gyrus, left fusiform gyrus and left inferior temporal gyrus; considering the neighbours of the initial vertices and selecting, for each vertex, those neighbours whose edge weights w_ij lie in the top 5% of all vertex neighbours; and repeating the previous step until a vertex threshold M_j was reached, resulting in a set of vertices Ṽ_j ⊂ V_H. Finally, G̃_j was constructed as G̃_j = (Ṽ_j, Ẽ_j), where Ẽ_j consists of all edges from E_H connecting the vertices of Ṽ_j. This method of subgraph construction ensures that G̃_i ⊂ G̃_j whenever M_i < M_j. A sketch of this greedy extraction is given below.
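The following is a minimal Python sketch of this greedy extraction; the function name, the top_frac parameter, and the tie-breaking between equally weighted neighbours are our assumptions rather than prescriptions of the construction above.

```python
import numpy as np

def extract_subgraph(W, seeds, max_vertices, top_frac=0.05):
    """Grow a vertex set from `seeds` by repeatedly adding the outside
    neighbours whose edge weights lie in the top fraction of all candidate
    neighbour weights, until `max_vertices` vertices are selected."""
    selected = set(seeds)
    N = W.shape[0]
    while len(selected) < max_vertices:
        candidates = [(W[i, j], j) for i in selected for j in range(N)
                      if j not in selected and W[i, j] > 0]
        if not candidates:
            break                          # no further neighbours to add
        candidates.sort(reverse=True)      # strongest connections first
        cutoff = max(1, int(top_frac * len(candidates)))
        for _, j in candidates[:cutoff]:
            if len(selected) < max_vertices:
                selected.add(j)
    return sorted(selected)
```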
Disease progression, damage, and filtration
We now consider the disease dynamics on the connectome. Models of neurodegeneration on graphs have been extensively studied, cf. [25, 27] and the citations therein. Here, we consider a network progression model, first defined in [11], of toxic τP in AD along with associated axonal changes, due to the deleterious effects of τP on the neurons located at the vertices, using the connectome shown in Figure 7. Explicitly, let c_i be the concentration of a toxic protein at vertex i of G. The transport on the graph is determined by the weighted graph Laplacian with components

L_ij = −w_ij + δ_ij Σ_{k=1}^{N} w_ik,  i, j ∈ {1, ..., N},    (34)

(with δ_ii = 1 and δ_ij = 0 for i ≠ j), built from the symmetric weighted adjacency matrix (w_ij). Note that since the edges undergo changes in their properties, such as neuronal death and axonal retraction due to the effects of toxic proteins on neuron bodies, the edge weights, which determine connection strength, also evolve in time: w_ij = w_ij(t). Initially, the weights are given by the connectome data of healthy subjects, as explained in the previous section; hence w_ij(0) = ω_ij. We assume that the evolution of the connection strength w_ij depends on the extent of the neuronal cell body damage in the gray matter brain regions represented by v_i and v_j. The local damage in gray matter region v_i, due to the effect of toxic proteins on synapses, plasticity, and eventual cell death, is modeled by the variable q_i ∈ [0, 1] (0 healthy, 1 maximal damage) and follows a simple first-order damage law. Together, the model of [11] reads:
[Figure 9 panels: (a) t = 0; (b) t = t_1 > 0; (c) t = t_2 > t_1; (d) t = t_3 > t_2; (e) t = t_4 > t_3.]

ċ_i = ρ Σ_{j=1}^{N} (w_ij − δ_ij Σ_{k=1}^{N} w_ik) c_j + α c_i (1 − c_i),  i = 1, ..., N,    (35a)
q̇_i = β c_i (1 − q_i),  i = 1, ..., N,    (35b)
ẇ_ij = −γ w_ij (q_i + q_j),  i, j = 1, ..., N,    (35c)
where α, β, γ, ρ are parameters. For the initial conditions we have

c_i(0) = ε δ_is,  i = 1, ..., N,    (36a)
q_i(0) = 0,  i = 1, ..., N,    (36b)
w_ij(0) = ω_ij,  i, j = 1, ..., N,    (36c)

where s ∈ {1, ..., N} is the seeding site and 0 < ε ≪ 1. In the context of network neurodegeneration models, we are interested in the order in which the weights associated with different graph edges diminish, e.g. fall below a prescribed threshold, and thus provide the means to generate a filtration. We choose the parameters of (35)-(36) to be in the range, provided by inference models using patient data [27], that gives a typical disease time scale of 30 years. In practice we use the values α = 3/4, β = 1/4, γ = 1/8, ρ = 1/100, ε = 1/20. These values give the correct staging of the disease, as shown in [25]. From (35), it can be shown that all edge weights converge asymptotically to zero. Late in the disease, around time 20-25 years, the relative values of the edge weights are ordered, so that if w_ij(t) > w_i'j'(t), then w_ij(t') > w_i'j'(t') for all t' > t > 20 and all edges. Therefore, we can sample the edges at time t = 20 and order them by increasing weight to provide a filtration. This sampling process is similar to the illustrative filtration construction depicted in Figure 9: proteins (green) diffuse from an epicenter, at t = 0, to connected vertices on the graph (Figure 9, first and third rows); then the weights w_ij decrease, reaching some prescribed threshold, and the corresponding edges, marked in the first and third rows in light gray, are subsequently added to the filtration in the second and fourth rows. Thus, for each choice of epicenter location s, we obtain a sequence of edges, representing a brain graph filtration, given by:
Ξ = {e_{i_1 j_1}, e_{i_2 j_2}, ...} = {e_1, e_2, ...}.    (37)

Figure 9 illustrates that different epicenters, depicted in the first and third rows, yield different progression patterns which, in turn, produce different filtrations, as seen in the second and fourth rows respectively.
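As an illustration, a minimal Python sketch integrating (35)-(36) with SciPy and reading off the filtration at t = 20 might look as follows. W0 is assumed to be the symmetric matrix (ω_ij); the parameter defaults are the values quoted above, while the function name and solver tolerances are our own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def brain_graph_filtration(W0, seed, alpha=0.75, beta=0.25, gamma=0.125,
                           rho=0.01, eps=0.05, t_sample=20.0):
    """Integrate the network neurodegeneration model (35)-(36) seeded at
    vertex `seed`, and return the edges ordered by increasing weight at
    t_sample; the most degraded edges come first in the filtration."""
    N = W0.shape[0]

    def rhs(t, y):
        c, q = y[:N], y[N:2 * N]
        w = y[2 * N:].reshape(N, N)
        L = np.diag(w.sum(axis=1)) - w              # weighted Laplacian (34)
        dc = -rho * L @ c + alpha * c * (1 - c)     # transport + growth (35a)
        dq = beta * c * (1 - q)                     # first-order damage (35b)
        dw = -gamma * w * (q[:, None] + q[None, :]) # edge deterioration (35c)
        return np.concatenate([dc, dq, dw.ravel()])

    c0 = np.zeros(N)
    c0[seed] = eps                                  # initial seeding (36a)
    y0 = np.concatenate([c0, np.zeros(N), W0.ravel()])
    sol = solve_ivp(rhs, (0.0, t_sample), y0, rtol=1e-8, atol=1e-10)
    w_end = sol.y[2 * N:, -1].reshape(N, N)
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W0[i, j] > 0]
    return sorted(edges, key=lambda e: w_end[e])
```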
A small-t neurodegeneration model of AD subtypes
An interesting test problem for investigating the utility of brain chains stems from a recent hypothesis for AD subtypes and the use of brain network neurodegeneration modeling. AD subtypes can be defined via patterns of tau pathology and brain atrophy; brain atrophy is strongly correlated with τ P pathology [27], justifying a primary focus on τ P in AD subtyping. One of the largest-cohort PET imaging studies of AD subtypes, to date, advanced the hypothesis that AD subtypes may arise due to differences in epicenter location [30]; the subtypes discussed in [30] are enumerated in Table 1.
Results: H (N ) distinguishes three AD brain chains on subgraphs
In this section, we show that simulated AD subtypes produce different chains for sufficiently large subgraphs. For graphs with N ≥ 15 we can identify three distinct brain chains arising from the AD subtypes of Table 1. As described above, filtrations were constructed on networks of each size. We used the alternate epicenter hypothesis [30] of AD subtypes to discern whether chains in the space of graph homotopy polynomials show any differences between AD subtypes. The limbic subtype was postulated to be characterized by an entorhinal cortex epicenter, the MTL sparing subtype by a middle temporal gyrus epicenter, the posterior subtype by a fusiform gyrus epicenter, and the temporal subtype by an epicenter in the inferior temporal gyrus [30]. Four simulations were therefore carried out for each of the subgraphs shown in Figure 8. The first simulation (limbic subtype) used an initial value of c_EC(0) = ε, with c_i(0) = 0 elsewhere, while the other subtype simulations proceeded similarly, with initial values prescribed at the vertex corresponding to their respective epicenter. The results of the AD subtype simulations for each subgraph are shown in Figure 10 for the limbic and MTL sparing subtypes and in Figure 11 for the posterior and temporal subtypes. In each case, the chain generated by the toxic protein progression filtration is shown in red, while the underlying poset diagram for the homotopy polynomials is shown in blue. It is clear that for the subgraphs with N = 4 and N = 8, the chains are identical for both the limbic and MTL sparing subtypes. The brain chains for the posterior and temporal subtypes, for N = 4 and N = 6, were identical to Fig 10a and Fig 10c, respectively. This is due to the sparsity of these subgraphs, as there is a lack of edges to permit sufficient variation in the chain on subgraphs of this size.
As the number of subgraph vertices increases, from N = 8, to N = 12 and N = 15, we see a clear pattern emerging in Figs. 10e-10j that distinguishes the limbic subtype from the MTL sparing subtype. Moreover, comparing Fig 10i and Fig 10j to Fig 11e and Fig 11f, we see that the N = 15 subgraph differentiates three distinct subtype brain chains. The first chain corresponds to the limbic subtype (Fig 10i), the second chain corresponds to the MTL sparing subtype (Fig 10j) and the third chain corresponds to both the posterior and temporal subtypes (Fig 11e and Fig 11f).
At the highest resolution of N = 18 (figures not shown), all of the brain chains follow identical, early downward trajectories through H(N). Following this initial descent, the limbic (Fig 10i) and MTL sparing (Fig 10j) brain chains both show unique upticks through H(N), while the posterior and temporal brain chains (Fig 11e and Fig 11f) continue to trend downward through H(N) in an identical fashion. A downward trend through H(N) reflects a filtration that favours branching out towards unique neighbours, at the expense of loop formation, whereas an upward trend reflects the promotion of loops in the progression filtration. A closer inspection shows that the limbic chain (Fig 10i) is generated by a progression filtration that connects nearby nodes until there are three disconnected components in the spanning subgraph. Loop formation is then heavily prioritized in one of the components until it joins with a second, loopless, component, and loops continue to form until it is finally joined with the final component. The limbic subtype corresponds to the chain that forms loops most aggressively, and this can be seen in the two early 'spikes' of Fig 10i. The MTL sparing subtype chain (Fig 10j) connects unique vertices initially, until there are two distinct components in the filtration; one component quickly forms loops, evident in the first upward spike, until joining with the second component and continuing the loop formation. Finally, the posterior and temporal chains (Fig 11e and Fig 11f) are identical and reflect a filtration that connects unique vertices, as evidenced by the maximal downward trend, until a single connected component is formed, after which loop formation dominates the filtration. Taken together, these observations suggest that the proposed equivalence relation, using this discrete homotopy approach, partitions the progression filtrations into interpretable brain chains. Two brain chains are unique to two AD subtype filtrations, whereas two other AD subtype filtrations are identified with the same brain chain. Future work examining brain chains constructed from patient data may also provide a novel means of AD subtype detection.

[Figure 10 caption fragment: Results corresponding to the brain subgraphs with N = 4, 6 (Fig 8a-Fig 8b) are identical to those of Fig 10a-Fig 10d.]
Conclusion
We have considered H(N) on six nested brain connectome graphs and applied the seeding hypothesis of AD subtypes [30] to generate filtrations from the simulated progression of misfolded τP in AD. The progression filtrations can be recast in the language of chains through the poset of homotopy polynomials, established by considering the poset of spanning subgraphs on an underlying set of vertices and taking a quotient by homotopy equivalence. This quotient gives maximal chains, brain chains, with distinct patterns in the space H(N). Some of these brain chains correspond to subtypes of AD, which emerge relatively early in a simulated AD disease process. Examination of brain chains in H(N) is a novel topological biomarker that may be used to assess a patient's AD subtype from clinical imaging. For example, from an image, a patient's brain chain could be computed and then compared to known AD subtype brain chains using the discrete homotopy distance. Since clinical data are more heterogeneous, there may be looping dynamics occurring in multiple components, and these topological signatures could be interpreted with the homotopy polynomials.
In future work, we hope to improve the computational efficiency of the proposed algorithm so as to study larger graphs, as well as to provide an exact expression for the size of these posets. While a small graph could not distinguish the limbic and MTL sparing AD subtypes, these became unique brain chains for medium-size graphs. A larger brain graph analysis may enable distinguishing between the posterior and temporal AD subtypes. We would also like to explore the sensitivity of brain chains and their topological signatures to changes in dynamical system parameters, or to consider other neurodegenerative disease dynamics.
In terms of applications, this general mathematical pipeline is by no means exclusive to neurodegeneration. One may consider, for instance, the failure of the wires of an electrical circuit, the cessation of friendship or business relations within a community of people, the frequency of movement patterns of animals between geographical locations or damage propagation in open-cell foams, among others. Different applications may motivate further mathematical research, which may consist of using different equivalence relations, such as graph isomorphism, homeomorphism, or some other weaker topological relation. Further, graphs are inherently combinatorial objects, and can be viewed as the 1-skeletons of simplicial or CW-complexes. Combinatorial results derived for simplicial and CW-complexes can similarly be applied to graphs as a special case, for instance homology and cohomology [14]. These combinatorial and topological graph theories can similarly be applied to sequences of subgraphs, as we do with homotopy equivalence; the resulting sequence of groups may distinguish different conditions. As algorithms and their implementation continue to improve in computational topology, so does tailoring topological theory to solve concrete real-world problems.
Figure 1: SS(K_3), the spanning subgraphs of K_3 partially ordered by subgraph inclusion.
Figure 2: H(3).
Figure 3: H(5), directly computed using graph homotopy polynomials.
Definition 4.1. Let S ⊆ Z^d and n ∈ Z^d, and let N := {0, 1, 2, 3, . . .}. The set of generalized partitions of n into elements of S, denoted Par_S(n), refers to the set of all functions f : S → N that are zero with finitely many exceptions and satisfy \sum_{s \in S} s f(s) = n.
We now deduce properties of p_R(M, L) in order to do the same for H(M, L). We have an injection from Par_R(M, L) to Par_R(M + 1, L), given by f ↦ f̃, where f̃ = f except on (1, 0), where f̃(1, 0) = f(1, 0) + 1. Similarly, we have an injection from Par_R(M, L) to Par_R(M, L + 1) given by f ↦ f̃, where f̃ = f except on (0, 1), where f̃(0, 1) = f(0, 1) + 1. This shows:

Proposition 1. p_R(M, L) is weakly increasing in both coordinates.

From this we can deduce the following about H(M, L).
Corollary 1. H(M, L) is weakly increasing in L. H(M, L) is also weakly increasing in M, with the exception that H(0, 0) = 1, while H(1, 0) = H(2, 0) = 0.

Proof. For the first statement, by Equation 19 and the previous proposition, we only need to check that H(M, -1) ≤ H(M, 0) when M > 0, or equivalently, that p_R(M, -1) < p_R(M, 0) when M > 0. This is true since the map f ↦ f̃ from Par_R(M, -1) to Par_R(M, 0) is not a surjection. Indeed, the function g ∈ Par_R(M, 0) which sends (1, 0) to M and everything else to zero cannot be in the image of this map. For the second statement, we always have H(M, L) ≤ H(M + 1, L) by Equation 19 and the previous proposition unless M = L = 0. Here, direct computation shows p_R(0, 0) = p_R(1, 0) = p_R(2, 0) = 1, but p_R(3, 0) = 2. Applying Equation 19 then gives the result.
Proposition 2. p_R(M, L_1) = p_R(M, L_2) whenever L_1, L_2 ≥ 0.

Proof. It suffices to show that the map f ↦ f̃ from Par_R(M, L) to Par_R(M, L + 1) is a surjection, and hence a bijection, when L ≥ 0. If the map f ↦ f̃ is not surjective, that means there is a map g ∈ Par_R(M, L + 1) such that g(0, 1) = 0. Letting W = R - {(0, 1)}, we see that \sum_{s \in W} s g(s) = \sum_{s \in R} s g(s) = (M, L + 1).
Corollary 2. H(M, L_1) = H(M, L_2) whenever L_1, L_2 > 0.

In light of this result, it makes sense to define H(M), the limit of H(M, L) as L approaches infinity. The fact that H(M) depends on only one variable suggests that we might be able to write H(M) as the number of partitions of M into elements of some subset of N_{≥1}. To this end, we define the set

T := {1} ∪ {C_k}_{k ∈ N_{≥1}} = {1, 3, 5, 6, 8, 9, . . .}.  (21)

It turns out that H(M) is the number of partitions of M into elements of T:

Proposition 3. H(M) = p_T(M).

Proof. By the previous corollary, it suffices to show that H(M, 1) = p_T(M). Then by Equation 19 and the last proposition it suffices to show that p_R(M, 0) = p_T(M). We have a map from Par_R(M, 0) to Par_T(M):
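Proposition 3 makes H(M) directly computable by a standard coin-change recursion, provided the parts C_k are known. The Julia sketch below is ours; we read C_k as the minimum number of edges of a connected simple graph with k independent loops, an assumption that is consistent with the listed values T = {1, 3, 5, 6, 8, 9, . . .}.

```julia
# Minimum edge count of a connected simple graph with k independent loops:
# with v vertices it needs v + k - 1 edges, feasible once binomial(v,2) >= v + k - 1.
function loop_cost(k::Int)
    v = 3
    while binomial(v, 2) < v + k - 1
        v += 1
    end
    return v + k - 1
end

# p_T(M): partitions of M into parts drawn from T = {1} ∪ {C_k} (Proposition 3
# then gives H(M)). Standard dynamic program; dp[m+1] counts partitions of m.
function count_partitions_T(M::Int)
    parts = [1; [loop_cost(k) for k in 1:M if loop_cost(k) <= M]]
    dp = zeros(Int, M + 1); dp[1] = 1
    for p in parts, m in p:M
        dp[m + 1] += dp[m - p + 1]
    end
    return dp[M + 1]
end

loop_cost.(1:5)          # -> [3, 5, 6, 8, 9], matching T above
count_partitions_T(6)    # -> 5: the partitions 6, 5+1, 3+3, 3+1+1+1, 1+...+1
```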
Definition 5.1 (Adjacent Chains). Two maximal chains F_1 and F_2 are adjacent if they differ in exactly one position.
Definition 5.2 (Discrete Homotopy). Let {F_i}_{i=0}^{d} be a sequence of maximal chains in a graded poset with the property that F_k and F_{k+1} are adjacent for all k ∈ {0, . . . , d - 1}. The sequence {F_k} is a discrete homotopy of length d.
Figure 4: The labelled complete graph K_4.
Figure 5: A depiction of SS(K_4), partially ordered by subgraph inclusion, with filtration F_1 in red and F_2 in blue.
Figure 6: H(4), where each equivalence class is represented by its homotopy polynomial.
Figure 7: A structural connectome, with N = 83, constructed from human magnetic resonance images. Vertices represent cortical areas and edges represent bundles of axons connecting these cortical areas; individual vertices are colored by their classification into 83 distinct anatomical regions.
Figure 8: Nested brain region subgraphs extracted from the left hemisphere of the full connectome shown in Figure 7. The left entorhinal cortex and left middle temporal gyrus correspond to the top left (royal blue) and bottom right (dark blue) vertices, respectively, shown in Figure 8a. The full connectome with N = 83 contains 41 vertices in each hemisphere that represent 41 distinct bilateral cortical gray-matter regions, in addition to a vertex representing the brain stem.
Figure 9: Filtrations corresponding to illustrative protein propagation on a structural connectome. Illustrations of protein propagation (first row, third row), with different epicenters. Progression in time occurs from left to right; green signifies a high toxic burden in a vertex, black edges are healthy, and edges marked in light grey indicate a critical change in the edge weight due to the toxic protein in neighbouring vertices. The ordered sequence of light grey edges defines a filtration associated to the network progression (second row, fourth row).
Figure 10: Vertices are homotopy equivalence classes of spanning subgraphs of complete graphs of N vertices. Brain chains (shown in red) correspond to the limbic (left column) and MTL (right column) simulated subtypes of AD through the space, H(N), of homotopy polynomials for the nested left hemisphere graphs of Figure 8. Panels: (g) N = 12, limbic subtype; (h) N = 12, MTL subtype; (i) N = 15, limbic subtype; (j) N = 15, MTL subtype.
Figure 11: Brain chains corresponding to the posterior (left column) and temporal (right column) AD subtypes, as in Figure 10.
Algorithm 2 Direct Construction of H(N)
procedure HomotopyPolynomialPoset(N)
    A ← {N}    ▷ initialise the set of polynomials to process with the polynomial N
    V ← {}     ▷ initialise vertex set as empty
    E ← {}     ▷ initialise edge set as empty
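Only the initialisation of Algorithm 2 is reproduced above. For small N, a brute-force alternative is to enumerate all 2^{\binom{N}{2}} spanning subgraphs of K_N and group their homotopy polynomials by edge count; the Julia sketch below (names ours) does this and can be checked against Figures 2 and 3.

```julia
# Sketch: brute-force collection of the homotopy classes of spanning subgraphs
# of K_N, graded by edge count. Feasible only for small N (2^binom(N,2) graphs).
function homotopy_classes(N::Int)
    allpairs = [(i, j) for i in 1:N for j in i+1:N]
    E = length(allpairs)                              # binom(N, 2)
    classes = [Set{Vector{Int}}() for _ in 0:E]       # classes[m+1]: polys with m edges
    for mask in 0:(2^E - 1)
        edges = [allpairs[i] for i in 1:E if ((mask >> (i - 1)) & 1) == 1]
        push!(classes[length(edges) + 1], poly(N, edges))
    end
    return classes
end

# Homotopy polynomial as a coefficient vector (see Definition 2.4).
function poly(n, edges)
    parent = collect(1:n)
    find(i) = parent[i] == i ? i : (parent[i] = find(parent[i]))
    for (a, b) in edges; parent[find(a)] = find(b); end
    v = Dict{Int,Int}(); e = Dict{Int,Int}()
    for i in 1:n; r = find(i); v[r] = get(v, r, 0) + 1; end
    for (a, b) in edges; r = find(a); e[r] = get(e, r, 0) + 1; end
    ks = [get(e, r, 0) - v[r] + 1 for r in keys(v)]   # loops per component
    c = zeros(Int, maximum(ks) + 1)
    for k in ks; c[k + 1] += 1; end
    return c
end

sum(length, homotopy_classes(3))   # total number of classes, cf. Figure 2
```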
AcknowledgementsThe authors thank Daniele Celoria for fruitful discussions and helpful comments on this manuscript. AG is grateful for the support by the Engineering and Physical Sciences Research Council of Great Britain under research grants EP/R020205/1. HAH gratefully acknowledges EPSRC EP/R005125/1 and EP/T001968/1, the Royal Society RGF\EA\201074 and UF150238, and Emerson Collective. CG gratefully acknowledges the support by NIH fellowship grant 1F32HL162423-01. DB and HAH are members of the Centre for Topological Data Analysis, funded in part by EPSRC EP/R018472/1. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
References

[1] D. Archdeacon. Topological graph theory: a survey. Congressus Numerantium, 115:5-54, 1996.
[2] H. Barcelo, X. Kramer, R. Laubenbacher, and C. Weaver. Foundations of a connectivity theory for simplicial complexes. Advances in Applied Mathematics, 26(2):97-128, 2001.
[3] H. Barcelo and R. Laubenbacher. Perspectives on A-homotopy theory and its applications. Discrete Mathematics, 298(1-3):39-61, 2005.
[4] D. Beers, D. Goniotaki, D. P. Hanger, A. Goriely, and H. A. Harrington. Barcodes distinguish morphology of neuronal tauopathy. arXiv preprint arXiv:2204.03348, 2022.
[5] P. Bendich, J. S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery trees. The Annals of Applied Statistics, 10(1):198, 2016.
[6] E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10:186-198, 2009.
[7] L. Caputi, A. Pidnebesna, and J. Hlinka. Promises and pitfalls of topological data analysis for brain connectivity analysis. NeuroImage, 238:118245, 2021.
[8] C. Curto. What can topology tell us about the neural code? Bulletin of the American Mathematical Society, 54(1):63-78, 2017.
[9] A. Daducci, S. Gerhard, J.-P. Thiran, et al. The Connectome Mapper: an open-source processing pipeline to map connectomes with MRI. PLoS One, 7(12):e48121, 2012.
[10] P. Expert, L.-D. Lord, M. L. Kringelbach, and G. Petri. Topological neuroscience. 2019.
[11] A. Goriely, E. Kuhl, and C. Bick. Neuronal oscillations on evolving networks: dynamics, damage, degradation, decline, dementia, and death. Physical Review Letters, 125(12):128102, 2020.
[12] A. Grigor'yan, Y. Lin, Y. Muranov, and S.-T. Yau. Homotopy theory for digraphs. arXiv preprint arXiv:1407.0234, 2014.
[13] A. Hatcher. Algebraic Topology. Cambridge University Press, 2002.
[14] J. Jonsson. Simplicial Complexes of Graphs. Springer Science & Business Media, 2007.
[15] M. Jucker and L. Walker. Self-propagation of pathogenic protein aggregates in neurodegenerative diseases. Nature, 501:45-51, 2013.
[16] L. Kanari, P. Dłotko, M. Scolamiero, R. Levi, J. Shillcock, K. Hess, and H. Markram. A topological representation of branching neuronal morphologies. Neuroinformatics, 16(1):3-13, 2018.
[17] M. G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81-93, 1938.
[18] C. Kerepesi, B. Szalkai, B. Varga, et al. The braingraph.org database of high resolution structural connectomes and the brain graph tools. Cognitive Neurodynamics, 11:483-486, 2017.
[19] D. N. Kozlov. Trends in topological combinatorics. arXiv preprint math/0507390, 2005.
[20] A. S. LaPaugh and R. L. Rivest. The subgraph homeomorphism problem. Journal of Computer and System Sciences, 20(2):133-149, 1980.
[21] A. Lingas and M. Wahlen. An exact algorithm for subgraph homeomorphism. Journal of Discrete Algorithms, 7(4):464-468, 2009.
[22] J. Matoušek, A. Björner, and G. M. Ziegler. Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry. Springer, 2003.
[23] M. Murray, N. Graff-Radford, O. Ross, D. Dickson, et al. Neuropathologically defined subtypes of Alzheimer's disease with distinct clinical characteristics: a retrospective study. The Lancet Neurology, 10(9):785-796, 2011.
[24] M. B. Nathanson. Asymptotic density and the asymptotics of partition functions. Acta Mathematica Hungarica, 87(3):179-195, 2000.
[25] P. Putra, H. Oliveri, T. Thompson, and A. Goriely. Front propagation and arrival times in networks with application to neurodegenerative diseases. bioRxiv preprint, pages 1-24, 2022.
[26] M. W. Reimann, M. Nolte, M. Scolamiero, K. Turner, R. Perin, G. Chindemi, P. Dłotko, R. Levi, K. Hess, and H. Markram. Cliques of neurons bound into cavities provide a missing link between structure and function. Frontiers in Computational Neuroscience, 11:48, 2017.
[27] A. Schäfer, P. Chaggar, T. Thompson, A. Goriely, and E. Kuhl. Predicting brain atrophy from tau pathology: a summary of clinical findings and their translation into personalized models. Brain Multiphysics, 2:100039, 2021.
[28] A. E. Sizemore, J. E. Phillips-Cremins, R. Ghrist, and D. S. Bassett. The importance of the whole: topological data analysis for the network neuroscientist. Network Neuroscience, 3(3):656-673, 2019.
[29] B. J. Stolz, T. Emerson, S. Nahkuri, M. A. Porter, and H. A. Harrington. Topological data analysis of task-based fMRI data from experiments on schizophrenia. Journal of Physics: Complexity, 2(3):035006, 2021.
[30] J. Vogel, A. Young, N. Oxtoby, O. Hansson, et al. Four distinct trajectories of tau deposition identified in Alzheimer's disease. Nature Medicine, 27(5):871-881, 2021.
[31] M. L. Wachs. Poset topology: tools and applications. arXiv preprint math/0602226, 2006.
[32] J. Xing, J. Jia, X. Wu, and L. Kuang. A spatiotemporal brain network analysis of Alzheimer's disease based on persistent homology. Frontiers in Aging Neuroscience, 14, 2022.
[33] E. C. Zeeman. The topology of the brain and visual perception. In Topology of 3-Manifolds and Related Topics, pages 240-256, 1962.
| [] |
[
"A spectral solver for solar inertial waves",
"A spectral solver for solar inertial waves"
] | [
"Jishnu Bhattacharya \nCenter for Space Science\nNew York University Abu Dhabi\nP.O. Box 129188Abu DhabiUAE\n",
"Shravan M Hanasoge \nCenter for Space Science\nNew York University Abu Dhabi\nP.O. Box 129188Abu DhabiUAE\n\nDepartment of Astronomy and Astrophysics\nTata Institute of Fundamental Research\nMumbai -400005India\n"
] | [
"Center for Space Science\nNew York University Abu Dhabi\nP.O. Box 129188Abu DhabiUAE",
"Center for Space Science\nNew York University Abu Dhabi\nP.O. Box 129188Abu DhabiUAE",
"Department of Astronomy and Astrophysics\nTata Institute of Fundamental Research\nMumbai -400005India"
] | [] | Inertial waves, which are dominantly driven by the Coriolis force, likely play an important role in solar dynamics, and additionally, provide a window into the solar subsurface. The latter allows us to infer properties that are inaccessible to the traditional technique of acoustic-wave helioseismology. Thus, a full characterization of these normal modes holds promise in enabling the investigation of solar subsurface dynamics. In this work, we develop a spectral eigenvalue solver to model the spectrum of inertial waves in the Sun. We model the solar convection zone as an anelastic medium, and solve for the normal modes of the momentum and energy equations. We demonstrate that the solver can reproduce the observed mode frequencies and linewidths well, not only of sectoral Rossby modes, but also the recently observed highfrequency inertial modes. In addition, we believe that the spectral solver is a useful contribution to the numerical methods on modeling inertial modes on the Sun. | 10.3847/1538-4365/aca09a | [
"https://export.arxiv.org/pdf/2211.03323v1.pdf"
] | 253,383,780 | 2211.03323 | c816fce22bf05ef8ca4ffde1742e4ee0d9809857 |
A spectral solver for solar inertial waves
Draft version November 8, 2022
Jishnu Bhattacharya
Center for Space Science
New York University Abu Dhabi
P.O. Box 129188, Abu Dhabi, UAE
Shravan M Hanasoge
Center for Space Science
New York University Abu Dhabi
P.O. Box 129188, Abu Dhabi, UAE
Department of Astronomy and Astrophysics
Tata Institute of Fundamental Research
Mumbai 400005, India
Inertial waves, which are dominantly driven by the Coriolis force, likely play an important role in solar dynamics, and additionally, provide a window into the solar subsurface. The latter allows us to infer properties that are inaccessible to the traditional technique of acoustic-wave helioseismology. Thus, a full characterization of these normal modes holds promise in enabling the investigation of solar subsurface dynamics. In this work, we develop a spectral eigenvalue solver to model the spectrum of inertial waves in the Sun. We model the solar convection zone as an anelastic medium, and solve for the normal modes of the momentum and energy equations. We demonstrate that the solver can reproduce the observed mode frequencies and linewidths well, not only of sectoral Rossby modes, but also the recently observed high-frequency inertial modes. In addition, we believe that the spectral solver is a useful contribution to the numerical methods for modeling inertial modes on the Sun.
INTRODUCTION
Measurements of oscillations in the Sun and stars provide a powerful means of constraining their interior structure and dynamics. For decades, p-mode seismology, where pressure is the restoring force for the oscillations, has been the focus of most efforts. However, recent discoveries of inertial modes of oscillation appear to have paved the way to a new form of seismology. The Coriolis force and buoyancy-related effects are the restoring mechanisms for inertial modes, which, as a consequence, are sensitive to interior rotation and structure parameters, such as turbulent viscosity and the buoyancy frequency, in ways that differ from acoustic oscillations. Thus, inertial modes have the potential to provide altogether new constraints on aspects of the solar interior. Because the restoring mechanism is tightly connected to rotation, the associated frequencies of these modes are comparable to the rotation rate.
Rossby waves, a class of inertial modes, have been widely studied in terrestrial (Pedlosky 1987, 2003) and astrophysical settings (Lou 2000; Lanza et al. 2009; Zaqarashvili et al. 2015, 2021). They have been observed prominently in the Sun recently, although their characteristics are somewhat unusual. The first such characteristic is that, despite the high degree of stratification and the presumably strong convective motion in the outer envelope as well as deeper inside, including distinct layers of rotational shear in both latitude and radius, the frequencies of the observed Rossby waves match closely with the canonical Rossby-Haurwitz dispersion relation derived in 2D (see Zaqarashvili et al. 2021), ω = 2Ω/(m + 1), where ω/2π is the wave frequency, Ω the angular velocity of rotation, and m the azimuthal order. Secondly, only one branch of the Rossby-mode dispersion is observed, i.e., that for ℓ = m in the Rossby-Haurwitz theory, where ℓ is the spherical-harmonic degree. There is a whole range of lower frequencies, with ℓ > m, that are predicted by the Rossby-Haurwitz theory but not (yet) observed in the Sun. Finally, only waves with vorticity that is symmetric around the equator are observed (Löptien et al. 2018; Gizon et al. 2021; Hanson et al. 2022). The reasons for these exceptional behaviors are not known, although they may contain important insight into solar dynamics.
Another open problem is why thermal Rossby waves, so prominently predicted in numerical simulations, are not observed in the Sun. This may, of course, derive from aspects of solar structure that are not properly accounted for in numerical models. Provost et al. (1981) and Saio (1982) formulated the equations governing Rossby waves in the Sun, but their formulations did not consider the extensive range of perturbations, such as entropy gradients and significant radial and latitudinal rotational shear. A more thorough study of the impact of these effects on inertial modes requires the analysis of a more general spectrum of the linearized Navier-Stokes equations. The low frequencies of oscillation (∼0.5 µHz or lower) suggest that inertial modes are decoupled from acoustic oscillations at much higher frequencies (∼3000 µHz). This separation of timescales allows us to invoke the anelastic approximation (Gough 1969; Braginsky & Roberts 1995), in which acoustic waves are filtered out and the time-derivative term of density in the mass-conservation equation is neglected. The complexity of the resultant equation, especially in the context of various additional perturbations, makes it resistant to purely analytical approaches. The numerical study of the linearized Navier-Stokes equations in the anelastic limit for the Sun was first described in a seminal series of papers by Gilman & Glatzmaier (1981), who focused on the onset of convection and the associated flow systems. Gilman & Glatzmaier (1981), however, did not investigate the properties of Rossby waves in any great detail. Awaiting more detailed observations, this line of investigation has taken a back seat in the intervening decades.
Recent high-quality measurements of Rossby and inertial oscillation frequencies (Löptien et al. 2018; Liang et al. 2019; Hanasoge & Mandal 2019; Mandal & Hanasoge 2020; Proxauf et al. 2020; Gizon et al. 2021; Mandal et al. 2021; Hanson et al. 2022) have made this an opportune time to revisit these efforts. How to determine the relevant parameters and infer the underlying physics forms the focus of this article. In particular, we describe the equations and methodology that we apply to determine the spectrum of the anelastic equation. The use of spectral numerical techniques ensures high accuracy with a limited number of grid points. Anelasticity additionally reduces the dimensionality of the relevant set of eigenvalues and eigenfunctions.
A few comments are in order on how our method is similar to and distinct from existing approaches. Several numerical investigations into the spectrum of Rossby waves have been carried out recently, notably by Bekki et al. (2022) and Triana et al. (2022). The former set of authors uses a finite-difference approach, whereas the latter uses a spectral approach. The former solve for inertial and acoustic modes simultaneously, while the latter restrict themselves to the momentum equation for incompressible Rossby waves. Our work is similar to both in the sense that we solve an equation similar to that of the former, but it is similar to the latter because we use the anelastic approximation to eliminate acoustic oscillations and use a similar numerical scheme. Spectral approaches often permit an accurate estimation of the spectrum at lower resolutions than a finite-difference approach (Weideman & Trefethen 1988), so this work aims to bring the analysis of inertial waves in the Sun closer to a form that can be used more accurately in forward or inverse problems at a reasonable computational expense. It also sheds possible light on the high-frequency inertial modes reported by Hanson et al. (2022).
METHOD
Equation of motion
We describe the motion of fluid in the solar interior as small perturbations about a hydrostatic reference state, characterized by the steady-state density ρ̄, temperature T̄, and pressure p̄. We note that knowledge of the pressure and density specifies the temperature T̄ through the equation of state, which, for an ideal gas, is given by p̄ = ρ̄RT̄, where R is the ideal gas constant. We denote deviations from the reference state parameters by primed variables, e.g., ρ′ represents a small fluctuation about the reference density ρ̄. We do not apply a prime to the small-amplitude fluid velocity u, since the reference state only contains axisymmetric rotation. Rotation distorts the spherically symmetric reference model of the Sun, as, in the rotating frame, the pressure gradient balances gravity as well as the centrifugal force (Provost et al. 1981). In our analysis, we neglect this distortion, which limits the accuracy of our results to first order in the angular frequency for slowly rotating stars.
Small-amplitude oscillatory modes on the Sun occur over various widely separated time-scales. Acoustic and surface-gravity modes observed on the Sun have time-scales of minutes, whereas inertial modes oscillate over time-scales of months, corresponding to the solar rotation period. In our analysis, we filter out acoustic waves by applying the anelastic approximation to the equations of motion (Gilman & Glatzmaier 1981; Glatzmaier 2014). The key result that we use is that the equation of mass conservation may be expressed in the form

\nabla \cdot (\bar{\rho}\, \boldsymbol{u}) = 0.  (1)

This implies that the divergence of the velocity may be expressed as

\nabla \cdot \boldsymbol{u} = \bar{\rho}\, \frac{d}{dr}\left(\frac{1}{\bar{\rho}}\right) u_r.  (2)
We choose to represent this equation in terms of the negative inverse density scale height η_ρ = d(ln ρ̄)/dr for notational convenience. The divergence of the velocity may therefore be expressed as ∇ · u = -η_ρ u_r. We also define D_{rρ} = ∂_r + η_ρ, which satisfies ∂_r(ρ̄ f(r)) = ρ̄ D_{rρ} f(r) for an arbitrary radial function f(r). Analogous to η_ρ, we may define the temperature scale η_T = d(ln T̄)/dr, which dictates the contribution of the stratification of the medium to energy diffusion. We plot the negatives of the stratification functions η_ρ and η_T (which are the inverses of the density and temperature scale heights, respectively) in Figure 1. The sharp rise close to the surface indicates the high degree of stratification in near-surface layers. Neglecting centrifugal forces, the time-evolution of the velocity u in a stratified, non-magnetic, convective solar interior is governed by the linearized Navier-Stokes equations, which may be expressed in the anelastic approximation as

D_t \boldsymbol{u} = -2\boldsymbol{\Omega} \times \boldsymbol{u} - \nabla\left(\frac{p'}{\bar{\rho}}\right) - \frac{S'}{c_p}\, \boldsymbol{g} + \frac{1}{\bar{\rho}}\, \boldsymbol{F}_\nu,  (3)
where D_t = ∂_t + u · ∇ is the material derivative in the rotating frame, g is the acceleration due to gravity, S′ is the entropy perturbation, and the viscous force F_ν describes the loss of energy to smaller, unresolved length scales. Under the assumption of constant kinematic viscosity and zero bulk viscosity, the viscous force F_ν may be expressed as

\boldsymbol{F}_\nu = \nu\left[\nabla \cdot \bar{\rho}\left(\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^T\right) - \frac{2}{3}\, \nabla\left(\bar{\rho}\, \nabla \cdot \boldsymbol{u}\right)\right].  (4)
Equation (3) includes the contribution of global-scale circulations such as meridional flows through the transport term u · ∇, but we have not included this in the present analysis. The impact of meridional flows on solar Rossby waves has been studied by Gizon et al. (2020). We have also assumed the Cowling approximation in dropping the perturbation to the acceleration due to fluctuations in the gravitational field.
We express the linearized energy equation in terms of the entropy S̄ and its perturbation S′ as

\bar{\rho}\, \bar{T}\, D_t S' = \kappa\, \nabla \cdot \left(\bar{\rho}\, \bar{T}\, \nabla S'\right),  (5)
where κ is the thermal conductivity. The anelastic approximation assumes an isentropic background medium, in which case ∇S̄ = 0, but convection in the Sun is driven by a mild entropy gradient in the convective envelope. The assumption built into our analysis is that, despite this gradient, the background medium remains close to isentropic. To model the solar convection zone, we include a radial entropy gradient arising due to a departure from adiabaticity:

\frac{d\bar{S}}{dr} = \eta_\rho\, \gamma\, \delta,  (6)
where δ is the super-adiabatic temperature gradient, and γ is the adiabatic exponent, which we choose as 1.64. The value of δ is nearly zero in the convective envelope, since a small deviation from the adiabatic state is enough to drive large-scale convective instabilities, while in the radiative interior it takes the value δ ≈ -0.1. We model the profile of δ in the convection zone following Rempel (2005), such that it transitions from a positive value in the convection zone to asymptotically approach -10⁻³ in the radiative zone. We plot the model of δ(r) that we have chosen in Figure 1. The profiles of the thermal conductivity and viscosity are as yet unspecified. Gizon et al. (2021) assume that the main source of dissipation is turbulence, implying that both coefficients are of similar magnitudes. On such grounds, one might expect a radially varying conductivity, with the surface layers that harbor the strongest flows having a correspondingly high degree of diffusion, which drops rapidly towards the solar interior. Arguing along these lines, Fan & Fang (2014) chose a profile that varies as 1/√ρ̄. While such a model is physically more realistic, we disregard radial variations of conductivity and viscosity in the present work. However, we may include such a profile in our analysis, albeit at additional algebraic expense.
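For concreteness, a generic smooth-transition stand-in for δ(r) can be written as below; this is our own tanh parameterization with illustrative values (the transition location, width, and convection-zone amplitude are assumptions), not the exact functional form of Rempel (2005).

```julia
# Sketch: a smooth-step stand-in for the super-adiabaticity profile δ(r),
# mildly positive in the convection zone and approaching -1e-3 below its base.
# r_c, w, and δ_cz are illustrative assumptions, not values from the paper.
delta_profile(r; r_c = 0.71, w = 0.015, δ_cz = 3e-6, δ_rz = -1e-3) =
    δ_rz + (δ_cz - δ_rz) * (1 + tanh((r - r_c) / w)) / 2

delta_profile(0.9)    # small positive value inside the convection zone
delta_profile(0.65)   # ≈ -1e-3 in the radiative interior
```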
The set of equations (1), (3) and (5) must be supplemented by appropriate boundary conditions on the velocity and entropy to complete the system. While the exact boundary condition at the solar surface is hard to pinpoint -the surface being in vigorous motion with radiative transport -we follow Jones et al. (2011) and assume that the top and the bottom surfaces are impenetrable and stress-free, and that there is no entropy flux across the boundaries. These conditions may be expressed as
u_r = \partial_r\left(\frac{u_\theta}{r}\right) = \partial_r\left(\frac{u_\phi}{r}\right) = 0 \quad \text{on } r = r_\text{in} \text{ and } r = r_\text{out},  (7)

where r_in and r_out are the inner and outer radial extremities of the domain, respectively. The boundary condition on entropy may be expressed as

\frac{dS'}{dr} = 0 \quad \text{on } r = r_\text{in} \text{ and } r = r_\text{out}.  (8)
Aside from this, we also assume that all the quantities go to zero at the poles, such that the functions are single valued. In our analysis, we do not include the azimuthally invariant contribution to inertial waves, but it can be incorporated by adding appropriate boundary conditions at the poles.
Uniformly rotating frame
In a frame that is rotating at a constant angular velocity Ω, the material derivative D_t u of small velocity fluctuations is effectively equal to the partial time derivative ∂_t u, as the quadratic term u · ∇u is much smaller in magnitude. We take the curl of Equation (3) to eliminate the pressure perturbation term, and obtain the equation of motion for the vorticity to be
\partial_t (\nabla \times \boldsymbol{u}) = 2\,(\boldsymbol{\Omega} \cdot \nabla)\, \boldsymbol{u} + \nabla \times \left(\frac{1}{\bar{\rho}}\, \boldsymbol{F}_\nu\right) - \nabla\left(\frac{S'}{c_p}\right) \times \boldsymbol{g}.  (9)
The three vector components of Equation (9) are not all independent, as the vorticity ∇ × u is solenoidal. Without loss of generality, we may choose the independent equations that we work with to be the radial component of Equation (9), and the radial component of a further curl of Equation (9). We note that, in the anelastic approximation, we may obtain the pressure perturbation p′ from the velocity u and the entropy perturbation S′, and therefore do not lose information by eliminating it from our equations. We proceed by switching to the temporal frequency domain by assuming a harmonic time dependence of exp(iωt) for the velocity, where the temporal frequency ω may be complex. The real part of the temporal frequency corresponds to oscillations, whereas the imaginary part signifies a decaying mode if positive, and a growing mode if negative. The choice of sign in the real part of the temporal frequency differs from conventional usage, where retrograde waves tend to have negative frequencies for positive m. In our analysis, the real part of the frequency of a retrograde mode is positive for m > 0.
The anelastic velocity field has two independent components, and therefore it may equivalently be expressed in terms of two scalar stream functions. We may express the velocity as
\boldsymbol{u} = \bar{\rho}^{-1}\left[\nabla \times \nabla \times \left(\bar{\rho}\, W(r, \theta, \phi)\, \boldsymbol{e}_r\right) + \nabla \times \left(\bar{\rho}\, V(r, \theta, \phi)\, \boldsymbol{e}_r\right)\right],  (10)
where the radial and poloidal components of velocity may be derived from the stream function W , whereas the toroidal component may be derived from V . We expand the stream functions on a basis of Chebyshev polynomials in radius and spherical harmonics in angular coordinates, as
V(r, \theta, \phi) = \sum_{\ell m q} V_{\ell m q}\, T_q(\bar{r})\, \bar{P}_{\ell m}(\cos\theta)\, \exp(im\phi),  (11)

W(r, \theta, \phi) = \sum_{\ell m q} W_{\ell m q}\, T_q(\bar{r})\, \bar{P}_{\ell m}(\cos\theta)\, \exp(im\phi),  (12)

where T_q represents the Chebyshev polynomial of degree q, r̄ is the normalized radius defined as r̄ = (r - r_mid)/(Δr/2), where Δr = r_out - r_in is the radial expanse of the domain and r_mid is the midpoint of the radial span of the domain, and P̄_ℓm(cos θ) represents the normalized associated Legendre polynomial for an angular degree ℓ and an azimuthal order m. Azimuthal symmetry of the background implies that Equation (9) decouples into individual equations for each m. In subsequent analysis, we suppress m in the subscript wherever it is unambiguous, with the understanding that we solve the equations separately for each m. We also choose to refer to the field components for each m with the subscript m, for example V_m(r, θ) = \sum_{\ell q} V_{\ell m q}\, T_q(\bar{r})\, \bar{P}_{\ell m}(\cos\theta).
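As an illustration of the radial part of this expansion, the sketch below (our own names and layout, not code from the paper's implementation) maps a physical radius to the normalized radius r̄ and evaluates a Chebyshev series there, using the identity T_q(x) = cos(q arccos x).

```julia
# Sketch: evaluate a radial Chebyshev series on the normalized radius
# r̄ = (r - r_mid)/(Δr/2), as in Equations (11)-(12).
chebyshevT(q::Int, x) = cos(q * acos(clamp(x, -1.0, 1.0)))

function eval_radial_series(coeffs::Vector{Float64}, r, r_in, r_out)
    r_mid = (r_in + r_out) / 2
    x = (r - r_mid) / ((r_out - r_in) / 2)   # map r to the interval [-1, 1]
    return sum(coeffs[q + 1] * chebyshevT(q, x) for q in 0:length(coeffs)-1)
end

# Example: a series with a single T_2 component, evaluated mid-domain (x = 0).
eval_radial_series([0.0, 0.0, 1.0], 0.7925, 0.6, 0.985)   # -> T_2(0) = -1
```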
Differential rotation
It is well known that the Sun rotates differentially, with the fluid at the equator circulating faster than that at the poles. Waves in the Sun propagate through the spatially varying rotation field and are dragged along with the rotating fluid. We denote the angular velocity of rotation of the Sun by Ω(r, θ) and track the Sun in a frame that rotates at a uniform angular velocity Ω₀. We denote the differential rotation angular velocity by ΔΩ = Ω - Ω₀. The angular velocities Ω and Ω₀ are directed along the ẑ-axis, and we express the angular velocity of differential rotation as ΔΩ = ΔΩ ẑ. The velocity of the fluid as measured in the tracking frame in this case may be expressed as a combination of the local rotation velocity u_Ω = ΔΩ × r = u_{Ω,φ} e_φ and the intrinsic fluid velocity u_f, as

\boldsymbol{u} = \boldsymbol{u}_f + \boldsymbol{u}_\Omega.  (13)
The material derivative in this case contains a non-zero contribution at linear order in u arising from the u · ∇u term. We rewrite this term as

\boldsymbol{u} \cdot \nabla \boldsymbol{u} = \frac{1}{2}\, \nabla |\boldsymbol{u}|^2 - \boldsymbol{u} \times \boldsymbol{\omega},  (14)
where ω = ∇ × u is the vorticity, which may be expressed using Equation (13) as

\boldsymbol{\omega} = \nabla \times \boldsymbol{u}_f + \left[2\, \Delta\boldsymbol{\Omega} - \boldsymbol{r}\, (\nabla \cdot \Delta\boldsymbol{\Omega}) + r\, \partial_r(\Delta\boldsymbol{\Omega})\right].  (15)
We denote the intrinsic vorticity ∇ × u_f by ω_f, and the vorticity of differential rotation by ω_Ω. The equation of motion, barring centrifugal terms, becomes

\partial_t \boldsymbol{u} = \boldsymbol{u} \times \boldsymbol{\omega} - 2\boldsymbol{\Omega} \times \boldsymbol{u} - \nabla\left(\frac{p'}{\bar{\rho}} + \frac{1}{2} u^2\right) - \frac{S'}{c_p}\, \boldsymbol{g} + \frac{1}{\bar{\rho}}\, \boldsymbol{F}_\nu.  (16)
The two extra terms u × (∇ × u) and ∇(u²/2), which were O(|u|²) in the uniform-rotation case, now contribute at O(|u|) through a coupling between the fluid velocity and differential rotation. The latter, however, does not contribute to the equation of motion for the vorticity, so the only extra term arising due to differential rotation is the former. We linearize the u × ω term using Equation (13) to obtain

\boldsymbol{u} \times \boldsymbol{\omega} \approx \boldsymbol{u}_f \times \boldsymbol{\omega}_\Omega + \boldsymbol{u}_\Omega \times \boldsymbol{\omega}_f.  (17)
Aside from its effect on the equation of motion, differential rotation also alters the entropy equation through a Doppler-shift term -mΔΩS′ and a latitudinal transport term (u_θ/r) ∂_θ S̄. An expression for ∂_θ S̄ may be obtained by invoking the Taylor-Proudman theorem (Miesch et al. 2006). However, accounting for this term requires a resolution much higher than what we have access to. Since this term is proportional to cos θ ∂_r Ω - (sin θ/r) ∂_θ Ω, and we restrict ourselves to equatorial modes where (∂_θ Ω)/Ω₀ is close to zero, we might expect the contribution of this term to not be substantial. Bearing this in mind, we do not include this term in our analysis. However, one would need to retain this term to study high-latitude modes.
Expressing the equations in this form makes it easier to carry out a spherical-harmonic decomposition. The situation simplifies significantly if the differential rotation is purely radial, without a latitudinal gradient. Such a profile is purely hypothetical, as the solar rotation rate varies considerably between the equator and the poles. Even so, it lets us pinpoint the contribution of the latitudinal variation of the rotation rate on the spectrum of inertial waves. In this work, we look at two rotation profiles: (a) a constant angular velocity ∆Ω, which, while not differential, implies that the Sun is rotating at a rate different from the tracking frame, (b) a radial profile ∆Ω(r) of the angular velocity. Details of this radial variation are discussed in Section 3. It is possible to extend our analysis to latitudinal and radial solar-like rotation at additional algebraic expense.
Equations for the stream functions
We express Equations (3) and (5) in terms of the stream functions V_m, W_m and S′_m as an array of differential equations in r, one for each ℓ. We normalize our fields to express them all in dimensions of velocity, choosing to work with the fields V_m/R_⊙, iW_m/R_⊙², and ΩR_⊙S′_m/c_p instead, which enables us to compare their magnitudes. The equation for the associated-Legendre components of V_m may be expressed in the form

\frac{\omega}{\Omega}\, \frac{V_\ell}{R_\odot} = \sum_{\ell'}\left[T_{VV,\ell\ell'}\, \frac{V_{\ell'}}{R_\odot} + T_{VW,\ell\ell'}\, \frac{iW_{\ell'}}{R_\odot^2}\right],  (18)
where the operators T_{VV} and T_{VW} are given by

T_{VV,\ell\ell'} = \delta_{\ell\ell'}\left[\frac{2m}{\ell(\ell+1)} - iE_\nu R_\odot^2\left(\partial_r^2 - \frac{\ell(\ell+1)}{r^2} + \eta_\rho\left(\partial_r - \frac{2}{r}\right)\right)\right],  (19)

T_{VW,\ell\ell'} = -\frac{2R_\odot}{\ell(\ell+1)}\left\{\ell'(\ell'+1)\, [\cos\theta]_{\ell\ell'}\left(D_{r\rho} - \frac{2}{r}\right) + [\sin\theta\, \partial_\theta]_{\ell\ell'}\left(D_{r\rho} - \frac{\ell'(\ell'+1)}{r}\right)\right\},  (20)
where E_ν = ν/(ΩR_⊙²) is the viscous Ekman number, and the matrix elements of the angular operators, denoted by A_{ℓℓ′}, are defined as

A_{\ell\ell',m} = \int_0^\pi d\theta\, \sin\theta\, \bar{P}_{\ell m}(\cos\theta)\, A(\theta)\, \bar{P}_{\ell' m}(\cos\theta).  (21)
We derive the explicit forms of the matrix elements of the angular operators in Appendix A. Similarly, the equation for the associated-Legendre components of W_m may be expressed as

\frac{\omega}{\Omega}\, B_{WW,\ell}\, \frac{iW_\ell}{R_\odot^2} = \sum_{\ell'}\left[T_{WV,\ell\ell'}\, \frac{V_{\ell'}}{R_\odot} + T_{WW,\ell\ell'}\, \frac{iW_{\ell'}}{R_\odot^2} + T_{WS,\ell\ell'}\, \frac{\Omega R_\odot S'_{\ell'}}{c_p}\right],  (22)
where the operators are given by

B_{WW,\ell} = R_\odot^2\left(\partial_r D_{r\rho} - \frac{\ell(\ell+1)}{r^2}\right),  (23)

T_{WV,\ell\ell'} = -\frac{2R_\odot}{\ell(\ell+1)}\left\{\ell'(\ell'+1)\, [\cos\theta]_{\ell\ell'}\left(\partial_r - \frac{2}{r}\right) + [\sin\theta\, \partial_\theta]_{\ell\ell'}\left(\partial_r - \frac{\ell'(\ell'+1)}{r}\right)\right\},  (24)

T_{WW,\ell\ell'} = \delta_{\ell\ell'}\Bigg[R_\odot^2\, \frac{2m}{\ell(\ell+1)}\left(\partial_r D_{r\rho} - \frac{\ell(\ell+1)}{r^2} - \frac{2\eta_\rho}{r}\right) - iE_\nu R_\odot^4\Bigg\{\left(\partial_r^2 - \frac{\ell(\ell+1)}{r^2}\right)\left(\partial_r^2 - \frac{\ell(\ell+1)}{r^2} + \frac{4}{r}\, \eta_\rho\right) + \left(\partial_r - \frac{2}{r}\right) r\left(\partial_r^2 - \frac{\ell(\ell+1)}{r^2}\right)\frac{\eta_\rho}{r} + \partial_r\, \eta_\rho\left(\partial_r - \frac{2}{r}\right)\left(D_{r\rho} + \frac{\ell(\ell+1)}{r^2}\right) - \frac{\ell(\ell+1)}{r^2}\left(2\eta_\rho\left(\partial_r - \frac{2}{r}\right) + \frac{2}{3}\, \eta_\rho^2\right)\Bigg\}\Bigg],  (25)

T_{WS,\ell\ell'} = -\delta_{\ell\ell'}\, \frac{g(r)}{\Omega^2 R_\odot}.  (26)
The terms in braces in T_{WW,ℓℓ′} represent the contribution of the viscous force, in which all terms aside from the first arise from the stratification of the medium. The operator B_{WW,ℓ} on the left-hand side is a key hurdle in casting the set of equations as a standard eigenvalue equation instead of a generalized one. An approach not explored in this work would involve inverting B_{WW,ℓ} to reduce the system to a standard eigenvalue problem, which would reduce the computational load significantly.
The entropy equation may be expressed as

\frac{\omega}{\Omega}\, \frac{\Omega R_\odot S'_\ell}{c_p} = \sum_{\ell'}\left[T_{SW,\ell\ell'}\, \frac{iW_{\ell'}}{R_\odot^2} + T_{SS,\ell\ell'}\, \frac{\Omega R_\odot S'_{\ell'}}{c_p}\right],  (27)
where the operators are given by

T_{SW,\ell\ell'} = \delta_{\ell\ell'}\, R_\odot^3\, \frac{\ell(\ell+1)}{r^2}\, \frac{\partial_r \bar{S}}{c_p},  (28)

T_{SS,\ell\ell'} = -\delta_{\ell\ell'}\, iE_\kappa R_\odot^2\left(\partial_r^2 + \frac{2}{r}\, \partial_r - \frac{\ell(\ell+1)}{r^2} + (\eta_\rho + \eta_T)\, \partial_r\right),  (29)
where E_κ = κ/(ΩR_⊙²) is the thermal Ekman number. Unlike the equations for the velocity stream functions, the diagonal operator in the entropy equation is purely diffusive, which indicates that modes where the entropy perturbation is the dominant term are primarily decaying or growing.
We translate the boundary conditions in Equations (7) and (8) to the stream functions to obtain independent radial constraints for each ℓ:

\sum_q V_{\ell q}\left(q^2 - \frac{\Delta r}{r_\text{out}}\right) = 0,  (30)

\sum_q (-1)^q\, V_{\ell q}\left(q^2 + \frac{\Delta r}{r_\text{in}}\right) = 0,  (31)

\sum_q (\pm 1)^q\, W_{\ell q} = 0,  (32)

\sum_q (\pm 1)^q\, q^2\, S'_{\ell q} = 0,  (33)
where the sum, in each case, is over the Chebyshev coefficients of the field. While these boundary conditions complete the system, the operator T_{WW,ℓℓ′} in Equation (25) represents a fourth-order differential equation in radius, and we require additional boundary conditions to ensure uniqueness. We therefore supplement the system with additional zero-Neumann constraints on W_ℓ(r), requiring that its derivatives go to zero at the radial extremities. Experimentation shows that these additional constraints lead to smooth eigenvectors, while leaving the eigenvalues relatively unchanged (within 1% for sectoral modes). We may express this Neumann constraint as

\sum_q (\pm 1)^q\, q^2\, W_{\ell q} = 0.  (34)
These boundary conditions are to be satisfied for each ℓ, so a total of n_ℓ harmonic degrees leads to 8n_ℓ constraints. Differential rotation introduces additional terms in the equations. In this work, we look at a simplified scenario where the differential rotation rate depends solely on the radius. We define the fractional differential rotation rate \overline{\Delta\Omega} = ΔΩ/Ω, and evaluate the extra terms to be

T_{D,VV,\ell\ell'} = \delta_{\ell\ell'}\, m\, \overline{\Delta\Omega}\left(\frac{2}{\ell(\ell+1)} - 1\right),  (35)

T_{D,VW,\ell\ell'} = -\frac{2R_\odot}{\ell(\ell+1)}\Bigg\{\ell'(\ell'+1)\, [\cos\theta]_{\ell\ell'}\left[\overline{\Delta\Omega}\left(D_{r\rho} - \frac{2}{r}\right) - \frac{d\overline{\Delta\Omega}}{dr}\right] + [\sin\theta\, \partial_\theta]_{\ell\ell'}\left[\overline{\Delta\Omega}\left(D_{r\rho} - \frac{\ell'(\ell'+1)}{r}\right) - \frac{\ell'(\ell'+1)}{2}\, \frac{d\overline{\Delta\Omega}}{dr}\right]\Bigg\},  (36)

T_{D,WV,\ell\ell'} = -\frac{R_\odot}{\ell(\ell+1)}\Bigg\{\Big(4\ell'(\ell'+1)\, [\cos\theta]_{\ell\ell'} + \big(\ell'(\ell'+1) + 2\big)\, [\sin\theta\, \partial_\theta]_{\ell\ell'}\Big)\left(\frac{d\overline{\Delta\Omega}}{dr} + \overline{\Delta\Omega}\, \partial_r\right) + \left[\nabla_h^2\, \sin\theta\, \partial_\theta\right]_{\ell\ell'}\left[\frac{d\overline{\Delta\Omega}}{dr} + \overline{\Delta\Omega}\left(\partial_r + \frac{2}{r}\right)\right]\Bigg\},  (37)

T_{D,WW,\ell\ell'} = \delta_{\ell\ell'}\, m R_\odot^2\left(\frac{2}{\ell(\ell+1)} - 1\right)\Bigg\{\frac{d\overline{\Delta\Omega}}{dr}\, D_{r\rho} + \overline{\Delta\Omega}\left(\partial_r D_{r\rho} - \frac{\ell(\ell+1)}{r^2}\right) - \overline{\Delta\Omega}\, \frac{2\eta_\rho}{r} + \frac{d^2\overline{\Delta\Omega}}{dr^2} + \frac{d\overline{\Delta\Omega}}{dr}\left(\partial_r + \frac{2}{r}\right)\Bigg\},  (38)

T_{D,SS,\ell\ell'} = -\delta_{\ell\ell'}\, m\, \overline{\Delta\Omega},  (39)
where ∇²_h is the lateral component of the Laplacian operator. As a test case, we may consider a constant \overline{\Delta\Omega}, which corresponds to a uniformly rotating medium tracked at a different rate, in which case the terms simplify considerably.
Numerical implementation
We evaluate the spectrum of Rossby waves through an exact diagonalization of the matrix representation of the differential operator. The case where the rotation profile does not vary in the lateral directions is simpler to address, as the radial and angular terms are separable, and each term appearing in the matrix may be represented as a Kronecker product of radial and angular operators. In general, this is not the case, e.g., the differential rotation velocity may not be separable in the radial and angular coordinates, and the matrices may need to be computed using a spectral or a pseudo-spectral approach in the angular coordinates.
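A minimal illustration of this Kronecker-product assembly in Julia is given below; the matrices here are random placeholders standing in for the actual operators (e.g., a banded radial derivative and the banded [cos θ] coupling in ℓ), and the coefficient ordering is our own convention.

```julia
# Sketch: one separable term of the block operator as kron(angular, radial).
using LinearAlgebra

nr, nl = 6, 4                      # radial and angular resolutions (toy sizes)
R = triu(rand(nr, nr), -1)         # placeholder radial operator (banded-ish)
A = Tridiagonal(rand(nl-1), zeros(nl), rand(nl-1))  # placeholder [cos θ]_{ℓℓ'}

op = kron(Matrix(A), R)            # acts on coefficients stacked ℓ-major:
size(op)                           # each ℓ-block of length nr; -> (24, 24)
```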
In our analysis, we choose the coefficient of kinematic viscosity ν to be 2 × 10¹² cm²/s (or, equivalently, an Ekman number of ν/(Ω₀R_⊙²) ≈ 1.45 × 10⁻⁴), and set the thermal conductivity to be identical to the viscosity. We find that this value produces reasonable matches to the line widths obtained by Proxauf et al. (2020). This is not a best-fit estimate, but is consistent with the choice made by Bekki et al. (2022), although a slightly lower value of 10¹² cm²/s had been suggested by Gizon et al. (2021). The actual situation regarding the values of the transport coefficients is indeed very different (Schumacher & Sreenivasan 2020), but what we have done is not an uncommon practice in numerical simulations of the Sun. We choose the radial domain of our analysis to correspond roughly to the solar convection zone; however, we set the lower boundary of the domain to r = 0.6R_⊙ to adequately capture the sharp change in the adiabaticity profile at the base of the convection zone. We have verified that setting the lower boundary at 0.71R_⊙ does not change the results appreciably. We place the outer boundary at 0.985R_⊙ to avoid the steep stratification in the outermost layers of the Sun.
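As a quick consistency check of the quoted Ekman number (our own sketch; a sidereal Carrington rate of Ω₀/2π ≈ 453 nHz is assumed for Ω₀, which the paper does not state explicitly):

```julia
# Sketch: verify that ν = 2e12 cm^2/s corresponds to E_ν ≈ 1.45e-4.
ν    = 2e12                # kinematic viscosity, cm^2/s
Rsun = 6.957e10            # solar radius, cm
Ω0   = 2π * 453e-9         # rad/s (assumed Carrington rotation rate)
Eν   = ν / (Ω0 * Rsun^2)   # ≈ 1.45e-4, matching the quoted Ekman number
```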
In discretizing the operators, we follow Triana et al. (2022) and split our analysis into two sets of modes: one for which the toroidal stream function V is symmetric about the equator, and the other for which it is antisymmetric. In the symmetric case, only the spherical harmonic degrees ℓ = m, m + 2, m + 4, · · · contribute to V, whereas in the antisymmetric case, the contribution comes from ℓ = m + 1, m + 3, m + 5, · · ·. We note that the stream function W and the entropy perturbation S′ have the opposite parity to V, so in the former case the spherical harmonic degrees that contribute to W are ℓ = m + 1, m + 3, m + 5, · · ·, and the reverse in the latter. Such a separation effectively doubles the angular resolution of the eigenvalue problem. We also note that the toroidal stream function V and the radial component of vorticity share the same angular profile, whereas the poloidal stream function W shares its angular profile with the radial component of velocity. The arguments of equatorial symmetry, therefore, extend naturally to these fields as well.
We have developed a Julia implementation of the approach presented above. We choose the Julia language (Bezanson et al. 2017) as it is a high-performance, high-level language ideally suited to numerical applications. We make use of the freely available library ApproxFun.jl (Olver & Townsend 2014) to represent the radial operators as banded matrices (we note that similar functionality is provided in Python by the Dedalus project (Burns et al. 2020) and in Matlab by the library Chebfun (Driscoll et al. 2014), the latter being the inspiration for ApproxFun). The domain space of the operators is expanded in a basis of Chebyshev polynomials, and the range space in a basis of ultraspherical polynomials U_q^α(x) (also known as Gegenbauer polynomials), with the order α corresponding to the highest order of the radial derivative that appears in the equation. We use the term "order" in this context to denote the exponent α in the weight (1 - x²)^{α-1/2} that features in the definition of the inner product with respect to which the polynomials are orthogonal; this differs from the degree q of the polynomial. This approach is similar to that used by Triana et al. (2022). Such a sparse representation makes the evaluation of the operator matrices computationally inexpensive, and permits a purely spectral approach as opposed to a pseudo-spectral one. Owing to azimuthal symmetry, the angular operators may be represented as banded matrices in the basis of associated Legendre polynomials for a single m, and we have expanded on this in Appendix A. We use 60 points in radial Chebyshev degree, and 30 in harmonic degree ℓ, to obtain the discrete representations of the operators for each m. The matrix representations of self-adjoint radial operators thus obtained are not necessarily symmetric, and an approach similar to that used by Aurentz & Slevinsky (2020) might improve the convergence of eigenvalues, although we have not explored this aspect in the present work.
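The key identity behind this sparse representation is dT_q/dx = q U_{q-1}, so differentiation maps Chebyshev coefficients to second-kind (ultraspherical, α = 1) coefficients through a single superdiagonal. A small Julia sketch of this banded derivative matrix follows; the matrix layout is ours and is only meant to illustrate the idea, not reproduce ApproxFun internals.

```julia
# Sketch: first-derivative matrix from Chebyshev T coefficients to
# Chebyshev U (ultraspherical α = 1) coefficients, using d/dx T_q = q U_{q-1}.
using SparseArrays

function cheb_to_ultra_derivative(n::Int)
    D = spzeros(n, n)
    for q in 1:n-1
        D[q, q + 1] = q          # coefficient of U_{q-1} coming from T_q
    end
    return D
end

cheb_to_ultra_derivative(5)      # banded: a single nonzero superdiagonal
```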
We seek to solve the system

M x = \frac{\omega}{\Omega}\, B x,  (40)

C x = 0,  (41)

where M is the matrix representation of the restoring forces, B contains the matrix representation of the double-curl, x = (V_{ℓmq}, W_{ℓmq}, S′_{ℓmq}) is the vector of the Chebyshev-Legendre basis components of the stream functions and the entropy perturbation, and C is the matrix corresponding to the boundary conditions from Equations (7) and (8). We start by computing the matrix Z whose columns form a basis for the null space of C. We describe how we choose Z in Appendix B. By construction, for an arbitrary vector w, the vector

x = Z w  (42)

satisfies the constraint in Equation (41). We rewrite the eigenvalue problem in Equations (40) and (41) as

\left(Z^T M Z\right) w = \frac{\omega}{\Omega}\left(Z^T B Z\right) w,  (43)
where w is unconstrained. We solve for the spectrum of eigenvalues and the corresponding eigenvectors w using LAPACK, and transform back to x using Equation (42). Unlike Triana et al. (2022), where the authors solve a sparse eigenvalue problem, we solve a dense one, which produces the full spectrum of eigenvalues and makes identifying spectral ridges easier. Unfortunately, our approach does not take advantage of the sparsity of the operator matrix. We note that the matrix of operators that we thus construct is not Hermitian, which presents a significant challenge to the stability of solutions. To obtain a set of eigenvectors stable to changing resolution, we perform a diagonal scaling of Equation (43) as a preconditioning step, and rewrite the system as

\left(D_1 Z^T M Z D_2^{-1}\right)(D_2 w) = \frac{\omega}{\Omega}\left(D_1 Z^T B Z D_2^{-1}\right)(D_2 w).  (44)

Matrices D_1 and D_2 are block-diagonal, with diagonal blocks equal to α_i I, where the α_i are real numbers chosen such that each block of D_1 Z^T M Z D_2^{-1} has an absolute maximum value of order 1. While the choice of these matrices is carried out in a somewhat ad hoc manner, we found that the solutions are insensitive to the exact choices once the maximum absolute values of all the blocks have the same order of magnitude. We note that such a scaling leaves the eigenvalues unchanged, and, following the diagonalization, we undo the scaling by left-multiplying the computed eigenvectors by D_2^{-1}. Solutions to an eigenvalue equation in a Chebyshev basis are generally accurate if the eigenfunctions are smooth enough to be resolved on the Gauss-Chebyshev nodes (Weideman & Trefethen 1988). Bearing this in mind, we impose the following filters to constrain the set of solutions:
• Eigenfunctions must satisfy the boundary conditions to within a tolerance of 10⁻⁵.
• The original, unconstrained eigensystem (Equation (40)) must be satisfied to within 0.01%.
• 90% of the spectral power of the surface profile of the eigenfunction must lie within ℓ ≤ m + 6, where ℓ is the spherical harmonic degree.
• 90% of the spectral power of the depth profile of the eigenfunction at the equator must lie within n ≤ 6, where n is the degree of the Chebyshev polynomial.
• The imaginary parts of the eigenfrequencies must be non-negative, which eliminates growing modes.
• The angular profile of the eigenfunction at the surface must have its peak, as well as 30% of the area under the curve, within latitudes of ±30°.
These filters constrain us to decaying inertial-mode solutions that have converged at the chosen resolution, and the exact cutoffs imposed are discretionary. We note that an equatorial filter precludes the study of high-latitude and critical-latitude modes, which have been observed on the Sun and studied numerically by Bekki et al. (2022). Although such a restrictive filter is implemented here, this is not fundamental to our analysis, and it may be relaxed in future studies to investigate categories of modes beyond those in this work.
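The null-space projection of Equations (40)-(44) is compact to express in Julia; the sketch below uses random placeholder matrices in place of the actual operator blocks, omits the diagonal scaling and the physical filters, and is only meant to illustrate the sequence of steps.

```julia
# Sketch: constrained generalized eigenproblem via null-space projection.
using LinearAlgebra

n, nc = 20, 4
M = rand(ComplexF64, n, n)         # stand-in for the restoring-force matrix
B = rand(ComplexF64, n, n) + n * I # stand-in for the double-curl matrix
C = rand(nc, n)                    # stand-in for the boundary-condition rows

Z = nullspace(C)                   # columns span {x : Cx = 0}
F = eigen(Z' * M * Z, Z' * B * Z)  # generalized eigenproblem, solved via LAPACK
xs = Z * F.vectors                 # back to the constrained representation
maximum(abs, C * xs)               # ≈ 0: every mode satisfies the constraints
```

Since Z is real here, Z' coincides with the transpose Z^T used in Equation (43).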
The code has been made freely available on GitHub under the MIT license.
RESULTS
We plot the spectrum of Rossby waves in a uniformly rotating medium, tracked in a frame rotating at the same rate, in Figure 2, where the top and bottom left panels represent the spectra obtained for symmetric and antisymmetric modes, respectively. The dotted line represents the sectoral, thin-shell dispersion relation ω = 2Ω/(m + 1). The modes that lie close to this relation contain most of their spectral power at ℓ = m, and we refer to these as sectoral modes. In general, however, spectral power for a mode is spread across a range of harmonic degrees, as we might expect in a rotating medium that lacks spherical symmetry. The sectoral modes also have no radial nodes in V, or equivalently in the radial component of vorticity. We find several ridges below this analytical relation, corresponding to modes whose radial vorticity has an increasing number of radial nodes. Interestingly, the spectrum also reveals a high-frequency ridge for modes with radial vorticity that is symmetric about the equator, with spectral power peaking at ℓ = m + 2. The top-right panel represents the line widths of the latitudinally symmetric sectoral modes (corresponding to the dotted line in the top left panel). The bottom-left panel depicts latitudinally antisymmetric modes (denoted by circles), along with the frequencies of the high-frequency Rossby ridge as measured by Hanson et al. (2022). We find two distinct ridges that lie near the observed frequencies, but the modelled frequencies are lower than the observed ones. This is somewhat different from Triana et al. (2022), whose numerical frequencies are a little higher in absolute magnitude than the observed ones.
There are various differences between the equations used in this work and those used by Triana et al. (2022), so such a disparity in the result is perhaps not surprising. We also recognize the possibility that, due to the lower resolution used in our work as compared to that used by Triana et al. (2022), some modes investigated by them do not appear in our spectrum. However, most of the power in the radial component of vorticity is concentrated at ℓ = m + 1 in the high-frequency modes that we obtain, which makes them resemble the observations by Hanson et al. (2022). It is therefore tempting to identify these as high-frequency Rossby-mode candidates. The bottom-right panel depicts the line widths of the high-frequency waves as obtained by us (circles), compared with those measured by Hanson et al. (2022). The line widths that we obtain are roughly consistent with the measured ones; as already remarked, this consistency might offer additional constraints on the coefficient of viscosity. We plot the real part of the latitudinally symmetric toroidal stream function V_m(r, θ) for m = 14 in Figure 3. We also plot the stream functions for an antisymmetric, high-frequency mode for m = 10 in Figure 4. The toroidal stream function V (and, by extension, the radial vorticity) has no radial node, which makes these likely candidates for fundamental oscillation modes. The entropy perturbation peaks at the base of the convection zone, where there is a sharp change in the adiabatic index, whereas the velocity stream functions peak close to the solar surface. This result is somewhat in contrast to Triana et al. (2022), who find that there is a substantial poloidal component deep within the convection zone. It is unclear how strongly their result depends on their choice of incompressibility in the equation of continuity, which ignores the radial stratification of density. Further work might be necessary to reconcile their results with ours. Aside from the high-frequency ridge discussed above, a slightly lower-frequency ridge is visible in the bottom left subplot of Figure 2, which corresponds to modes having one radial node.
We plot the spectrum of inertial waves in a radially differentially rotating Sun in Figure 5. The rotation velocity in our model has the functional form corresponding to the subsurface profile at the solar equator. The left panel illustrates the rotation rate as a function of radius, while the right panel shows the dispersion relation for the sectoral Rossby modes (circles), and a best-fit dispersion relation assuming a constant shift ΔΩ₀ in the angular velocity (dashed line). Not all eigenfunctions converge to smooth solutions at the resolution used in the latter case, and as a consequence, there are gaps in the spectral ridges. Unfortunately, increasing the resolution beyond this becomes challenging due to resource limitations, and alternate algorithms might be necessary to achieve convergence for these modes. The interesting observation here is that the Doppler shift appears to push the modes from ones propagating in a retrograde sense to prograde ones. The switch occurs for an m where the oscillation frequency 2(Ω₀ + ΔΩ₀)/(m + 1) becomes comparable to the Doppler shift mΔΩ₀, assuming a constant shift ΔΩ₀. However, this frequency is specific to the tracking frame chosen and not intrinsic to the oscillation in the Sun; additionally, this effect will be mitigated by latitudinal differential rotation, which, owing to a reduction in the rotation rate with increasing latitude, might shift the frequencies in the opposite sense. Further work is therefore necessary to establish the spectrum in the background of solar-like differential rotation that varies both in radius and latitude. Interestingly, the modes for m > 10 closely follow the dispersion relation obtained for a constant shift in the rotation rate, whereas the lower-m modes depart from this relation as expected, as these modes extend deeper into the Sun. The best-fit angular velocity ΔΩ₀ therefore provides an estimate of the depth sensitivity of these waves.
CONCLUSION
We have described a spectral numerical technique by which to extract the eigenbasis of the linear anelastic operator, which corresponds to inertial waves in the Sun. We can reproduce the central frequencies and line widths of Rossby waves on the Sun, as measured by Proxauf et al. (2020). Our results also qualitatively agree with those of Triana et al. (2022) and support the identification of the high-frequency spectral ridge observed by Hanson et al. (2022) as inertial waves, although a uniformly rotating model of the Sun appears to produce somewhat lower oscillation frequencies.
Further investigation into this in the presence of solar-like differential rotation remains to be carried out. Interestingly, we also see a high-frequency ridge for equatorially symmetric radial vorticity, which, to our knowledge, has not been observed on the Sun.
The accuracy and generality of the method presented in this work offer promise for future investigations into solar structure and dynamics using measurements of inertial oscillations. In further work, we will focus on reducing the computational expense of the approach, to render it feasible to use techniques such as Markov-chain Monte Carlo (MCMC) for determining the best-fit set of parameters that can explain the observed mode frequencies, in addition to allowing the estimation of proper uncertainties. The setup can be extended in a straightforward manner to include magnetism, since the Lorentz force term is very similar in structure to the form that differential rotation takes.
ACKNOWLEDGMENTS
This material is based upon work supported by Tamkeen under the NYU Abu Dhabi Research Institute grant G1502. We also acknowledge support from the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under award OSR-CRG2020-4342. This research was carried out on the High-Performance Computing resources at New York University Abu Dhabi.
APPENDIX
A. MATRIX ELEMENTS OF ANGULAR OPERATORS
We evaluate the matrix elements of cos θ and sin θ ∂_θ in the basis of normalized associated Legendre polynomials. For an operator A(θ), the matrix elements in an associated Legendre basis are

A_{\ell\ell',m} = \int_0^{\pi} \mathrm{d}\theta \, \sin\theta \, \bar{P}_{\ell m}(\cos\theta) \, A(\theta) \, \bar{P}_{\ell' m}(\cos\theta). \quad (A1)

In subsequent analysis, we suppress the subscript m, with the understanding that we compute the matrix for a specific azimuthal order. We note that if the action of an operator A(θ) on \bar{P}_{\ell m}(\cos\theta) is described by

A(\theta) \, \bar{P}_{\ell m}(\cos\theta) = \sum_{\ell'} C_{\ell\ell'} \, \bar{P}_{\ell' m}(\cos\theta), \quad (A2)

the coefficients C_{\ell\ell'} represent the matrix elements of the transpose of A, that is, A_{\ell\ell'} = C_{\ell'\ell}. We use the relations

\cos\theta \, \bar{P}_{\ell m}(\cos\theta) = \sqrt{\frac{(\ell - m + 1)(\ell + m + 1)}{(2\ell + 1)(2\ell + 3)}} \, \bar{P}_{\ell+1\,m}(\cos\theta) + \sqrt{\frac{(\ell - m)(\ell + m)}{(2\ell - 1)(2\ell + 1)}} \, \bar{P}_{\ell-1\,m}(\cos\theta), \quad (A3)

\sin\theta \, \partial_\theta \bar{P}_{\ell m}(\cos\theta) = \ell \cos\theta \, \bar{P}_{\ell m}(\cos\theta) - \sqrt{\frac{2\ell + 1}{2\ell - 1}\,(\ell^2 - m^2)} \, \bar{P}_{\ell-1\,m}(\cos\theta), \quad (A4)

to evaluate the matrix elements of cos θ and sin θ ∂_θ in the normalized associated Legendre polynomial basis. We may use these as building blocks to compute the matrix elements of operators that may be expressed as products of these terms.
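For reference, the recurrences above translate directly into code. The following Python/NumPy sketch builds the (symmetric, tridiagonal) matrix of cos θ and the matrix of sin θ ∂_θ in the normalized associated Legendre basis; it is an illustrative reimplementation, not an excerpt from our Julia package.

import numpy as np

def costheta_matrix(m, lmax):
    # cos(theta) couples l to l +/- 1 via Eq. (A3); the matrix is symmetric tridiagonal.
    ls = np.arange(m, lmax + 1, dtype=float)
    c = np.sqrt((ls[:-1] - m + 1) * (ls[:-1] + m + 1)
                / ((2 * ls[:-1] + 1) * (2 * ls[:-1] + 3)))
    return np.diag(c, 1) + np.diag(c, -1)

def sintheta_dtheta_matrix(m, lmax):
    # Build the action coefficients C_{l l'} from Eqs. (A3)-(A4), then
    # transpose, since A_{l l'} = C_{l' l} by Eq. (A2).
    ls = np.arange(m, lmax + 1, dtype=float)
    n = len(ls)
    C = np.zeros((n, n))
    for i, l in enumerate(ls):
        if i + 1 < n:
            C[i, i + 1] = l * np.sqrt((l - m + 1) * (l + m + 1)
                                      / ((2 * l + 1) * (2 * l + 3)))
        if i > 0:
            C[i, i - 1] = (l * np.sqrt((l - m) * (l + m)
                                       / ((2 * l - 1) * (2 * l + 1)))
                           - np.sqrt((2 * l + 1) / (2 * l - 1) * (l**2 - m**2)))
    return C.T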
B. RADIAL BASIS
Given the constraints in Equations (30)-(34), we compute three sets of bases, one for each of V_ℓ, W_ℓ and S′_ℓ, that automatically satisfy the constraints, which enables us to expand the fields in the corresponding bases. We use two different approaches to compute the basis. For V_ℓ, given n Chebyshev coefficients, we represent the constraints for each ℓ as a 2 × n matrix, and compute an orthogonal basis for its null space through a full singular-value decomposition (see e.g. Porcelli et al. 2015). The Julia standard library LinearAlgebra conveniently contains a function "nullspace" that provides an implementation of this algorithm.
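A NumPy analogue of that construction is sketched below. The actual constraint rows depend on Equations (30)-(34), so Dirichlet conditions are assumed here purely for illustration.

import numpy as np

def nullspace_basis(C, tol=1e-12):
    # Orthonormal basis for {x : C x = 0} via a full SVD, the same idea as
    # Julia's LinearAlgebra.nullspace used in our implementation.
    U, s, Vt = np.linalg.svd(C, full_matrices=True)
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    return Vt[rank:].T

# Illustration with assumed Dirichlet conditions V(+/-1) = 0 on n Chebyshev coefficients:
n = 8
endpoints = np.array([0.0, np.pi])               # arccos(+1) and arccos(-1)
C = np.cos(np.outer(endpoints, np.arange(n)))    # row entries: T_q evaluated at r = +/-1
B = nullspace_basis(C)
print(np.allclose(C @ B, 0.0))                   # True: every basis column satisfies the constraints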
The bases for W and S′ are easier to compute through a basis-recombination approach (Heinrichs 1989). We define our basis in terms of the normalized radius \bar{r} = (r - r_mid)/(∆r/2), which takes the values ±1 at the radial boundaries of the domain. The first basis that we choose is

Q_q(\bar{r}) = (1 - \bar{r}^2)^2 \, T_q(\bar{r}),

which satisfies Q_q(\pm 1) = Q_q'(\pm 1) = 0. We may therefore expand W_\ell(r) as

W_\ell(r) = \sum_q W_{\ell q} \, Q_q(\bar{r}). \quad (B7)

For the Neumann condition on S′(r), we choose the basis to be

P_q(\bar{r}) = T_q(\bar{r}) - \frac{q^2}{(q + 2)^2} \, T_{q+2}(\bar{r}). \quad (B8)

Using the result T_q'(\pm 1) = (\pm 1)^{q+1} q^2, we obtain P_q'(\pm 1) = 0, so we may expand S'_\ell(r) as

S'_\ell(r) = \sum_q S'_{\ell q} \, P_q(\bar{r}). \quad (B9)
Collectively, these three bases ensure that the functions V ℓ (r), W ℓ (r), and S ′ ℓ (r) satisfy the boundary conditions. We note that the choice of the bases is not unique, and the bases Q q and P q do not form orthogonal sets. However, orthogonality is not crucial, and a different choice of basis, for example the orthogonal basis obtained through the "nullspace" function, does not change the results, which enhances our confidence in the solutions. The choice made here leads to smooth basis elements, and, as a consequence, the results are easier to interpret in this basis.
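The recombined bases are equally simple to evaluate numerically. The sketch below (Python/NumPy, illustrative) constructs Q_q and P_q from Chebyshev polynomials and verifies the Neumann condition on P_q exactly through the Chebyshev derivative.

import numpy as np
from numpy.polynomial import chebyshev as cheb

def Q(q, rbar):
    # Q_q(rbar) = (1 - rbar^2)^2 T_q(rbar); vanishes with its derivative at +/-1.
    return (1 - rbar**2)**2 * cheb.chebval(rbar, [0] * q + [1])

def P_coeffs(q):
    # Chebyshev coefficients of P_q = T_q - (q/(q+2))^2 T_{q+2}, Eq. (B8).
    c = np.zeros(q + 3)
    c[q] = 1.0
    c[q + 2] = -(q / (q + 2))**2
    return c

# Check P_q'(+/-1) = 0 exactly through the Chebyshev derivative:
dP = cheb.chebder(P_coeffs(3))
print(cheb.chebval([-1.0, 1.0], dP))   # ~ [0, 0]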
We plot the first few basis functions for each of the three fields in Figure 6.
Figure 1. Left panel: Inverse scale heights for density (ρ) and temperature (T), multiplied by the solar radius. Right panel: Deviation from the adiabatic temperature gradient δ(r).

Figure 2. Top left: Spectrum of Rossby waves in a uniformly rotating Sun, for modes where the toroidal stream function V(x) is symmetric about the equator. Bottom left: Same as top left, but for modes for which V(x) is antisymmetric about the equator. The error bars indicate the high-frequency Rossby modes detected by Hanson et al. (2022, ring-diagram spectra). Top right: Line widths of sectoral, symmetric Rossby modes (ω ≈ 2Ω/(m+1)), as measured by Proxauf et al. (2020), compared with our results. Bottom right: Line widths of high-frequency Rossby modes from observations (Hanson et al. 2022, error bars), and our work (circles).

Figure 3. Left: Profile of the normalized toroidal stream function V(x) for the sectoral mode for m = 14, assuming that the Sun is rotating uniformly. The top panel is the angular profile at the surface, while the bottom left panel is the depth profile at the equator. Right: The angular profiles of the various toroidal stream functions V_m(r, θ) for m = 14 at r = r_out. The index n indicates the number of radial nodes.

Figure 4. Antisymmetric high-frequency eigenfunction for m = 10, having a peak frequency of 175 nHz (nearest the high-frequency ridge frequency as measured by Hanson et al. 2022). The fields are all in units of velocity, albeit within an arbitrary overall normalization factor.

Figure 5. Left panel: A smoothed radial profile of differential rotation at the solar equator that we use in our analysis. Right panel: Doppler-shifted dispersion relation of the sectoral modes (circles), and a least-squares fit assuming a uniform rotation rate Ω_0 + ∆Ω_0, where Ω_0 is the angular velocity of the tracking frame. The horizontal dotted line in the left panel indicates ∆Ω_0/2π, which is the best-fit uniform rotation rate obtained from this fit.

Figure 6. First few basis functions chosen to satisfy the boundary conditions on V_ℓ(r) (left panel), W_ℓ(r) (middle panel) and S′_ℓ(r) (right panel).
https://github.com/jishnub/RossbyWaveSpectrum.jl
Aurentz, J. L., & Slevinsky, R. M. 2020, Journal of Computational Physics, 410, 109383, doi: 10.1016/j.jcp.2020.109383
Bekki, Y., Cameron, R. H., & Gizon, L. 2022, A&A, 662, A16, doi: 10.1051/0004-6361/202243164
Bezanson, J., Edelman, A., Karpinski, S., & Shah, V. B. 2017, SIAM Review, 59, 65
Braginsky, S. I., & Roberts, P. H. 1995, Geophysical and Astrophysical Fluid Dynamics, 79, 1, doi: 10.1080/03091929508228992
Burns, K. J., Vasil, G. M., Oishi, J. S., Lecoanet, D., & Brown, B. P. 2020, Physical Review Research, 2, 023068, doi: 10.1103/PhysRevResearch.2.023068
Driscoll, T. A., Hale, N., & Trefethen, L. N. 2014, Chebfun Guide (Pafnuty Publications). http://www.chebfun.org/docs/guide/
Fan, Y., & Fang, F. 2014, ApJ, 789, 35, doi: 10.1088/0004-637X/789/1/35
Gilman, P. A., & Glatzmaier, G. A. 1981, ApJS, 45, 335, doi: 10.1086/190714
Gizon, L., Fournier, D., & Albekioni, M. 2020, A&A, 642, A178, doi: 10.1051/0004-6361/202038525
Gizon, L., Cameron, R. H., Bekki, Y., et al. 2021, A&A, 652, L6, doi: 10.1051/0004-6361/202141462
Glatzmaier, G. A. 2014, Introduction to Modeling Convection in Planets and Stars (Princeton University Press)
Gough, D. O. 1969, Journal of Atmospheric Sciences, 26, 448, doi: 10.1175/1520-0469(1969)026<0448:TAAFTC>2.0.CO;2
Hanasoge, S., & Mandal, K. 2019, ApJL, 871, L32, doi: 10.3847/2041-8213/aaff60
Hanson, C. S., Hanasoge, S., & Sreenivasan, K. R. 2022, Nature Astronomy, doi: 10.1038/s41550-022-01632-z
Heinrichs, W. 1989, Mathematics of Computation, 53, 103, doi: 10.1090/S0025-5718-1989-0972370-0
Jones, C. A., Boronski, P., Brun, A. S., et al. 2011, Icarus, 216, 120, doi: 10.1016/j.icarus.2011.08.014
Lanza, A. F., Pagano, I., Leto, G., et al. 2009, A&A, 493, 193, doi: 10.1051/0004-6361:200810591
Liang, Z.-C., Gizon, L., Birch, A. C., & Duvall, T. L. 2019, A&A, 626, A3, doi: 10.1051/0004-6361/201834849
Löptien, B., Gizon, L., Birch, A. C., et al. 2018, Nature Astronomy, 2, 568, doi: 10.1038/s41550-018-0460-x
Lou, Y.-Q. 2000, ApJ, 540, 1102, doi: 10.1086/309387
Mandal, K., & Hanasoge, S. 2020, ApJ, 891, 125, doi: 10.3847/1538-4357/ab7227
Mandal, K., Hanasoge, S. M., & Gizon, L. 2021, A&A, 652, A96, doi: 10.1051/0004-6361/202141044
Miesch, M. S., Brun, A. S., & Toomre, J. 2006, ApJ, 641, 618, doi: 10.1086/499621
Olver, S., & Townsend, A. 2014, in Proceedings of the 1st Workshop for High Performance Technical Computing in Dynamic Languages - HPTCDL '14 (Piscataway, NJ: IEEE Press)
Pedlosky, J. 1987, Geophysical Fluid Dynamics (New York, NY: Springer), doi: 10.1007/978-1-4612-4650-3
Pedlosky, J. 2003, Waves in the Ocean and Atmosphere (Berlin, Heidelberg: Springer), doi: 10.1007/978-3-662-05131-3
Porcelli, M., Binante, V., Girardi, M., Padovani, C., & Pasquinelli, G. 2015, Calcolo, 52, 167, doi: 10.1007/s10092-014-0112-1
Provost, J., Berthomieu, G., & Rocca, A. 1981, A&A, 94, 126
Proxauf, B., Gizon, L., Löptien, B., et al. 2020, A&A, 634, A44, doi: 10.1051/0004-6361/201937007
Rempel, M. 2005, ApJ, 622, 1320, doi: 10.1086/428282
Saio, H. 1982, ApJ, 256, 717, doi: 10.1086/159945
Schumacher, J., & Sreenivasan, K. R. 2020, Reviews of Modern Physics, 92, 041001, doi: 10.1103/RevModPhys.92.041001
Triana, S. A., Guerrero, G., Barik, A., & Rekier, J. 2022, arXiv e-prints, arXiv:2204.13007. https://arxiv.org/abs/2204.13007
Weideman, J. A. C., & Trefethen, L. N. 1988, SIAM Journal on Numerical Analysis, 25, 1279, doi: 10.1137/0725072
Zaqarashvili, T. V., Oliver, R., Hanslmeier, A., et al. 2015, ApJL, 805, L14, doi: 10.1088/2041-8205/805/2/L14
Zaqarashvili, T. V., Albekioni, M., Ballester, J. L., et al. 2021, SSRv, 217, 15, doi: 10.1007/s11214-021-00790-2
| [
"https://github.com/jishnub/RossbyWaveSpectrum.jl"
] |
[
"Amodal Completion and Size Constancy in Natural Scenes",
"Amodal Completion and Size Constancy in Natural Scenes",
"Amodal Completion and Size Constancy in Natural Scenes",
"Amodal Completion and Size Constancy in Natural Scenes"
] | [
"Abhishek Kar [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"Shubham Tulsiani \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"João Carreira [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"Jitendra Malik [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"Abhishek Kar [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"Shubham Tulsiani \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"João Carreira [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n",
"Jitendra Malik [email protected] \nUniversity of California\n94720Berkeley -BerkeleyCA\n"
] | [
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA",
"University of California\n94720Berkeley -BerkeleyCA"
] | [] | We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes. | 10.1109/iccv.2015.23 | [
"https://arxiv.org/pdf/1509.08147v2.pdf"
] | 1,873,193 | 1509.08147 | 6ce5f60ff1721dd43e03b79cab07a768adaa76c9 |
Amodal Completion and Size Constancy in Natural Scenes
Abhishek Kar [email protected]
University of California
94720Berkeley -BerkeleyCA
Shubham Tulsiani
University of California
94720Berkeley -BerkeleyCA
João Carreira [email protected]
University of California
94720Berkeley -BerkeleyCA
Jitendra Malik [email protected]
University of California
94720Berkeley -BerkeleyCA
Amodal Completion and Size Constancy in Natural Scenes
We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.
Introduction
Consider Figure 1. Humans can effortlessly perceive two chairs of roughly the same height and tell that one is much closer than the other, though still further away than the person, who is taller than the chairs. Compare this to what a state-of-the-art object detector tells us about the image: that there are two chairs, 120 and 40 pixels tall, and one person with 200 pixels from top to bottom. How can we enable computer vision systems to move beyond this crude 2D representation and allow them to capture richer models of their environments, such as those that humans take for granted?

The 3D world is a lot more structured than it looks like from the retina (or from a camera sensor), where objects jump around with each saccade and grow and shrink as we move closer or farther from them. We do not perceive any of this because our brains have learned priors about how visual inputs correlate with the underlying environment, and this allows us to directly access realistic and rich models of scenes. The priors we use can be categorized as being related to either geometry or familiarity.

Figure 1: Perceiving the veridical size of objects in realistic scenes, from a single image, requires disentangling size and depth, being able to compensate for occlusions and to determine intrinsic camera parameters. We tackle all three of these problems, leveraging recent developments in object recognition and large annotated object and scene datasets.
The 3D world is a lot more structured than it looks like from the retina (or from a camera sensor), where objects jump around with each saccade and grow and shrink as we move closer or farther from them. We do not perceive any of this because our brains have learned priors about how visual inputs correlate with the underlying environment, and this allows us to directly access realistic and rich models Figure 1: Perceiving the veridical size of objects in realistic scenes, from a single image, requires disentangling size and depth, being able to compensate for occlusions and to determine intrinsic camera parameters. We tackle all three of these problems, leveraging recent developments in object recognition and large annotated object and scene datasets. of scenes. The priors we use can be categorized as being related to either geometry or familiarity.
Image projection properties, such as the fact that the distance of an object from the camera dictates apparent size and that parallel lines in the scene vanish in the image, provide useful signal for perceiving structure. Familiarity cues are complementary and impose expectations on individual objects and configurations -we expect most objects to be supported by another surface and we have the notion of familiar size -similar objects are of similar sizes. In this work, we exploit geometry and familiarity cues and develop a framework to build richer models of the visual input than those given by current computer vision systems, which are still largely confined to the 2D image plane.
The notion that certain geometrical cues can aid perception has been known since the time of Euclid - the points in the image where objects touch the ground, together with their perceived heights, allow inference of real-world object size ratios [3]. Familiarity cues, on the other hand, must be learned, which can be done using available annotations and building upon rapid recent progress in object recognition, more robustly harnessed to explain novel images. Similar ideas have been proposed by Hoiem et al. [14,15] and Gupta et al. [12], who studied the interaction between object detection and scene layout estimation and showed that, by reasoning over object sizes within their 3D environment, as opposed to within the image, one could perform better object detection. Lalonde et al. [20] and Russell et al. [23] also tackled a problem similar to operationalizing size constancy and inferred object sizes of annotated objects. These works, while sharing similar goals to ours, were limited in their scope as they assumed fully visible instances - object recognition technology at the time being a limiting factor. In this paper, we aim for veridical size estimation in more realistic settings - where occlusions are the rule rather than the exception. Occlusions present a significant technical challenge as they break down a number of assumptions (e.g., in Figure 1, not modeling occlusions would yield an incorrect estimate of the relative depths of the two chairs shown).
To overcome these challenges, we first introduce amodal completion. This is a very well studied ability of human perception, primarily in the context of amodal edge perception [17], building on theories of good continuation [24]. In the context of objects, amodal completion manifests itself as inference of the complete shape of the object despite visual evidence for only parts of it [2]. In Section 2, we tackle the amodal completion task and frame it as a recognition problem, formalized as predicting the full extent of object bounding boxes in an image, as opposed to only the visible extent. We build amodal extent predictors based on convolutional neural networks which we train on the challenging PASCAL VOC dataset. In Section 3, we propose a formulation that, leveraging amodally completed objects, can disentangle relative object sizes and object distances to the camera. This geometric reasoning allows us only to infer distances for objects up to a scaling ambiguity in each image. To overcome this ambiguity, we show in Section 4 that it is possible to leverage statistical dependencies between scenes and intrinsic camera parameters, and learn to predict focal lengths of scenes from large scale scene datasets. Finally, we present qualitative results exhibiting veridical size estimation in complex scenes.
Amodal Completion
"Almost nothing is visible in its entirety, yet almost everything is perceived as a whole and complete" [22]. Classic computer vision approaches have traditionally been impoverished by trying to explain just what we see in an image. For years, standard benchmarks have focused on explaining the visible evidence in the image -not the world behind it. For example, the well-studied task of predicting the bounding box around the visible pixels of an object has been the goal of current object detection systems. As humans, not only can we perceive the visible parts of the chair depicted in Figure 1, we can confidently infer the full extent of the actual chair.
This representation of objects, that humans can effortlessly perceive, is significantly richer than what current systems are capable of inferring. We take a step forward towards achieving similar levels of understanding by attacking the task of perceiving the actual extent of the object, which we denote as amodal completion. The amodal representation of objects enables us to leverage additional scene information such as support relationships, occlusion orderings etc. For example, given the amodal and visible extents of two neighboring objects in the image, one can figure out if one is occluded by the other. Explicitly modeling amodal representations also allow us to implicitly model occlusions patterns rather than trying to "explain them away" while detecting objects. As described in Section 3, we can use these representations to infer real world object sizes and their relative depths just from images.
The primary focus of object recognition systems [11,8] has been to localize and identify objects, despite occlusions, which are usually handled as noise. Several recently proposed recognition systems do explicitly model occlusion patterns along with detections and provide a mechanism for obtaining amodal extent of the object [10,28,32]. However, these approaches have been shown to work only on specific categories and rely on available shape models or depth inputs, for learning to reason over occlusions. In contrast, we aim to provide a generic framework that is not limited by these restrictions. Our proposed framework is described below.
Formulation: Given a candidate visible bounding box, we tackle the task of amodal completion -the input to our system is some modal bounding box (e.g. obtained via a detection system) and we aim to predict the amodal extent for the object. We frame this task as predicting the amodal bounding box, which is defined as the bounding box of an object in the image plane if the object were completely visible, i.e. if inter-object occlusions and truncations were absent. The problem of amodal box prediction can naturally be formulated as a regression task -given a noisy modal bounding box of an object we regress to its amodal bounding box coordinates. The amodal prediction system is implicitly tasked with learning common occlusion/truncation patterns and their effects on visible object size. It can subsequently infer the correct amodal coordinates using the previously learned underlying visual structure corresponding to occlusion patterns. For example, the learner can figure out that chairs are normally vertically occluded by tables and that it should extend the bounding box vertically to predict the full extent of the chair.
Let b = (x, y, w, h) be a candidate visible (or modal) bounding box our amodal prediction system receives ((x, y) are the coordinates of the top-left corner and (w, h) are the width and height of the box, respectively) and b* = (x*, y*, w*, h*) be the amodal bounding box of the corresponding object. Our regression targets are

( (x - x*)/w, (y - y*)/h, ((x + w) - (x* + w*))/w, (h - h*)/h ).

Our choice of targets is inspired by the fact that for the y dimension, the height and bottom of the box are the parameters we actually care about (see Section 3), whereas along the x dimension the left coordinate is not necessarily more important than the right.
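A small self-contained sketch of this parametrization (plain Python, in our notation) and its inverse, which recovers the amodal box from predicted targets:

def amodal_targets(b, b_star):
    # Targets for regressing from a visible box b = (x, y, w, h)
    # to the amodal box b* = (x*, y*, w*, h*), as defined above.
    x, y, w, h = b
    xs, ys, ws, hs = b_star
    return ((x - xs) / w, (y - ys) / h,
            ((x + w) - (xs + ws)) / w, (h - hs) / h)

def apply_targets(b, t):
    # Invert the parametrization: recover the amodal box from predictions.
    x, y, w, h = b
    t1, t2, t3, t4 = t
    xs = x - t1 * w
    ws = (x + w) - t3 * w - xs
    ys = y - t2 * h
    hs = h - t4 * h
    return (xs, ys, ws, hs)

# Round trip: apply_targets(b, amodal_targets(b, bs)) recovers bs exactly.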
Learning: We use a Convolutional Neural Network (CNN) [9,21] based framework to predict the co-ordinates of the amodal bounding box. The hypothesis is that the amodal prediction task can be reliably addressed given just the image corresponding to the visible object region - seeing the left of a car is sufficient to unambiguously infer the full extent without significantly leveraging context. Based on this observation, we extract from input image I the region corresponding to the detection box b and train the CNN using targets derived as above from the amodal box b*. We impose an L2 penalty on the targets and regress from the extracted CNN image features to the targets. We initialize our model using the AlexNet [19] CNN pretrained for Imagenet [5] classification and then finetune the model specific to our task using backpropagation. Training is carried out with jittered instances of the ground truth bounding box to enable generalization from noisy settings such as detection boxes, and this also serves as data augmentation.
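As a hedged PyTorch sketch of the class-specific variant: an ImageNet-pretrained AlexNet trunk whose final classifier layer is replaced by a linear layer emitting 4 box targets per class, trained with the L2 penalty. The torchvision layer indices and weight names below are assumptions of this illustration, not the training code used for the paper.

import torch
import torch.nn as nn
import torchvision

num_classes = 12
model = torchvision.models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 4 * num_classes)  # 4 targets per class
criterion = nn.MSELoss()                                # L2 penalty on the targets

def loss_for_batch(images, targets, labels):
    out = model(images).view(-1, num_classes, 4)
    # select the 4 outputs belonging to each instance's class
    pred = out[torch.arange(labels.shape[0]), labels]
    return criterion(pred, targets)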
We train two variants of the above network - class-specific and class-agnostic. Both these systems comprise 5 convolutional layers followed by 3 fully-connected layers. The class-specific network has separate outputs in the last layers for different classes and is trained with positive examples from a specific class, whereas the class-agnostic network has a single set of outputs across all classes. Intuitively, the class-specific network learns to leverage occlusion patterns specific to a particular class (e.g. chair occluded by a table), whereas the class-agnostic network tries to learn occlusion patterns common across classes. Another argument for a class-agnostic approach is that it is unreasonable to expect annotated amodal bounding box data for a large number of categories. A two-stage system, where we first predict the visible bounding box candidates and then regress from them to amodal boxes, enables leveraging these class-agnostic systems to generalize to more categories. As we demonstrate in Section 3, this class-agnostic network can be applied to novel object categories to learn object sizes.

Figure 2: Generating amodal bounding boxes for instances in PASCAL VOC. We use the 3D models aligned to images from PASCAL 3D+ [29] and render them with their annotated 3D pose to obtain binary masks. We then use the tightest fitting bounding box around the mask as our ground truth amodal bounding box.
Dataset: For the purpose of amodal bounding box prediction, we need annotations for amodal bounding boxes (unlike the visible bounding box annotations present in all standard detection datasets). We use the PASCAL 3D+ [29] dataset, which has approximate 3D models aligned to 12 rigid categories on PASCAL VOC [7], to generate these amodal bounding box annotations. It also contains additional annotations for images from ImageNet [5] for each of these categories (about 22k instances in total from ImageNet). For example, it has 4 different models aligned to "chair" and 10 aligned to "cars". The different models primarily distinguish between subcategories (but might also be redundant). The 3D models in the dataset are first aligned coarsely to the object instances and then further refined using keypoint annotations. As a consequence, they correctly capture the amodal extent of the object and allow us to obtain amodal ground truth. We project the 3D model fitted per instance into the image, extract the binary mask of the projection and fit a tight bounding box around it, which we treat as our amodal box (Figure 2). We train our amodal box regressors on the detection training set of PASCAL VOC 2012 (det-train) and the additional images from ImageNet for these 12 categories which have 3D models aligned in PASCAL 3D+, and test on the detection validation set (det-val) from the PASCAL VOC 2012 dataset.
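The ground-truth construction reduces to a bounding box around the rendered mask; a minimal NumPy sketch:

import numpy as np

def tight_bbox(mask):
    # Tightest (x, y, w, h) box around the nonzero pixels of a rendered
    # binary mask, i.e. the amodal ground-truth box of Figure 2.
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))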
Experiments: We benchmark our amodal bounding box predictor under two settings - going from ground truth visible bounding boxes to amodal boxes, and a detection setting where we predict amodal bounding boxes from noisy detection boxes. We compare against the baseline of using the modal bounding box itself as the amodal bounding box (modal bbox), which is in fact the correct prediction for all untruncated instances. Table 1 summarizes our experiments in the former setting, where we predict amodal boxes from visible ground truth boxes on various subsets of the dataset, and report the mean IoU of our predicted amodal boxes with the ground truth amodal boxes generated from PASCAL 3D+. As expected, we obtain the greatest boost over the baseline for truncated instances. Interestingly, the class agnostic network performs as well as the class specific one, signaling that occlusion patterns span across classes and one can leverage these similarities to train a generic amodal box regressor.
To test our amodal box predictor in a noisy setting, we apply it on bounding boxes predicted by the RCNN [11] system from Girshick et al. We assume a detection to be correct if the RCNN bounding box has an IoU > 0.5 with the ground truth visible box and the predicted amodal bounding box also has an IoU > 0.5 with the ground truth amodal box. We calculate the average precision for each class under the above definition of a "correct" detection and call it the Amodal AP (or AP_am). Table 2 presents our AP_am results on VOC 2012 det-val. As we can see again, the class agnostic and class specific systems perform very similarly. The notable improvement is only in a few classes (e.g. diningtable and boat) where truncated/occluded instances dominate. Note that we do not rescore the RCNN detections using our amodal predictor, and thus our performance is bounded by the detector performance. Moreover, the instances detected correctly by the detector tend to be cleaner ones, and thus the baseline (modal bbox) of using the detector box output as the amodal box also does reasonably well. Our RCNN detector is based on the VGG16 [25] architecture and has a mean AP of 57.0 on the 12 rigid categories we consider. (In Table 1, occ and trunc refer to occluded and truncated instances, respectively; class specific and class agnostic refer to our two variants of training the amodal box regressors (see text for details); and modal bbox refers to the baseline of using the visible/modal bounding box itself as the predicted amodal bounding box.)
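For clarity, the correctness criterion behind AP_am can be written in a few lines of Python (boxes as (x, y, w, h); this is a sketch of the definition, not our evaluation code):

def iou(a, b):
    # Intersection-over-union of two boxes given as (x, y, w, h).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def correct_for_ap_am(det_box, amodal_pred, gt_visible, gt_amodal):
    # A detection counts as correct for AP_am only if both overlaps pass 0.5.
    return iou(det_box, gt_visible) > 0.5 and iou(amodal_pred, gt_amodal) > 0.5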
Armed with amodal bounding boxes, we now show how we tackle the problem of inferring real world object sizes from images.
Untangling Size and Depth
Monocular cues for depth perception have been well-studied in the psychology literature, and two very important cues emerge that tie object size and depth - namely familiar size and relative size. Familiar size is governed by the fact that the visual angle subtended by an object decreases with distance from the observer, and prior knowledge about the actual size of the object can be leveraged to obtain the absolute depth of the object in the scene. Relative size, on the other hand, helps in explaining relative depths and sizes of objects - if we know that two objects are of similar sizes in the real world, the smaller object in the image appears farther. Another simple cue for depth perception arises due to perspective projection - an object further in the world appears higher on the image plane. Leveraging these three cues, we show that one can estimate real-world object sizes from just images. In addition to object sizes, we also estimate a coarse viewpoint for each image in the form of the horizon and camera height.
The main idea behind the algorithm is to exploit pairwise size relationships between instances of different object classes in images. As we will show below, given support points of objects on the ground and some rough estimate of object sizes, one can estimate the camera height and horizon position in the image -and as a result relative object depths. And in turn, given object heights in the image and relative depths, one can figure out the real world object scale ratios. Finally, exploiting these pairwise size evidences across images, we solve for absolute real world sizes (upto a common scale factor or the metric scale factor). Note that we use size and height interchangeably here as our notion of object size here actually refers to the object height.
Camera Model: We use a simplified perspective camera model similar to Hoiem et al. [14]. Let f be the focal length of the camera, θ_x the camera tilt angle along the x-axis, h_c the height of the camera, y_h the horizon position in the image, y_{b_i} the ground support point of the i-th object in the image, and d_i the distance of the i-th object from the camera along the camera axis (z-axis). We assume that the images have been corrected for camera roll and all pixel coordinates are with respect to the optical center (assumed to be the center of the image). Figure 3 provides a toy illustration of our model and parameters.
Assuming that the world frame is centered at the camera with its y axis aligned with the ground, the projection of a world point X = (X_w, Y_w, Z_w) in the image in homogeneous coordinates is given by

\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \frac{1}{Z_w} \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x & 0 \\ 0 & -\sin\theta_x & \cos\theta_x & 0 \end{pmatrix} \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}

For a world point corresponding to the ground contact point of object i, given by (X_w, -h_c, d_i), its corresponding y coordinate in the image y_{b_i} is given by

y_{b_i} = \frac{f(-h_c/d_i + \tan\theta_x)}{1 + (h_c/d_i)\tan\theta_x}.

Under the assumption of the tilt angle being small (tan θ_x ≈ θ_x) and the height of the camera being not too large compared to the object distance (h_c θ_x ≪ d_i), our approximation is

y_{b_i} = -\frac{f h_c}{d_i} + f\theta_x. \quad (1)

Here fθ_x corresponds to the position of the horizon (y_h) in the image. Repeating the above calculation for the topmost point of the object and subtracting from Eq. 1, we obtain

h_i = \frac{f H_i}{d_i}, \quad (2)

where h_i refers to the height of the object in the image and H_i is the real-world height of the object. Our model makes some simplifying assumptions about the scene, namely: objects are assumed to rest on the same horizontal surface (here, the ground) and camera tilt is assumed to be small. We observe that for the purpose of size inference, these assumptions turn out to be reasonable and allow us to estimate heights of objects fairly robustly.
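Combining Eqs. (1) and (2) eliminates the unknown depths d_i (and the focal length) and makes the per-image camera estimate a small linear least-squares problem; the following NumPy sketch (illustrative, in our notation) implements that step:

import numpy as np

def solve_camera(y_b, h, H):
    # Substituting d_i = f H_i / h_i from Eq. (2) into Eq. (1) gives
    # y_{b_i} = y_h - (h_i / H_i) * h_c, linear in the unknowns (h_c, y_h).
    # y_b, h: image measurements per object; H: current height estimates.
    A = np.stack([-h / H, np.ones_like(h)], axis=1)
    (h_c, y_h), *_ = np.linalg.lstsq(A, y_b, rcond=None)
    return h_c, y_h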
Inferring Object Sizes: The important observation here is that the sizes of objects in an object category are not completely random - they potentially follow a multimodal distribution. For example, different subcategories of boats may represent the different modes of the size distribution. Given some initial sizes and size cluster estimates, our algorithm for size estimation (Algorithm 1) works by estimating the horizon and camera height per image (by solving a least squares problem using Eq. 1 and Eq. 2 for all the objects in an image). With the horizon and height estimated per image, we obtain pairwise height ratios H_i/H_j = (h_i/h_j) · (y_{b_j} - y_h)/(y_{b_i} - y_h) for each pair of objects in an image. We obtain multiple such hypotheses across the dataset, which we use to solve a least squares problem for log H - the log height for each size cluster. Finally, we cluster the log sizes obtained in the previous step to obtain new size clusters and iterate. Note that H refers to the vector with heights of the various classes and H_i refers to the real-world size of the i-th object.

Algorithm 1 Object Size Estimation
  Initialize: initial size estimates H and cluster assignments
  while not converged do
    for all images k ∈ Dataset do
      (h_c, y_h) ← SolveLeastSquares(y_b, h, H)
      for all pairs (i, j) of objects in k do
        H_i/H_j ← (h_i/h_j) · (y_{b_j} - y_h)/(y_{b_i} - y_h)
      end for
    end for
    log H ← least squares with pairwise constraints
    GMM-cluster log sizes (log H)
    Reassign objects to clusters
  end while

This particular model is equivalent to solving a latent variable model where the latent variables are the cluster memberships of the instances, and the estimated variables are the heights corresponding to the size clusters and the horizon and camera height for each image. The loss function we try to minimize is the mean squared error between the ground contact point predicted by the model and that of the amodal bounding box. Finally, the logs of the object heights are assumed to follow a Gaussian mixture. This final assumption ties in elegantly with psychophysics studies, which have found that our mental representation of object size (referred to as assumed size [16,1,6]) is proportional to the logarithm of the real-world object size [18].

Figure 4: Inferred log size distributions of 12 object categories on PASCAL VOC. We use our class agnostic amodal bounding box predictor to predict amodal boxes for all instances in VOC 2012 det-val and use them with our object size estimation system to estimate size distributions for various categories. The plots above show distributions of the log size with the mean size being shown by the orange line.
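The global step of Algorithm 1, solving for log H from the pairwise ratio evidences, is also a linear least-squares problem. A minimal NumPy sketch follows; pinning the mean log height to zero fixes the overall (metric) scale and is an assumption of this illustration, since in practice we initialize with rough mean heights instead:

import numpy as np

def solve_log_heights(pairs, n_clusters):
    # Each pairwise evidence (i, j, ratio) with ratio ~ H_i / H_j gives one
    # linear equation: log H_i - log H_j = log ratio.
    rows, rhs = [], []
    for i, j, ratio in pairs:
        row = np.zeros(n_clusters)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(np.log(ratio))
    rows.append(np.ones(n_clusters))   # extra row: pin the mean log height to zero
    rhs.append(0.0)
    logH, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return logH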
Our image evidence in the above procedure includes the ground support points and heights of all the objects in the image. Note that amodal bounding boxes for objects provide us exactly this information. They account for occlusions and truncations and give us an estimate of the full extent of the object in the image. The above algorithm with occluded/truncated visible bounding boxes would fail miserably, so we use our amodal bounding box predictor to first "complete" the bounding boxes before using our size inference algorithm to infer object heights.
Inferring Object Size Statistics on PASCAL VOC: We used our size estimation system on PASCAL VOC to estimate size distributions of objects. First, we use our class agnostic amodal bounding box predictor on ground truth visible bounding boxes of all instances on VOC 2012 det-val to "upgrade" them to amodal boxes. We initialize our system with a rough mean height for each object class obtained from internet sources (Wikipedia, databases of cars, etc.) and run our size estimation algorithm on these predicted amodal boxes. Figure 4 shows the distributions of log sizes of objects of various categories in PASCAL VOC. Most categories exhibit peaky distributions, with classes such as "boat" and "chair" having longer tails owing to comparatively large intra-class variation. Note that we experimented with using multiple size clusters per class for this experiment, but the peaky, long-tailed nature of these distributions meant that a single Gaussian capturing the log size distributions sufficed. In addition to inferring object sizes, we also infer the horizon position and the height of the camera. The median height of the camera across the dataset was 1.4 metres (roughly the height at which people take images) and also exhibited a long-tailed distribution (please refer to the supplementary material for details). Some examples of amodal bounding boxes estimated for all instances from visible bounding boxes, along with horizons, are shown in Figure 5.
Scenes and Focal Lengths
The focal length of a camera defines its field of view and hence determines how much of a scene is captured in an image taken by the camera. It is an important calibration parameter for obtaining metric, as opposed to projective, measurements from images. The focal length is usually calibrated using multiple images of a known object [30], such as a chessboard, or as part of bundle adjustment [26], from multiple images of realistic scenes. It is one of the best-studied sub-fields of computer vision - e.g. see [13]. Well-known existing approaches require a minimum set of vanishing lines [27] or exploit Manhattan-world assumptions [4]. These techniques are very precise and elegant, but not generally applicable (e.g. beach or forest images, etc.).
We propose instead a learning approach that predicts focal length based on statistical dependencies between scene classes and fields of view. Given the same scene, images taken with large focal lengths will have fewer things in them than those captured with small focal lengths, and this provides a cue for determining focal length. However, certain scenes also have more things than others. This ambiguity can be resolved by training a predictor with many images of each scene class, taken with different focal lengths.
Additionally, certain scenes tend to be pictured with preferred focal lengths. As an example, consider a scene class of "pulpits". If a picture of a pulpit is taken with a short focal length, then the whole church will be visible and that image will not be tagged as a pulpit scene. In order for a pulpit to be dominant in a picture taken with a short focal length camera, then the photographer would have to be unnaturally close to it.
Data: We use the Places database [31], a large dataset that provides a dense sampling of scenes in natural images: it has 205 scene classes, as diverse as swimming pool and rope bridge, and 2.5 million images. We were able to scrape focal length metadata from EXIF tags of approximately 20k examples, on average 100 per class, and split these into a training set having 15k and a validation set of 5k images.
Learning: We considered the problem of predicting the ratio of the focal length to the camera sensor width, which, when multiplied by the size of the image in pixels, gives the desired focal length in pixels. We clustered the logarithm of this ratio into 10 bins using k-means and formulated the prediction problem as classification, using a softmax loss. Images in the bins with the highest and smallest focal length ratios are shown in Figure 6. We experimented with finetuning different popular convolutional networks, including two trained on Imagenet classification - AlexNet [19] and VGG-Deep16 [25] - and a network trained on the Places scenes - the PlacesNet [31].
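A short sketch of the target construction (Python with scikit-learn; the EXIF-derived numbers below are synthetic placeholders, not values from our dataset):

import numpy as np
from sklearn.cluster import KMeans

# Synthetic placeholders for the EXIF-derived quantities.
rng = np.random.default_rng(0)
focal_mm = rng.uniform(18, 200, size=1000)                  # focal length, mm
sensor_mm = rng.choice([5.6, 7.6, 23.6, 36.0], size=1000)   # sensor width, mm
ratio = focal_mm / sensor_mm   # times image width in pixels = focal length in pixels

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(np.log(ratio)[:, None])
labels = km.labels_            # 10-way classification targets for the softmax loss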
Results:
The results are shown in Table 3 and suggest that focal length can indeed be predicted directly from images, at least approximately, and that pretraining on annotated scene class data makes a good match with this task. Our best model can predict the correct focal length quite repeatably among the top-three and top-five predictions. As baselines, we measure chance performance, and performance when picking the mode of the distribution on the training set - the bin having the most elements. Note that the bins are unbalanced (please refer to the supplementary material for the distribution of focal lengths across our dataset). Note that our goal is not high precision of the type that is necessary for high-fidelity reconstruction; we aim for a coarse estimate of the focal length that can be robustly computed from natural images. Our results in this section are a first demonstration that this may be feasible.

Table 3: Focal length misclassification rate (top-1, top-3 and top-5 predictions) of networks pretrained on object images from Imagenet and the Places dataset. Lower is better.

Figure 6: Example images from the Places dataset from clusters with the largest (up) and smallest (down) focal lengths. Note how images with small focal lengths tend to be more cluttered. A pattern we observed is that dangerous or unaccessible scenes, such as those having volcanos, wild animals and boats, tend to be captured using very-high focal lengths, which is rational.
Conclusion
We have studied the problem of veridical size estimation in complex natural scenes, with the goal of enriching the visual representations inferred by current recognition systems. We presented techniques for performing amodal completion of detected object bounding boxes, which together with geometric cues allow us to recover relative object sizes, and hence achieve a desirable property of any perceptual system - size constancy. We have also introduced and demonstrated a learning-based approach for predicting focal lengths, which can allow for metrically accurate predictions when standard auto-calibration cues or camera metadata are unavailable. We strived for generality by leveraging recognition. This is unavoidable because the size constancy problem is fundamentally ill-posed and can only be dealt with probabilistically.
We also note that while the focus of our work is to enable veridical size prediction in natural scenes, the three components we have introduced to achieve this goal -amodal completion, geometric reasoning with size constancy and focal length prediction are generic and widely applicable. We provided individual evaluations of each of these components, which together with our qualitative results demonstrate the suitability of our techniques towards understanding real world images at a rich and general level, beyond the 2D image plane.
Figure 3: Toy example illustrating our camera model and parameters. Please refer to the text for detailed explanations.

Figure 5: Amodal bounding box prediction and size estimation results on images in PASCAL VOC. The solid rectangles represent the visible bounding boxes and the dotted lines are the predicted amodal bounding boxes, with heights in meters. The horizontal red line denotes the estimated horizon position for the image.
Table 1: Mean IoU of amodal boxes predicted from the visible bounding box on various subsets of the validation set in PASCAL VOC.

Table 2: AP_am for our amodal bounding box predictors on VOC 2012 det-val. AP_am is defined as the average precision when a detection is assumed to be correct only when both the modal and amodal bounding boxes have IoU > 0.5 with their corresponding ground truths.
Table 3 (data):

ConvNet                 top-1   top-3   top-5
Chance                   90.0    70.0    50.0
Mode Selection           60.2    26.4     8.7
AlexNet-Imagenet         57.1    18.8     3.9
VGG-Deep16-Imagenet      55.8    15.9     3.3
PlacesNet-Places         54.3    15.3     3.1
AcknowledgementsThis work was supported in part by NSF Award IIS-1212798 and ONR MURI-N00014-10-1-0933. Shubham Tulsiani was supported by the Berkeley fellowship and João Carreira was supported by the Portuguese Science Foundation, FCT, under grant SFRH/BPD/84194/2012. We gratefully acknowledge NVIDIA corporation for the donation of Tesla GPUs for this research.
[1] Baird, J. Retinal and assumed size cues as determinants of size and distance perception. Journal of Experimental Psychology, 1963.
[2] Breckon, T. P., and Fisher, R. B. Amodal volume completion: 3d visual completion. Computer Vision and Image Understanding, 2005.
[3] Burton, H. E. The optics of Euclid. J. Opt. Soc. Am., 1945.
[4] Caprile, B., and Torre, V. Using vanishing points for camera calibration. International Journal of Computer Vision, 1990.
[5] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[6] Epstein, W. The influence of assumed size on apparent distance. The American Journal of Psychology, 1963.
[7] Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html
[8] Felzenszwalb, P. F., Girshick, R. B., McAllester, D., and Ramanan, D. Object detection with discriminatively trained part-based models. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2010.
[9] Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 1980.
[10] Ghiasi, G., Yang, Y., Ramanan, D., and Fowlkes, C. C. Parsing occluded people. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[11] Girshick, R., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[12] Gupta, A., Efros, A. A., and Hebert, M. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In European Conference on Computer Vision, 2010.
[13] Hartley, R., and Zisserman, A. Multiple view geometry in computer vision. Cambridge University Press, 2003.
[14] Hoiem, D., Efros, A. A., and Hebert, M. Putting objects in perspective. International Journal of Computer Vision, 2008.
[15] Hoiem, D., and Savarese, S. Representations and techniques for 3D object recognition and scene interpretation. Morgan & Claypool Publishers, 2011.
[16] Ittelson, W. H. Size as a cue to distance: Static localization. The American Journal of Psychology, 1951.
[17] Kanizsa, G. Organization in vision: Essays on Gestalt perception. Praeger Publishers, 1979.
[18] Konkle, T., and Oliva, A. Canonical visual size for real-world objects. Journal of Experimental Psychology: Human Perception and Performance, 2011.
[19] Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[20] Lalonde, J.-F., Hoiem, D., Efros, A. A., Rother, C., Winn, J., and Criminisi, A. Photo clip art. In ACM Transactions on Graphics (TOG), 2007.
[21] LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropagation applied to hand-written zip code recognition. In Neural Computation, 1989.
[22] Palmer, S. E. Vision science: Photons to phenomenology. MIT Press, Cambridge, MA, 1999.
[23] Russell, B. C., and Torralba, A. Building a database of 3d scenes from user annotations. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[24] Shipley, T. F., and Kellman, P. J. From fragments to objects: Segmentation and grouping in vision, volume 130. Elsevier, 2001.
[25] Simonyan, K., and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[26] Triggs, B., McLauchlan, P. F., Hartley, R. I., and Fitzgibbon, A. W. Bundle adjustment - a modern synthesis. In Vision algorithms: theory and practice. Springer, 2000.
[27] Wang, L.-L., and Tsai, W.-H. Camera calibration by vanishing lines for 3-d computer vision. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1991.
[28] Xiang, Y., Choi, W., Lin, Y., and Savarese, S. Data-driven 3d voxel patterns for object category recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[29] Xiang, Y., Mottaghi, R., and Savarese, S. Beyond PASCAL: A benchmark for 3d object detection in the wild. In IEEE Winter Conference on Applications of Computer Vision, 2014.
[30] Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000.
[31] Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., and Oliva, A. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, 2014.
[32] Zia, M. Z., Stark, M., and Schindler, K. Towards scene understanding with detailed 3d object representations. International Journal of Computer Vision, 2014.
| [] |
[
"Characterization of anomalous diffusion through convolutional transformers",
"Characterization of anomalous diffusion through convolutional transformers"
] | [
"Nicolás Firbas \nDBS -Department of Biological Sciences\nNational University of Singapore\n16 Science Drive 4117558Singapore, Singapore\n",
"Òscar Garibo-I-Orts \nVRAIN -Valencian Research Institute for Artificial Intelligence\nUniversitat Politècnica de València\n46022ValènciaSpain\n",
"Miguelángel Garcia-March \nIUMPA -Instituto Universitario de Matemática Pura y Aplicada\nUniversitat Politècnica de València\n46022ValènciaSpain\n",
"J Alberto Conejero †[email protected]. \nIUMPA -Instituto Universitario de Matemática Pura y Aplicada\nUniversitat Politècnica de València\n46022ValènciaSpain\n"
] | [
"DBS -Department of Biological Sciences\nNational University of Singapore\n16 Science Drive 4117558Singapore, Singapore",
"VRAIN -Valencian Research Institute for Artificial Intelligence\nUniversitat Politècnica de València\n46022ValènciaSpain",
"IUMPA -Instituto Universitario de Matemática Pura y Aplicada\nUniversitat Politècnica de València\n46022ValènciaSpain",
"IUMPA -Instituto Universitario de Matemática Pura y Aplicada\nUniversitat Politècnica de València\n46022ValènciaSpain"
] | [] | The results of the Anomalous Diffusion Challenge (AnDi Challenge)[30]have shown that machine learning methods can outperform classical statistical methodology at the characterization of anomalous diffusion in both the inference of the anomalous diffusion exponent α associated with each trajectory (Task 1), and the determination of the underlying diffusive regime which produced such trajectories (Task 2). Furthermore, of the five teams that finished in the top three across both tasks of the AnDi challenge, three of those teams used recurrent neural networks (RNNs). While RNNs, like the long short-term memory (LSTM) network, are effective at learning long-term dependencies in sequential data, their key disadvantage is that they must be trained sequentially. In order to facilitate training with larger data sets, by training in parallel, we propose a new transformer based neural network architecture for the characterization of anomalous diffusion. Our new architecture, the Convolutional Transformer (ConvTransformer) uses a bi-layered convolutional neural network to extract features from our diffusive trajectories that can be thought of as being words in a sentence. These features are then fed to two transformer encoding blocks that perform either regression (Task 1) or classification (Task 2). To our knowledge, this is the first time transformers have been used for characterizing anomalous diffusion. Moreover, this may be the first time that a transformer encoding block has been used with a convolutional neural network and without the need for a transformer decoding block or positional encoding. Apart from being able to train in parallel, we show that the ConvTransformer is able to outperform the previous state of the art at determining the underlying diffusive regime (Task2) in short trajectories (length 10-50 steps), which are the most important for experimental researchers. | 10.1088/1751-8121/acafb3 | [
"https://export.arxiv.org/pdf/2210.04959v1.pdf"
] | 252,815,397 | 2210.04959 | 884f180b1262a4c2ada0ad4ca0ec762822bfde2d |
Characterization of anomalous diffusion through convolutional transformers
October 12, 2022 10 Oct 2022
Nicolás Firbas
DBS -Department of Biological Sciences
National University of Singapore
16 Science Drive 4117558Singapore, Singapore
Òscar Garibo-I-Orts
VRAIN -Valencian Research Institute for Artificial Intelligence
Universitat Politècnica de València
46022ValènciaSpain
Miguelángel Garcia-March
IUMPA -Instituto Universitario de Matemática Pura y Aplicada
Universitat Politècnica de València
46022ValènciaSpain
J Alberto Conejero †[email protected].
IUMPA -Instituto Universitario de Matemática Pura y Aplicada
Universitat Politècnica de València
46022ValènciaSpain
Characterization of anomalous diffusion through convolutional transformers
Keywords: anomalous diffusion, machine learning, recurrent neural networks, convolutional networks, transformers, attention
The results of the Anomalous Diffusion Challenge (AnDi Challenge) [30] have shown that machine learning methods can outperform classical statistical methodology at the characterization of anomalous diffusion, both in the inference of the anomalous diffusion exponent α associated with each trajectory (Task 1) and in the determination of the underlying diffusive regime which produced such trajectories (Task 2). Furthermore, of the five teams that finished in the top three across both tasks of the AnDi Challenge, three used recurrent neural networks (RNNs). While RNNs, like the long short-term memory (LSTM) network, are effective at learning long-term dependencies in sequential data, their key disadvantage is that they must be trained sequentially. In order to facilitate training with larger data sets, by training in parallel, we propose a new transformer-based neural network architecture for the characterization of anomalous diffusion. Our new architecture, the Convolutional Transformer (ConvTransformer), uses a bi-layered convolutional neural network to extract features from our diffusive trajectories that can be thought of as being words in a sentence. These features are then fed to two transformer encoding blocks that perform either regression (Task 1) or classification (Task 2). To our knowledge, this is the first time transformers have been used for characterizing anomalous diffusion. Moreover, this may be the first time that a transformer encoding block has been used with a convolutional neural network and without the need for a transformer decoding block or positional encoding. Apart from being able to train in parallel, we show that the ConvTransformer is able to outperform the previous state of the art at determining the underlying diffusive regime (Task 2) in short trajectories (length 10-50 steps), which are the most important for experimental researchers.
Introduction
It could be said that the study of diffusion began in 1827, when Brown first observed the motion, which now bears his name, of pollen from Clarkia pulchella suspended in water [5]. This movement results from small particles being bombarded by the molecules of the liquid in which they are suspended, as was first conjectured by Einstein and later verified by Perrin [32]. Though Brown never managed to explain the movement he observed, we now know that Brownian motion is a kind of normal diffusion.
To describe diffusion, we can consider the following analogy: let us imagine a particle being an ant, or some other diminutive explorer; we can then think of the mean squared displacement (MSD), which can be written as ⟨x²⟩, as the portion of the system that it has explored. For normal diffusion, such as Brownian motion, the relation between the explored region and time is linear, ⟨x²⟩ ∼ t. As time progresses, the expected squared distance explored by our ant (the MSD) grows at a constant rate. In contrast to normal diffusion, anomalous diffusion is characterized by ⟨x²⟩ ∼ t^α, α ≠ 1. Anomalous diffusion can be further subdivided into super-diffusion and sub-diffusion, when α > 1 or α < 1, respectively. To continue using the analogy of our ant, an intuitive example of sub-diffusion would be diffusion on a fractal. In this case, it is easy to see how, as time progresses and our ant ventures into zones of increasing complexity, its movement will in turn be slowed. Thus, the relationship between space explored and time will be ⟨x²⟩ ∼ t^α, α < 1. Conversely, if we give our ant wings and have it randomly take flight at random times t_i sampled from t^{−σ−1}, with flight times positively correlated to the wait time, then for σ ∈ (0, 2) we would have a super-diffusive Lévy flight trajectory.
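To make the MSD picture concrete, the following minimal numpy sketch estimates α from a single trajectory by fitting the slope of log(MSD) against log(lag). This naive time-averaged estimator is exactly the kind of classical approach that struggles on the short, noisy trajectories discussed later; the function names are our own illustration, not taken from any package.

```python
import numpy as np

def msd(x, max_lag=None):
    """Time-averaged mean squared displacement of a 1D trajectory x."""
    n = len(x)
    max_lag = max_lag or n // 4
    lags = np.arange(1, max_lag)
    return lags, np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

def fit_alpha(lags, m):
    """Estimate alpha from the slope of log(MSD) versus log(lag)."""
    slope, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return slope

# Example: ordinary Brownian motion should give alpha close to 1.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=1000))
lags, m = msd(traj)
print(f"estimated alpha ~ {fit_alpha(lags, m):.2f}")
```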
Since the discovery of Brownian motion, many systems have shown diffusive behavior that deviates from the normal one, where MSD scales linearly with time. These systems can range from the atomic scale to complex organisms such as birds. Examples of such diffusive systems include ultra-cold atoms [33], telomeres in the nuclei of cells [4], moisture transport in cement-based materials, the free movement of arthropods [31], and the migration patterns of birds [39]. Anomalous diffusive patterns can even be observed in signals that are not directly related to movement, such as heartbeat intervals and DNA [6, pg. 49-89]. The interdisciplinary scope of anomalous diffusion highlights the need for modeling frameworks that are able to quickly and accurately characterize diffusion in real-life scenarios, where data is often limited and noisy.
Despite the importance of anomalous diffusion in many fields of study [23], detection and characterization remain difficult to this day. Traditionally, the mean squared displacement (MSD(t) ∼ t^α) and its anomalous diffusion exponent α have been used to characterize diffusion. In practice, computation of the MSD is often challenging, as we often work with a limited number of points in the trajectories, which may be short and/or noisy, highlighting the need for a method robust to real-world conditions. The problem with using α alone to characterize anomalous diffusion is that trajectories often have the same anomalous diffusion exponent while having different underlying diffusive regimes. An example would be the motion of messenger RNA (mRNA) in a living E. coli cell. The individual trajectories of the mRNA share roughly the same α despite being quite distinct [26].
Being able to classify trajectories based on their underlying diffusive regime is useful because it can shed light on the underlying behavior of the particles undergoing diffusion. This could be more important for experimental researchers, who may be more concerned with how a particle moves, not necessarily how much it has moved. In this vein, the AnDi (Anomalous Diffusion) Challenge organizers identified the following five diffusive models [28] with which to classify trajectories: the continuous-time random walk (CTRW) [34], fractional Brownian motion (FBM) [22], the Lévy walk (LW) [15], annealed transient time motion (ATTM) [24], and scaled Brownian motion (SBM) [18]. This information is not meant to supplant traditional MSD-based analysis; rather, it is meant to give us additional information about the underlying stochastic process behind the trajectory. For example, for a particular exponent α, one may not have access to an ensemble of homogeneous trajectories. Moreover, one cannot assure that all measured trajectories have the same behavior and can therefore be associated with the same anomalous exponent α. In these cases, it may be possible to explain the behavior of the diffusing particles by using what we know about the five models mentioned above.
The first applications of machine learning methods to the study of diffusion aimed to qualitatively discriminate among confined, anomalous, normal, and directed motion [8,16]. These ML models did not extract quantitative information, nor did they determine the underlying physical model. Long short-term memory (LSTM) recurrent neural networks [13] were first considered for the analysis of anomalous diffusion trajectories from experimental data in [3]. Later, Muñoz-Gil et al. [27] computed the distances between consecutive positions in raw trajectories and normalized them by dividing by the standard deviation. Their cumulative sums then fed random forest algorithms that permit inferring the anomalous exponent α and classifying the trajectory into one of the CTRW, FBM, or LW models. Random forests and gradient boosting methods were already considered for the study of fractional anomalous diffusion of single-particle trajectories in [14,20].
The results of the AnDi Challenge [30] showed that machine learning (ML) algorithms outperform traditional statistical techniques in the inference of the anomalous diffusion exponent (Task 1) and in the classification of the underlying diffusion model (Task 2), across one, two, and three dimensions. Some of the most successful techniques consisted of: a couple of convolutional layers combined with some bidirectional LSTM layers and a final dense layer [9], two LSTM layers of decreasing size with a final dense layer [1], a WaveNet encoder with LSTM layers [17], or the extraction of classical statistical features combined with three deep feed-forward neural networks [10].
As we can see, the best performing methods from the AnDi Challenge were either entirely based on LSTM recurrent neural networks or incorporated them as part of a larger architecture. For many years, LSTMs have been one of the most successful techniques in natural language processing (NLP) and time series analysis. As a matter of fact, the Google Translate algorithm is a stack of just seven large LSTM layers [41]. However, since the landmark paper Attention is All You Need [38], transformers have become the dominant architecture in NLP, where they have surpassed previous models based on convolutions and recurrent neural networks [40]. Inspired by the transformers' success, and by drawing a parallel between the sequential nature of language and the diffusion of a single particle, we propose a new architecture combining convolutional layers with transformers: the Convolutional Transformer (ConvTransformer).
The Convolutional Transformer
The ConvTransformer has been applied to both the inference of the anomalous diffusion exponent α (Task 1) and the determination of the underlying diffusion model (Task 2). As the name suggests, the ConvTransformer uses two convolutional layers followed by transformer encoding blocks. However, unlike the transformer in [38], our method uses only two transformer encoding blocks in sequence, without a transformer decoding block or positional encoding. The convolutional layers behave as an encoder, extracting both linear and non-linear features from the trajectory while retaining spatiotemporal awareness, eliminating the need for positional encoding. These features are then passed to the transformer encoding layers, where attention is performed upon them. The ConvTransformer structure can be intuitively understood if we consider that a single trajectory is akin to a sentence. In this analogy, the CNNs are used to create pseudo-words, the features produced by the convolutional layers. Finally, we perform attention twice on the pseudo-words with our transformer encoder, which allows us to determine which features are the most important; from there, we are able to obtain either our α or the underlying diffusive model.
The ConvTransformer does not require positional encoding because the CNN kernel moves across the trajectory to create the features. As the CNN kernel moves along the trajectory, it learns positional information, negating the need for positional encoding prior to the transformer encoding block. This was assessed by testing the ConvTransformer on Task 2 with five-fold validation on a training set of size 50K (32K for training, 8K for validation, and 10K for testing), using the same set of hyper-parameters, with and without the trigonometric encoding scheme used in Vaswani et al. 2017 [38]. The five-fold validation showed that mean classification accuracy decreased from 75.66% (standard deviation 1.54) without positional encoding to 72.39% (standard deviation 4.86) with it. Thus, positional encoding did not improve ConvTransformer performance, and it was omitted from the model.
In Figure 1, we show a diagram detailing the structure of the ConvTransformer. As was previously said, the ConvTransformer uses two convolutional layers: one which scales our trajectory up to 20 features, and a second which takes those 20 features and outputs 64 features. This structure allows the CNN to learn lower-level features first and then refine those features in the subsequent layer. Both convolutional layers use a kernel size of 3 and a stride of 1, and each is followed by a rectified linear unit (ReLU) and a dropout with a 5% probability of setting a learned parameter to 0 to avoid over-fitting. At the end of the convolutional block, we do a pooling with kernel size 2, which cuts the length of our output in half. This helps conserve video memory (VRAM), optimizing resource consumption and democratizing the model, as it can run on consumer-grade hardware. The transformer encoding block follows the basic structure of the transformer encoding block from [38]. It uses wide multi-headed attention with 16 attention heads. The attention mechanism is followed by layer normalization and dropout. This output feeds two linear layers separated by a ReLU, which ultimately goes to another dropout. This transformer encoding block then feeds into another transformer encoding block. The output of this final transformer encoding block then goes to a max function, which takes the largest value of the output tensor by column. Finally, the output of the ConvTransformer feeds a linear layer that outputs size one for Task 1 or size five, one for each of the categories in Task 2.
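For illustration, the following PyTorch sketch captures the architecture just described (two 1D convolutions 1 → 20 → 64 with kernel size 3 and stride 1, ReLU and 5% dropout, pooling with kernel size 2, two transformer encoder blocks with 16 heads, a column-wise max, and a final linear layer). Details the text does not specify, such as the feed-forward width inside the encoder blocks, are left at PyTorch defaults and should be treated as assumptions.

```python
import torch
import torch.nn as nn

class ConvTransformer(nn.Module):
    """Illustrative sketch of the architecture described above."""
    def __init__(self, n_out=5, d_model=64, n_heads=16):
        super().__init__()
        # Two 1D convolutions: 1 -> 20 -> 64 features, kernel 3, stride 1.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 20, kernel_size=3, stride=1), nn.ReLU(), nn.Dropout(0.05),
            nn.Conv1d(20, d_model, kernel_size=3, stride=1), nn.ReLU(), nn.Dropout(0.05),
            nn.MaxPool1d(2),  # halves the sequence length to save VRAM
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_out)  # n_out=1 (Task 1) or 5 (Task 2)

    def forward(self, x):             # x: (batch, length) raw trajectory values
        z = self.cnn(x.unsqueeze(1))  # (batch, d_model, length'): the "pseudo-words"
        z = self.encoder(z.transpose(1, 2))     # attention over the pseudo-words
        return self.head(z.max(dim=1).values)   # column-wise max, then linear

model = ConvTransformer(n_out=5)
logits = model(torch.randn(8, 100))  # batch of 8 trajectories of length 100
```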
Methods
Generation of Data sets
All of the data sets used to train and test our models were generated with the Python 3 package provided by the AnDi Challenge [30]. The code was made freely available by the AnDi Challenge organizers at: https://github.com/AnDiChallenge/ANDI_datasets.
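As a rough sketch, generating such data with the 2020 challenge release of the package looks like the following; the exact function names and signatures may differ between package versions, so treat this as indicative rather than definitive.

```python
from andi import andi_datasets  # pip install andi-datasets (2020 challenge release)

AD = andi_datasets()
# Returns trajectories/labels for the requested tasks in the requested dimensions;
# here, Task 1 (alpha regression) and Task 2 (model classification) in 1D.
X1, Y1, X2, Y2, X3, Y3 = AD.andi_dataset(N=10000, tasks=[1, 2], dimensions=[1])
```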
Hyper-Parameter Selection
In order to select the hyper-parameters, we chose a relatively small dataset on which to train all permutations of the candidate hyper-parameters, using five-fold validation to ensure that model performance for a given hyper-parameter set was reliable across runs. The hyper-parameter sets were assessed using a data set with 50K trajectories of lengths [10,1000] across both Task 1 and Task 2, which was broken up into 32K for training, 8K for validation, and 10K for testing. The tested hyper-parameter sets were then evaluated as per the five-fold validation, and the hyper-parameters were chosen for the final model, except for the learning rate (LR), which has to be scaled up with respect to training set size and batch size [35]. In the selection process, model performance and feasibility were assessed to ensure that model training could take place on our hardware within a reasonable amount of time. Different sets of hyper-parameters were tested for both Task 1 and Task 2 but, in the end, we found that the same hyper-parameters work well for both tasks. These final hyper-parameters can be found in Table 1.
In order to generalize the learning rate to larger training data sets, we used the results from [35] to relate the noise scale (g) during training to the batch size (B), training set size (N), and learning rate (ε), as shown in Equation 1.
g ≈ εN/B. (1)
During the training process, we found that an LR of 0.01 worked well across both tasks. Using Equation 1, this would give us an equivalent LR of 2.133 × 10⁻⁵ when training with 1.35 × 10⁶ trajectories. We used this value as a baseline, and we ended up setting an LR of 2.133 × 10⁻⁴ for training the final model on the larger data set, as can be seen in Table 1.
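For concreteness, the scaling rule of Equation 1 can be wrapped in a small helper that keeps the noise scale g constant when the training-set size changes. This is a sketch of the stated relation only; as noted above, the final learning rate was ultimately set by hand using the rule as a baseline, and the numbers in the example call are illustrative.

```python
def scaled_lr(base_lr, base_n, new_n, batch_size=32):
    """Rescale the learning rate so that g = lr * N / B stays constant (Eq. 1)."""
    g = base_lr * base_n / batch_size   # noise scale of the reference run
    return g * batch_size / new_n       # learning rate preserving g at the new N

# Illustrative call: rescaling from a 50K-trajectory run to a 2M-trajectory run.
lr = scaled_lr(base_lr=0.01, base_n=50_000, new_n=2_000_000)
```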
Our testing revealed that using smaller batches and more heads improved performance. However, decreasing the batch size greatly increases the running time. Ideally, we would have used 32 or more heads. However, we were constrained by our equipment's 8GB video memory (GTX 1070). Thus, with the above set of hyperparameters, we strove to attain a good balance of speed and performance with our hardware constraints.
Model Training
All model training was conducted in Python 3.8.5 using PyTorch in Jupyter Notebook 6.0.3. For simplicity, we trained all our models for Task 1 and Task 2 on data sets of size 2 million using the same split as before: 75% of the data for training and 25% for testing, with the training set further broken up as 90% for training and 10% for validation, the latter used by early stopping to halt the training [25]. Thus, the final training sets were broken up into 1.35 × 10⁶ trajectories for training, 1.50 × 10⁵ for validation, and 5 × 10⁵ for testing.
For Task 1, we trained 12 models, each model corresponding to a batch of trajectory lengths: [10,20], [21,30], [31,40], [41,50], [51,100], [101,200], [201,300], [301,400], [401,500], [501,600], [601,800], and [801,1000]. All datasets are of size 2 × 10⁶, with the aforementioned training/test/validation split ratio. By default, the andi-datasets package [29] generates trajectories with anomalous exponent α ∈ [0.05, 2) in intervals of 0.05. This means that 39 different alpha values can be generated, and there are five diffusion models, for a total of 195 different kinds of trajectories. Generating data sets of size 2 × 10⁶ ensures that each of the 195 combinations of diffusion model and α has a representative sample size of about 10⁵. Naturally, this is lower after splitting the data sets into training, test, and validation. However, data sets of this size were a good compromise between model performance and training time on our hardware.
In order to accelerate the training of the 12 models, we reduced the patience of our early stopping function from ten, used in hyper-parameter selection, to five while maintaining the number of epochs at 100. Additionally, we conducted the training so that models inherit the parameter state of a previously trained model. This has two advantages: Firstly, it indirectly exposes the model to more unique trajectories, as the model will inherit a parameter state that was trained on a different data set, thus reducing overfitting. Secondly, it jump-starts the training of each model with a parameter state that was trained on longer trajectories, which should contain relevant information for classifying shorter trajectories.
To implement this training scheme, we trained the first ConvTransformer on the easiest dataset, trajectories of length [801,1000], as can be inferred from our testing and the results in the AnDi Challenge [30]. The parameter state of this model is then used as the starting parameter state for the next model, which is trained on trajectories of lengths [601,800], and so forth until the final model is trained on trajectories of length [10,20]. Once we have completed the first training pass, we loop back to the top and repeat the process, with model [801,1000] from round two inheriting the parameter state of model [10,20] from the first training round. Finally, the round-two models are tested on every testing data set, and the best models at each trajectory length are selected. This final selection process resulted in 11 models, as the models trained on [401,500], [601,800], and [501,600] outperformed other models at their native trained trajectory lengths. Thus, our compiled model for Task 1 consists of 11 models, each of which is in charge of certain trajectory lengths (a schematic of the training loop is sketched below). Finally, for Task 2, a single model was used across all trajectory lengths (10 to 1000), as we were able to improve upon the state of the art while maintaining parsimony, as we show in Figure 3. The transformer was trained using 100 epochs, a patience of 10, and a single data set with trajectory lengths [10,1000].
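A schematic of the Task 1 inheritance loop follows; `train` and `dataset_for` are hypothetical placeholders for the actual training routine and data loading, and `ConvTransformer` refers to the illustrative sketch given earlier.

```python
# Length buckets ordered from easiest (longest) to hardest (shortest).
buckets = [(801, 1000), (601, 800), (501, 600), (401, 500), (301, 400),
           (201, 300), (101, 200), (51, 100), (41, 50), (31, 40),
           (21, 30), (10, 20)]

state = None
for round_idx in range(2):                      # two training rounds
    for bucket in buckets:
        model = ConvTransformer(n_out=1)        # Task 1: regress alpha
        if state is not None:
            model.load_state_dict(state)        # inherit previous parameters
        train(model, dataset_for(bucket))       # hypothetical helpers
        state = model.state_dict()              # pass the state onward
```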
Results
We use the AnDi Interactive Tool ‡ extensively in our testing in order to assess our model's performance against the current state of the art. However, in order to gain further insight into the model's performance under different combinations of trajectory type (ATTM, CTRW, LW, FBM, SBM), trajectory length, anomalous diffusion exponent (α), and signal-to-noise ratio (SNR), defined as SNR = σ_disp/σ_noise, where σ_disp is the standard deviation of the displacements and σ_noise is the standard deviation of the Gaussian white noise, we generated datasets for all of the permutations seen in Table 2. We have used the performance of our model on these data sets to make the figures in the following sections. In order to improve model comparability, we will use the following metrics of performance:
• The Mean Absolute Error (MAE) is defined as
MAE = (1/N) ∑_{j=1}^{N} |α_{j,pred} − α_{j,true}|,  (2)
where α_{j,pred} and α_{j,true} are the predicted and true α values, respectively.
• The F1-Score is the harmonic mean of precision and recall, defined as
F1 = True Pos. / (True Pos. + ½ (False Pos. + False Neg.)).  (3)
For our purposes, we have used the micro-averaged F1-score, which is biased by class frequencies, as it has been considered in the AnDi Challenge.
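Both metrics are straightforward to compute; below is a minimal sketch with numpy and scikit-learn, where the label arrays are dummy values for illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

def mae(alpha_true, alpha_pred):
    """Mean absolute error between true and predicted exponents (Eq. 2)."""
    return np.mean(np.abs(np.asarray(alpha_pred) - np.asarray(alpha_true)))

print(mae([0.5, 1.0, 1.5], [0.6, 0.9, 1.5]))          # Task 1 metric

y_true = [0, 1, 2, 3, 4, 0]   # dummy diffusion-model labels (Task 2)
y_pred = [0, 1, 2, 4, 4, 0]
print(f1_score(y_true, y_pred, average="micro"))      # micro-averaged F1 (Eq. 3)
```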
Using the AnDi Challenge interactive tool, we can see how the ConvTransformer would have performed in the AnDi Challenge in Table 3. Overall, the ConvTransformer would have placed in the middle of the top ten of the AnDi Challenge. However, the ConvTransformer shines in classifying short trajectories (Task 2). Here, it outperforms the top three models on trajectories of length [10,50]. Impressively, the ConvTransformer manages these results by training on a single data set that is small compared to the training set used by team UPV-MAT, described in [9], whose model came within the margin of error of ours at the classification of short trajectories in one dimension. Team UPV-MAT's model was trained using 4 × 10⁶ trajectories [9], while we used only 1.35 × 10⁶. This is noteworthy because we observed that during the period of patience, after the final saved parameter state, the ConvTransformer continues to converge towards a training loss of zero. This indicates that the network is not fully saturated. Thus, it is possible that given a larger training set, the ConvTransformer would continue to learn, which should lead to an increase in performance.
Regression of the anomalous diffusion exponent (Task 1)
We first show the performance of our model with the AnDi Interactive tool; see Figure 2. The ConvTransformer, as well as the top performers in the AnDi Challenge seen in Table 3, had the most difficulty inferring the α of ATTM and SBM diffusive regimes, with ATTM being far more problematic. This makes sense if we consider the way ATTM trajectories are generated. The displacements of particles undergoing ATTM are distributed BM(D, t, Δt), where BM generates a Brownian motion trajectory of length t sampled at times Δt, with diffusivity coefficient D. Additionally, in ATTM, D is re-sampled every t ∼ D^{σ/α}. This means that every time t a particle in ATTM will change diffusive regime in a manner that may obscure α. Similar to ATTM, SBM also experiences changes in D, the diffusivity coefficient. However, in SBM, D(t) = Dψ(t) [21]. On the surface, it would appear that a gradual change in diffusivity should not pose as much difficulty as the regime shifts in ATTM. However, it may have contributed to the difficulty of inferring α in SBM trajectories.
ConvTransformer performance scales as expected with trajectory length; see Figure 3. That is to say, as trajectory length increases, the model performance also improves. Notably, performance scaled less erratically than in other models, such as the best performing model in Task 1 [9]. Interestingly, noise does not affect model accuracy as heavily at shorter trajectory lengths, and the performance difference between trajectories with respect to the SNR appears to stabilize after trajectories of length ∼ 200. In Figure 4, we can see the performance breakdown by the underlying diffusive regime. These plots shed more light on the performance issues when regressing α for ATTM. From Figure 4, it is evident that most of the difficulty with regressing α in ATTM occurs in heavily sub-diffusive trajectories at α ≈ 0.1, regardless of noise. There, we can see a roughly bi-modal distribution with two clusters at α ≈ 0.4 and α ≈ 0.9, with the more significant peak at about 0.9, as shown by the median value line.
Additionally, the ConvTransformer shows similar confusion patterns in the regression task for the SBM model, where it confuses highly super-diffusive trajectories with, roughly, normal diffusion (Figure 4). In both of these cases, the regime shift in ATTM and the change in D in SBM could be making heavy anomalous diffusion (both super- and sub-diffusion) appear as though it was normal diffusion. However, these effects could also be an artifact of the training data, since all diffusive regimes can exhibit normal diffusion, so there will be more trajectories with α = 1 than with either super- or sub-diffusion, or a combination of both effects. Figure 5 takes a closer look at model performance by trajectory length and type. Once again, most trajectories, with the exception of the ones generated by the SBM model, perform very similarly at SNR 1 and SNR 2; SBM shows notably worse performance at SNR 1 at all trajectory lengths. It can be verified via the AnDi Interactive Tool that the same effect occurs in the top three models (UPV-MAT, HNU, eduN) shown in Table 3, where SBM performance is the most sensitive to additional noise in the trajectory.
When regressing the anomalous diffusion exponent, the sensitivity of machine learning models to added noise in SBM trajectories warrants further study. Recently, Szarek [36] encountered a similar lack of resiliency to noise using an RNN-based model, like the UPV-MAT, HNU, and eduN models. It appears that the difficulty in working with SBM is an inherent characteristic of SBM trajectories, as opposed to the neural network architecture used for the inference of α. This is further substantiated by our transformer-based method encountering the same problem.
In terms of model performance for different values of α, the ConvTransformer performs best at α ≈ 0.9 (Figure 6). However, for long trajectories, those with 200 or more points, the model performs best roughly between α ∈ [0.25, 0.5]. The latter scenario seems to be closer to the truth if we examine the model performance at various levels of α by trajectory type, as in Figure 7. Our ConvTransformer performs best roughly in the middle of the domain of α of each trajectory type, with an adequate, though not optimal, performance at α ≈ 1 across all trajectory types (Figure 7). Part of the reason performance is overestimated at α ≈ 1 when pooling the trajectory types may be that CTRW and SBM perform best at α ≈ 1, and these two types of diffusion can be both super- and sub-diffusive. Thus, they have more testing points and skew the pooled values in Figure 6.
Classification of trajectories according to the anomalous diffusion generating model (Task 2)
ConvTransformer performance in the classification of trajectories according to the anomalous diffusion generating model (Task 2) is about average overall with respect to the ten best models of the AnDi Challenge [30]. However, as we mentioned earlier, our ConvTransformer shines on short trajectories. As with the inference of α (Task 1), ATTM trajectories proved to be the most difficult to work with. These trajectories were most often confused with SBM (Figures 8 and 10). As we have said, this may be because both models have changes in the diffusivity coefficient D. If we imagine a short ATTM trajectory, where D only changes a few times, the diffusivity coefficient can increase with time (D ∼ Φ(t)) in a way that ATTM could mimic SBM. Noise affects the ConvTransformer the least at both short and long trajectory lengths (Figure 9). Performance on the lower-noise data improves faster with respect to trajectory length. The most significant difference between the two curves in Figure 9 occurs at trajectories of length ∼ 200, after which the SNR 1 curve converges towards SNR 2. This indicates that longer trajectories are most helpful when dealing with noisy trajectories that are roughly 200 to 600 dispersals in length.
When looking at the F1-score by the underlying diffusion model, we can see that ConvTransformer performance varies significantly across our five diffusive regimes (Figure 11). That being said, unlike Task 1, the performance change with respect to noise remains fairly constant across the different kinds of diffusion. The outlier to this behavior is short CTRW trajectories at SNR 1 (Figure 11). In this case, model performance is better for the shortest trajectories and then drops off before resuming the expected convergence behavior of the F1-Score with respect to the trajectory length. The cause of this artifact in the F1-Score is that at SNR 1 and short trajectory lengths ([10,50]), the ConvTransformer is inclined to classify the other diffusive regimes, with the exception of LW, as CTRW (Figure 10). It is noteworthy that the ConvTransformer can make the distinction between LW and CTRW, as LW can be considered a special case of CTRW [30].
In terms of ConvTransformer performance in classification (Task 2) with regard to α, we can see that the ConvTransformer performs better at a value of α ≈ 0.5 and at the higher end, α ≥ 1.5, with an apparent plateauing behavior at the upper end of the α domain in longer trajectories with lower noise (Figure 12). In Figure 13 we again look at the F1-Score as a function of α. However, this time we look at the relationship in terms of the underlying diffusive model. Most diffusive models retain the relationship seen in Figure 12, within their respective domains of α. However, CTRW and LW deviate from this behavior. Both CTRW and LW appear to have a more linear relationship between F1-Score and α, with CTRW performing best at low values of α and LW performing best at higher values of α. This relationship strength (F1-Score ∼ α) appears to be exacerbated by noise.
Figure 10. Confusion matrices showing ConvTransformer classification accuracy (Task 2) at different noise levels. Trajectories of length greater than 500 were omitted because, although model performance improves at these lengths, it does so as we would expect from Figure 9 and does not provide further information.
Conclusions
The primary purpose of this paper was to introduce our new architecture, the ConvTransformer, for the analysis of anomalous diffusion trajectories. To the best of our knowledge, this is the first transformer-based architecture to characterize anomalous diffusion. Indeed, it is only recently that anyone else has produced a convolutional transformer (for computer vision) [12,19], with the development of their models being concurrent with ours. However, our ConvTransformer stands out in that it does not use positional encoding and only uses the transformer encoding block from [38]. As such, it is simpler and easier to implement while still providing state-of-the-art results in trajectory classification (Task 2) on short and noisy trajectories.
Figure 13. ConvTransformer trajectory classification accuracy (F1-Score) by underlying diffusive regime as a function of the anomalous diffusion exponent (α).
Inspired by the success of transformers in NLP, we set out to replace the recurrent bidirectional LSTM part of the architecture in [9] with transformers. We have improved short-trajectory classification accuracy with a model that trains quickly, since it can be trained in parallel. When we first started working on this model, there was no native support in PyTorch for transformers. However, at the time of writing this manuscript, transformer encoders and decoders are natively supported. As such, we expect further improvements, as ease of implementation and optimized code will lead to more accessibility. This should lead to improved iterations of the model and finer hyper-parameter tuning. Additionally, the increased optimization and access to newer hardware should increase our ConvTransformer's performance for improved usability in experimental research.
Apart from the direct practical implementation of our model in experimental research, going forward, we would also like to focus on model interpretability. One of the issues plaguing deep learning is the black-box effect. When looking at models, we are often only interested in what we can predict or characterize and tend to overlook what we can learn from parameter weighting. Traditionally, parameter weights would allow us to see simple relations between the input features and our desired prediction. For example, birth weight is a strong predictor of adult height [37]. Furthermore, with traditional models like regression, parameter selection leads to discarding information, which also informs us about the features that are not relevant to our subject of study. With the rise of deep learning models, we are no longer looking at features; rather, we ingest the data directly and allow our models to discern these features for themselves, with the exception of deep learning models that use feature engineering, as we saw with group UCL and their CONDOR model [10]. The naive approach to modeling brought about by ML means that we not only lose all information about features, but we also do not know what features are important.
As we know from Clark et al. [7], in the context of NLP, transformer attention heads tend to focus on specific aspects of syntax. For instance, some attention heads may focus entirely on the next token, while others may attend almost entirely to the periods or breaks in a sentence. Following this logic, it is highly likely that some of our ConvTransformer attention heads are specializing in specific features of the trajectories. Hence, a transformer-based architecture could be used to determine which trajectory features are important. In this manner, we could recover some model interpretability and learn from machine learning models in a similar way to how we have traditionally learned from regression.
Bibliography
Figure 1. Visual representation of the ConvTransformer structure.
Figure 2. ConvTransformer performance in the regression task measured by the AnDi Interactive tool. Lighter colours represent higher frequencies.
Figure 3. ConvTransformer performance (MAE) in the regression of the anomalous diffusion exponent by SNR as a function of trajectory length.
Figure 4. Heat map of ConvTransformer performance in the regression of the anomalous diffusion exponent showing true and predicted α by the underlying diffusion model. The blue line denotes the median value of the true α values. Predicted values of α are shown from [0, 2]. However, there were a few instances where the ConvTransformer predicted values marginally less than 0 or greater than 2.
Figure 5. ConvTransformer performance in the regression of the anomalous diffusion exponent (MAE) shown as a function of trajectory length and trajectory type.
Figure 6. ConvTransformer performance in the regression of the anomalous diffusion exponent (MAE) by trajectory length as a function of α, the anomalous diffusion exponent.
Figure 7. ConvTransformer performance in the regression of the anomalous diffusion exponent (MAE) by underlying diffusive regime as a function of α, the anomalous diffusion exponent.
Figure 8. Confusion matrices of ConvTransformer trajectory classification accuracy (Task 2) obtained from the AnDi Interactive Tool.
Figure 9. ConvTransformer trajectory classification accuracy (F1-Score) by SNR as a function of trajectory length.
Figure 11. ConvTransformer trajectory classification accuracy (F1-Score) as a function of the trajectory length.
Figure 12. ConvTransformer trajectory classification accuracy (F1-Score) by trajectory length as a function of the anomalous diffusion exponent (α).
Table 1. Final hyper-parameters used to train the models for Tasks 1 and 2.

Parameter       Value
Batch Size      32
Num. Heads      16
CNN Dropout     0.05
Trans. Dropout  0
Learn Rate      0.0002133
Num. Epoch      100
Patience        10
Diff. Model  Traj. Length                                     SNR   α
ATTM         10, 20, ..., 50, 100, 200, ..., 600, 800, 1000   1, 2  0.1, 0.2, ..., 1.0
CTRW         10, 20, ..., 50, 100, 200, ..., 600, 800, 1000   1, 2  0.1, 0.2, ..., 1.0
FBM          10, 20, ..., 50, 100, 200, ..., 600, 800, 1000   1, 2  0.1, 0.2, ..., 1.9
LW           10, 20, ..., 50, 100, 200, ..., 600, 800, 1000   1, 2  1.0, 1.1, ..., 1.9
SBM          10, 20, ..., 50, 100, 200, ..., 600, 800, 1000   1, 2  0.1, 0.2, ..., 1.9

Table 2. A testing dataset of size 2000 was generated for all the permutations of each row in the table.
Table 3. Rank is the overall ranking on the entire AnDi Challenge test data set in one dimension. The MAE and F1-Scores are calculated only for short trajectories, of length 10 to 50 in one dimension, by the AnDi Interactive Tool.
‡ http://andi-challenge.org/interactive-tool/
Classification, inference and segmentation of anomalous diffusion with recurrent neural networks. A Argun, G Volpe, S Bo, J. Phys. A Math. Theor. 542021A. Argun, G. Volpe, and S. Bo. Classification, inference and segmentation of anomalous diffusion with recurrent neural networks. J. Phys. A Math. Theor., 54, 2021.
An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. S Bai, J Z Kolter, V Koltun, arXiv:1803.01271arXiv preprintS. Bai, J.Z. Kolter, and V. Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
Measurement of anomalous diffusion using recurrent neural networks. S Bo, F Schmidt, R Eichhorn, G Volpe, Phys. Rev. E. 100110102S. Bo, F. Schmidt, R. Eichhorn, and G. Volpe. Measurement of anomalous diffusion using recurrent neural networks. Phys. Rev. E, 100(1):010102, 2019.
Transient anomalous diffusion of telomeres in the nucleus of mammalian cells. I Bronstein, Y Israel, E Kepten, S Mai, Y Shav-Tal, E Barkai, Y Garini, Phys. Rev. Lett. 10318102I. Bronstein, Y. Israel, E. Kepten, S. Mai, Y. Shav-Tal, E. Barkai, and Y. Garini. Transient anomalous diffusion of telomeres in the nucleus of mammalian cells. Phys. Rev. Lett., 103:018102, Jul 2009.
A brief account of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. R. Brown. Philosoph. Mag., 4(21):161-173, 1828.
Fractals in science with a MS-DOS program Diskette. A Bunde, SpringerA. Bunde. Fractals in science with a MS-DOS program Diskette. Springer, 1994.
What does BERT look at? an analysis of BERT's attention. K Clark, U Khandelwal, O Levy, C D Manning, K. Clark, U. Khandelwal, O. Levy, and C.D. Manning. What does BERT look at? an analysis of BERT's attention, 2019.
Automatic detection of diffusion modes within biological membranes using back-propagation neural network. P Dosset, P Rassam, L Fernandez, C Espenel, E Rubinstein, E Margeat, P.-E Milhiet, BMC Bioinform. 171P. Dosset, P. Rassam, L. Fernandez, C. Espenel, E. Rubinstein, E. Margeat, and P.-E. Milhiet. Automatic detection of diffusion modes within biological membranes using back-propagation neural network. BMC Bioinform., 17(1):1-12, 2016.
Efficient recurrent neural network methods for anomalously diffusing single particle short and noisy trajectories. Ò Garibo-I Orts, A Baeza-Bosca, M A Garcia-March, J A Conejero, J. Phys. A Math. Theor. 5450504002Ò. Garibo-i Orts, A. Baeza-Bosca, M.A. Garcia-March, and J.A. Conejero. Efficient recurrent neural network methods for anomalously diffusing single particle short and noisy trajectories. J. Phys. A Math. Theor., 54(50):504002, nov 2021.
Characterization of anomalous diffusion classical statistics powered by deep learning (CONDOR). A Gentili, G Volpe, Journal of Physics A: Mathematical and Theoretical. 5431314003A. Gentili and G. Volpe. Characterization of anomalous diffusion classical statistics powered by deep learning (CONDOR). Journal of Physics A: Mathematical and Theoretical, 54(31):314003, jul 2021.
Single-particle diffusion characterization by deep learning. N Granik, L E Weiss, E Nehme, M Levin, M Chein, E Perlson, Y Roichman, Y Shechtman, Biophys. J. 1172N. Granik, L.E. Weiss, E. Nehme, M. Levin, M. Chein, E. Perlson, Y. Roichman, and Y. Shechtman. Single-particle diffusion characterization by deep learning. Biophys. J., 117(2):185-192, 2019.
CMT: Convolutional neural networks meet vision transformers. J Guo, K Han, H Wu, C Xu, Y Tang, C Xu, Y Wang, J. Guo, K. Han, H. Wu, C. Xu, Y. Tang, C. Xu, and Y. Wang. CMT: Convolutional neural networks meet vision transformers, 2021.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural Comput. 98S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, 1997.
Classification of particle trajectories in living cells: Machine learning versus statistical testing hypothesis for fractional anomalous diffusion. J Janczura, P Kowalek, H Loch-Olszewska, J Szwabiński, A Weron, Phys. Rev. E. 1023J. Janczura, P. Kowalek, H. Loch-Olszewska, J. Szwabiński, and A. Weron. Classification of particle trajectories in living cells: Machine learning versus statistical testing hypothesis for fractional anomalous diffusion. Phys. Rev. E, 102(3), Sep 2020.
Lévy statistics in a hamiltonian system. J Klafter, G Zumofen, Phys. Rev. E. 49J. Klafter and G. Zumofen. Lévy statistics in a hamiltonian system. Phys. Rev. E, 49:4873-4877, Jun 1994.
Classification of diffusion modes in single-particle tracking data: Feature-based versus deep-learning approach. P Kowalek, H Loch-Olszewska, J Szwabiński, Phys. Rev. E. 100332410P. Kowalek, H. Loch-Olszewska, and J. Szwabiński. Classification of diffusion modes in single-particle tracking data: Feature-based versus deep-learning approach. Phys. Rev. E, 100(3):032410, 2019.
WaveNet-based deep neural networks for the characterization of anomalous diffusion (WADNet). D Li, Q Yao, Z Huang, J. Phys. A: Math. Theor. 5440404003D. Li, Q. Yao, and Z. Huang. WaveNet-based deep neural networks for the characterization of anomalous diffusion (WADNet). J. Phys. A: Math. Theor., 54(40):404003, sep 2021.
Self-similar gaussian processes for modeling anomalous diffusion. S C Lim, S V Muniandy, Phys. Rev. E. 6621114S.C. Lim and S.V. Muniandy. Self-similar gaussian processes for modeling anomalous diffusion. Phys. Rev. E, 66:021114, Aug 2002.
Convtransformer: A convolutional transformer network for video frame synthesis. Zhouyong Liu, Shun Luo, Wubin Li, Jingben Lu, Yufan Wu, Chunguo Li, and Luxi Yang. CoRR, abs/2011.10185, 2020.
Impact of feature choice on machine learning classification of fractional anomalous diffusion. H Loch-Olszewska, J Szwabiński, Entropy. 22122020H. Loch-Olszewska and J. Szwabiński. Impact of feature choice on machine learning classification of fractional anomalous diffusion. Entropy, 22(12), 2020.
Random diffusivity models for scaled brownian motion. A F Dos Santos Maike, L. Menon Junior, Chaos, Solitons & Fractals. 144110634A.F. dos Santos Maike and L. Menon Junior. Random diffusivity models for scaled brownian motion. Chaos, Solitons & Fractals, 144:110634, 2021.
Fractional Brownian motions, fractional noises and applications. B B Mandelbrot, J W Van Ness, SIAM Review. 104B.B. Mandelbrot and J.W. Van Ness. Fractional Brownian motions, fractional noises and applications. SIAM Review, 10(4):422-437, 1968.
A review of progress in single particle tracking: from methods to biophysical insights. C Manzo, M F Garcia-Parajo, Rep. Prog. Phys. 7812124601C. Manzo and M.F. Garcia-Parajo. A review of progress in single particle tracking: from methods to biophysical insights. Rep. Prog. Phys., 78(12):124601, 2015.
Nonergodic subdiffusion from Brownian motion in an inhomogeneous medium. P Massignan, C Manzo, J A Torreno-Pina, M F García-Parajo, M Lewenstein, G J Lapeyre, Phys. Rev. Lett. 112150603P. Massignan, C. Manzo, J.A. Torreno-Pina, M. F. García-Parajo, M. Lewenstein, and G.J. Lapeyre. Nonergodic subdiffusion from Brownian motion in an inhomogeneous medium. Phys. Rev. Lett., 112:150603, Apr 2014.
Early stopping for Pytorch. B , Mehus Sunde, 2020B. Mehus Sunde. Early stopping for Pytorch, 2020. [Online; accessed 6-Nov-2020].
Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. R Metzler, J.-H Jeon, A G Cherstvy, E Barkai, Phys. Chem. Chem. Phys. 1644R. Metzler, J.-H. Jeon, A.G. Cherstvy, and E. Barkai. Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. Phys. Chem. Chem. Phys., 16(44):24128-24164, 2014.
Single trajectory characterization via machine learning. G Muñoz-Gil, M A Garcia-March, C Manzo, J D Martín-Guerrero, M Lewenstein, New J. Phys. 2213010G. Muñoz-Gil, M.A. Garcia-March, C. Manzo, J.D. Martín-Guerrero, and M. Lewenstein. Single trajectory characterization via machine learning. New J. Phys., 22:013010, 2020.
The Anomalous Diffusion challenge: objective comparison of methods to decode anomalous diffusion. G. Muñoz-Gil, G. Volpe, M.A. García-March, R. Metzler, M. Lewenstein, and C. Manzo. In G. Volpe, J.B. Pereira, D. Brunner, and A. Ozcan, editors, Emerging Topics in Artificial Intelligence (ETAI) 2021, volume 11804, page 1180416. Int. Soc. Opt. Photonics, SPIE, 2021.
AndiChallenge/andi datasets: Challenge 2020 release. G. Muñoz-Gil, B. Requena, G. Volpe, M.A. Garcia-March, and C. Manzo. May 2021.
Objective comparison of methods to decode anomalous diffusion. G. Muñoz-Gil, G. Volpe, M.A. Garcia-March, E. Aghion, A. Argun, C.B. Hong, T. Bland, S. Bo, J.A. Conejero, N. Firbas, et al. Nature Communications, 12(1), 2021.
Anomalous diffusion on the servosphere: A potential tool for detecting inherent organismal movement patterns. N Nagaya, N Mizumoto, M S Abe, S Dobata, R Sato, R Fujisawa, PLOS ONE. 126N. Nagaya, N. Mizumoto, M.S. Abe, S. Dobata, R. Sato, and R. Fujisawa. Anomalous diffusion on the servosphere: A potential tool for detecting inherent organismal movement patterns. PLOS ONE, 12(6):1-15, 06 2017.
Movement brownien et realite molec. J Perrin, Ann. Chim. Phys. 18J. Perrin. Movement brownien et realite molec. Ann. Chim. Phys., 18:1-114, 1909.
Observation of anomalous diffusion and fractional self-similarity in one dimension. Yoav Sagi, Miri Brook, Ido Almog, Nir Davidson, Phys. Rev. Lett. 10893002Yoav Sagi, Miri Brook, Ido Almog, and Nir Davidson. Observation of anomalous diffusion and fractional self-similarity in one dimension. Phys. Rev. Lett., 108:093002, Mar 2012.
Anomalous transit-time dispersion in amorphous solids. H Scher, E W Montroll, Phys. Rev. B. 12H. Scher and E.W. Montroll. Anomalous transit-time dispersion in amorphous solids. Phys. Rev. B, 12:2455-2477, Sep 1975.
Don't decay the learning rate, increase the batch size. S L Smith, P J Kindermans, C Ying, Q V Le, S.L. Smith, P.J. Kindermans, C. Ying, and Q.V. Le. Don't decay the learning rate, increase the batch size, 2018.
Neural network-based anomalous diffusion parameter estimation approaches for gaussian processes. D Szarek, Int. J. Adv. Eng. Sci. Appl. Math. 132-3D. Szarek. Neural network-based anomalous diffusion parameter estimation approaches for gaussian processes. Int. J. Adv. Eng. Sci. Appl. Math., 13(2-3):257-269, 2021.
Birth Weight and Length as Predictors for Adult Height. H T Sørensen, S Sabroe, K J Rothman, M Gillman, F H Steffensen, P Fischer, T I A Serensen, Amer. J. Epidem. 1498H.T. Sørensen, S. Sabroe, K.J. Rothman, M. Gillman, F.H. Steffensen, P. Fischer, and T.I.A. Serensen. Birth Weight and Length as Predictors for Adult Height. Amer. J. Epidem., 149(8):726-729, 04 1999.
Attention is all you need. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin. 2017.
Unravelling the origins of anomalous diffusion: from molecules to migrating storks. O Vilk, E Aghion, T Avgar, C Beta, O Nagel, A Sabri, R Sarfati, D K Schwartz, M Weiss, D Krapf, R Nathan, R Metzler, M Assaf, O. Vilk, E. Aghion, T. Avgar, C. Beta, O. Nagel, A. Sabri, R. Sarfati, D.K. Schwartz, M. Weiss, D. Krapf, R. Nathan, R. Metzler, and M. Assaf. Unravelling the origins of anomalous diffusion: from molecules to migrating storks, 2021.
Huggingface's transformers: State-of-the-art natural language processing. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, J Brew, abs/1910.03771CoRRT. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771, 2019.
Google's neural machine translation system: Bridging the gap between human and machine translation. Y Wu, M Schuster, Z Chen, Quoc V Le, M Norouzi, W Macherey, M Krikun, Y Cao, Q Gao, K Macherey, arXiv:1609.08144arXiv preprintY. Wu, M. Schuster, Z. Chen, Quoc V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, and et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
| [
"https://github.com/AnDiChallenge/ANDI_datasets."
] |
[
"Efficient and Privacy Preserving Group Signature for Federated Learning",
"Efficient and Privacy Preserving Group Signature for Federated Learning"
] | [
"Sneha Kanchan ",
"Jae Won Jang ",
"Jun Yong Yoon ",
"Jun Bong ",
"Choi ",
"Sneha Kanchan ",
"JunJae Won Jang ",
"Yong Yoon ",
"Bong Jun Choi ",
"\nSchool of Computer Science and Engineering\nSchool of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea\n",
"\nSchool of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea\n",
"\nSchool of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea\n",
"\nSoongsil University\nSeoulSouth Korea\n"
] | [
"School of Computer Science and Engineering\nSchool of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea",
"School of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea",
"School of Computer Science and Engineering\nSoongsil University\nSeoulSouth Korea",
"Soongsil University\nSeoulSouth Korea"
] | [] | Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy. Training is done using the raw data on the users' device, called clients, and only the training results, called gradients, are sent to the server to be aggregated and generate an updated model. However, we cannot assume that the server can be trusted with private information, such as metadata related to the owner or source of the data. So, hiding the client information from the server helps reduce privacy-related attacks. Therefore, the privacy of the client's identity, along with the privacy of the client's data, is necessary to make such attacks more difficult. This paper proposes an efficient and privacy-preserving protocol for FL based on group signature. A new group signature for federated learning, called GSFL, is designed to not only protect the privacy of the client's data and identity but also significantly reduce the computation and communication costs considering the iterative process of federated learning. We show that GSFL outperforms existing approaches in terms of computation, communication, and signaling costs. Also, we show that the proposed protocol can handle various security attacks in the federated learning environment. Moreover, we provide security proof of our protocol using a formal security verification tool. | 10.2139/ssrn.4165422 | [
"https://arxiv.org/pdf/2207.05297v2.pdf"
] | 250,451,124 | 2207.05297 | c2e57799496b8243484394d99144f95d3dc153bf |
Efficient and Privacy Preserving Group Signature for Federated Learning
Sneha Kanchan
Jae Won Jang
Jun Yong Yoon
Jun Bong
Choi
Sneha Kanchan
JunJae Won Jang
Yong Yoon
Bong Jun Choi
School of Computer Science and Engineering
School of Computer Science and Engineering
Soongsil University
SeoulSouth Korea
School of Computer Science and Engineering
Soongsil University
SeoulSouth Korea
School of Computer Science and Engineering
Soongsil University
SeoulSouth Korea
Soongsil University
SeoulSouth Korea
Efficient and Privacy Preserving Group Signature for Federated Learning
10.1145/nnnnnnn.nnnnnnn. CCS Concepts: • Computer systems organization → Embedded systems; Trust; Privacy; • Networks → Network reliability. Additional Key Words and Phrases: Federated Learning, Group Signature, Privacy Preservation, Authentication, Efficiency, Adversarial Server. ACM Reference Format:
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy. Training is done using the raw data on the users' device, called clients, and only the training results, called gradients, are sent to the server to be aggregated and generate an updated model. However, we cannot assume that the server can be trusted with private information, such as metadata related to the owner or source of the data. So, hiding the client information from the server helps reduce privacy-related attacks. Therefore, the privacy of the client's identity, along with the privacy of the client's data, is necessary to make such attacks more difficult. This paper proposes an efficient and privacy-preserving protocol for FL based on group signature. A new group signature for federated learning, called GSFL, is designed to not only protect the privacy of the client's data and identity but also significantly reduce the computation and communication costs considering the iterative process of federated learning. We show that GSFL outperforms existing approaches in terms of computation, communication, and signaling costs. Also, we show that the proposed protocol can handle various security attacks in the federated learning environment. Moreover, we provide security proof of our protocol using a formal security verification tool.
1 INTRODUCTION
Federated Learning (FL) is a machine learning method ensuring that raw data is not distributed to other devices. The motivation behind FL is to secure users' sensitive information: for example, uploading real-time images increases the risk of revealing personal information to the server or to eavesdroppers. Data breaches have become such a serious threat that organizations are still attacked even after spending massive budgets on securing their data: in 2020, investment in securing companies' data grew to 53 billion, yet 30 billion records were compromised [1]. This shows how serious and important securing our data has become. It is often illegal to share data from one place to another, even from one country to another. Therefore, FL will be essential for sharing machine learning models without compromising privacy.
The clients in an FL network participate in updating the system model by sending their local model updates to a central system, say a server, which then finalizes the global model to be deployed. During this process, the clients do not send their raw data; they send only locally trained gradients. The server collects the gradients from all the clients and aggregates them to generate an updated global model. Still, there are many scenarios at various stages of the FL process where network entities, including the server, are compromised, requiring additional measures to preserve privacy even if the data is not sent in raw form, as shown in Figure 1. The communication process mainly involves the participating clients and the server. If any entity in the network is compromised, the secrecy of the whole technique is at stake: the compromised entity can attempt to infer sensitive information, or the source of the information, from the received gradients, revealing the sender's identity [2] and increasing the risk of inference attacks. Hence, it is crucial to secure the privacy of both the sender and the information, even if only the updated gradient is shared. Moreover, since FL is a decentralized network, entities of the network can themselves be active or passive attackers [3].
1.1 Related Work
1.1.1 Masking Based. Bonawitz et al. [4] proposed a double-masking scheme in which messages are shared with the server after being masked twice. Each user shares two sets of secret keys with every participant. After adding their share of secret keys to the message, all participants send their messages to the server. Every secret key is canceled out when a certain number of secret keys are combined; hence, the actual data is hidden, and the method effectively protects privacy. However, the computation and communication costs are very high, because secret keys must be distributed to every client for each session. In 2020, Li, Yong et al. [5] proposed a single-masking FL framework based on the chained secure multiparty computing (SMC) technique (chain-PPFL). They used a chained-communication mechanism to transfer the masked information between participants with a serial chain frame. Their experimental results show that they achieve a competent accuracy and convergence rate while providing privacy preservation for FL. However, their costs are higher than many existing techniques. In 2021, Ang Li et al. [6] addressed the issue of heterogeneous data across clients and the bandwidth limitation of mobile devices. They proposed the FedMask FL framework, where only heterogeneous binary masks are communicated between the server and the end devices. The clients do not learn the global model; instead, they learn from a personalized and structured sparse DNN model composed by applying the learned binary mask to the fixed (frozen) parameters of the local model. However, the model has high complexity. A minimal sketch of the mask-cancellation idea is given below.
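To make the cancellation property concrete, the following Python sketch (our illustration; the function names and toy values are ours, not code from [4] or [5]) shows why the server learns only the aggregate: each client i adds the pairwise masks it shares toward higher-indexed clients and subtracts those toward lower-indexed ones, so all masks cancel in the sum while no individual masked update reveals its raw value.

import random

def pairwise_masks(num_clients, seed=0):
    # masks[(i, j)] is the secret mask agreed between clients i and j (i < j).
    rng = random.Random(seed)
    return {(i, j): rng.randrange(1_000_000)
            for i in range(num_clients) for j in range(i + 1, num_clients)}

def masked_update(i, update, masks, num_clients):
    # Client i adds masks toward higher ids and subtracts masks toward lower ids.
    masked = update
    for j in range(num_clients):
        if i < j:
            masked += masks[(i, j)]
        elif j < i:
            masked -= masks[(j, i)]
    return masked

updates = [3, 7, 2, 9]                 # toy local updates of four clients
masks = pairwise_masks(len(updates))
masked = [masked_update(i, u, masks, len(updates)) for i, u in enumerate(updates)]
assert sum(masked) == sum(updates)     # pairwise masks cancel only in the aggregate
print(sum(masked))                     # 21; each masked value alone hides its update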
1.1.2 Homomorphic Encryption (HE) Based. Wan et al. [7] proposed a Verifiable Federated Matrix Factorization (VPFedMF) technique, which highlights the shortcomings of the earlier proposed additive homomorphic encryption and FedMF: significant performance overheads, a weaker security model, and no assurance of the computational integrity of model aggregation. In comparison, VPFedMF's masking-based lightweight and secure aggregation provides cryptographic guarantees on the confidentiality of gradients and user-side verification of the correctness of the aggregated model.
1.1.3 Group Based. In 2018, Hartmann [8] proposed sampling-based groups in FL, in which users are divided into various groups according to physical traits like location, region, and language. The tasks of the members are divided by their groups, where different groups execute independent tasks, and differential privacy (DP) is used for maintaining privacy. However, the computation cost of the algorithm is high, and it is not scalable in larger networks.
1.1.4 Blockchain Based. In 2020, Kim et al. [9] proposed blockchain FL (BlockFL), in which a blockchain is used to exchange information between clients. Their protocol is based upon on-device machine learning, and there is no centralized data for training. This model focuses on exchanging local model updates with their associated miner, and clients receive rewards based on the trained samples transmitted. However, storage and speed limitations are major bottlenecks of on-device training. Recently, in 2022, Zhu et al. [10] proposed a double-masked blockchain-based FL technique, which they claim provides better privacy with lower communication costs between the blockchain and end users. However, it has a high computation cost for computing the double masks, and the problems of sharing secrets with other participants and of user dropouts remain. Li et al. [11] presented an FL-based privacy preservation technique for autonomous driving where the original data is stored in local vehicles and only updated information is shared with the server. Based on blockchain, drivers are authenticated with zero-knowledge proofs. They used homomorphic encryption in their traceable identity-based approach because the server cannot be completely trusted.
1.2 Limitations of Existing Approaches
1.2.1 Masking Based.
The cost of communication is relatively high: each client needs to communicate with every other client in the network to share the secret keys, making the complexity polynomial in the number of clients, which is highly inefficient. Also, a client needs to trust other clients with its secret keys, which is risky because nodes in a decentralized wireless system cannot be trusted. The situation worsens when the server and one or more clients conspire to extract information about a particular client (Figure 1): if the server can acquire a certain number of secret keys from the compromised clients, it can reveal the actual data sent by a client. Moreover, the list of clients participating in the FL process is disclosed publicly so that clients know with whom to share their secret keys. This list is a piece of additional information for an attacker, who may target a particular client from the list.
1.2.2 Homomorphic Encryption Based. Clients send encrypted data to a server, which aggregates the received ciphertexts without decrypting them. HE can reduce the risks of model inversion or data leakage even if the server is malicious. However, it has high complexity, making it challenging to implement in practice, and the server might need some privacy-relevant data to aggregate the messages. Also, the output obtained from HE may contain more noise than the original data, so the accuracy of a model trained using these data is likely to be reduced.
1.2.3 Differential Privacy Based. DP ensures that the outcome does not reveal whether an individual's data has been included in the database or not [12]. It helps to secure the privacy of individual data, as the behavior of the output hardly changes when a single individual joins or leaves the dataset. However, if the data is highly diversified, or if data must be retrieved from a vast dataset, DP adds noise at each step, reducing accuracy.
1.2.4 Blockchain Based. Blockchain is a well-suited technology to protect the distributed data in FL. However, there is redundant data on devices and across rounds, which makes it inefficient for the iterative process of FL, and because of its complexity and storage requirements, scalability is a challenge.
1.3 Our Approach
We are motivated to implement privacy in FL using a less complex, more efficient, and secure technique that solves the above-mentioned issues. Hence, we propose a privacy-preserving Group Signature scheme for FL (GSFL). A group signature (GS) authenticates a group instead of a particular client. There can be multiple groups, and each group has an admin. The admin is assumed to be a trusted entity in the network and assigns to each group member the keys needed to generate the signature. No one other than the admin knows the signer. Since the server cannot infer a particular identity from the received information, the risk of inference attacks is reduced, because there is no link between the clients and their gradients. GSFL has several advantages over the existing privacy-preserving techniques for FL, as explained below.
• There is no need to share keys with other clients, as in masking: a client communicates only with the admin and the server, which considerably saves communication costs.
• GS does not require data to be aggregated in encrypted form, as in HE: the trained model is encrypted with the server's public key, so only the server can decrypt it, yet the server does not know the source.
• Even if the data is diversified, there is no need to add noise to the data, as in DP: since the noise level in the original trained data is lower, the accuracy rate is higher.
• Privacy preservation and authentication are done together and take less storage: the members of a group share the same signature, and hence the server needs to store fewer signatures to authenticate the clients.
GS is fundamentally different from anonymity, since with anonymity there is no way to trace back the sender of the data, so information received from anonymous clients cannot be trusted. In GS, however, clients authenticate themselves as valid group members, and the admin can trace them. Hence, the information shared by a group member is verifiable, making it more trustworthy than anonymous data.
Implementing GS in FL changes the communication process, but it does not affect the learning process. Therefore, it can be used in combination with different FL models. However, the original GS contains several verification parameters, increasing the packet's size. The distributed learning process in FL incurs a high communication cost from multiple iterations involving thousands of clients. Hence, communication efficiency is one of the critical performance metrics. Therefore, we aim to provide a new group signature algorithm that is efficient for the iterative process of FL.
1.4 Our Contribution
The contribution of our proposed work is summarized below:
• Design a new group signature tailored for FL: the proposed GS protocol provides an efficient integration with the iterative process of FL.
• Provide identity preservation of clients: GS hides the identity of individual clients to provide enhanced privacy protection.
• Provide authentication of clients: clients are authenticated using a common GS.
• Maintain the secrecy of the network: the security of the network is verified by the formal verification tool AVISPA (Automated Validation of Internet Security Protocols and Applications) [13].
• Provide enhanced security and efficiency: our protocol achieves significantly lower computation and communication costs than existing algorithms while protecting against a more comprehensive range of security attacks.
1.5 Paper Organization
The remainder of the paper is organized as follows. Section 2 presents preliminaries and our system model. Section 3 presents the detail of our proposed protocol. A comprehensive security analysis is provided in Section 4. The performance analysis of our proposed protocol compared with existing algorithms is provided in Section 5. Finally, the conclusion of our paper is given in Section 6. The list of symbols and abbreviations used in the paper are presented in Table 1.
2 PRELIMINARIES AND OUR SYSTEM SETUP
Federated Learning involves an iterative process where users collaboratively train machine learning models while protecting the privacy of user data, as implemented in Google Gboard [14]. End devices (clients) process their data locally, and instead of the raw data, they send only updates to the centralized system (server). After this, data from all the clients would be aggregated at the server to create an updated global model, and this model is again sent back to clients. The iterative process continues until an optimum global model is obtained.
2.1 Privacy Preservation in Federated Learning
FL aims to minimize the information shared by the clients. Although the server can only see the updates, if it can identify the client behind a particular update, it can guess the original data, because it holds additional information from previous iterations, or it can mount inference attacks [2]. Hence, it is vital to protect the client's identity along with the data. This can be achieved in various ways, as listed below:
• Homomorphic Encryption: HE hides the individual updates and enables the aggregation process on encrypted data, which leads to malleability [15]. The server does not know which actual data was sent, and the data is aggregated without revealing the original content. However, the server can add something to the encrypted data, which is a significant disadvantage and may affect the integrity of the data. Moreover, it is complex to perform computation on encrypted data, and the source may need to reveal part of its secret key, a significant threat in a wireless scenario. Hence, HE is not entirely suitable, as it is very complex and may lead to various security threats.
• Masking [16]: masking hides the data using masks. The server cannot unmask the individual data from a single device; data can be unmasked only when combined with other devices' data. So, the server can only see the combined data and does not know the source. However, the cost of sharing masks is very high, as it requires each client to communicate with the others for mask sharing.
• Group Signature: it hides the identity of the data owner. The server does not know which update is coming from which device [17]. However, the updates are visible to the server.
We have chosen the group signature-based privacy preservation technique for our work. It can preserve the integrity of the original information to achieve high accuracy while having relatively low complexity compared to other works, especially when there are many clients and iterations in FL.
2.2 Group Signature
The group signature was proposed by Chaum and van Heyst [17] in 1991 and shows how a person can be authenticated as a member of a particular group without revealing their identity. Only a valid member can produce the group signature. The receiver can always verify that the signature comes from a valid group but can never determine the individual who signed the message; the issuer of the group signature, however, can always find the original signer. Hence, a group signature assures privacy, authentication, and non-repudiation at the same time.
In our initial work [18], we attempted to integrate GS into FL. As shown in Figure 2, the admin issues the signing component A_i to client i, and this is used to create a group signature using other global parameters provided by the admin. Client i signs its update using the group signature it generates and sends a few parameters in the message along with the group signature, which are used for the verification process by the server. This process is repeated in each iteration of the FL process.
The use of a group signature assures that the privacy of the source of information is preserved even without the need for client-to-client communication. As shown in Figure 2, the server cannot fetch the real identity of the signer, but it can still verify the signature for authentication. This requires a much lower cost and is significantly beneficial compared to traditional masking/double-masking techniques. In this paper, we propose a new and improved group signature scheme for FL, where the computation and communication costs are significantly reduced for the iterative process.
2.3 Bilinear Pairing
Bilinear pairing is a pairing of two multiplicative, isomorphic cyclic groups G_1 and G_2 into a third group G_T, through a mapping ê : G_1 × G_2 → G_T. This map satisfies bilinearity, non-degeneracy, and computability, as given in [19]. Many identity-based encryption systems are based on these bilinear pairing concepts. A bilinear pairing can be used to solve the Decisional Diffie-Hellman (DDH) problem, which is the base of our identity-based cryptosystem. We assume g_1, g_2, and g_T are generators of the groups G_1, G_2, and G_T, respectively, all of order a prime number q. We assume that the Strong Diffie-Hellman (SDH) assumption holds for (G_1, G_2), and signature generation is based on a new Zero-Knowledge Proof of Knowledge (ZKPK) of the solution to an SDH problem [20].
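For reference, the two properties of ê that the construction relies on can be written as (standard definitions, cf. [19, 20]):

\hat{e}(g_1^{\,a}, g_2^{\,b}) = \hat{e}(g_1, g_2)^{ab} \quad \forall\, a, b \in \mathbb{Z}_q \qquad \text{(bilinearity)},
\hat{e}(g_1, g_2) \neq 1_{G_T} \qquad \text{(non-degeneracy)}.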
3 PROPOSED PROTOCOL

3.1 Problem Formulation
We aim to improve the traditional group signature process considering the iterative process of FL. Traditionally, the group signature carries all the binding and tracing parameters so that they can be recovered at the receiver side, and certain variables are required at the receiver side to verify that it is a valid group signature. Compared to the baseline protocol presented in [18], the proposed protocol improves the efficiency of the group signature by avoiding repetitive transmission of some variables. We assume that a signature is valid for at least one session of the FL. We also assume that the admin is a trusted third party for all communication members, including clients and servers. The admin communicates with the server to share the latest valid group signature for the session; let us call this signature LGS. The tracing variable set (T_1, T_2, T_3) contains three variables of 128 bits each, which are used to verify the signature at the receiver side. However, (T_1, T_2) do not involve any private key separating one client from another. Hence, we can save computation and communication costs if the admin sends these variables directly to the server, instead of the clients computing and sending them to the server.
3.2 The Algorithm
A novel efficient group signature for the iterative process of federated learning is presented in this subsection. The protocol diagram is shown in Figure 3.
Step 1: Generating Group Signature. Below are the steps to generate a group signature for a group.
(1) Client i sends a Group Signature Request GSReq(GID) to the Admin.
(2) Admin computes the tracing variables T_1 and T_2 as T_1 ← u^α, T_2 ← v^β, where α, β ∈ Z_q are randomly chosen exponents selected by the admin and u, v ∈ G_1 are public group parameters. Admin also computes A_i for the i-th client as A_i ← g_1^{1/(γ + x_i)} ∈ G_1. At this stage, the admin also computes the group signature for the group, GS, in the same way clients compute it in step 5 of this algorithm. This signature is stored at the admin for verification purposes.
(3) Admin sends a Group Signature Reply to the clients in the group, generated as GSRep ← (T_1, T_2, α, β, A_i), and sends the tracing variables and the group signature (T_1, T_2, GS) to the server.
(4) The tracing variable generated at the client side is computed as T_3 ← A_i · h^{α+β}, together with δ_1 ← x_i · α and δ_2 ← x_i · β.
(5) Each client randomly selects the following five binding values: r_α, r_β, r_x, r_{δ1}, and r_{δ2}. Each client computes its binding variables as
R_1 ← u^{r_α}, R_2 ← v^{r_β},
R_3 ← ê(T_3, g_2)^{r_x} · ê(h, w)^{−r_α − r_β} · ê(h, g_2)^{−r_{δ1} − r_{δ2}},
R_4 ← T_1^{r_x} · u^{−r_{δ1}}, R_5 ← T_2^{r_x} · v^{−r_{δ2}},
where h ∈ G_1 and w = g_2^γ are public group parameters.
(6) Each client computes its challenger as c ← H(M, T_1, T_2, T_3, R_1, R_2, R_3, R_4, R_5) ∈ Z_q.
(7) It computes the signing variables from the challenger: s_α ← r_α + c·α, s_β ← r_β + c·β, s_x ← r_x + c·x_i, s_{δ1} ← r_{δ1} + c·δ_1, s_{δ2} ← r_{δ2} + c·δ_2.
(8) Finally, the group signature is generated as
σ ← (T_3, c, s_α, s_β, s_x, s_{δ1}, s_{δ2}).
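A full implementation of steps (4)-(8) requires a pairing-friendly curve library, which the Python standard library does not provide. As a hedged stand-in (our illustration, not the GSFL implementation; all names and the toy group parameters are ours and are not secure), the sketch below mirrors the same commit/challenge/response structure with a single Schnorr-style proof of knowledge of a secret exponent, including the verifier-side recomputation of the challenger as done later in Step 5.

import hashlib
import secrets

# Toy Schnorr group (NOT secure): p = 2q + 1 with p, q prime, so g = 2^2
# generates the order-q subgroup of squares modulo p.
p, q, g = 467, 233, 4

def H(*parts):
    # Fiat-Shamir challenger, standing in for c <- H(M, T1, T2, T3, R1, ..., R5).
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int(h.hexdigest(), 16) % q

x = secrets.randbelow(q - 1) + 1      # secret exponent (stands in for x_i)
y = pow(g, x, p)                      # public value bound to the secret

M = "encrypted gradient update"       # the signed payload
r = secrets.randbelow(q)              # binding value (stands in for r_x)
R = pow(g, r, p)                      # binding variable (stands in for R_3)
c = H(M, y, R)                        # challenger
s = (r + c * x) % q                   # signing variable (stands in for s_x)

# Verifier side: recompute R~ = g^s * y^(-c) and check the challenger matches.
R_tilde = (pow(g, s, p) * pow(y, (q - c) % q, p)) % p
assert R_tilde == R and H(M, y, R_tilde) == c
print("signature of knowledge verified")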
Compared with the baseline, the final signature consists of only T_3 instead of the original set (T_1, T_2, T_3). Although the size of the signature is reduced, the overall unknown key size is reduced from 384 bits to 128 bits, since the keys (T_1, T_2) are known to the server and are the same for all clients. Hence, to increase the unknown variable key size, we have increased the size of all three keys from 128 bits to 256 bits. The resulting size of the unknown key is still 256 bits, compared to 384 bits earlier, but it remains infeasible to break in real time; hence, it is safe to use. Please note that this signature is used for all iterations of the same session in Figure 3, so the signature generation step is only required once and not for subsequent iterations.
Step 2: Encryption. Encryption is done to protect the message from outsiders. We use ElGamal encryption for our protocol, as given in [21]. The server publishes its public key set as (G, q, g, h), where h = g^x and x is the private key of the server. The message is encrypted with the server's public key, so the server can decrypt it to check the message signed with GS. The client encrypts the message m = W as follows:
(1) Choose a random integer y ∈ {1, . . . , q − 1}.
(2) Compute s = h^y.
(3) Compute c_1 = g^y.
(4) Compute c_2 = m × s.
Then, the message set (c_1, c_2, σ) is sent to the server.
Step 3: Decryption. Since the message is encrypted with the server's public key, it must first be decrypted at the server's side using the server's private key x. The server follows these steps:
(1) Compute s′ = c_1^x.
(2) Since c_1 = g^y, s′ = (g^y)^x = g^{xy} = h^y = s.
(3) Compute m = c_2 · s′^{−1}. The inverse of s′ can be computed as s′^{−1} = c_1^{q−x} [22].
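Steps 2 and 3 are plain multiplicative ElGamal and can be checked end to end. The sketch below (our illustration; the toy parameters p = 467, q = 233, g = 4 are insecure and chosen only for readability) encrypts an integer-encoded update m under the server key h = g^x and recovers it via s'^{-1} = c_1^{q-x} as in [22].

import secrets

# Toy ElGamal over the order-q subgroup of Z_p* (NOT secure parameters).
p, q, g = 467, 233, 4              # safe-prime group: q = (p - 1) // 2
x = secrets.randbelow(q - 1) + 1   # server private key
h = pow(g, x, p)                   # server public key component, h = g^x

def encrypt(m):
    y = secrets.randbelow(q - 1) + 1   # step (1): random y in {1, ..., q - 1}
    s = pow(h, y, p)                   # step (2): s = h^y
    c1 = pow(g, y, p)                  # step (3): c1 = g^y
    c2 = (m * s) % p                   # step (4): c2 = m * s
    return c1, c2

def decrypt(c1, c2):
    s_prime = pow(c1, x, p)            # s' = c1^x = g^(xy) = h^y = s
    s_inv = pow(c1, q - x, p)          # s'^{-1} = c1^(q - x), since c1 has order q
    assert (s_prime * s_inv) % p == 1
    return (c2 * s_inv) % p

m = 42                                 # integer-encoded gradient update W
assert decrypt(*encrypt(m)) == m
print("recovered:", decrypt(*encrypt(m)))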
Step 4: Server to Admin Communication. The server checks with the admin for the latest group signature of the group.
(1) The server sends a request to the admin for the latest group signature of the group, GSReq(GID).
(2) Admin replies to the server with the latest group signature version LGS(T_1, T_2), which contains (GS, T_1, T_2), where GS denotes the signature of that particular group and (T_1, T_2) are the tracing variables that were sent to the clients in the first step.
Step 5: Verification. The server needs to verify the group signature σ associated with the message m.
(1) In the first iteration of a session, the server collects all public entities, the latest T_1, T_2, and the available GS for the particular group from the admin, and computes the verification variables as
R~_1 ← u^{s_α} · T_1^{−c}, R~_2 ← v^{s_β} · T_2^{−c},
R~_3 ← ê(T_3, g_2)^{s_x} · ê(h, w)^{−s_α − s_β} · ê(h, g_2)^{−s_{δ1} − s_{δ2}} · (ê(T_3, w)/ê(g_1, g_2))^{c},
R~_4 ← T_1^{s_x} · u^{−s_{δ1}}, R~_5 ← T_2^{s_x} · v^{−s_{δ2}}.
Then, the server computes the challenger from the received information, denoted c~, as c~ ← H(M, T_1, T_2, T_3, R~_1, R~_2, R~_3, R~_4, R~_5).
(2) If c = c~, then the signature is verified, and GS with the particular (T_1, T_2) is stored at the server side. Otherwise, the server rejects the received message. In this way, the server knows the message but does not know the signer of the message. The server receives the model updates from all participating members in this way.
(3) After the first iteration, the server simply fetches (GS, T_1, T_2) from its local storage for all the subsequent iterations of the session. This saves the computation and communication cost of the server, since it does not need to compute those keys again.
Step 6: Federated Aggregation. The main task of the server is to compute an aggregated average of the received updates. The formula for local training and gradient aggregation in FedAvg [16] is given as:
∀k: w_{t+1}^k ← w_t − η g_k,    w_{t+1} ← Σ_{k=1}^{K} p_k w_{t+1}^k,    (1)
where η is the fixed learning rate, K is the number of clients, C is the fraction of clients that participate in the process, g_k is the gradient computed by client k, w_t is the current model, and w_{t+1} is the subsequent model. We take n = Σ_k n_k, where n_k is the total number of samples in the dataset of client k, and we set p_k = n_k/n. After gradient aggregation, the global model is redistributed to all clients, and the process repeats until a specific condition set by the server is satisfied.
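Equation (1) is the standard FedAvg rule of [16]; the short sketch below (our illustration in plain Python, with toy values) performs one local step and one server aggregation with sample-size weights p_k = n_k / n.

def fedavg_round(w_t, gradients, sample_counts, lr):
    # Local step: w_{t+1}^k = w_t - lr * g_k for every selected client k.
    local_models = [[w - lr * g for w, g in zip(w_t, g_k)] for g_k in gradients]
    # Server step: w_{t+1} = sum_k p_k * w_{t+1}^k with p_k = n_k / n.
    n = sum(sample_counts)
    return [sum((n_k / n) * w_k[j] for n_k, w_k in zip(sample_counts, local_models))
            for j in range(len(w_t))]

w_t = [0.5, -1.0]                               # current global model
grads = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4]]   # one local gradient per client
counts = [100, 50, 50]                          # n_k: samples held by each client
print(fedavg_round(w_t, grads, counts, lr=0.1)) # next global model w_{t+1}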
4 SECURITY ANALYSIS

4.1 Security Proof Using the AVISPA Tool
We provide a formal security verification using AVISPA, which gives its result based on the stated goal of the protocol. The protocol is written in HLPSL code, which checks for the authenticity and secrecy of the keys and secret messages. As a proof of concept, we configured six entities: a server, clients 1-4, and an admin. At first, the server sends the global model to all clients in the network. Then, each client sends a group signature request to the admin, to which the admin replies with the necessary keys for the clients. Each client signs its message with the group signature and sends it to the server, and the process is iterated. The goal of the specification comprises all the secrecy and authentication targets, as given below. The simulation code, written in HLPSL, shows that the protocol protects the secrets between the admin (named A2) and the clients (named C1, C2, C3, and C4), and between the server (named S) and the clients. The proper authentication of all four clients is expressed by the authentication goals.
goal
  secrecy_of sec_A2_C1, sec_A2_C2, sec_A2_C3, sec_A2_C4,
             sec_S_C1, sec_S_C2, sec_S_C3, sec_S_C4
  authentication_on rand1_auth, rand2_auth, rand3_auth, rand4_auth
end goal
The simulation result is given in Figure 4, which confirms that our algorithm is safe. Figure 5 lists all the knowledge acquired by an intruder. It is shown that the attacker can only acquire public keys or encrypted messages.
4.2 Security Attacks and Their Prevention
Security is the main essence of any networking algorithm. Federated learning enhances machine learning through iterative updates without compromising the user's privacy; hence, the main aim of the algorithm is to learn while protecting personal data. However, like any network technology, FL is also vulnerable to various attacks if not implemented properly. Hence, we show the possible attacks in the FL network and how our algorithm prevents them from taking place.
4.2.1 Security Against an Honest-but-Curious Server. The server cannot be fully trusted when it comes to the privacy of the data. The server might not be interested in a client's data, but it can still peek into some sensitive information if the data is not kept securely. An honest-but-curious server is a server, or any other communication member, that wants to learn sensitive information about other members from the legitimately received messages.
In our setting, the server receives updates as ((c_1, c_2), σ) from a list of selected client devices. Since all clients share the same group signature, the server cannot distinguish the particular signer of σ. (c_1, c_2) is the message encrypted with the server's public key; hence, other members cannot decrypt it. The server can only see the updates, neither the raw messages nor the sender of a message, which makes it difficult to correlate a message with earlier messages. Therefore, an honest-but-curious member will not be able to learn any sensitive information.
4.2.2 Security Against Inference Attacks.
Since the sender of the message is not known even to the server, no entities in the network can relate a message to its previous message. Each message contains only updates and does not contain the entire message. Moreover, raw messages are never sent in the network and are always sent in a processed form. Hence, inferring from previous messages or the same message is impossible.
4.2.3 Security Against Attack on Selected Clients.
Corrupted clients may join their hands to manipulate the input for the FL process. Hence, the list of clients selected for FL should not be disclosed to other clients. However, clients must be notified about their own selection. We have used encrypted replies from the server-side instead of the open list to ensure this. The key for decryption is only available to the receiver. So, other clients will not be able to know the clients selected for the FL.
4.2.4 Security Against Sybil Attacks. In a Sybil attack, one user creates multiple fake identities and communicates in the network. This type of attack is challenging, especially in wireless and P2P networks. It is difficult for a receiver to recognize fake identities, which can lead to other attacks, e.g., data poisoning attacks. In our model, each client is registered with the admin and has a valid group signature. The admin first validates the client's identity and then adds it to the group. It also continuously monitors for suspicious participants and, if any are found, immediately revokes them from the group and updates the group signature. It also maintains a revocation table, mentioning the reason for the revocation of any member.
4.2.5 Security Against Intruders Creating Valid Channels. An intruder can create a valid channel in the network if it has a valid group signature: it can pretend to be a valid group member if its signature matches a legitimate member's signature. However, we use strong encryption and hashing techniques to generate the signature, and the elements needed to generate the signature are not published in the network. Hence, an intruder cannot create a valid channel in the network.
5 PERFORMANCE ANALYSIS
The performance of federated learning is sensitive to the communication and computation costs due to a large number of iterative distributed learning processes on massive resource-constrained client devices. A lower latency will help to achieve higher accuracy.
In this section, we show that the proposed protocol keeps the computation and communication costs low while providing protection against the various security attacks discussed in Section 4. We evaluate the performance of the proposed protocol against existing algorithms in terms of computation, communication, and signaling costs.
5.1 Computation Cost
The computation cost is the number of CPU cycles needed to formulate a message. Each operation involved in computing the various parameters of the messages requires a different number of CPU cycles, whose execution time is called the unit cost of that operation. We take the unit costs of the operations from [23]; these unit costs are multiplied by the number of occurrences in the entire process. Table 3 lists the unit costs of the operations used in our analysis.
The computation cost for the entire process can be computed in three parts: (1) the computation cost at the admin (C_A), (2) the computation cost at each client (C_C), and (3) the computation cost at the server (C_S). Let us assume that there are 200 clients in the network and that the server selects 100 of them (n = 100) for the FL process in each iteration. In each simulation session, there are 1000 iterations. The computation cost is directly proportional to the number of iterations involved.
5.1.1 Computation Cost at the Admin (C_A).
• C_A: The admin generates the latest keys necessary to sign the messages, namely A_i, T_1, and T_2. It also generates two random numbers, α and β. Hence, the total computation cost at the admin is C_A = C_div + 3C_exp + 2C_rand = 1.22 + 3(1.0) + 2(0.045) = 4.31 ms.
5.1.2 Computation Cost at Each Client (C_C).
• C_C: The client uses the keys sent by the admin to sign and encrypt the message. Hence, the cost for the client can be given as C_C = C_GS + C_enc.
• Signature Generation Cost (C_GS): The signature generation mainly includes computing T_3, the binding variables, and the signing variables, computed as

C_GS ← C_{T_3} + C_bv + C_sv + C_H,    (2)

where each term is evaluated using the unit costs of Table 3.
Since the encryption mechanism is not specified in the referred papers, we have assumed that all algorithms use the ElGamal algorithm for cryptography. Since they have not mentioned any signatures, we have assumed the other algorithms use their digital signatures for verification. In a digital signature, a message is encrypted with the private key of the sender and decrypted at the receiver using the sender's public key, so we assume two encryptions and two decryptions for a single transmission between two clients. The cost of the client-to-client communication needed for secure key transmission is denoted C_c-c.
5.2 Communication Cost
This section describes the cost caused by the size of the packets transmitted from the clients and the admin to the server. Note that the packets sent from the server to the clients have no protocol-specific changes and are the same as in other traditional approaches; hence, we do not compare the packet size from the server to the clients.
For our algorithm, the packet size is given by the final output message sent in the network, which is the encrypted payload (c_1, c_2) together with the signature σ. The size of the encrypted payload is the same for all algorithms; suppose it is 1024 bits. On the other hand, the signature varies depending on the algorithm. In our case, it is given as σ ← (T_1, T_2, T_3, c, s_α, s_β, s_x, s_{δ1}, s_{δ2}). Each variable in the signature is 171 bits long, and there are nine variables in the signature, so the total signature size is 9 × 171 = 1539 bits. The message structure is given in Table 5. The total size of the message per communication is the sum of all fields, calculated as 16 + 16 + 128 + 1024 + 1539 + 8 + 32 = 2763 bits = 345 bytes. Note that the client sends a message twice to the server: (1) a request message for FL training and (2) the actual data. Therefore, the total message size from the client will be 690 bytes.
In the algorithms that use personal signatures (not a group signature), each client sends a doubly encrypted message to the server: one encryption for confidentiality and another for the signature. Therefore, there is no separate signature field in the final packet, and that size is saved. However, clients may need to send separate messages for further privacy preservation. For comparison, the message sizes of the other algorithms are obtained as described below:
(1) In Bonawitz et al. [16], each client sends two messages to the server, and additionally sends n − 1 messages to the other clients participating in FL. Suppose there are n = 100 clients participating in FL. The packet size for one server-bound message in one iteration of FL would be 16 + 128 + 1024 + 8 + 32 = 1080 bits = 135 bytes. In addition to the server, each participating client needs to communicate with the other n − 1 clients. These messages contain all the fields of the above message other than the payload, which makes them 184 bits, or 23 bytes, each. So, the total size of all messages is (n − 1) × 23 + 135, which is 99 × 23 + 135 = 2412 bytes.
(2) In Sun et al. [27], we consider that the clients communicate with each other after every ten iterations of FL. Hence, the total bandwidth after every ten iterations is the same as in [16], while no client-to-client communication takes place during the next nine training iterations, and that bandwidth is saved. Computing the average bandwidth over 1000 iterations gives (1000 × 135 + 99 × 100 × 23)/1000 = 362.7 bytes.
(3) Similarly, the communication cost for the other existing algorithms has been computed, and the results are given in Table 6; the totals can also be checked as sketched below.
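These byte counts can be reproduced mechanically; the sketch below (our arithmetic, using the seven field widths of Table 5) recomputes the 690-byte GSFL per-client total and the 2412-byte total of [16] for n = 100.

GSFL_FIELDS = [16, 16, 128, 1024, 1539, 8, 32]   # field widths in bits, per Table 5
gsfl_msg_bytes = sum(GSFL_FIELDS) // 8           # 2763 bits -> 345 bytes
print(2 * gsfl_msg_bytes)                        # request + data = 690 bytes

n = 100
server_msg_bytes = (16 + 128 + 1024 + 8 + 32) // 8   # 1080 bits -> 135 bytes
peer_msg_bytes = 184 // 8                            # 23 bytes to each of n - 1 peers
print((n - 1) * peer_msg_bytes + server_msg_bytes)   # 99 * 23 + 135 = 2412 bytes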
5.3 Signaling Cost
This section compares the total number of message transmissions between the clients and the server in a session. For handshaking and exchanging authentication information, the client and server exchange a few messages that are not part of the actual updates; those transmissions are overhead for the network and need to be minimized. The timing diagram for our algorithm is given in Figure 8. The total number of signals transmitted for an iteration of FL is N + N + N + n + n + 1 + N, where N is the total number of clients in the network and n is the total number of clients chosen for FL. We assume 2000 clients in the network, half of whom are chosen by the server to participate in FL (i.e., N = 2000 and n = 1000). In our algorithm, the signaling cost is then 2000 + 2000 + 2000 + 1000 + 1000 + 1 + 2000 = 10001. On the contrary, the signaling cost in [16] is the highest among all the algorithms we consider: the other steps are the same, but in place of the admin messages to the clients there are n(n − 1) messages for client-to-client communication, resulting in 2000 + 2000 + 1000 + 1000(1000 − 1) + 1000 + 1 + 2000 = 1007001 signals. Similarly, we can compute the signaling cost of the other algorithms; the output is given in Table 7, and the two totals above can be checked as sketched below.
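Both totals follow directly from the expressions above; a minimal check (our arithmetic) is:

N, n = 2000, 1000
gsfl = N + N + N + (n + n + 1) + N                 # = 10001 signals per iteration
bonawitz = N + N + (n + n * (n - 1) + n + 1) + N   # = 1007001 signals
print(gsfl, bonawitz)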
6 CONCLUSION
We proposed a new group signature protocol to handle the privacy preservation of clients in federated learning. The proposed protocol efficiently integrates the traditional group signature with federated learning by considering the iterative learning process. Moreover, it does not contradict the architecture and assumptions of the traditional federated learning approach. We provided a mathematical analysis of the computation, communication, and signaling costs, which are closely associated with the performance of federated learning algorithms. We also provided a comprehensive security analysis and proved the safety of the proposed protocol using a formal verification tool. Our protocol outperforms existing algorithms in terms of efficiency and handles various security attacks in the federated learning environment. Many challenges still remain: one direction for future work is to handle dynamic changes in group membership, and we are also working on client selection for the federated learning process while the identity of the clients is hidden from the server.
Fig. 1. Scenario when client(s) and the server are compromised.
Fig. 2. A simple integration of traditional group signature in federated learning (baseline).
Fig. 3. Proposed group signature generation and communication for the iterative process of federated learning.
Fig. 4. The AVISPA simulation result.
Fig. 5. Intruder knowledge.
• Encryption Cost (C_enc): It is the cost of encrypting the message that clients send to the server. The encryption cost can be computed according to [24] as C_enc = 2C_exp + C_mul = 2(1.0) + 0.612 = 2.612 ms.
• Hence, the total computation cost at the client is C_C = C_GS + C_enc = 29.723 + 2.612 = 32.335 ms.

5.1.3 Computation Cost at the Server (C_S).
• Decryption Cost (C_dec): It is the cost of decrypting the message sent by the client, C_dec = 2.613 ms.
• Verification Cost (C_ver): It is the cost of verifying the received group signature. Computing the verification variables costs 40.666 ms; putting all the above values in (3), we get C_ver = 40.666 + 0.067 = 40.733 ms.
• Aggregation Cost (C_agg): It is the cost needed to aggregate the gradients from the clients participating in the federated learning process. The aggregation cost in FedAvg can be calculated from Σ_{k=1}^{K} p_k w_{t+1}^k in (1). For n participating clients, this requires n multiplications, n − 1 additions, and one division, i.e., per client C_agg ≈ C_add + C_mul = 0.001 + 0.612 = 0.613 ms.
• Hence, the total computation cost at the server is C_S = C_dec + C_ver + C_agg = 2.613 + 40.733 + 0.613 = 43.959 ms.

Fig. 6. Comparison of computation cost (n = 100, N = 200).
Fig. 7. Comparison of communication cost (n = 100, N = 200).
Fig. 8. Comparison of signaling cost (n = 100, N = 200).
Table 1. List of symbols and abbreviations used.

A_i | Corresponding signing component of client i
R_1, R_2, R_3, R_4, R_5 | Binding variables to compute the hashed message
T_1, T_2, T_3 | Tracing variables to trace the original signer
(c_1, c_2, σ) | Set of encrypted messages
C_bv | Cost of binding variables
C_bvv | Cost of verifying binding variables
C_c-c | Cost of client-to-client communication
C_sv | Cost of signing variables
C_tv | Cost of tracing variables
C_agg | Cost of aggregation
C_dec | Cost of decryption
C_enc | Cost of encryption
C_ver | Cost of verification
r_α, r_β, r_x, r_{δ1}, r_{δ2} | Binding values to compute signing variables
FL | Federated Learning
γ | Private key of admin
GID | Group ID
GSFL | Group Signature-based Federated Learning
H | Hash function
LGS | Latest group signature
N | Total number of clients in the network
n | Total number of clients participating in FL
s_α, s_β, s_x, s_{δ1}, s_{δ2} | Signing variables to compute the final signature
GS / σ | Group Signature
t | Number of iterations
W | Updated gradients after local training
α, β, x, γ | Exponents chosen for signature
RID | Real identity of client
Table 2. Protection against various security attacks.

Type of Attack | Runhua Xu et al. [25] | Chai et al. [26] | Brendan et al. [16] | Sun et al. [27] | Xu et al. [28] | GSFL (Proposed)
1. Honest But Curious Server | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
2. Inference Attack | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
3. Selected/Target Clients | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
4. Sybil Attack | ✓ | ✓ | ✓ | ✓ | ✗ | ✓
5. Denial of Service Attack | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
6. Intruder Creating a Valid Channel | ✗ | ✗ | ✓ | ✓ | ✓ | ✓
Table 3. Unit cost of computing operations.

Notation | Operation | Unit Cost (ms)
C_se | Symmetric encryption | 0.161
C_add | Addition/subtraction | 0.001
C_bp | Bilinear pairing | 4.51
C_div | Division | 1.22
C_exp | Exponentiation | 1.0
C_H | Hash function | 0.067
C_mod | Modulus | 1.24
C_mul | Multiplication | 0.612
C_pm | Point multiplication | 1.25
C_rand | Random number generation | 0.045
C_xor | XOR | 0.002
Table 4. Comparison of average computation cost for number of iterations (n = 100).

Algorithm | Operations | Cost of Operations (seconds): t = 1 | t = 50 | t = 100 | t = 150 | t = 1000
Runhua Xu et al. [25] | C_… + C_… + t(C_… + C_… + C_…) | 0.59 | 29.45 | 58.9 | 88.353 | 589.025
Chai et al. [26] | C_… + C_… + t(3(C_… + C_…) + C_…) | 1.63 | 81.7 | 163.4 | 245 | 1634
Bonawitz et al. [16] | C_… + C_… + 9900 C_c-c + t(C_… + C_… + C_…) | 10.49 | 524.5 | 1049 | 1573.35 | 10489
Sun et al. [27] | C_… + C_… + 99 C_c-c + t(C_… + C_… + C_…) | 0.68 | 34.4 | 68.8 | 103.2 | 688
Xu et al. [28] | C_c-c + C_… + 2C_… + 2C_… | 9.9 | 495 | 990 | 1486 | 9908
Proposed (GSFL) | C_A + C_C + C_S + (t − 1)(C_enc + C_dec + C_ver + C_agg) | 0.08 | 2.36 | 4.69 | 7.02 | 46.6
Table 5. Message structure and sizes of fields (in bits).

Msg. size (bits) | 16 | 16 | 128 | 1024 (encrypted payload) | 1539 (group signature) | 8 | 32

Finally, the total computation cost of our algorithm is calculated as C_total = C_S + C_C + C_A = 43.959 + 32.33 + 4.31 = 80.599 ms. The above is the cost of computation for the first iteration of FL. However, in the next iterations, we do not need to compute the cost of signature generation: the same signature is used for all iterations of a session. Hence, for subsequent iterations we only add the encryption, decryption, verification, and aggregation costs, which is C_enc + C_dec + C_ver + C_agg = 2.612 + 2.613 + 40.733 + 0.613 = 46.571 ms. Hence, the computation cost when:

• t = 50: 1 × 80.6 + 49 × 46.571 = 2362.579 ms
• t = 100: 1 × 80.6 + 99 × 46.571 = 4691.129 ms
• t = 150: 1 × 80.6 + 149 × 46.571 = 7019.679 ms
• t = 1000: 1 × 80.6 + 999 × 46.571 = 46605.029 ms

Similarly, we can compute the computation cost of the existing algorithms; the comparison of computation costs is summarized in Table 4.
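These per-session totals follow mechanically from the first-iteration and steady-state costs; the sketch below (our arithmetic, built on the unit costs of Table 3 and the intermediate values quoted above) reproduces them.

C_A = 1.22 + 3 * 1.0 + 2 * 0.045             # admin: C_div + 3 C_exp + 2 C_rand = 4.31 ms
C_enc = 2 * 1.0 + 0.612                      # ElGamal encryption: 2.612 ms
C_GS = 29.723                                # signature generation, first iteration only
C_C = C_GS + C_enc                           # client cost: 32.335 ms
C_dec, C_ver, C_agg = 2.613, 40.733, 0.613   # server-side costs from the text
C_S = C_dec + C_ver + C_agg                  # server cost: 43.959 ms
first = round(C_S + C_C + C_A, 1)            # ~80.6 ms for the first iteration
steady = C_enc + C_dec + C_ver + C_agg       # 46.571 ms for every later iteration
for t in (50, 100, 150, 1000):
    print(t, round(first + (t - 1) * steady, 3))   # 2362.579, 4691.129, ...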
Table 6. Comparison of average communication cost for number of iterations (n = 100).

Algorithm | Size of messages (bytes): t = 1 | t = 50 | t = 100 | t = 150 | t = 1000
Runhua Xu et al. [25] | 2412 | 120600 | 241200 | 361800 | 2412000
Chai et al. [26] | 603 | 30150 | 60300 | 90450 | 603000
Bonawitz et al. [16] | 2412 | 120600 | 241200 | 361800 | 2412000
Sun et al. [27] | 363 | 18150 | 36300 | 54450 | 363000
Xu et al. [28] | 712 | 35600 | 71200 | 106800 | 712000
Proposed (GSFL) | 690 | 34500 | 69000 | 103500 | 690000
7Comparison of average signaling cost for number of iterations ( = 100)Algorithm
Operations
Cost of Operations (seconds)
= 1
= 50
= 100
= 150
= 1000
Runhua Xu et al. [25]
+ + + × ( + + + 1) +
1,101
15,850
30,900
45,950
301,800
Chai et al. [26]
+ + × ( + + 1 + + ) +
1,001
20,650
40,700
60,750
401,600
Bonawitz et al. [16]
+ + × ( + ( − 1) + + 1) +
10,701 505,650 1,010,700 1,515,750 10,101,600
Sun et al. [27]
+ × ( ( − 1)/100 + ) +
1,490
10,350
20,300
30,250
199,400
Xu et al. [28]
+ + × ( + ( − 1) + + 1) +
10,701 505,650 1,010,700 151,5750 10,101,600
Proposed (GSFL)
+ + + × ( + + 1) +
1,001
10,850
20,900
30,950
201,800
ACKNOWLEDGEMENT
This research was supported by the MSIT, Korea, under the National Research Foundation (NRF), Korea (2022R1A2C4001270), and the ITRC support program (IITP-2020-2020-0-01602) supervised by the IITP.
REFERENCES
[1] Chris O'Brien. "Canalys: more data breaches in 2020 than previous 15 years despite 10% growth in cybersecurity spending." VentureBeat, https://venturebeat.com/2021/03/29/canalys-more-data-breaches-in-2020-than-previous-15-years-despite-10-growth-in-cybersecurity-spending, 2021.
[2] Hu, Hongsheng, et al. "Source Inference Attacks in Federated Learning." 2021 IEEE International Conference on Data Mining (ICDM), IEEE, Auckland, New Zealand, Dec. 2021.
[3] Aslam, Sidra, Aleksandar Tošić, and Michael Mrissa. "Secure and Privacy-Aware Blockchain Design: Requirements, Challenges and Solutions." Journal of Cybersecurity and Privacy, vol. 1, no. 1, pp. 164-194, Mar. 2021.
[4] Bonawitz, Keith, et al. "Practical secure aggregation for privacy-preserving machine learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, Texas, USA, pp. 1175-1191, Oct. 2017.
[5] Li, Yong, et al. "Privacy-preserving federated learning framework based on chained secure multiparty computing." IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6178-6186, Apr. 2021.
[6] Li, Ang, et al. "FedMask: Joint Computation and Communication-Efficient Personalized Federated Learning via Heterogeneous Masking." Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, New York, NY, USA, pp. 42-55, Nov. 2021.
[7] Wan, Xicheng, et al. "Towards Privacy-Preserving and Verifiable Federated Matrix Factorization." arXiv preprint arXiv:2204.01601, Apr. 2022.
[8] Hartmann, Florian. "Federated Learning." https://florian.github.io/federated-learning/, May 2018.
[9] Kim, Hyesung, et al. "Blockchained on-device federated learning." IEEE Communications Letters, vol. 24, no. 6, pp. 1279-1283, Jun. 2020.
[10] Zhu, Saide, et al. "Secure verifiable aggregation for blockchain-based federated averaging." High-Confidence Computing, vol. 2, no. 1, p. 100046, Mar. 2022.
[11] Li, Yijing, et al. "Privacy-preserved federated learning for autonomous driving." IEEE Transactions on Intelligent Transportation Systems, pp. 1-12, Jun. 2021.
[12] Wei, Kang, et al. "Federated learning with differential privacy: Algorithms and performance analysis." IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454-3469, Apr. 2020.
[13] Armando, Alessandro, et al. "The AVISPA tool for the automated validation of internet security protocols and applications." International Conference on Computer Aided Verification, Springer, Berlin, Heidelberg, pp. 135-165, Jul. 2005.
[14] Yang, Timothy, et al. "Applied federated learning: Improving Google keyboard query suggestions." arXiv preprint arXiv:1812.02903, 2018.
[15] Prabhakaran, Manoj, and Mike Rosulek. "Reconciling Non-malleability with Homomorphic Encryption." Journal of Cryptology, vol. 30, no. 3, pp. 601-671, 2017.
[16] McMahan, Brendan, et al. "Communication-efficient learning of deep networks from decentralized data." Artificial Intelligence and Statistics (AISTATS), PMLR, vol. 54, pp. 1273-1282, Apr. 2017.
[17] Chaum, David, and Eugène van Heyst. "Group signatures." Workshop on the Theory and Application of Cryptographic Techniques (EUROCRYPT), Springer, Berlin, Heidelberg, vol. 547, pp. 257-265, 1991.
[18] Kanchan, Sneha, and Bong Jun Choi. "Group Signature Based Federated Learning Approach for Privacy Preservation." 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET), IEEE, Cape Town, South Africa, Dec. 2021.
[19] Lin, Xiaodong, et al. "GSIS: A secure and privacy-preserving protocol for vehicular communications." IEEE Transactions on Vehicular Technology, vol. 56, no. 6, pp. 3442-3456, Dec. 2007.
[20] Boneh, Dan, Xavier Boyen, and Hovav Shacham. "Short group signatures." Annual International Cryptology Conference (CRYPTO), Springer, Berlin, Heidelberg, vol. 3152, pp. 41-55, Aug. 2004.
[21] Tsiounis, Yiannis, and Moti Yung. "On the security of ElGamal based encryption." International Workshop on Public Key Cryptography, Springer, Berlin, Heidelberg, pp. 117-134, Jan. 1998.
[22] "ElGamal encryption." Wikipedia, Wikimedia Foundation, 24 Feb. 2019. https://en.wikipedia.org/wiki/ElGamal_encryption
[23] Parne, Balu L., Shubham Gupta, and Narendra S. Chaudhari. "SEGB: Security enhanced group based AKA protocol for M2M communication in an IoT enabled LTE/LTE-A network." IEEE Access, vol. 6, pp. 3668-3684, Jan. 2018.
[24] Sow, Demba, and Pascal Lafourcade. "Linear generalized ElGamal encryption scheme." Cryptology ePrint Archive, Paris, France, Jul. 2020.
[25] Xu, Runhua, et al. "FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data." arXiv preprint arXiv:2103.03918, Jun. 2021.
[26] Chai, Di, et al. "FedEval: A Benchmark System with a Comprehensive Evaluation Model for Federated Learning." arXiv preprint arXiv:2011.09655, Nov. 2020.
[27] Sun, Tao, Dongsheng Li, and Bao Wang. "Decentralized Federated Averaging." arXiv preprint arXiv:2104.11375, Apr. 2021.
[28] Xu, Chunmei, et al. "Learning Rate Optimization for Federated Learning Exploiting Over-the-air Computation." arXiv preprint arXiv:2102.02946, Apr. 2021.
| [] |
[
"Adaptive Expectations, Confirmatory Bias, and Informational Efficiency Namur Center for Complex Systems Adaptive Expectations, Confirmatory Bias, and Informational Efficiency *",
"Adaptive Expectations, Confirmatory Bias, and Informational Efficiency Namur Center for Complex Systems Adaptive Expectations, Confirmatory Bias, and Informational Efficiency *"
] | [
"Gani Aldashev \nRempart de la Vierge\n5000NamurBelgium\n",
"Timoteo Carletti \nRempart de la Vierge\n5000NamurBelgium\n",
"Simone Righi [email protected]. \nRempart de la Vierge\n5000NamurBelgium\n",
"Gani Aldashev \nRempart de la Vierge\n5000NamurBelgium\n",
"Timoteo Carletti \nRempart de la Vierge\n5000NamurBelgium\n",
"Simone Righi \nRempart de la Vierge\n5000NamurBelgium\n"
] | [
"Rempart de la Vierge\n5000NamurBelgium",
"Rempart de la Vierge\n5000NamurBelgium",
"Rempart de la Vierge\n5000NamurBelgium",
"Rempart de la Vierge\n5000NamurBelgium",
"Rempart de la Vierge\n5000NamurBelgium",
"Rempart de la Vierge\n5000NamurBelgium"
] | [] | We study the informational efficiency of a market with a single traded asset. The price initially differs from the fundamental value, about which the agents have noisy private information (which is, on average, correct). A fraction of traders revise their price expectations in each period. The price at which the asset is traded is public information. The agents' expectations have an adaptive component and a social-interactions component with confirmatory bias. We show that, taken separately, each of the deviations from rationality worsen the information efficiency of the market. However, when the two biases are combined, the degree of informational inefficiency of the market (measured as the deviation of the long-run market price from the fundamental value of the asset) can be non-monotonic both in the weight of the adaptive component and in the degree of the confirmatory bias. For some ranges of parameters, two biases tend to mitigate each other's effect, thus increasing the informational efficiency. | 10.1016/j.jebo.2011.03.001 | [
"https://arxiv.org/pdf/1009.5075v1.pdf"
] | 15,546,330 | 1009.5075 | c8bd91f499808bfe4fce7c464cf726e5caf5b0ff |
Adaptive Expectations, Confirmatory Bias, and Informational Efficiency Namur Center for Complex Systems Adaptive Expectations, Confirmatory Bias, and Informational Efficiency *
26 Sep 2010 23 September 2010 September 28, 2010
Gani Aldashev
Rempart de la Vierge
5000NamurBelgium
Timoteo Carletti
Rempart de la Vierge
5000NamurBelgium
Simone Righi [email protected].
Rempart de la Vierge
5000NamurBelgium
26 Sep 2010; 23 September 2010; September 28, 2010. arXiv:1009.5075v1 [q-fin.TR]. NaXys - Namur Center for Complex Systems, University of Namur, 8 Rempart de la Vierge, B-5000 Namur (Belgium). http://www.naxys.be * The authors thank the National Bank of Belgium for financial support. † Corresponding author. Department of Economics and CRED, University of Namur (FUNDP). Mailing address: Department of Economics, 8 Rempart de la Vierge, 5000 Namur, Belgium. Email: [email protected]. ‡ NaXys - Namur Center for Complex Systems, University of Namur (FUNDP). Mailing address: 8 Rempart de la Vierge, 5000 Namur, Belgium. Email: [email protected]. § Department of Economics, University of Namur (FUNDP). Keywords: informational efficiency; confirmatory bias; agent-based models; asset pricing. JEL codes: G14, D82, D84.
We study the informational efficiency of a market with a single traded asset. The price initially differs from the fundamental value, about which the agents have noisy private information (which is, on average, correct). A fraction of traders revise their price expectations in each period. The price at which the asset is traded is public information. The agents' expectations have an adaptive component and a social-interactions component with confirmatory bias. We show that, taken separately, each of the deviations from rationality worsens the informational efficiency of the market. However, when the two biases are combined, the degree of informational inefficiency of the market (measured as the deviation of the long-run market price from the fundamental value of the asset) can be non-monotonic both in the weight of the adaptive component and in the degree of the confirmatory bias. For some ranges of parameters, the two biases tend to mitigate each other's effect, thus increasing the informational efficiency. | 10.1016/j.jebo.2011.03.001 | [
Introduction
In most economic interactions, individuals possess only partial information about the value of exchanged objects. For instance, when a firm "goes public", i.e. launches an initial public offering of its shares, no financial market participant has the complete information concerning the future value of the profit stream that the firm would generate. The fundamental question, going back to Hayek (1945), is then: To what extent can the market serve as the aggregator of this dispersed information? In other words, when is the market informationally efficient, i.e. when does the market price converge to the value that would obtain if all market participants had full information about the fundamental value of the asset exchanged?
Most of the studies that address this question are based on the assumption that individual market participants are rational. Under full rationality, the seminal results on the informational efficiency of centralized markets have been established by Grossman (1976), Wilson (1977), Milgrom (1981), and, for decentralized markets, by Wolinsky (1990), Blouin and Serrano (2001), and Duffie and Manso (2007).
However, research in experimental economics and behavioral finance indicates that traders do not behave in the way consistent with the full-rationality assumption. For instance, Haruvy et al. (2007) find that traders have adaptive expectations, i.e. they give more importance to the past realized price of the asset than the fully-rational agent would. Along a different dimension, Rabin and Schrag (1999) discuss the evidence that individuals suffer from the so-called confirmatory (or confirmation) bias: they tend to discard the new information that substantially differs from their priors. Understanding whether (and under which conditions) the financial markets are informationally efficient when agents do not behave fully rationally remains an open question.
In this paper, we study the informational efficiency of a market with a single traded asset.
The price initially differs from the fundamental value, about which the agents have noisy private information (which is, on average, correct). A fraction of traders revise their price expectations in each period. The price at which the asset is traded is public information. The agents' expectations have an adaptive component and a social-interactions component with confirmatory bias. We show that, taken separately, each of the deviations from rationality worsens the informational efficiency of the market. However, when the two biases are combined, the degree of informational inefficiency of the market (measured as the deviation of the long-run market price from the fundamental value of the asset) can be non-monotonic both in the weight of the adaptive component and in the degree of the confirmatory bias. For some ranges of parameters, the two biases tend to mitigate each other's effect, thus increasing the informational efficiency.
The paper is structured as follows. Section 2 presents the setup of the model. Section 3 derives analytical results for each bias taken separately. In Section 4, we present the simulation results when two biases are combined. Finally, Section 5 discusses the implication of our results and suggests some future avenues for research.
The model
Consider a market with N participants, each endowed with an initial level of wealth equal to $W_0 > 0$. The amount $L_0 \in (0, W_0]$ is in liquid form. Time is discrete (e.g. to mimic the daily opening and closure of a financial market), denoted with t = 0, 1, .... Market participants trade a single asset, whose price in period t we denote with $P_t$. This price is public information. Prices are normalized in such a way that they belong to the interval [0, 1].
At the beginning of each period t, every agent i can place an order to buy or short sell 1 unit of the asset, on the basis of her expectation about the price for period t, denoted with $P^{e,i}_t$. Placing an order implies a fixed, small but positive transaction cost c, i.e. 0 < c ≪ 1. At the end of the period, each agent i learns the price $P_t$ at which the trade is settled (as explained below).
The agent i then constructs her price expectation for the next period and decides to participate in the trading in period t + 1 according to the expected next-period gain, i.e. if
$$P^{e,i}_{t+1} - P_t - c > 0. \qquad (1)$$
Moreover, she participates as a buyer if her price expectation for the next period exceeds the current price, i.e.
$$P^{e,i}_{t+1} > P_t, \qquad (2)$$
or as a seller if, on the contrary,
$$P^{e,i}_{t+1} < P_t. \qquad (3)$$
The way in which agents form their next-period price expectations differs from the standard rational-expectations benchmark in two ways. The first deviation is that agents give weight to the past public prices, i.e. they have (partially) adaptive expectations. The second is that they can influence each other's expectations via social interactions with confirmatory bias.
Formally, suppose that in every period a fraction γ ∈ [0, 1] of the agents makes a revision of their price expectations. An agent revises her price expectation by analyzing the past price of the asset and by randomly encountering some other agent (at zero cost), and possibly exchanging her own price expectation with this partner. In these encounters, the agents have a confirmatory bias, i.e. each agent tends to ignore the information coming from the other agent if it differs too much from her own. If, on the contrary, this difference is not too large, i.e. smaller than some fixed threshold, which we denote with σ, then the agent incorporates this information into her price expectation. The remaining (1 − γ)N agents do not revise their expectations in the current period.
Summarizing, the expectation formation process of agent i meeting agent j is:
$$P^{e,i}_{t+1} = \begin{cases} \alpha P_t + (1-\alpha)\, P^{e,i}_t & \text{if } \left|P^{e,i}_t - P^{e,j}_t\right| \geq \sigma \\[4pt] \dfrac{P^{e,i}_t + P^{e,j}_t}{2} & \text{otherwise,} \end{cases} \qquad (4)$$
and it is analogous for $P^{e,j}_{t+1}$. Here, α measures the relative weight of the past price. If α = 1, the agents have purely adaptive expectations (and social interactions play no role). If α = 0 and σ = 1, the agents (that revise their expectations) completely disregard the past and fully integrate all the information coming from the social interactions.
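To make the revision rule concrete, the following Python sketch implements one revision round of Eq. (4). It is only an illustration of the rule as stated: the pairing of the revising agents into random disjoint couples, and all variable names, are our own assumptions rather than details taken from the paper.

```python
import numpy as np

def revise_expectations(pe, p_last, alpha, sigma, gamma, rng):
    """One revision round of Eq. (4).

    pe     : array of current price expectations P^{e,i}_t
    p_last : last public market price P_t
    A fraction gamma of the agents revises; revising agents are paired
    at random, and a pair averages its expectations only if they differ
    by less than the confirmatory-bias threshold sigma.
    """
    pe = pe.copy()
    n = len(pe)
    revising = rng.permutation(n)[: int(gamma * n)]
    for k in range(0, len(revising) - 1, 2):   # disjoint random pairs
        i, j = revising[k], revising[k + 1]
        if abs(pe[i] - pe[j]) >= sigma:
            # confirmatory bias: each agent discards the partner's view
            pe[i] = alpha * p_last + (1 - alpha) * pe[i]
            pe[j] = alpha * p_last + (1 - alpha) * pe[j]
        else:
            # close enough: both agents compromise on the average
            pe[i] = pe[j] = 0.5 * (pe[i] + pe[j])
    return pe
```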
Our objective is to analyze the price formation under the different values of the parameters α, σ, and γ.
Concerning the market microstructure, we assume that the market is centralized, with a simple price response to excess demand. In other words, the market mechanism is similar to the Walrasian auctioneer. More precisely, the price formation mechanism functions as follows:
1. There exists a hypothetical price at period t + 1 that would (approximately) equate the number of buy-orders and sell-orders. Let us denote it with $P^*_{t+1}$. From (2) and (3), $P^*_{t+1}$ is the solution of the equation
$$n_B(x) = n_S(x),$$
where $n_B(x)$ and $n_S(x)$ are the numbers of buyers and sellers at price x. Whenever there are several solutions to this equation, $P^*_{t+1}$ denotes the average of the values that solve the equation.
2. Out of equilibrium, the price adjustment depends on the size of the excess demand or excess supply relative to the size of the population; in other words, denoting $\beta(x) = |n_B(x) - n_S(x)|/N$, the price adjustment process is
$$P_{t+1} = \beta(P_t)\, P^*_{t+1} + (1 - \beta(P_t))\, P_t. \qquad (5)$$
Thus, the deviation from the equilibrium does not disappear instantly. However, the price moves in the direction that eliminates the excess demand or supply, and, moreover, the speed of adjustment depends on the size of the disequilibrium (relative to the size of the population).¹

3. Given that each agent that participates in the market in period t places an order for one unit of the asset, the number of exchanges that occurs is $\min\{n_B(P_t), n_S(P_t)\}$.
Then, each seller i updates her wealth by $W^i_{t+1} = W^i_t + P_t - P_{t+1} - c$ and her liquidity by $L^i_{t+1} = L^i_t + P_t - c$. Similarly, for a buyer j, we have $W^j_{t+1} = W^j_t - P_t + P_{t+1} - c$ and $L^j_{t+1} = L^j_t - P_t - c$.
4. If an agent's liquidity dries up to zero, then she leaves the market. In her place, at the beginning of the next period, enters a new agent with wealth $W_0$, liquidity $L_0$, and a next-period price expectation randomly drawn from the [0, 1] interval.
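The following Python sketch assembles steps 1-4 above into one trading round. It is a simplified reading of the protocol: $P^*$ is located by scanning a fixed price grid for the (average) near-root of $n_B(x) - n_S(x)$, the wealth and liquidity bookkeeping is omitted, and the grid resolution and helper names are our own illustrative choices.

```python
import numpy as np

def trading_round(pe_next, p_t, c, grid=np.linspace(0.0, 1.0, 2001)):
    """One market round: returns (P_{t+1}, beta) from Eqs. (1)-(3), (5).

    pe_next : array of next-period expectations P^e_{t+1}
    p_t     : current public price P_t
    """
    n = len(pe_next)
    # buyers at price x need P^e > x + c; sellers need P^e < x - c
    n_b = np.array([(pe_next > x + c).sum() for x in grid])
    n_s = np.array([(pe_next < x - c).sum() for x in grid])
    gap = np.abs(n_b - n_s)
    p_star = grid[gap == gap.min()].mean()       # average of balancing prices
    # per-capita excess demand, measured at the current price P_t
    beta = abs((pe_next > p_t + c).sum() - (pe_next < p_t - c).sum()) / n
    p_next = beta * p_star + (1.0 - beta) * p_t  # Eq. (5)
    return p_next, beta
```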
In this setting, consider an initial public offering (IPO) of the asset. At time t = 0, the asset gets introduced in the market at some price $P_0$. Let us also suppose that, on average, the agents have unbiased information about its fundamental value. In particular, let us suppose that the initial price expectations of the agents are uniformly distributed in the [0, 1] interval, i.e. the fundamental value of the asset is 1/2. However, the initial price $P_0$ differs from the fundamental value. The questions that we pose are:
• Does the market price P t converge to the fundamental value of the asset?
• If not, how large is the deviation of the long-run price P t as t → ∞, from the fundamental value?
• How does this deviation depend on the weight of history α (i.e. the "adaptiveness" of agents' expectations), the prominence of the confirmatory bias of the traders σ, and the frequency with which agents adjust their expectations, γ?
Analytical results
We can characterize analytically the answers to the above questions for some of the values of the parameters. This requires a further assumption that the number of market participants (N) and every agent's initial wealth and liquidity (W 0 and L 0 ) are sufficiently large.
Purely adaptive expectations
Consider first the case where agents discard the social interactions and consider only the past price. In other words, α = 1 in (4). We analyze separately two sub-cases: (i) all agents revise their expectations in every period, i.e. γ = 1; and (ii) only a fraction of agents revise their expectations in every period, i.e. γ < 1.
(i) In the case γ = 1, Eq. (4) simply reduces to $P^{e,i}_{t+1} = P_t$ for any i and t.
However, this is also the hypothetical price that equates buyers and sellers, i.e. $P^*_{t+1} = P_t$, and thus $\beta(P_t) = 0$. Finally, from (5) we get $P_{t+1} = P_t$. This means that the market price does not evolve: $P_t = P_0$ in every period. Intuitively, if all agents revise their expectations in every period and have purely adaptive expectations, once the initial price $P_0$ is announced, every agent immediately revises her next-period expectation, substituting it with $P_0$. Given that every agent does so, no agent is interested in trading, and the price does not evolve.
(ii) Next, consider the case γ < 1, with γN being sufficiently large. Without loss of generality, suppose that P 0 > 1/2. We prove that the market reaches the long-run equilibrium, after a few periods, with the long-run market price deviating from the initial price by a value smaller than c(1 − γ).
We need the following preliminary result.
Proposition 1. Consider a population of agents divided into two groups: agents in the first group, whose size is $N_1$, have expectations uniformly distributed in [0, 1], and agents in the second group, whose size is $N_2 \gg N_1$, all have the price expectation equal to some fixed $\bar P \in (0, 1)$. Then, the price $P^*$ defined at point 1 is given by $P^* = \bar P - c$ if $\bar P > 1/2$ and $P^* = \bar P + c$ if $\bar P < 1/2$. If $\bar P = 1/2$, then $P^* = \bar P$.
Proof. We consider only the case $\bar P > 1/2$ (the proof for the case $\bar P < 1/2$ is analogous, while the case $\bar P = 1/2$ is trivial). Define the functions
$$\theta_c(x, P) = \begin{cases} 1 & \text{if } x > P + c \\ 0 & \text{otherwise,} \end{cases} \qquad \eta_c(x, P) = 1 - \theta_{-c}(x, P).$$
Then for a sufficiently large $N_2$, the numbers of sellers and buyers at price $x \in [0, 1]$ are respectively given by
$$n_S(x) = (x - c)N_1 + N_2\,\theta_c(x, \bar P) \quad \text{and} \quad n_B(x) = (1 - x - c)N_1 + N_2\,\eta_c(x, \bar P). \qquad (6)$$
This follows from the trading protocol, given that for a price sufficiently close to $\bar P$ (i.e. with a deviation less than c), only the first group of agents participates in the trading, and that the expectations are uniformly distributed in the first group. On the other hand, if $x < \bar P - c$ (or $x > \bar P + c$), the second group also participates in the trading as buyers (sellers). Then, the difference in the number of buyers and sellers is
$$\Delta(x) = (1 - 2x)N_1 + N_2\left[\eta_c(x, \bar P) - \theta_c(x, \bar P)\right], \qquad (7)$$
and, therefore, $P^*$ becomes the price at which the sign of $\Delta(x)$ changes (or the average of these values, if more than one exists). We can then easily prove that
$$\Delta_- = \lim_{x \to (\bar P - c)^-} \Delta(x) = (1 - 2\bar P + 2c)N_1 + N_2 > 0. \qquad (8)$$
Finally, using the assumption $N_2 \gg N_1$, we get $\Delta_- > 0 > \Delta(x)$ for all $x > \bar P - c$. This implies that $P^* = \bar P - c$.
This proposition has the following corollary.

Corollary 2. Suppose the assumptions of Proposition 1 hold. If a third group of agents (of arbitrary size) with price expectation $\tilde P$, such that $|\tilde P - \bar P| < c$, joins the market, then $P^*$ does not change.
We can now analyze the market dynamics under the assumptions α = 1 and γ < 1 with γN large.
During the first period, $N_1 = (1-\gamma)N$ agents do not revise their expectations. These expectations are uniformly distributed in [0, 1]. Contrarily, $N_2 = \gamma N$ agents revise their expectations, which now become the IPO price $P_0$ (i.e. $P^{e,i}_1 = P_0$). Proposition 1 ensures that $P^*_1 = P_0 - c$. Moreover, the size of the market disequilibrium is small: using the definition, we get $\beta(P_0) = (2P_0 - 1)(1-\gamma)$. Finally, the end-of-period-1 price $P_1$ will be
$$P_1 = \beta(P_0)(P_0 - c) + (1 - \beta(P_0))P_0 = P_0 - \beta(P_0)\,c. \qquad (9)$$
Note that this price is c-close to $P_0$, given that $\beta(P_0)$ is small.

During the next period, $\gamma N$ agents revise their expectations, while $(1-\gamma)N$ agents do not. Then, on average, the second group contains $N_2 = \gamma(1-\gamma)N$ agents, for which $P^{e,i}_2 = P^{e,i}_1 = P_0$; the first group contains $N_1 = (1-\gamma)^2 N$ agents (who do not revise their initial expectations); and, moreover, there exists a third group of agents, of size $N_3 = \gamma(2-\gamma)N$, for whom $P^{e,i}_2 = P_1 = P_0 - \beta(P_0)c$. We can then apply Corollary 2 and conclude that $P^*_2 = P_0 - c$. Computing the next-period market disequilibrium $\beta(P_1)$, we can easily observe that $\beta(P_1) \sim (1-\gamma)^2$. Therefore, the next-period price $P_2$ will be
$$P_2 = \beta(P_1)(P_0 - c) + (1 - \beta(P_1))P_1 \sim P_0 - \beta(P_0)c + O\big((1-\gamma)^2\big) \sim P_0 - c(1-\gamma) + O\big((1-\gamma)^2\big). \qquad (10)$$
Thus, the market price varies as long as there exist agents that have not yet revised their initial expectations. However, the market price does not move too far from $P_0$. Assuming the extreme-case scenario where in every period the same $(1-\gamma)N$ agents happen to be the ones that do not revise their expectations, the number of periods that pass before the market price converges to its steady-state value equals $-\log N / \log(1-\gamma)$. Numerical simulations presented in Figure 1 confirm our theoretical findings.
[Insert Figure 1 about here]
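As a quick back-of-the-envelope check of the two bounds just derived (the drift bound $c(1-\gamma)$ and the worst-case convergence time $-\log N / \log(1-\gamma)$), one can evaluate them directly; the parameter values below simply mirror those used in the simulations:

```python
import math

N, c = 1000, 0.005
for gamma in (0.2, 0.5, 0.8):
    t_conv = -math.log(N) / math.log(1.0 - gamma)  # worst-case periods
    drift = c * (1.0 - gamma)                      # bound on |P_inf - P_0|
    print(f"gamma={gamma}: <= {t_conv:.1f} periods, drift < {drift:.4f}")
```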
Social interactions
Next, consider the setting where the agents' expectations have no adaptive component (i.e. α = 0), and the agents that revise their expectations rely on the social interactions with other agents. Then, the relevant parameter is the extent of the confirmatory bias (σ) that the agents have. We derive analytical results for the cases of the extreme form of confirmatory bias (σ ≪ 1) and of virtually no bias (σ ∼ 1).
Social interactions: large confirmatory bias
Consider the extreme form of confirmatory bias, i.e. whenever two agents meet, neither of them adjusts her price expectation, no matter how close their past-period expectations are. We will prove that the market is fully informationally efficient in the long run, but convergence to this efficient outcome takes an arbitrarily long time.
In every period, $\gamma N$ agents engage in social interactions (without influencing each other's price expectations). This implies that no agent revises her price expectation. Therefore, the mean price expectation (which we denote with $P^e$) does not change either; namely, from (4), $P^e_{t+1} = P^e_t$. Under the assumption that the initial price expectations are uniformly distributed, we obtain $P^*_{t+1} = P^e_{t+1} = 1/2$. The market disequilibrium is thus given by
$$\beta(P_t) = |1 - 2P_t|. \qquad (11)$$
Therefore, the market price evolves according to the equation
$$P_{t+1} = \frac{|1 - 2P_t|}{2} + (1 - |1 - 2P_t|)\, P_t = P_t + \frac{|1 - 2P_t|(1 - 2P_t)}{2}. \qquad (12)$$
Let us define the mapping
$$f(P) = \begin{cases} P - \dfrac{(2P-1)^2}{2} & \text{if } P \geq 1/2 \\[4pt] P + \dfrac{(2P-1)^2}{2} & \text{if } P < 1/2. \end{cases} \qquad (13)$$
The evolution of the market price is determined by the dynamic system
$$P_{t+1} = f(P_t). \qquad (14)$$
This mapping has a unique fixed point at P = 1/2. Moreover, this is an attractor whose strength decreases the closer we are to the fixed point: $P_t - 1/2 \sim a/t$.
Finally, note that if $P_0 = 1/2$, then (12) implies that $P_t = 1/2$ for all t.
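The approach to the fixed point can be checked by iterating the map directly; the snippet below (an illustration, with an arbitrary starting price of 0.9) prints $(P_t - 1/2)\,t$, which levels off at a constant, confirming the $P_t - 1/2 \sim a/t$ behaviour:

```python
def f(p):
    """The map of Eq. (13) governing the price under extreme bias."""
    d = (2.0 * p - 1.0) ** 2 / 2.0
    return p - d if p >= 0.5 else p + d

p = 0.9
for t in range(1, 10001):
    p = f(p)
    if t % 2000 == 0:
        print(t, (p - 0.5) * t)   # levels off at the constant a
```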
[Insert Figure 2 about here]
Social interactions: small confirmatory bias
Let us now consider the opposite extreme, i.e. an agent who updates her expectation using that of her partner in social interactions even when the two expectations diverge radically. Assume for the moment that all agents revise their expectations in every period (γ = 1). We will prove that in this case the market is fully informationally efficient in the long run, and that convergence to this efficient outcome occurs within a finite number of periods (which essentially depends on the transaction cost c).
Given that the expectation-revision rule (4) with α = 0 and γ = 1 preserves the mean price expectation, we trivially get $P^e_{t+1} = P^e_t = 1/2$. This follows from the fact that the initial price expectations are uniformly distributed in [0, 1], hence with average value 1/2, which also equals the hypothetical Walrasian-auctioneer price $P^*_t = 1/2$. Moreover, equation (4) implies that the price expectations follow the Deffuant dynamic (Deffuant et al. 2000, Weisbuch et al. 2002). In other words, the dispersion of price expectations, denoted with $\Delta_{P^e}$, shrinks to zero according to $\Delta_{P^e}(t) \sim 2^{-t/2}$ (see the left panel of Figure 3). Because the transaction cost is positive, the market activity stops once all the expectations fall inside an interval whose width is smaller than 2c. This happens after a time $T \sim -2(1 + \log_2 c)$.
Let us now assume that the price expectations have a large enough dispersion, so that the market activity does not yet stop. Then, we can easily compute the market disequilibrium:
$$\beta(P_t) = 2^{t/2}\,|1 - 2P_t|,$$
which implies that the next-period price is given by:
$$P_{t+1} = P_t + 2^{t/2 - 1}\,|1 - 2P_t|(1 - 2P_t). \qquad (15)$$
Let us introduce the auxiliary variable x, defined as $P_t - 1/2 = x_t/2^{t/2}$. This allows us to fully describe the market price evolution with the dynamic system given by the function g(x):
$$g(x) = \begin{cases} \sqrt{2}\,x - 2\sqrt{2}\,x^2 & \text{if } x \geq 0 \\ \sqrt{2}\,x + 2\sqrt{2}\,x^2 & \text{if } x < 0. \end{cases} \qquad (16)$$
This mapping has three fixed points: x = 0 (unstable) and $x = \pm(2 - \sqrt{2})/4$, which are stable.
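These fixed points are easy to verify numerically; the short sketch below checks that $x^* = (2-\sqrt{2})/4$ is invariant under the map and that a small perturbation away from the unstable point x = 0 is driven to $\pm x^*$ (the starting value 0.01 is an arbitrary choice of ours):

```python
import math

def g(x):
    """The rescaled map of Eq. (16)."""
    s = math.sqrt(2.0)
    return s * x - 2.0 * s * x * x if x >= 0 else s * x + 2.0 * s * x * x

x_star = (2.0 - math.sqrt(2.0)) / 4.0
print("fixed-point residual:", abs(g(x_star) - x_star))   # ~ 0
x = 0.01                      # small perturbation away from x = 0
for _ in range(50):
    x = g(x)
print("iterate vs stable fixed point:", x, x_star)
```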
We can thus conclude that, as long as the market activity continues, the market price converges to 1/2, given that in all cases $P_t = x_t/2^{t/2} + 1/2 \to 1/2$. These findings are supported by numerical simulations, whose results we report in Figure 4.
[Insert Figure 4 about here]
In a similar fashion, we can study the case γ < 1. In this case, in every period $\gamma N$ agents revise their expectations and, because they have a very small confirmatory bias (i.e. σ ∼ 1), they influence each other's expectations. Then, overall, there is a tendency for the expectations to converge (because of the process driven by Eq. (4) with α = 0). We should keep in mind, however, that in every period $(1-\gamma)N$ agents do not revise their expectations. Zero weight given to the past prices in forming the next-period expectation (α = 0) implies that the mean price expectation does not change, $P^e_{t+1} = P^e_t$. Hence $P^e_t = 1/2$ and $P^*_t = 1/2$ for all t. Moreover, the expectations are distributed in an interval whose width (denoted with $\Delta_{P^e}(t)$) shrinks to zero, but more slowly than in the case γ = 1. The simulations presented in the right panel of Figure 3 allow us to see that this narrowing of the expectation dispersion follows approximately the law $\Delta_{P^e}(t) \sim (1/2)^{q_\gamma t}$, where $q_\gamma = a\gamma + b$, with $a = 0.61 \pm 0.02$ and $b = -0.13 \pm 0.01$, independent of σ.
Thus, the price disequilibrium can be estimated as:
$$\beta(P_t) \sim 2^{q_\gamma t}\,|1 - 2P_t|,$$
which implies the following price dynamics:
$$P_{t+1} = P_t + 2^{q_\gamma t - 1}\,|1 - 2P_t|(1 - 2P_t).$$
Introducing a new variable $y_t$ such that $P_t = 1/2 + y_t/2^{q_\gamma t}$, we obtain the following difference equation for the evolution of $y_t$: $y_{t+1} = 2^{q_\gamma} y_t - 2^{q_\gamma + 1}|y_t|\,y_t$. This mapping has three fixed points: y = 0 (unstable) and $y = \pm(1 - 2^{q_\gamma})/2^{q_\gamma + 1}$ (stable).
We can finally conclude that, similar to the results above, the market price converges to 1/2 as long as the market runs. The market activity stops once all the expectations fall inside an interval whose width is smaller than 2c. This happens after a time $T \sim -(1 + \log_2 c)/q_\gamma$.
Simulation results
When the price expectations of agents have both the adaptive component and confirmatory bias, obtaining analytical results is beyond reach. We thus proceed by running numerical simulations. In what follows, we vary the values of α and σ from 0 to 1, in steps of 0.01. For each pair of values (α, σ) the market is simulated 10 times. The cost of a trading transaction is fixed at c = 0.005. Each simulation runs for up to 100 steps, this being a time interval large enough for the market price to converge to the steady state. Note that in the simulations we define the steady state as the situation in which the market prices in periods t and t + 1 differ by a value smaller than 0.0001. We then look at the degree of market informational inefficiency in the long run, i.e. how far the market price diverges from the fundamental value of the asset (averaging across the 10 simulations). We also look at the average volatility of the market price, as measured by the standard deviation of the market price in the last 90% of the steps of the simulation.
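A sketch of these measurement conventions is given below; it operates on a recorded price path, with `prices` standing in for the output of a full simulation (one that would combine the revision and trading steps sketched in Section 2):

```python
import numpy as np

def measure(prices, tol=1e-4):
    """Steady-state test, inefficiency, and volatility of a price path."""
    prices = np.asarray(prices)
    at_steady_state = abs(prices[-1] - prices[-2]) < tol
    inefficiency = abs(prices[-1] - 0.5)      # distance from fundamental value
    tail = prices[int(0.1 * len(prices)):]    # last 90% of the steps
    volatility = tail.std()
    return inefficiency, volatility, at_steady_state
```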
The agents have a relatively low level of wealth. Remember that if the outcomes of the trading strategy of a trader lead to losses that, accumulated over several periods, exhaust her wealth, she quits the market and is replaced by another trader with a randomly drawn initial price expectation. Given that traders have a relatively low level of wealth, a certain number of them will quit the market and this implies that the turnover rate of traders is relatively high. This means that some amount of noise gets continuously injected into the market.
[Insert Figure 5 about here]

Figure 5 (Panels A, B, and C) reports the informational inefficiency of the market (as measured by the divergence of the steady-state market price from the fundamental value of the asset) for the cases in which the fraction of agents that revise their expectations in every period is γ = 0.2, 0.5, and 0.8, respectively. The market inefficiency is a function of the weight of the adaptive component in the price expectations of traders (α) and of the degree of confirmatory bias (σ). Colors closer to dark blue indicate a lower level of market inefficiency, while those closer to dark red indicate higher inefficiency. Figure 6 describes the volatility of the market price, while Figure 7 shows the average number of traders that exit the market as their wealth hits the zero bound.
[Insert Figures 6 and 7 about here]
Analyzing these figures, we obtain the following findings.
Fixing the value of σ, as we move from the extreme-left point (α = 0) to the right, the average deviation of the long-run market price from the fundamental value (0.5) first decreases and then increases, at least for some values of σ. In other words:

Proposition 3. Market inefficiency can be non-monotonic in the weight of the adaptive component (α) in the price expectations of agents.

In all three figures, we find that for very high values of α, the degree of the informational inefficiency of the market is very high. Clearly, when traders put a large weight on the past price in forming expectations, the initial price becomes very important. When receiving information which indicates that the value of the asset is low (even in the absence of confirmatory bias), traders tend to give little weight to it; basically, all that matters is the past price. In this case, the initial price strongly influences the aggregate expectation formation process (the expectations of all agents quickly converge upwards to some point between the initial price and the fundamental value) and, given that in our case the initial price strongly differs from the fundamental value, the long-run market price stays largely above the fundamental value.
Consider now the situation with the most extreme form of confirmatory bias, i.e. all traders completely ignore the information that comes from others. As α declines, the traders give less weight to the past prices and more weight to their own expectations of the previous period. Therefore, the agents whose initial expectations are very low do not move their next-period expectations upwards too much. At the same time, the market price keeps falling, driven by the Walrasian auctioneer (which also implies the downward move in the expectations of the agents whose initial expectations are high). These two inter-related processes (the upward drift of the price expectations of initially low-expectation agents and the downward pressure on the market price) make the market price converge to some value relatively close to the fundamental one.
As α declines further, we observe that the market inefficiency rises again. This is due to the fact that for the lower values of α, the first process (upward move in expectations of the initially low-expectation agents) becomes slower than the second one (i.e. downward move in the market price). Thus, the low-expectation agents keep making negative profits, eventually hit the zero-wealth bound, and exit (we can note this by looking at Figure 7: the number of agents that exit the market increases at the bottom-left part of the figure). There is a sufficiently high exit rate of these agents from the market so as to soften the downward move in the market price, which means that the long-run price at which the system settles down is higher than in the situation in which the exit of traders is negligible.
As the degree of confirmatory bias of agents becomes smaller (i.e. the value of σ increases), the channel that leads to the exit of low-expectation traders softens down, as there is now an additional mechanism that creates an upward pressure on the expectations of those traders: the integration of information that comes from their peers. Notice (in Figure 7) that the exit rate is lower at the higher values of σ.
Furthermore, comparing across the three panels of Figure 5, one notes that as the fraction of agents that revise their expectations in each period (γ) increases, the areas of σ in which the non-monotonicity in α occurs become smaller:

Proposition 4. The tendency of the market inefficiency to be non-monotonic in α is stronger, the lower is the fraction of agents that revise their price expectations in each period.
This happens because the higher frequency of expectation revision (larger γ) and the likelihood to integrate the information coming from other traders (larger σ) act in a complementary fashion: if the rate of revision of price expectations is relatively low, the "openness of mind" (i.e. low degree of confirmatory bias) has a relatively small effect on the mitigation of the exit channel. It is only when the agents revise their expectations relatively frequently, that the "openness of mind" starts to have a real bite, and the upward-sloping part on the left side of the relation between market inefficiency and α starts to disappear.
Let's now fix the value of α = 0.25 on Figure 5A, α = 0.1 on Figure 5B, and α = 0.05 on Figure 5C. As we move from the point at the bottom (σ = 0) upwards, the average deviation of the steady-state market price from the fundamental value (0.5) first decreases and then increases. In other words:

Proposition 5. Market inefficiency can be non-monotonic in the degree of confirmatory bias of agents.

The first part of the non-monotonic relationship is easy to explain: as an agent suffers less from confirmatory bias, she starts to integrate at least some of the information about the fundamentals contained in the price expectations of another trader (incidentally, this phenomenon occurs only when the adaptive component in the price expectation is relatively small). But why would the market inefficiency rise as the agents become even more "open-minded"? To understand this, we need to note that this phenomenon occurs only when the adaptive component is not too small. Then, the fact that the initial price differs substantially from the fundamental value plays a role. The agents have an early-stage upward drift in expectations. At the same time, the market price starts to fall. If the agents are very "open-minded", this implies that they 'excessively' integrate the early upward price drift into their expectations, which, in turn, implies that the price at which the market settles in the long run is relatively high. If, instead, the agents' confirmatory bias is stronger, the decline in the market price is faster than the 'propagation' of the upward-drifting expectations: this is why the market settles at a price relatively close to the fundamental value.
This analysis suggests a very interesting and potentially more general insight: when market participants suffer from more than one deviation from fully rational behavior (in our case, adaptive expectations and confirmatory bias), at least in some range of bias parameters, the two biases mitigate each other. Given our analysis, it should not be difficult to construct examples of asset markets with traders that have multiple sources of biases, that exhibit the same price behavior as under full rationality, and in which the price behavior would deviate from the full-rationality benchmark as soon as one of the bias sources is eliminated.
Next, looking across the three panels of Figure 5, we also note that the values of α at which we have found the non-monotonicity of the market inefficiency decrease at higher values of γ. In other words:

Proposition 6. The weight of the adaptive component in the expectations (α) at which the non-monotonicity of the market inefficiency in the degree of confirmatory bias occurs decreases with the fraction of agents that revise their expectations in every period.
To capture the intuition behind this result, we need to conduct the following thought experiment. Let's fix a point with sufficiently high values of α and σ, for example (0.4, 0.4).
Next, let's increase the frequency of revision of expectations by the agents, from γ = 0.2 to γ = 0.8. We then observe that the market inefficiency increases. This indicates the importance of the frequency with which agents revise their price expectations for the propagation of the 'excessive' integration of the upward drift into the expectations, as noted above. In other words, at higher frequency of expectation revision, this 'excessive' integration of the upward drift channel swamps the opposite (i.e. the quantity-of-information) channel more easily.
The quantity-of-information channel starts to play a role only when the early-stage upward drift is sufficiently small (i.e., history weighs relatively little in the expectation formation).
If we measure the degree of market inefficiency, while varying σ along a fixed α, in ranges different from those where the non-monotonicity occurs (for example, α = 0.1 and α = 0.3 on Figure 5A), we see that at the higher values of α the relationship between the degree of market inefficiency and σ is negative, while at the lower values of α, this relationship is positive. In other words:

Proposition 7. The slope of the relationship of market inefficiency in the degree of confirmatory bias (σ) can be of opposite sign at different values of the weight of the adaptive component (α).

The above discussion has already hinted at the potential explanation why this reversal of the relationship occurs. At sufficiently high values of α, the early-stage upward drift is very important and the smaller confirmatory bias of agents only helps to propagate this drift into the price expectations. At sufficiently low values of α, the early-stage upward drift matters much less and the smaller confirmatory bias becomes beneficial for the informational efficiency of the market, because it helps to integrate more of the relatively unbiased information into the expectations. In other words, in both cases the smaller confirmatory bias (i.e. higher σ) plays the role of the catalyzer; what differs in the two cases is the initial unbiasedness of expectations.
Conclusion
This paper has studied the informational efficiency of an agent-based financial market with a single traded asset. The price initially differs from the fundamental value, about which the agents have noisy private information (which is, on average, correct). A fraction of traders revise their price expectations in each period. The price at which the asset is traded is public information. The agents' expectations have an adaptive component (i.e. the past price influences their future price expectation to some extent) and a social-interactions component with confirmatory bias (i.e. agents exchange information with their peers and tend to discard the information that differs too much from their priors).
We find that the degree of informational inefficiency of the market (measured as the deviation of the long-run market price from the fundamental value of the asset) can be non-monotonic both in the weight of the adaptive component and in the degree of the confirmatory bias. For some ranges of parameters, two biases tend to mitigate each other's effect, thus increasing the informational efficiency.
Our findings complement the well-known results in the theory of markets showing that allocative efficiency can be obtained even under substantial deviations from the individual rationality of agents (Gode and Sunder 1993, 1997). We show that deviations from individual rationality can, under certain conditions, also facilitate the informational efficiency of markets. The key condition for this property is that the various behavioral biases that agents possess should mutually dampen their effects on the price dynamics.
Given the potential importance of this insight for financial economics, the natural extension of this work is to test its predictions experimentally. This would require constructing experimental financial markets with human traders, similar to the setting of Haruvy et al. (2007), with the additional feature of allowing agents to share their information (in some restricted form). The outcomes of interest in such an experiment would be both the evolution of the market price of the asset and the elicited price expectations of traders.
Figure 1: Purely adaptive expectations (α = 1). Left panel: semilog plot of the market inefficiency as a function of γ. Right panel: time to convergence to the steady state as a function of γ. There are N = 1000 agents and the transaction cost is c = 0.005. Each point represents the average over 50 simulations; points: numerical simulations, solid line: analytical results.

Figure 2: Social interaction without adaptive component (α = 0 and σ ∼ 0). Left panel: semilog plot of the difference between the simulated and analytical market inefficiency, with $\sigma = 5.0 \cdot 10^{-4}$ and $P_0 = 0.6$ or 0.8. Right panel: difference between the simulated and analytical market inefficiency as a function of σ. There are N = 1000 agents and the transaction cost is c = 0.005.

Figure 3: Evolution of $[\log_2 \Delta_{P^e}(t)]/t$ as a function of σ (left panel) and as a function of γ (right panel). There are N = 1000 agents and the transaction cost is c = 0.005.

Figure 4: Social interaction without adaptive component (α = 0 and σ ∼ 1). Left panel: difference between the simulated and analytical market inefficiency as a function of σ. Right panel: ratio of the simulated time to convergence over the analytical one, as a function of σ. There are N = 1000 agents and the transaction cost is c = 0.005.

Figure 5: The degree of market inefficiency as a function of the adaptive component (α) and the confirmatory bias (σ). Panel A: γ = 0.2; Panel B: γ = 0.5; Panel C: γ = 0.8. The initial price is 0.9.

Figure 6: Volatility of the market price (calculated over the last 90% of the time span) as a function of the adaptive component (α) and the confirmatory bias (σ). Panel A: γ = 0.2; Panel B: γ = 0.5; Panel C: γ = 0.8. The initial price is 0.9.

Figure 7: Number of agents exiting the market as a function of the adaptive component (α) and the confirmatory bias (σ). Panel A: γ = 0.2; Panel B: γ = 0.5; Panel C: γ = 0.8. The initial price is 0.9.
We avoid the shortcoming of assuming a constant $\beta(P_t)$. As discussed by LeBaron (2001), if $\beta(P_t)$ is assumed to be constant, the behavior of the simulated market is extremely sensitive to the value of β, which makes it difficult to interpret the results.
Blouin, M., and Serrano, R. 2001. A decentralized market with common values uncertainty: Non-steady states. Review of Economic Studies 68: 323-346.
Deffuant, G., Neau, D., Amblard, F., and Weisbuch, G. 2000. Mixing beliefs among interacting agents. Advances in Complex Systems 3: 87-98.
Duffie, D., and Manso, G. 2007. Information percolation in large markets. American Economic Review 97: 203-209.
Gode, D., and Sunder, S. 1993. Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality. Journal of Political Economy 101: 119-137.
Gode, D., and Sunder, S. 1997. What makes markets allocationally efficient? Quarterly Journal of Economics 112: 603-630.
Grossman, S. 1976. On the efficiency of competitive stock markets when traders have diverse information. Journal of Finance 31: 573-585.
Haruvy, E., Lahav, Y., and Noussair, C. 2007. Traders' expectations in asset markets: Experimental evidence. American Economic Review 97: 1901-1920.
Hayek, F. 1945. The uses of knowledge in society. American Economic Review 35: 519-530.
LeBaron, B. 2001. A builder's guide to agent-based financial markets. Quantitative Finance 1: 254-261.
Milgrom, P. 1981. Rational expectations, information acquisition, and competitive bidding. Econometrica 49: 921-943.
Rabin, M., and Schrag, J. 1999. First impressions matter: A model of confirmatory bias. Quarterly Journal of Economics 114: 37-82.
Weisbuch, G., Deffuant, G., Amblard, F., and Nadal, J. 2002. Meet, discuss and segregate! Complexity 7: 55-63.
Wilson, R. 1977. Incentive efficiency of double auctions. Review of Economic Studies 44: 511-518.
Wolinsky, A. 1990. Information revelation in a market with pairwise meetings. Econometrica 58: 1-23.
| [] |
[
"Charged Q-balls in gauge mediated SUSY breaking models",
"Charged Q-balls in gauge mediated SUSY breaking models",
"Charged Q-balls in gauge mediated SUSY breaking models",
"Charged Q-balls in gauge mediated SUSY breaking models"
] | [
"Jeong-Pyong Hong \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n",
"Masahiro Kawasaki \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n\nKavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n",
"Masaki Yamada \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n\nKavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n",
"Jeong-Pyong Hong \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n",
"Masahiro Kawasaki \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n\nKavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n",
"Masaki Yamada \nInstitute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan\n\nKavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n"
] | [
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Kavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan",
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Kavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan",
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Kavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan",
"Institute for Cosmic Ray Research\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8582KashiwaChibaJapan",
"Kavli IPMU (WPI)\nUTIAS\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan"
] | [] | It is known that after Affleck-Dine baryogenesis, spatial inhomogeneities of Affleck-Dine field grow into non-topological solitons called Q-balls. In gauge mediated SUSY breaking models, sufficiently large Q-balls with baryon charge are stable while Q-balls with lepton charge can always decay into leptons. For a Q-ball that carries nonzero B and L charges, the difference between the baryonic component and the leptonic component in decay rate may induce nonzero electric charge on the Q-ball. This implies that charged Q-ball, also called gauged Q-ball, may emerge in our universe. In this paper, we investigate two complex scalar fields, a baryonic scalar field and a leptonic one, in an Abelian gauge theory. We find stable solutions of gauged Q-balls for different baryon and lepton charges. Those solutions shows that a Coulomb potential arises and the Q-ball becomes electrically charged as expected. It is energetically favored that some amount of leptonic component decays, but there is an upper bound on its amount due to the Coulomb force. The baryonic decay also becomes possible by virtue of electrical repulsion and we find the condition to suppress it so that the charged Q-balls can survive in the universe. | 10.1103/physrevd.92.063521 | [
"https://arxiv.org/pdf/1505.02594v1.pdf"
] | 119,290,712 | 1505.02594 | ac6e5d695f5f27304849f5554b119d79b0a6803e |
Charged Q-balls in gauge mediated SUSY breaking models
11 May 2015
Jeong-Pyong Hong
Institute for Cosmic Ray Research
The University of Tokyo
5-1-5 Kashiwanoha277-8582KashiwaChibaJapan
Masahiro Kawasaki
Institute for Cosmic Ray Research
The University of Tokyo
5-1-5 Kashiwanoha277-8582KashiwaChibaJapan
Kavli IPMU (WPI)
UTIAS
The University of Tokyo
5-1-5 Kashiwanoha277-8583KashiwaJapan
Masaki Yamada
Institute for Cosmic Ray Research
The University of Tokyo
5-1-5 Kashiwanoha277-8582KashiwaChibaJapan
Kavli IPMU (WPI)
UTIAS
The University of Tokyo
5-1-5 Kashiwanoha277-8583KashiwaJapan
It is known that after Affleck-Dine baryogenesis, spatial inhomogeneities of Affleck-Dine field grow into non-topological solitons called Q-balls. In gauge mediated SUSY breaking models, sufficiently large Q-balls with baryon charge are stable while Q-balls with lepton charge can always decay into leptons. For a Q-ball that carries nonzero B and L charges, the difference between the baryonic component and the leptonic component in decay rate may induce nonzero electric charge on the Q-ball. This implies that charged Q-ball, also called gauged Q-ball, may emerge in our universe. In this paper, we investigate two complex scalar fields, a baryonic scalar field and a leptonic one, in an Abelian gauge theory. We find stable solutions of gauged Q-balls for different baryon and lepton charges. Those solutions shows that a Coulomb potential arises and the Q-ball becomes electrically charged as expected. It is energetically favored that some amount of leptonic component decays, but there is an upper bound on its amount due to the Coulomb force. The baryonic decay also becomes possible by virtue of electrical repulsion and we find the condition to suppress it so that the charged Q-balls can survive in the universe.
Introduction
The Affleck-Dine mechanism [1] is a promising candidate for baryogenesis in supersymmetric (SUSY) theories due to its consistency with the observational bound on the reheating temperature which avoids the gravitino problem [2]. In the Affleck-Dine mechanism, the baryon asymmetry is generated through the dynamics in the phase direction of a complex scalar field which carries nonzero baryon charge [3]. The scalar field is called the Affleck-Dine field. The Affleck-Dine field is spatially homogeneous when it starts to oscillate, but spatial inhomogeneities due to quantum fluctuations grow exponentially into non-topological solitons, which are called Q-balls [4,5,6]. A Q-ball is a spherical condensate of a scalar field and is defined as a solution in a global U(1) theory which minimizes the energy of the system with its charge fixed [7]. A Q-ball is known to decay into quarks or leptons, so that the final baryon number is carried by quarks produced through the decay of Q-balls. However, in gauge mediated SUSY breaking models, a baryonic Q-ball with sufficiently large charge can be stable against decay into nuclei when the energy per unit charge of the Q-ball is smaller than the proton mass [8]. On the other hand, for a leptonic Q-ball, there exist decay channels into leptons. In this case, the baryon number of the universe is generated from the lepton asymmetry through the sphaleron effect, i.e. leptogenesis.
We focus on Q-balls that carry both baryon and lepton charges. In fact, such Q-balls can be formed when we consider Affleck-Dine baryogenesis with the $u^c u^c d^c e^c$ flat direction, for instance. In this case, it is possible that the lepton component can decay into leptons while the baryon component cannot decay into baryons. This implies that the difference in decay rate between the baryonic component and the leptonic component may induce an electric charge. Therefore, through the decay of the leptonic components only, an electric charge is induced even if a neutral Q-ball was formed at the beginning. The electric charge is expected to make a difference in the experimental signatures of the relic Q-balls. For instance, neutral Q-balls can be detected by Super-Kamiokande [9,10,11], which probes the absorption of protons, but this detector is not suited for the detection of charged Q-balls, since charged Q-balls cannot absorb protons due to the electrical repulsion. Charged Q-balls are likely to behave as some kind of nuclei and are known to be detectable by such detectors as MACRO [9,12], and observational bounds on the mass and flux of relic charged Q-balls have been obtained [10].
In fact, the electrically charged Q-ball has been studied in the literature and is called a gauged Q-ball [13]. However, the previous works studied gauged Q-balls in one-scalar-field theories, so their results cannot be applied to Q-balls generated after Affleck-Dine baryogenesis. There are previous works which also discussed the evolution of Q-balls in a two-scalar model [14,15], in which the gauge field was neglected. However, in the course of the decay, it is expected that an electrical repulsion arises, so that the gauge field must be taken into account.
In this paper, we consider a simplified model where there are two complex scalar fields and U(1) gauge field. One of the complex fields carries baryon charge and positive electric charge, while the other one carries lepton charge and negative electric charge. Our main purpose is to demonstrate that gauged Q-balls may be realized in our universe through the decay process of the lepton component, even if Q-balls are initially neutral. We find stable solutions for different baryon and lepton charges, taking into account the effect of the gauge field. If we suppose that only the leptonic component decays off, a sequence of solutions with B = const. represents the decay process. We examine quantitatively whether such a process is energetically allowed.
In the following sections, we first review the main properties of global Q-balls and gauged Q-balls, i.e. neutral and charged Q-balls, respectively. Subsequently, in order to discuss the evolution of a Q-ball which is formed from the flat direction, we consider a two-scalar model and find sequences of gauged Q-ball solutions with B = const. We calculate the total energy of Q-balls and gauge fields and examine whether the leptonic decay is energetically favorable. Finally, we give our conclusions in Sec. 5.
Global Q-ball
In this section, we briefly review the main properties of a global Q-ball, which is a stable configuration of a complex scalar field with a fixed conserved charge. Consider a theory of a complex scalar field with a global U(1) charge. The Lagrangian is written as
$$\mathcal{L} = \partial_\mu \Phi^* \partial^\mu \Phi - V(\Phi) \qquad (1)$$
where V (Φ) is a scalar potential and is normalized so that V (0) = 0. Let us renormalize the field for later convenience as
$$\Phi \equiv \frac{1}{\sqrt{2}}\,\phi. \qquad (2)$$
The global U(1) charge density q is given by
$$q = \frac{1}{2i}\left(\phi^* \dot\phi - \phi \dot\phi^*\right). \qquad (3)$$
A Q-ball is defined as a solution which minimizes the energy of the system with its charge fixed. Using the Lagrange multiplier method, we only need to minimize the following function:
$$E_\omega \equiv E + \omega\left(Q - \frac{1}{2i}\int d^3x\,(\phi^* \dot\phi - \phi \dot\phi^*)\right) \qquad (4)$$
where E is given by
$$E = \int d^3x\left[\frac{1}{2}\left(|\dot\phi|^2 + |\nabla\phi|^2\right) + V(\phi)\right]. \qquad (5)$$
Equation (4) is rewritten as
$$E_\omega = \int d^3x\,\frac{1}{2}\,|\partial_t\phi - i\omega\phi|^2 + \int d^3x\left[\frac{1}{2}|\nabla\phi|^2 + V_\omega(\phi)\right] + \omega Q, \qquad (6)$$
$$V_\omega(\phi) \equiv V(\phi) - \frac{1}{2}\omega^2 |\phi|^2. \qquad (7)$$
By minimizing the first term of Eq. (6), we can derive the time dependence of the solution as
$$\phi(x, t) = e^{i\omega t}\phi(x). \qquad (8)$$
Moreover, it is known that the solution which minimizes the second term of Eq. (6) is real and spherically symmetric [16]. Therefore, the radial profile φ(r) is a solution of the following equation:
$$\frac{d^2\phi}{dr^2} + \frac{2}{r}\frac{d\phi}{dr} + \omega^2\phi(r) - \frac{\partial V(\phi)}{\partial\phi} = 0. \qquad (9)$$
Here, in order to avoid a singularity at r = 0 and to find a spatially localized solution, we set the boundary conditions as
$$\frac{d\phi}{dr}(0) = 0, \qquad \phi(\infty) = 0. \qquad (10)$$
Now let us find the condition that there exists a solution of Eq. (9). First, we redefine the spatial coordinate and the potential as
$$r \to t, \qquad (11)$$
$$-V_\omega(\phi) = \frac{1}{2}\omega^2\phi^2 - V(\phi). \qquad (12)$$
The equation of motion then becomes
$$\ddot\phi + \frac{2}{t}\dot\phi - \frac{\partial V_\omega}{\partial\phi} = 0, \qquad (13)$$
and we see that it is equivalent to the equation of motion of a classical point particle under the potential $-V_\omega$. The boundary conditions Eq. (10) imply that the classical particle is initially at rest and converges toward the origin asymptotically. In order that this kind of motion is possible, the potential $-V_\omega$ should be in an appropriate shape, as shown in Fig. 1.

[Figure 1: the classical-particle potential $-V_\omega(\phi)$ as a function of φ; three representative shapes are labelled (I), (II), (III).]

Quantitatively, $-V_\omega$ must satisfy the following conditions:
$$\max\left[-V_\omega(\phi)\right] > 0, \qquad (14)$$
$$\left(-V_\omega\right)''(0) < 0. \qquad (15)$$
The first condition is necessary in order for the particle to possess enough potential energy to reach the origin. The second means that the particle must be subjected to a backward force near the origin, and is also necessary in order for the particle to stop at the origin and not roll over it. The conditions above are rewritten as conditions on ω:
$$\omega_0^2 < \omega^2 < V''(0), \qquad (16)$$
$$\omega_0^2 \equiv \min_\phi\left[\frac{2V(\phi)}{\phi^2}\right]. \qquad (17)$$
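In practice, the profile φ(r) for a given ω in this window is found by a shooting method: integrate Eq. (9) outward from r ≃ 0 and bisect on the central value φ(0) until the overshooting and undershooting trajectories coincide. The sketch below (in Python) does this for the gauge-mediation-like potential used later in the paper, which in terms of φ reads $V = m^4\ln(1 + \phi^2/2m^2)$; the integration range, tolerances, and classification rule are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0
def dV(phi):                         # V(phi) = m^4 ln(1 + phi^2/(2 m^2))
    return m**2 * phi / (1.0 + phi**2 / (2.0 * m**2))

def rhs(r, y, omega):                # Eq. (9) as a first-order system
    phi, dphi = y
    return [dphi, -2.0 / r * dphi - omega**2 * phi + dV(phi)]

def overshoots(phi0, omega, r_max=60.0):
    sol = solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0], args=(omega,),
                    max_step=0.05, rtol=1e-8, atol=1e-10)
    return bool(np.any(sol.y[0] < 0.0))   # crossed zero: phi(0) too large

def central_value(omega, lo=0.1, hi=1e4):
    for _ in range(60):                   # bisection in log space
        mid = np.sqrt(lo * hi)
        if overshoots(mid, omega):
            hi = mid
        else:
            lo = mid
    return mid

print("phi(0) for omega = 0.5 m:", central_value(0.5))
```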
Furthermore, it generally holds that
$$\frac{dE}{dQ} = \omega, \qquad (18)$$
for any Q-ball solution, which can be easily shown by taking a variation of energy in the following way:
$$\delta E = \int d^3x\left[\omega\,\delta\omega\,\phi^2 + \omega^2\phi\,\delta\phi - \Delta\phi\,\delta\phi + V'(\phi)\,\delta\phi\right] = \omega\int d^3x\left[\delta\omega\,\phi^2 + 2\omega\phi\,\delta\phi\right] = \omega\,\delta Q, \qquad (19)$$
where we used the equation of motion Eq. (9). Therefore, the condition Eq. (16) is rewritten as
$$\omega_0 < \frac{dE}{dQ} < \sqrt{V''(0)} \equiv m_\Phi. \qquad (20)$$
It is noted that the second inequality indicates that a Q-ball with a charge Q is energetically favorable compared to a Q-ball with a charge Q − 1 and a particle, which is consistent with the definition of Q-ball solution.
Here, let us identify the scalar field φ as a D- and F-flat direction in gauge mediated SUSY breaking models. In this case, the potential of the flat direction is mainly given by its soft mass term. However, the soft mass is suppressed for energy scales larger than the messenger scale and the potential becomes flat.¹ If we approximate the potential as $V = V_0 = \text{const.}$, there exists an analytic solution [8] given by
$$\phi(r) = \begin{cases} \phi_0\,\dfrac{\sin(\omega r)}{\omega r}, & r < R \equiv \pi/\omega \qquad (21) \\[4pt] 0, & r > R \qquad (22) \end{cases}$$
whose energy is then written as
$$E = \frac{4\sqrt{2}\,\pi}{3}\,V_0^{1/4}\,Q^{3/4}. \qquad (23)$$
Then, from dE/dQ = ω, which is proven above,
$$\omega = \sqrt{2}\,\pi\,V_0^{1/4}\,Q^{-1/4}. \qquad (24)$$
¹ The exact form of the potential is derived in Ref. [17].
This indicates that for a large charge, dE/dQ may become small enough. Therefore, for a baryonic Q-ball with a large baryon number, it may hold that
$$\frac{dE}{dB} < m_p, \qquad (25)$$
where $m_p$ ($\simeq 1\,\mathrm{GeV}$) is the proton mass. This means that the Q-ball is stable against the decay into protons. On the other hand, if a Q-ball carries a lepton number, it can decay into leptons. This is the motivation of our assumption in Sec. 4 that the baryonic component of a Q-ball is stable while the leptonic component can decay into leptons.
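To get a feel for the numbers, Eq. (24) can be inverted for the smallest baryon number at which $dE/dB$ drops below $m_p$; the value of $V_0$ in the sketch below is purely illustrative, not one advocated by the paper:

```python
import math

V0 = (1.0e4) ** 4      # flat-potential height in GeV^4 (illustrative value)
m_p = 0.938            # proton mass in GeV

# Eq. (24): dE/dQ = sqrt(2) pi V0^{1/4} Q^{-1/4}; stability needs dE/dB < m_p
Q_min = (math.sqrt(2.0) * math.pi * V0 ** 0.25 / m_p) ** 4
print(f"stable for B > {Q_min:.2e}")
print("check dE/dQ at Q_min:",
      math.sqrt(2.0) * math.pi * V0 ** 0.25 * Q_min ** -0.25)
```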
Gauged Q-ball
In this section, we consider a complex scalar field that is charged under U(1) gauge symmetry. Although this toy model is not motivated in SUSY theories, we investigate it to see the effect of gauge force on Q-balls. In this case, we must solve the equation of motion for the gauge field A µ as well as the complex scalar field. The spatially localized configuration in this theory is called gauged Q-ball. The Lagrangian is written as
$$\mathcal{L} = (D_\mu\Phi)^* D^\mu\Phi - V(\Phi) - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad (26)$$
$$D_\mu \equiv \partial_\mu - ieA_\mu, \qquad (27)$$
where V (Φ) is a scalar potential. We parameterize the scalar field in the same way as in the case of global Q-ball:
$$\Phi \equiv \frac{1}{\sqrt{2}}\phi, \qquad (28)$$
$$\phi(x,t) = e^{i\omega t}\phi(r). \qquad (29)$$
For the gauge field, we adopt the following parameterization [13].
$$A_0 = A_0(r), \qquad (30)$$
$$A_i = 0. \qquad (31)$$
The first indicates that we are searching for a spatially symmetric solution, and the second implies that we are assuming the absence of a magnetic field, which in turn means the absence of an electric current. The equations of motion are then given by
$$\frac{d^2\phi}{dr^2} + \frac{2}{r}\frac{d\phi}{dr} + \phi g^2 - \frac{dV}{d\phi} = 0, \qquad (32)$$
$$\frac{d^2 g}{dr^2} + \frac{2}{r}\frac{dg}{dr} - e^2\phi^2 g = 0, \qquad (33)$$
where we redefine the gauge field to absorb ω as g ≡ ω − eA 0 . Note that g is gauge invariant. We set boundary conditions as
$$\phi(\infty) = 0, \qquad \frac{d\phi}{dr}(0) = 0, \qquad (34)$$
$$g(\infty) = \omega, \qquad \frac{dg}{dr}(0) = 0, \qquad (35)$$
to avoid singularities at r = 0. Note that when φ(r) → 0 as r → ∞, the gauge field asymptotes to a certain constant as r → ∞ by Eq. (33). Thus the boundary condition g(∞) = ω is just a definition of ω. Note that Eq. (33) can be rewritten as
$$\left(r^2 g'\right)' = e^2 r^2 \phi^2 g. \qquad (36)$$
This implies that if g(0) > 0, then g′ becomes positive for r > 0, so that g increases, while in the opposite case g(0) < 0, g decreases. In either case, g² always increases.
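The coupled system (32)-(33) with the boundary conditions (34)-(35) can be attacked numerically as a two-point boundary-value problem. The sketch below sets it up with scipy's collocation solver for the logarithmic potential used later in Fig. 3; the trial ω, the mesh, the initial guess, and whether the solver converges from this crude guess are illustrative assumptions, not the procedure used for the paper's figures.

```python
import numpy as np
from scipy.integrate import solve_bvp

e2, m, omega = 0.002, 1.0, 0.5                         # e^2, mass scale, g(inf) = omega
dV = lambda p: 2.0 * m**2 * p / (1.0 + p**2 / m**2)    # V = m^4 log(1 + phi^2/m^2)

def odes(r, y):
    # y = (phi, phi', g, g'); Eqs. (32)-(33) rewritten as a first-order system
    phi, dphi, g, dg = y
    return np.vstack([dphi, -2.0 * dphi / r - phi * g**2 + dV(phi),
                      dg,   -2.0 * dg / r + e2 * phi**2 * g])

def bcs(ya, yb):
    # phi'(r0) = g'(r0) = 0 near the origin; phi -> 0 and g -> omega at large r
    return np.array([ya[1], ya[3], yb[0], yb[2] - omega])

r = np.linspace(1e-3, 40.0, 400)
guess = np.vstack([5.0 * np.exp(-0.2 * r), -1.0 * np.exp(-0.2 * r),
                   omega * np.ones_like(r), np.zeros_like(r)])
sol = solve_bvp(odes, bcs, r, guess, max_nodes=20000, tol=1e-6)
print("converged:", sol.status == 0, " phi(0) ~", sol.y[0][0])
```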
Here, let us consider an analogy in a similar way as in the previous section (Fig. 2). Eq. (32) is analogous to the equation of motion for a classical particle subjected to the potential $-V_g = -V + \frac{1}{2}g^2\phi^2$. As mentioned above, g² always increases, so that the effective mass term $g^2\phi^2/2$ increases with time. From this we can derive some important properties of gauged Q-balls. Since the effective mass increases, it is energetically possible for the particle to reach the origin even if it moves away from the origin at the beginning, which means that radially non-monotonic solutions exist. We can interpret this kind of solution as the result of the scalar field being pushed outward due to the electrical repulsion. We show both kinds of solutions for the gauge mediation-like model in Fig. 3, where we approximate the potential as $V(\Phi) = m_\Phi^4\ln\left(1 + |\Phi|^2/m_\Phi^2\right)$. Indeed, the non-monotonic solutions arise for charges larger than those of the monotonic ones.
It holds that
$$\frac{dE}{dQ} = \omega, \qquad (37)$$
for a gauged Q-ball as well, whose proof is similar to that for a global Q-ball [18]. The energy of the Q-ball is given by
$$E = \int d^3x\left[\frac{1}{2}(\nabla\phi)^2 + \frac{1}{2}(\nabla A_0)^2 + \frac{1}{2}\phi^2(\omega - eA_0)^2 + V(\phi)\right], \qquad (38)$$
whose variation leads to
$$\begin{aligned}
\delta E &= \int d^3x\left[-\Delta\phi\,\delta\phi - A_0\,\delta\Delta A_0 + \phi\,\delta\phi\,(\omega - eA_0)^2 + (\omega - eA_0)\,\delta(\omega - eA_0)\,\phi^2 + V'\,\delta\phi\right]\\
&= \int d^3x\left[2\phi\,\delta\phi\,(\omega - eA_0)^2 + (\omega - eA_0)\,\delta(\omega - eA_0)\,\phi^2 - A_0\,\delta\Delta A_0\right]\\
&= \int d^3x\left[(\omega - eA_0)\,\delta q - A_0\,\delta\Delta A_0\right]\\
&= \omega\,\delta Q - \int d^3x\left[eA_0\,\delta q + A_0\,\delta\Delta A_0\right]\\
&= \omega\,\delta Q. \qquad (39)
\end{aligned}$$
Here we used Eqs. (32) and (33), and the charge of the Q-ball is given by
$$Q \equiv \int q\,d^3x = \int d^3x\,(\omega - eA_0)\,\phi^2. \qquad (40,\ 41)$$
The charge of a gauged Q-ball has an upper limit, above which there is no localized solution. This is in contrast to the case of global Q-balls, where Q-ball solutions exist for arbitrarily large Q. The blue circles in Fig. 4 are the upper limits on the charge, and the results can be fitted as
$$\log_{10}Q_{\max} = -1.0\times10^{-1} - 1.9\log_{10}(e^2), \qquad (42)$$
which is shown by the blue line. For a global Q-ball, the condition
$$\frac{dE}{dQ} < m_\Phi, \qquad (43)$$
was necessary for the existence of a solution (see Eq. (16)). However, for a gauged Q-ball, it is possible that
$$\frac{dE}{dQ} > m_\Phi, \qquad (44)$$
which implies that a Q-ball with charge Q − 1 and a particle at infinity are energetically favorable compared to a Q-ball with charge Q. This behavior arises when the charge is large enough, and may therefore be interpreted as a result of electrical repulsion. Even so, a gauged Q-ball may exist as such, because a Q-ball with charge Q is still energetically favorable compared to a Q-ball with charge Q − 1 and a particle near the surface just after emission, due to the large Coulomb potential. Thus, a gauged Q-ball is expected to be a metastable solution if Eq. (44) is satisfied. The red circles in Fig. 4 demonstrate where dE/dQ becomes m_Φ. The results can be fitted as
$$\log_{10}Q = -1.1\times10^{-1} - 1.6\log_{10}(e^2). \qquad (45)$$
This dependence can be explained in the following way. If we approximate the energy of an emitted particle dE/dQ as the energy with electricity switched off, plus Coulomb energy,
$$\frac{dE}{dQ} = \omega_0 + \frac{e^2 Q}{4\pi R} = \omega_0 + \frac{e^2 Q}{4\pi^2}\omega_0 = \omega_0 + \frac{e^2 Q}{2\sqrt{2}\pi}V_0^{1/4}Q^{-1/4}, \qquad (46)$$
where we used the analytic expression of the previous section with $R = \pi/\omega_0$ and $\omega_0 = \sqrt{2}\pi V_0^{1/4}Q^{-1/4}$. The charge of the Q-ball at which dE/dQ = m_Φ is thus given by
$$Q = \left(\frac{2\sqrt{2}\pi m_\Phi}{e^2 V_0^{1/4}}\right)^{4/3}, \qquad (47)$$
or
$$\log_{10}Q = \frac{4}{3}\log_{10}\left(\frac{2\sqrt{2}\pi m_\Phi}{V_0^{1/4}}\right) - \frac{4}{3}\log_{10}(e^2) \simeq 9.9\times10^{-1} - 1.3\log_{10}(e^2), \qquad (48)$$
where we used $m_\Phi \gg \omega_0$ and set $V_0$ such that $V_0^{1/4} = 1.6\,m_\Phi$, which is appropriate for the solutions we are dealing with. This estimate roughly explains our numerical solutions of Fig. 4.
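Plugging numbers into Eq. (47) for the coupling used in Figs. 3-4 reproduces the scaling of the fit (45) up to an O(1) prefactor, as the text claims; a quick numerical check:

```python
import numpy as np

e2, m_phi = 0.002, 1.0
V0 = (1.6 * m_phi)**4                          # V0^(1/4) = 1.6 m_Phi, as in the text

Q_thr = (2.0 * np.sqrt(2.0) * np.pi * m_phi / (e2 * V0**0.25))**(4.0 / 3.0)  # Eq. (47)
print("Eq. (48) estimate: log10 Q =", np.log10(Q_thr))                 # ~ 4.6
print("numerical fit (45): log10 Q =", -1.1e-1 - 1.6 * np.log10(e2))   # ~ 4.2
```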
Q-balls in the two-scalar model with a U(1) gauge field
Since our main interest is in the evolution of global Q-balls formed from the flat direction responsible for the Affleck-Dine mechanism, we must consider the case of several scalar fields coupled to a gauge field. Here we consider the simplest case, in which the flat direction consists of two scalar fields carrying baryon and lepton numbers, respectively, and the gauge field is Abelian. The electric charges must be opposite, since the flat direction is neutral. The Lagrangian is then written as
$$\mathcal{L} = (D_\mu\Phi_1)^* D^\mu\Phi_1 + (D_\mu\Phi_2)^* D^\mu\Phi_2 - V(\Phi_1, \Phi_2) - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad (49)$$
$$D_\mu\Phi_1 = (\partial_\mu - ieA_\mu)\Phi_1, \qquad (50)$$
$$D_\mu\Phi_2 = (\partial_\mu + ieA_\mu)\Phi_2, \qquad (51)$$
and baryon and lepton charges are
$$B = \frac{1}{i}\int d^3x\left(\Phi_1^* D^0\Phi_1 - \Phi_1 (D^0\Phi_1)^*\right) \equiv \int d^3x\, b, \qquad (52)$$
$$L = \frac{1}{i}\int d^3x\left(\Phi_2^* D^0\Phi_2 - \Phi_2 (D^0\Phi_2)^*\right) \equiv \int d^3x\, l, \qquad (53)$$
where b and l are the baryon and lepton number densities. Since we assign positive and negative electric charges to the B and L components, respectively, the total electric charge is given by
$$Q = B - L. \qquad (54)$$
We find stable solutions and calculate their energies, and examine if leptonic decay is energetically allowed. First, we adopt the same parameterization as before:
$$\Phi_i \equiv \frac{1}{\sqrt{2}}\phi_i, \quad i = 1, 2, \qquad (55)$$
$$\phi_1(x,t) = e^{i\omega_1 t}\phi_1(r), \qquad (56)$$
$$\phi_2(x,t) = e^{i\omega_2 t}\phi_2(r), \qquad (57)$$
$$A_i = 0, \qquad (58)$$
$$A_0 = A_0(r). \qquad (59)$$
The equations of motion then become
$$\frac{d^2\phi_1}{dr^2} + \frac{2}{r}\frac{d\phi_1}{dr} + \phi_1(\omega_1 - eA_0)^2 - \frac{\partial V}{\partial\phi_1} = 0, \qquad (60)$$
$$\frac{d^2\phi_2}{dr^2} + \frac{2}{r}\frac{d\phi_2}{dr} + \phi_2(\omega_2 + eA_0)^2 - \frac{\partial V}{\partial\phi_2} = 0, \qquad (61)$$
$$\frac{d^2 A_0}{dr^2} + \frac{2}{r}\frac{dA_0}{dr} + e\phi_1^2(\omega_1 - eA_0) - e\phi_2^2(\omega_2 + eA_0) = 0, \qquad (62)$$
with the boundary conditions given by
$$\phi_1(\infty) = \phi_2(\infty) = 0, \qquad (63)$$
$$\frac{d\phi_1}{dr}(0) = \frac{d\phi_2}{dr}(0) = 0, \qquad (64)$$
$$A_0(\infty) = \frac{dA_0}{dr}(0) = 0. \qquad (65)$$
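For completeness, here is a sketch of how the two-scalar system (60)-(62) with boundary conditions (63)-(65) can be posed for a numerical solver, using the potential (70) given below. The trial ω₁, ω₂, the initial guess, and the convergence of this particular setup are assumptions of the illustration rather than the procedure used to generate Figs. 5-9.

```python
import numpy as np
from scipy.integrate import solve_bvp

e2, m = 0.002, 1.0
e = np.sqrt(e2)
w1 = w2 = 0.4                                 # trial omega_1, omega_2 (illustrative)

def dV(p, q):
    # dV/dphi for Eq. (70) with Phi_i = phi_i/sqrt(2); second term is the D-term
    return m**2 * p / (1.0 + p**2 / (2.0 * m**2)) + 0.5 * e2 * p * (p**2 - q**2)

def odes(r, y):
    """Eqs. (60)-(62) as a first-order system, y = (phi1, phi1', phi2, phi2', A0, A0')."""
    p1, dp1, p2, dp2, a0, da0 = y
    g1, g2 = w1 - e * a0, w2 + e * a0
    return np.vstack([dp1, -2 * dp1 / r - p1 * g1**2 + dV(p1, p2),
                      dp2, -2 * dp2 / r - p2 * g2**2 + dV(p2, p1),
                      da0, -2 * da0 / r - e * p1**2 * g1 + e * p2**2 * g2])

def bcs(ya, yb):
    # Eqs. (63)-(65): regularity at the center, vanishing fields at large r
    return np.array([ya[1], ya[3], ya[5], yb[0], yb[2], yb[4]])

r = np.linspace(1e-3, 50.0, 500)
guess = np.vstack([5 * np.exp(-0.1 * r), -0.5 * np.exp(-0.1 * r)] * 2
                  + [np.zeros_like(r), np.zeros_like(r)])
sol = solve_bvp(odes, bcs, r, guess, max_nodes=50000, tol=1e-5)
B = np.trapz(4 * np.pi * sol.x**2 * (w1 - e * sol.y[4]) * sol.y[0]**2, sol.x)  # Eq. (52)
print("converged:", sol.status == 0, " B ~", B)
```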
We prove here
$$\left(\frac{\partial E}{\partial B}\right)_L = \omega_1, \qquad (66)$$
$$\left(\frac{\partial E}{\partial L}\right)_B = \omega_2, \qquad (67)$$
for later use, which is analogous to dE/dQ = ω for 1-scalar gauged Q-ball. The energy of the system is
$$E = \int d^3x\left[\frac{1}{2}(\nabla\phi_1)^2 + \frac{1}{2}(\nabla\phi_2)^2 + \frac{1}{2e^2}(e\nabla A_0)^2 + \frac{1}{2}\phi_1^2(\omega_1 - eA_0)^2 + \frac{1}{2}\phi_2^2(\omega_2 + eA_0)^2 + V(\phi_1, \phi_2)\right], \qquad (68)$$
and its variation with respect to φ 1 , φ 2 and A 0 is given by
$$\begin{aligned}
\delta E &= \int d^3x\left[-\Delta\phi_1\,\delta\phi_1 - \Delta\phi_2\,\delta\phi_2 - A_0\,\delta\Delta A_0 + \frac{\partial V}{\partial\phi_1}\delta\phi_1 + \frac{\partial V}{\partial\phi_2}\delta\phi_2\right]\\
&\quad + \int d^3x\left[\phi_1\,\delta\phi_1(\omega_1 - eA_0)^2 + \phi_1^2(\omega_1 - eA_0)\,\delta(\omega_1 - eA_0)\right]\\
&\quad + \int d^3x\left[\phi_2\,\delta\phi_2(\omega_2 + eA_0)^2 + \phi_2^2(\omega_2 + eA_0)\,\delta(\omega_2 + eA_0)\right]\\
&= \int d^3x\left[(\omega_1 - eA_0)\,\delta b + (\omega_2 + eA_0)\,\delta l - A_0\,\delta\Delta A_0\right]\\
&= \omega_1\,\delta B + \omega_2\,\delta L + \int d^3x\,A_0\left[e(-\delta b + \delta l) - \delta\Delta A_0\right]\\
&= \omega_1\,\delta B + \omega_2\,\delta L, \qquad (69)
\end{aligned}$$
where we used Eqs. (60), (61), and (62). The proof can easily be generalized to the case of an arbitrary number of scalar fields. We assume that the decay of the Q-ball always takes place if it is energetically allowed, and that the evolution of the Q-ball can be approximated as a sequence of gauged Q-ball solutions. Then the decay of the leptonic component alone can be represented by a sequence of gauged Q-ball solutions with the same baryon number, arranged in descending order of lepton number, which is expected to decrease due to the decay. The results for $B = 1.7\times10^4$ and $B = 8.4\times10^4$ are shown in Figs. 5 and 6, respectively, where we also used the following approximate form of the gauge mediation potential:
$$V(\Phi_1, \Phi_2) = m_{\Phi_1}^4\ln\left(1 + \frac{|\Phi_1|^2}{m_{\Phi_1}^2}\right) + m_{\Phi_2}^4\ln\left(1 + \frac{|\Phi_2|^2}{m_{\Phi_2}^2}\right) + \frac{e^2}{2}\left(|\Phi_1|^2 - |\Phi_2|^2\right)^2. \qquad (70)$$
Here, we include the D-term potential, which arises since the D-flat condition |Φ₁| = |Φ₂| is no longer valid, and in addition we assume $m_{\Phi_1} = m_{\Phi_2} \equiv m_\Phi$. We see that as the leptonic component decays, the gauge field, or the Coulomb potential, arises. Therefore, it is expected that through the decay process the initially formed neutral Q-ball may evolve into a charged, or gauged, Q-ball, which means that charged Q-balls may emerge in our universe. Next, if we look at the energy, we see that in Fig. 5 the energy decreases along the decay, which indicates that the particle which comes out of the Q-ball has positive energy. This means that the Q-ball emits free (i.e., unbound to the Q-ball) particles until the leptonic component completely vanishes. In Fig. 6, on the other hand, the energy starts to increase in the middle of the decay, which means that the energy of the emitted particle becomes negative. This in turn means that the particle starts to be bound to the Q-ball. This may be understood in the context of the quantum mechanics of a many-body system as a cloudy bound state, analogous to that of an atomic system. We also see a slight decrease in energy near the end of the decay, which means that the emitted particle becomes free again. This is interpreted as follows. As shown in Fig. 6, the leptonic component concentrates at the center, while the baryonic one is located outside, away from the center. Since the baryonic component is far enough from the surface of the leptonic component, from which the particle is emitted, the particle is initially accelerated outward enough to eventually escape from the Q-ball. We show the solutions with various baryon numbers in Fig. 7.² The black circles indicate the solutions with B = L, which are realized at Q-ball formation after the Affleck-Dine mechanism³. From there, the energy decreases along with L, which means that free particles are emitted and the Q-ball becomes electrically charged. The emission continues until the leptonic component vanishes for $B \lesssim 3\times10^4$. However, for $B \gtrsim 3\times10^4$, the energy starts to increase in the middle of the decay, which means that the particle starts to form a cloud of bound leptonic particles. In Fig. 8, we show the electric charge of the Q-ball at which $(\partial E/\partial L)_B = 0$. From the figure, $(\partial E/\partial L)_B = 0$ occurs for
$$Q \simeq 1.6\times10^4, \qquad (71)$$
and it is independent of the baryon charge. This can be understood in the following way.

²The gaps in some data are due to the switching of the algorithm used for the computation.
³In the previous studies, the solutions were assumed to remain at B = L, except for Ref. [14].

When we approximate the energy of an emitted leptonic particle $(\partial E/\partial L)_B$ as the energy with electricity switched off plus the Coulomb energy,
$$\left(\frac{\partial E}{\partial L}\right)_B \approx \omega_{0L} - \frac{e^2 Q}{4\pi R_L} \simeq \omega_{0L} - \frac{e^2 Q}{4\pi^2}\omega_{0L}, \qquad (72)$$
where we used the analytic expression with R = π/ω 0 . Equating (∂E/∂L) B to zero, we obtain
$$Q = \frac{4\pi^2}{e^2} \simeq 2.0\times10^4. \qquad (73)$$
This roughly explains Fig. 8. As in the case of the 1-scalar gauged Q-ball, it is possible that
$$\left(\frac{\partial E}{\partial B}\right)_L > m_\Phi \qquad (74)$$
due to the electrical repulsion. In order to check this, we only need to examine ω₁ of each solution, since $(\partial E/\partial B)_L = \omega_1$ as proven above. We illustrate where $(\partial E/\partial B)_L = m_\Phi$ by black squares in Fig. 7. We plot the electric charge Q at which $(\partial E/\partial B)_L$ becomes m_Φ for each B in Fig. 9. The results can be fitted as
$$\log_{10}Q = 3.0 + 3.0\times10^{-1}\log_{10}B. \qquad (75)$$
This can be explained as follows. In the same way as in deriving Eq. (72), the condition $(\partial E/\partial B)_L = m_\Phi$ is written as
$$m_\Phi = \left(\frac{\partial E}{\partial B}\right)_L \qquad (76)$$
$$\simeq \omega_{0B} + \frac{e^2 Q}{4\pi R_B} = \omega_{0B} + \frac{e^2 Q}{4\pi^2}\omega_{0B} \simeq \frac{e^2 Q}{4\pi^2}\omega_{0B} = \frac{e^2 Q}{2\sqrt{2}\pi}V_0^{1/4}B^{-1/4}. \qquad (77)$$
Thus, we obtain
$$Q = \frac{2\sqrt{2}\pi m_\Phi}{V_0^{1/4} e^2}B^{1/4}, \qquad (78)$$
where we used the same approximation as before. This roughly explains our numerical results. The decay into protons is also expected to occur due to the electrical repulsion, even if we assume that the initial neutral Q-ball is stable against it. If this happens, the baryonic component also decays off, and therefore charged Q-balls cannot be left in the universe. However, if the leptonic decay stops before the electric charge becomes large enough for the baryonic decay to occur, the evolution stops and charged Q-balls may survive. One way to stop the leptonic decay is for the leptonic cloud to be close enough to the surface of the Q-ball that the particle cannot come out. We derive the condition that this happens before the baryonic decay occurs. Let us suppose that the emitted leptons are electrons and roughly estimate the size of the leptonic cloud as the Bohr radius. Then the condition that the cloud radius becomes equal to the size of the Q-ball is written as
$$\frac{4\pi}{m_e Q e^2} = R = \frac{\pi}{\omega_0} = \frac{1}{\sqrt{2}}V_0^{-1/4}B^{1/4}, \qquad (79,\ 80)$$
that is,
$$Q = \frac{4\sqrt{2}\pi V_0^{1/4}B^{-1/4}}{m_e e^2}. \qquad (81)$$
This must happen for
$$Q < \frac{2\sqrt{2}\pi m_p}{V_0^{1/4} e^2}B^{1/4}, \qquad (82)$$
that is, before the baryonic decay starts to occur, where we have used an analysis parallel to that below Eq. (76). Therefore, the condition that the leptonic decay stops before the baryonic decay occurs is given by
$$B > \frac{4V_0}{m_e^2 m_p^2} \sim 10^{30}\,\frac{V_0}{(10^6\,\mathrm{GeV})^4}. \qquad (83)$$
This implies that if the baryon number is large enough, the leptonic decay may stop before the baryonic decay takes place, so that the charged Q-ball may survive as a relic in the universe. We see that for
$$B > \frac{4V_0}{\pi^4 m_e^4} \sim 10^{35}\,\frac{V_0}{(10^6\,\mathrm{GeV})^4}, \qquad (84)$$
the Bohr radius is already smaller than the Q-ball size when the cloud is about to be formed, which means that the evolution stops without forming the cloud. Thus, more accurately, if
$$\frac{4V_0}{m_e^2 m_p^2} < B < \frac{4V_0}{\pi^4 m_e^4}, \qquad (85)$$
then the charged Q-balls are expected to survive with a cloud surrounding them, and if
$$B > \frac{4V_0}{\pi^4 m_e^4}, \qquad (86)$$
then the charged Q-balls are expected to survive without the cloud⁴. Finally, the dashed line in Fig. 7 illustrates Q_max for each B, above which Q-ball solutions cannot exist. This results from the electrical repulsion due to the gauge field.
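Evaluating the bounds (83)-(84) for the fiducial scale V₀ = (10⁶ GeV)⁴ quoted in the text gives the order-of-magnitude window explicitly; the physical lepton and proton masses used below are assumed inputs, and the exact prefactors depend on rounding:

```python
import numpy as np

GeV = 1.0
m_e, m_p = 0.511e-3 * GeV, 0.938 * GeV        # physical masses (assumed inputs)
V0 = (1.0e6 * GeV)**4

B_min = 4.0 * V0 / (m_e**2 * m_p**2)          # Eq. (83): leptonic decay stops in time
B_max = 4.0 * V0 / (np.pi**4 * m_e**4)        # Eq. (84): no cloud forms above this
print("charged Q-ball with cloud:    %.1e < B < %.1e" % (B_min, B_max))
print("charged Q-ball without cloud: B > %.1e" % B_max)
```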
Conclusions and discussion
In this paper, we considered gauged Q-balls in the two-scalar model in order to discuss the evolution of neutral Q-balls formed from the flat direction in the Affleck-Dine mechanism, and the possibility of the realization of gauged Q-balls during their evolution. We approximated the evolution as a sequence of charged Q-ball solutions, as suggested by the situation in which only the leptonic component decays off. As a result, a Coulomb potential arises and the Q-ball becomes electrically charged, as expected. In other words, it is energetically favored for leptonic decay to occur. However, since there is an upper bound on the charge of a gauged Q-ball, the amount of decay is limited as well, which we examined quantitatively. In addition, if the baryon number of the initially formed Q-ball is large enough, the electric charge of the Q-ball grows enough that the particle emitted from the Q-ball is bound to it. The baryonic decay is also expected to occur by virtue of the electrical repulsion, which leads to the vanishing of the charged Q-balls in the universe. However, it is expected that if the leptonic cloud is close enough to the surface of the Q-ball, the leptonic decay can stop before the electric charge becomes large enough for the baryonic decay to occur, so that the evolution stops and charged Q-balls can survive. We roughly estimated when this can happen, and as a consequence, we found that there exists a lower bound on the baryon number.
Suppose that dark matter consists of Q-balls. Since a Q-ball is known to absorb protons and emit pions, neutral Q-balls can be detected by Super-Kamiokande [9,10,11] or IceCube [19], while these detectors are not suited for the detection of charged Q-balls, since charged Q-balls cannot absorb protons due to the electrical repulsion. Thus, the electric charge is expected to make a difference in the experimental signatures of relic Q-balls. Charged Q-balls are expected to behave as some kind of nuclei and are known to be detectable by detectors such as MACRO [9,12], and observational bounds on the mass and flux of relic charged Q-balls have been obtained [10]. The emitted particles are also expected to contribute to the energy components of the universe, and the leptonic particles must satisfy the observational bounds on the neutrino component from WMAP [20], which must be verified in future work.
Although we assumed that the decay occurs whenever it is energetically allowed, we did not specify which kind of decay we are considering — for instance, the species of the emitted particle, the actual decay rate, etc. This information must be added when we consider electrodynamical effects and actually solve the time development of the Q-ball. Lastly, we considered here the simplest model of a flat direction, but in reality we may need to consider more complex flat directions, possibly in non-Abelian gauge theories as well.
Figure 1: Several kinds of shapes of −V_ω(φ) depending on ω. (I): ω₀² < ω² < V″(0); (II): ω² < ω₀²; (III): ω² > V″(0). The potential of type (I) is the appropriate one for the existence of the solution.
Figure 2: Mechanical analogy.
Figure 3: Gauged Q-balls in the gauge mediation-like model with e² = 0.002 and m_Φ = 1.
Figure 4: Illustration of the upper limits on Q, and of where dE/dQ = m_Φ.
Figure 5: Two-scalar gauged Q-balls in the gauge mediation model with e² = 0.002, m_Φ = 1, and B = 1.7 × 10⁴.
Figure 6: Two-scalar gauged Q-balls in the gauge mediation model with e² = 0.002, m_Φ = 1, and B = 8.4 × 10⁴.
Figure 7: Solutions with various baryon numbers in the E−L plane. The black circles are the solutions with B = L, and the black squares are the solutions with (∂E/∂B)_L = m_Φ.
Figure 8: Electric charge Q when (∂E/∂L)_B = 0.
Figure 9: Electric charge when (∂E/∂B)_L = m_Φ.
⁴If $B > 4V_0/\pi^4 m_e^4$, the baryonic decay starts to occur when $Q = 4m_p/(m_e e^2)$, and this is larger than $Q = 4\pi^2/e^2$, which is when the evolution is expected to stop. Thus, there is no need to worry that the baryonic decay occurs before the evolution stops.
[1] I. Affleck and M. Dine, Nucl. Phys. B249, 361 (1985).
[2] M. Kawasaki, K. Kohri, T. Moroi, and A. Yotsuyanagi, Phys. Rev. D78, 065011 (2008).
[3] T. Gherghetta, C. Kolda, and S. Martin, Nucl. Phys. B468, 37 (1996).
[4] A. Kusenko and M. Shaposhnikov, Phys. Lett. B418, 46 (1998).
[5] S. Kasuya and M. Kawasaki, Phys. Rev. D61, 041301 (2000).
[6] S. Kasuya and M. Kawasaki, Phys. Rev. D62, 023512 (2000).
[7] S. Coleman, Nucl. Phys. B262, 263 (1985).
[8] G. Dvali, A. Kusenko, and M. Shaposhnikov, Phys. Lett. B417, 99 (1998).
[9] A. Kusenko, V. Kuzmin, M. E. Shaposhnikov, and P. G. Tinyakov, Phys. Rev. Lett. 80, 3185 (1998) [hep-ph/9712212].
[10] J. Arafune, T. Yoshida, S. Nakamura, and K. Ogure, arXiv:hep-ph/0005103.
[11] Super-Kamiokande Collaboration, Y. Takenaga et al., Phys. Lett. B647, 18 (2007) [hep-ex/0608057].
[12] MACRO Collaboration, M. Ambrosio et al., Eur. Phys. J. C13, 453 (2000) [hep-ex/9904031].
[13] K. Lee, J. A. Stein-Schabes, R. Watkins, and L. M. Widrow, Phys. Rev. D39, 1665 (1989).
[14] M. Kawasaki and F. Takahashi, arXiv:hep-ph/0403199.
[15] I. M. Shoemaker and A. Kusenko, arXiv:0809.1666 [hep-th].
[16] A. Kusenko, Phys. Lett. B404, 285 (1997).
[17] A. de Gouvêa, T. Moroi, and H. Murayama, Phys. Rev. D56, 1281 (1997).
[18] I. E. Gulamov, E. Ya. Nugaev, and M. N. Smolyakov, arXiv:1311.0325 [hep-th].
[19] S. Kasuya, M. Kawasaki, and T. Yanagida, arXiv:1502.00715 [hep-ph].
[20] M. Kawasaki, K. Miyamoto, K. Nakayama, and T. Sekiguchi, JCAP 1202, 022 (2012).
| [] |
[
"STNet: Selective Tuning of Convolutional Networks for Object Localization",
"STNet: Selective Tuning of Convolutional Networks for Object Localization"
] | [
"Mahdi Biparva \nDepartment of Electrical Engineering and Computer Science\nYork University Toronto\nM3J 1P3ONCanada\n",
"John Tsotsos [email protected] \nDepartment of Electrical Engineering and Computer Science\nYork University Toronto\nM3J 1P3ONCanada\n"
] | [
"Department of Electrical Engineering and Computer Science\nYork University Toronto\nM3J 1P3ONCanada",
"Department of Electrical Engineering and Computer Science\nYork University Toronto\nM3J 1P3ONCanada"
] | [] | Visual attention modeling has recently gained momentum in developing visual hierarchies provided by Convolutional Neural Networks. Despite recent successes of feedforward processing on the abstraction of concepts from raw images, the inherent nature of feedback processing has remained computationally controversial. Inspired by the computational models of covert visual attention, we propose the Selective Tuning of Convolutional Networks (STNet). It is composed of both streams of Bottom-Up and Top-Down information processing to selectively tune the visual representation of convolutional networks. We experimentally evaluate the performance of STNet for the weakly-supervised localization task on the ImageNet benchmark dataset. We demonstrate that STNet not only successfully surpasses the state-of-the-art results but also generates attention-driven class hypothesis maps. | 10.1109/iccvw.2017.319 | [
"https://arxiv.org/pdf/1708.06418v1.pdf"
] | 4,715,563 | 1708.06418 | 73aa6778a13f047a422d7bbfb57d096d08dae52a |
STNet: Selective Tuning of Convolutional Networks for Object Localization
Mahdi Biparva
Department of Electrical Engineering and Computer Science
York University, Toronto
M3J 1P3, ON, Canada
John Tsotsos [email protected]
Department of Electrical Engineering and Computer Science
York University, Toronto
M3J 1P3, ON, Canada
STNet: Selective Tuning of Convolutional Networks for Object Localization
Visual attention modeling has recently gained momentum in developing visual hierarchies provided by Convolutional Neural Networks. Despite recent successes of feedforward processing on the abstraction of concepts from raw images, the inherent nature of feedback processing has remained computationally controversial. Inspired by the computational models of covert visual attention, we propose the Selective Tuning of Convolutional Networks (STNet). It is composed of both streams of Bottom-Up and Top-Down information processing to selectively tune the visual representation of convolutional networks. We experimentally evaluate the performance of STNet for the weakly-supervised localization task on the ImageNet benchmark dataset. We demonstrate that STNet not only successfully surpasses the state-of-the-art results but also generates attention-driven class hypothesis maps.
Introduction
Inspired by physiological and psychophysical findings, many attempts have been made to understand how the visual cortex processes information throughout the visual hierarchy [6,15]. It is significantly supported by reliable evidence [9,12] that information is processed in both directions throughout the visual hierarchy: the data-driven Bottom-Up (BU) processing stream convolves the input data using some form of information transformation. In other words, the BU processing stream shapes the visual representation of the input data via hierarchical cascading stages of information processing. On the other hand, the task-driven Top-Down (TD) processing stream is perceived to modulate the visual representation such that the task requirements are completely fulfilled. Consequently, the TD processing stream plays the role of projecting the task knowledge over the formed visual representation to achieve the task requirements.
In recent years, as learning approaches have matured, various models and algorithms have been developed to provide a richer visual representation for visual tasks such as object classification and detection, semantic segmentation, action recognition, and scene understanding [1,7]. Regardless of the algorithms used for representation learning, most attempts benefit from the BU processing paradigm, while TD processing has rarely been targeted, particularly in the computer vision community. In recent years, convolutional networks, as a BU processing structure, have proven quantitatively very successful on the visual tasks targeted by popular benchmark datasets [16,25,11,10].
Attempts at modeling visual attention are attributed to the TD processing paradigm. The idea is that, using some form of facilitation or suppression, the visual representation is selected and modulated in a TD manner [27,17]. Visual attention has two modes of execution [4,14]: Overt attention attempts to compensate for the lack of visual acuity throughout the entire field of view in a perception-cognition-action cycle by means of an eye-fixation controller. In a nutshell, the eye movement keeps the highest visual acuity at the fixation while leaving the formed visual representation intact. Covert attention, on the other hand, modulates the shaped visual representation while keeping the fixation point unchanged.
We strive to account for both the BU and TD processing in a novel unified framework by proposing STNet, which integrates attentive selection processes into the hierarchical representation. STNet has been experimentally evaluated on the task of object localization. Unlike all previous approaches, STNet relies on the biologically-inspired method of surround suppression [26] to selectively deploy high-level task-driven attention signals all the way down to the early layers of the visual hierarchy. The experimental results reveal the superiority of STNet on this task over the performance of the state-of-the-art baselines.
The remainder of the paper is organized as follows. In Section 2, we review related work on visual attention modeling in the computer vision community. Section 3 presents the proposed STNet model in detail. Experiments are conducted in Section 4, in which STNet's performance is qualitatively and quantitatively evaluated. Finally, the paper ends with a conclusion in Section 5.
Related Work
In recent years, the computer vision community has gained momentum in improving the evaluation results on various visual tasks by developing various generations of deep learning models. convolutional networks have shown their superiority in terms of the learned representation for tasks inherently related to visual recognition such as object classification and detection, semantic segmentation, pose estimation, and action recognition.
Among various visual attention models [2], the covert visual attention paradigm covers scenarios in which eye movement is not considered in the modeling approach. Fukushima's attentive Neocognitron [8] proposes that attention can be seen as a principled way to recall noisy, occluded, and missing parts of an input image. TD processing is realized as a form of facilitatory gain modulation of hidden nodes of the BU processing structure. The Selective Tuning Model of visual attention [28], on the other hand, proposes TD processing using two stages of competition in order to suppress the irrelevant portion of the receptive field of each node. The weights of the TD connections, therefore, are determined as the TD processing continues. Furthermore, only the BU hidden nodes falling on the trace of the TD processing are modulated, while all the rest are left intact.
Various attempts have been made to model an implicit form of covert attention on convolutional networks. [24] proposes to maximize the class score over the input image using the backpropagation algorithm for visualization purposes. [29] introduces an inverted convolutional network to propagate hidden activities backward to the early layers. Harnessing the superiority of global AVERAGE pooling over global MAX pooling in preserving spatial correlation, [31] defines a weighted sum of the activities of the convolutional layer feeding into the global pooling layer.
Recently, an explicit notion of covert visual attention has gained interest in the computer vision community [3,30] for the weakly-supervised localization task. Having interpreted ReLU activation and MAX pooling layers as feedforward control gates, [3] proposes feedback control gate layers that are activated based on the solution of an optimization problem. Inspired closely by the Selective Tuning model of visual attention, [30] formulates TD processing using a probabilistic interpretation of the Winner-Take-All (WTA) mechanism. In contrast to all these attempts, in which the TD processing is deployed as densely as the BU processing, we propose a highly sparse and selective TD processing in this work. The localization approach in which the learned representation of the visual hierarchy is not modified is commonly referred to as weakly supervised object localization [24,18,31,3,30]. This is in contrast with the supervised localization approach, in which the visual representation is fine-tuned to better cope with the new task requirements. Additionally, unlike the formulation for the semantic segmentation task [20,19,21], bounding box prediction forms the basis of the performance measure. We evaluate STNet experimentally in this paradigm and provide evidence that selective tuning of convolutional networks better addresses object localization in the weakly-supervised regime.
Model
STNet
An integration of the conventional bottom-up processing of convolutional networks with biologically-plausible attentive top-down processing in a unified model is proposed in this work. STNet consists of two interactive streams of processing. The BU stream has the role of forming the representation throughout the entire visual hierarchy; information is densely processed layer by layer in a strictly parallel paradigm. The BU pathway processes information at each layer using a combination of basic operations such as convolution, pooling, activation, and normalization functions. The TD stream, on the other hand, develops a projection of the task knowledge onto the formed hierarchical representation until the task requirements are fulfilled. Depending on the type of task knowledge, the projections may be realized computationally using some primitive stages of attention processing. The cascade flow of information through both streams is layer by layer, such that once information at a layer is processed, the layer output is fed into the next adjacent layer as the input, according to the hierarchical structure.
Any computational formulation of the visual hierarchy representing the input data can be utilized as the structure of the BU processing stream, as long as the primary visual task can be accomplished. Convolutional neural networks trained in the fully supervised regime for the primary task of object classification are the main focus of this paper. With STNet composed of a total of L layers, the BU processing structure is composed of $\forall l \in \{0,\dots,L\},\ \exists\, z^l \in \mathbb{R}^{H_l\times W_l\times C_l}$, where $z^l$ is the three-dimensional feature volume of hidden nodes at layer l, with width $W_l$, height $H_l$, and $C_l$ channels.
Structure of the Top-Down Processing
Based on the topology and connectivity of the BU processing stream, an interactive structure for the attentive TD processing is defined. According to the task knowledge, the TD processing stream is initiated and consecutively traversed downward, layer by layer, until the layer that satisfies the task requirements is reached. A new type of node is defined to interact with the hidden nodes of the BU processing structure. Within the TD structure, gating nodes are proposed to collectively determine the TD information flow throughout the visual hierarchy. Furthermore, they are very sparsely active, since the TD processing is tuned to activate only the relevant parts of the representation.
The TD processing structure consists of $\forall l \in \{0,\dots,L\},\ \exists\, g^l \in \mathbb{R}^{H_l\times W_l\times C_l}$, where $g^l$ is the three-dimensional (3D) gating volume at layer l, having exactly the size of its hidden feature volume counterpart in the BU processing structure. We define the function RF(z) to return the set of all nodes in the layer below that fall inside the receptive field of the top node, according to the connectivity topology of the BU processing structure.
Having defined the structural connectivity of both the BU and TD processing streams, we now introduce the attention procedure that locally processes information to determine the connection weights of the TD processing structure, and consequently the gating node activities at each layer. Once the information flow in the BU processing stream reaches the top of the hierarchy at layer L, the TD processing is initiated by setting the gating node activities of the top layer, as illustrated in Fig. 1. Weights of the connections between the top gating node $g^L$ and all the gating nodes in the layer below within $RF(g^L)$ are computed using the attentive selection process. Finally, the gating node activities of layer L − 1 are determined according to the connection weights. This attention procedure is executed consecutively, layer by layer, downward to the layer at which the task requirements are fulfilled.
Stages of Attentive Selection
Weights of the connections of the BU processing structure are learned by the backpropagation algorithm [23] in the training phase. For the TD processing structure, however, weights are computed in an immediate manner using the deterministic, procedural selection process on the Post-Synaptic (PS) activities. We define $\forall\, g^l_{w,h,c} \in g^l,\ PS(g^l_{w,h,c}) = RF(z^l_{w,h,c}) \odot k^l_c$, where PS(g) is the elementwise product of two similar-size matrices, one representing the receptive field activities and the other the kernel at channel c and layer l.
The selection process has three stages of computation. Each stage processes the input PS activities and then feeds the selected activities to the next stage. In the first stage, noisy redundant activities that interfere with the definition of the task knowledge are determined and pruned away. Among the remaining PS activities, the most informative group of activities is marked as the winners of the selection process at the end of the second stage. In the last stage, the winner activities are normalized. Once multiplicatively biased by the top gating node activity, the activity of the bottom gating node is updated accordingly. Fig. 2 schematically illustrates the sequence of actions, beginning with fetching PS activities from the BU stream and ending with propagating weighted activities of the top gating node to the lower layer.
Stage 1: Interference Reduction
The main critical issue in accomplishing any visual task successfully is the ability to distinguish relevant regions from irrelevant ones. Winner-Take-All (WTA) is a biologically-plausible mechanism that implements a competition between input activities. At the end of the competition, the winner retains its activity, while the rest become inactive. The Parametric WTA (P-WTA) with parameter θ is defined as $P\text{-}WTA(PS(g), \theta) = \{s \mid s \in PS(g),\ s \geq WTA(PS(g)) - \theta\}$. The role of the parameter θ is to establish a safe margin below the winner activity to avoid under-selection, such that multiple winners are selected at the end of the competition. It is remarkably critical to have a near-optimal selection process at each stage to prevent the extreme cases of under- and over-selection.
We propose an algorithm to tune the parameter θ to an optimal value at which the safe margin is defined, based on a biologically-inspired approach. It is biologically motivated that once visual attention is deployed downward onto a part of the formed visual hierarchy, the nodes falling on the attention trace eventually retain their activities, regardless of the intrinsic selective nature of attention mechanisms [22,27]. In analogy to this biological finding, the Activity Preserve (AP) algorithm optimizes for the distance from the sole winner of the WTA algorithm at which, if all the PS activities outside the margin are pruned away, the top hidden node activity is preserved.
Algorithm 1 specifies the upper and lower bounds of the safe margin. The upper bound is clearly indicated by the sole winner given by the WTA algorithm, while the lower bound is obtained as the output of the AP algorithm. Consequently, the P-WTA algorithm returns all the PS activities that fall within the range specified by the upper and lower bound values. They are highlighted as the winners of the first stage of the attentive selection process. Basically, the set W¹ˢᵗ returned by the P-WTA algorithm contains those nodes within the receptive field that participate most in the calculation of the top node activity. Therefore, they are the best candidates for initiating the attention selection processes of the layer below. The size of the set of winners at this point, however, is still large. Apparently, further stages of selection are required to prevent interference and redundant TD processing caused by the over-selection phenomenon.
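A compact Python rendering of this first stage may help fix ideas. The stopping threshold of the AP loop is assumed to be zero (i.e., the kept positive activities must outweigh the pruned negative ones), and SORT is taken to be in descending order; both are our reading of Algorithm 1 rather than details stated explicitly in the text.

```python
import numpy as np

def activity_preserve(ps):
    """Lower bound of the safe margin (cf. Algorithm 1): keep the largest positive
    PS activities until they outweigh the summed negative ones.
    Assumes at least one positive activity (the top node is active)."""
    ps = np.asarray(ps, dtype=float)
    neg_sum = ps[ps <= 0].sum()
    pos = np.sort(ps[ps > 0])[::-1]        # positive activities, descending
    buf, i = neg_sum, 0
    while i < len(pos) and buf < 0.0:      # stopping threshold assumed to be 0
        buf += pos[i]
        i += 1
    return pos[i - 1]                      # smallest activity that must be kept

def parametric_wta(ps):
    """P-WTA: all PS activities within [AP lower bound, WTA winner] win."""
    ps = np.asarray(ps, dtype=float)
    theta = ps.max() - activity_preserve(ps)
    return ps[ps >= ps.max() - theta]

print(parametric_wta([3.0, 2.5, 0.2, -4.0]))   # -> [3.  2.5]
```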
Stage 2: Similarity Grouping In the second stage, the ultimate goal is to apply a more restrictive selection procedure in accordance with the rules elicited from the task knowledge. Grouping of the winners according to some similarity measure serves as the basis of the second stage of the attention selection process. Two modes of selection are proposed for the second stage, depending on whether the current layer of processing has a spatial dimension or not: the Spatially-Contiguous (SC) and Statistically-Important (SI) selection modes, respectively. The former is applicable to the convolutional layers and the latter to the Fully-Connected (FC) layers of a typical convolutional network.
There is no ordering information between the nodes in the FC layers. Therefore, one way to formulate the relative importance between nodes is to use statistics calculated from the sample distribution of node activities. The SI selection mode is proposed to find the statistically important activities. Under the assumption that node activities follow a Normal distribution, the set of winners of the second stage is determined by $W^{2nd} = \{s \mid s \in W^{1st},\ s > \mu + \alpha\sigma\}$, where µ and σ are the sample mean and standard deviation of $W^{1st}$, respectively. The best value of the coefficient α is searched over the range {−3, −2, −1, 0, +1, +2, +3} based on a search policy meeting the following criteria: first, the size of the winner set $W^{2nd}$ at the end of the SI selection mode has to be non-zero; second, the search iterates over the range of possible coefficient values in descending order until $|W^{2nd}| > 0$. Furthermore, an offset parameter O is defined to loosen the selection at the second stage once these criteria are met. For instance, suppose α is +1 when the second-stage search is over; the loosened coefficient α will be −1 for an offset value of 2. In Sec. 4, experimental evaluations demonstrate the effect of loosening the SI selection mode on performance improvement; a sketch of the search is given below.
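The following is a sketch of the SI search policy described above; the clipping of the loosened coefficient to the stated range and the fallback for a degenerate distribution are our assumptions.

```python
import numpy as np

def si_select(winners, offset=0):
    """Statistically-Important mode: keep activities above mu + alpha*sigma,
    searching alpha downward from +3 until the set is non-empty, then loosening
    the coefficient by `offset` steps (clipped to the stated range)."""
    w = np.asarray(winners, dtype=float)
    mu, sigma = w.mean(), w.std()
    for alpha in range(3, -4, -1):
        if np.any(w > mu + alpha * sigma):
            alpha = max(alpha - offset, -3)    # e.g. alpha = +1, offset = 2 -> -1
            return w[w > mu + alpha * sigma]
    return w                                   # degenerate distribution: keep all

print(si_select([5.0, 1.0, 0.9, 0.8, 0.7], offset=1))
```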
Convolutional layers, on the other hand, benefit from stacks of two-dimensional spatial feature maps. Although the ordering of the feature maps along the third dimension is not meant to encode any particular information, 2D feature maps individually highlight spatially active regions of the input domain projected into a particular feature space. In other words, the spatial ordering is always preserved throughout the hierarchical representation. With the spatial ordering and the task requirement in mind, the SC selection mode is proposed to determine the most spatially contiguous region of the winners based on their PS activities.

Table 1. Demonstration of the STNet configurations in terms of the hyperparameter values. L_prop is the name of the layer at which the attention map is calculated. O_FC and O_Bridge are the offset values of the SI selection mode at the fully-connected and bridge layers, respectively. α is the trade-off multiplier of the SC selection mode. δ_post represents the post-processing threshold value of the attention map.

Architecture  | L_prop       | O_FC | O_Bridge | α   | δ_post
ST-AlexNet    | pool1        | 3    | 3        | 0.2 | µ_A
ST-VGGNet     | pool3        | 2    | 0        | 0.2 | µ_A
ST-GoogleNet  | pool2/3x3_s2 | 0    | —        | 0.2 | µ_A

The SC selection mode first partitions the set of winners $W^{1st}$ into groups of connected regions. A node has eight immediate adjacent neighbors. A connected region $R_i$, therefore, is defined as the set of all nodes that are recursively in the neighborhood of each other. Out of all the connected regions, the output of the SC selection mode is the set of nodes $W^{2nd}$ that falls inside the winning connected region. The winning region is determined by the index $\hat{i}$ such that $\hat{i} = \arg\max_i\ \alpha\sum_{r_j\in R_i} PS_{r_j}(g) + (1-\alpha)\,|R_i|$, where $PS_{r_j}(g)$ is the PS activity of node $r_j$ among the set of all PS activities of the top node g, and the value of the multiplier α is cross-validated in the experimental evaluation stage to balance the sum of PS activities against the number of nodes in the connected region. Lastly, the SC selection mode returns the final set of winners $W^{2nd} = \{s \mid s \in R_{\hat{i}}\}$, such that $W^{2nd}$ better addresses the task requirements than $W^{1st}$. We support this argument experimentally in Sec. 4.
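The SC mode can be sketched with an off-the-shelf connected-component labeler; note that the original cross-validates α, and any normalization of the two score terms is omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import label

def sc_select(ps_map, winner_mask, alpha=0.2):
    """Spatially-Contiguous mode: among 8-connected regions of first-stage winners,
    keep the region maximizing alpha * sum(PS) + (1 - alpha) * size."""
    regions, n = label(winner_mask, structure=np.ones((3, 3)))   # 8-connectivity
    if n == 0:
        return winner_mask                   # nothing to group
    scores = [alpha * ps_map[regions == i].sum()
              + (1.0 - alpha) * (regions == i).sum() for i in range(1, n + 1)]
    return regions == (1 + int(np.argmax(scores)))

rng = np.random.default_rng(0)
ps = rng.random((6, 6))
print(sc_select(ps, ps > 0.6).astype(int))
```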
Having determined the set of winners $W^{2nd}$ out of the entire set of nodes falling inside the receptive field of the top node, RF(g), it is straightforward to compute which weight connections of the TD processing structure are active and which remain inactive. The inactive weight connections have value zero. In Stage 3, the mechanism that sets the values of the active weight connections from $W^{2nd}$ is described.
Stage 3: Attention Signal Propagation
Gating nodes are defined to encode attention signals using multiple levels of activity. The top gating node propagates the attention signal, proportionally to the normalized connection weights, to the layer below. Given the set of winners $W^{2nd}$ for the top gating node g, $PS_{W^{2nd}}(g)$ is the set of PS activities of the corresponding winners. The set of normalized PS activities is defined as $PS_{norm} = \{\tilde{s} \mid s \in PS_{W^{2nd}}(g),\ \tilde{s} = s / \sum_{s_i\in PS_{W^{2nd}}(g)} s_i\}$. Weight values of the active TD connections are specified as follows: $\forall i \in W^{2nd},\ w_{ig} = PS^i_{norm}$, where $w_{ig}$ is the connection from the top gating node g to gating node i in the layer below, and $PS^i_{norm}$ is the normalized PS activity of the winner node i.

Table 2. Comparison of the STNet localization error rate on the ImageNet validation set with the previous state-of-the-art results. The bounding box is predicted given the single center crop of the input images, with the TD processing initialized by the ground truth category label. *Based on the result reported by [30].

At each layer, the attentive selection process is performed for all the active top gating nodes. Once the winning set for each top gating node is determined and the normalized values of the corresponding connection weights to the layer below are computed, the winner gating nodes of the layer below are updated as follows:
$$\forall i \in \{1,\dots,|g^l|\},\ \forall j \in \{1,\dots,|W^{2nd}_i|\}:\quad g^{l-1}_j \mathrel{+}= w_{ji}\, g^l_i.$$
The updating rule ensures that the top gating node activity is propagated downward such that it is multiplicatively biased by the weight values of the active connections.
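Putting the normalization and the update rule together, a minimal sketch of one propagation step (the index bookkeeping is simplified to a flat layer below):

```python
import numpy as np

def propagate(gate_top, ps_winners, idx_winners, gates_below):
    """Stage 3: normalize the winners' PS activities into TD weights and add the
    weighted top gating activity onto the winner gating nodes of the layer below."""
    w = np.asarray(ps_winners, dtype=float)
    w /= w.sum()                              # normalized connection weights
    for j, w_ji in zip(idx_winners, w):
        gates_below[j] += w_ji * gate_top     # g^{l-1}_j += w_ji * g^l_i
    return gates_below

print(propagate(1.0, [2.0, 1.0, 1.0], [0, 3, 5], np.zeros(8)))
```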
Experimental Results
Top-down visual attention is necessary for the completion of sophisticated visual tasks for which bottom-up information processing alone is not sufficient. This implies that tasks such as object localization, visual attribute extraction, and part decomposition require more processing time and resources. STNet, as a model benefiting from both streams of processing, is experimentally evaluated on the object localization task in this work.
STNet is implemented using Caffe [13], a library originally developed for convolutional networks. AlexNet [16], VGGNet-16 [24], and GoogleNet [25] are the three convolutional network architectures used to define the BU processing structure of STNet. The model weight parameters are retrieved from the publicly available convolutional network repository of the Caffe Model Zoo, in which they are pre-trained on the ImageNet 2012 classification training dataset [5]. For the rest of the paper, we refer to STNet with AlexNet as the base architecture of the BU structure as ST-AlexNet; the same naming applies to VGGNet and GoogleNet.
Implementation Details
Bounding Box Proposal: Having fed an input image into the BU processing stream, a class-specific attention map for category k at layer l is created. It is the result of the TD processing stream initiated from the top gating layer with a one at node k and zeros at the rest. Once the attention signals are completely propagated downward to layer l, the class-specific attention map is defined by collapsing the gating volume $g^l \in \mathbb{R}^{H_l\times W_l\times C_l}$ along the third dimension into the attention map $A^l_k \in \mathbb{R}^{H_l\times W_l}$ as follows:
$$A^l_k = \sum_{i=1}^{C_l} g^l_i,$$
where $C_l$ is the number of gating sheets at layer l, and $g^l_i$ is a 2D gating sheet.
where C l is the number of gating sheets at layer l, and g l i is a 2D gating sheet. We propose to postprocess the attention map by setting all the small collapsed values below the sample mean value of the map to zero.
We propose to predict a bounding box from the thresholded attention map $\hat{A}^l_k$ using the following procedure. The predicted bounding box is supposed to enclose an instance of category k. If layer l is somewhere in the middle of the visual hierarchy, $\hat{A}^l_k$ is transformed into the spatial space of the input layer. In the subsequent step, a tight bounding box around the non-zero elements of the transformed $\hat{A}^l_k$ is calculated. Nodes inside the RF of the gating nodes at the boundary of the predicted box are likely to be active if the TD attentional traversal continues processing lower layers. Therefore, we pad the tight predicted bounding box with half the size of the accumulated RF at layer l. We calculate the accumulated RF size of each layer accurately according to the intrinsic properties of the BU processing structure, such as the amount of padding and striding of each layer.
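A minimal sketch of the whole proposal pipeline for one layer follows; the receptive-field size and stride passed in stand for the accumulated values that the text computes per layer, and the pool1-like numbers in the example are assumptions:

```python
import numpy as np

def predict_box(g, rf, stride, in_size):
    """Collapse a gating volume g of shape (H_l, W_l, C_l) into the attention map,
    threshold at its mean (delta_post = mu_A), map the tight box of non-zero
    entries to input coordinates, and pad by half the accumulated RF."""
    A = g.sum(axis=2)                         # A^l_k = sum over gating sheets
    A = np.where(A >= A.mean(), A, 0.0)       # post-processing threshold
    ys, xs = np.nonzero(A)
    if len(ys) == 0:
        return 0, 0, in_size, in_size         # fallback: the whole image
    x0 = max(int(xs.min() * stride - rf / 2), 0)
    y0 = max(int(ys.min() * stride - rf / 2), 0)
    x1 = min(int(xs.max() * stride + rf / 2), in_size)
    y1 = min(int(ys.max() * stride + rf / 2), in_size)
    return x0, y0, x1, y1

g = np.zeros((27, 27, 96)); g[10:14, 8:12, :5] = 1.0
print(predict_box(g, rf=19, stride=8, in_size=227))   # pool1-like geometry (assumed)
```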
Search over Hyperparameters: There are a few hyperparameters in STNet that are experimentally cross-validated using one held-out partition of the ImageNet validation set. It contains 1000 images selected from the randomly-shuffled validation set. A grid search over the hyperparameter space finds the best-performing configuration for each convolutional network architecture.
The SI selection mode is experimentally observed to perform more efficiently once the offset parameter O is greater than zero. The offset parameter has the role of loosening the selection process in cases where under-selection is dominant. Furthermore, we define the bridge layer as the one at which the 3D volume of hidden nodes collapses into a 1D hidden vector. The SI selection procedure is additionally applied to the entire gating volume of the bridge layer in order to prevent the over-selection phenomenon. Except for GoogleNet, the other two architectures have a bridge layer. Further implementation details regarding all three architectures are given in the supplementary document.
Hyperparameters such as the layer at which the best localization result is obtained, the multiplier of the SC selection mode, and the threshold value of the bounding box proposal procedure are all set to the values obtained from cross-validation on the held-out partition set for all three convolutional networks. With the best STNet configurations given in Table 1, we measure STNet performance on the entire ImageNet validation set.
Weakly Supervised Localization
The significance of the attentive TD processing in STNet is both quantitatively and qualitatively evaluated on the ImageNet 2015 benchmark dataset for the object localization task. The experimental setup and procedures are kept closely comparable with previous works.
Dataset and evaluation: The localization accuracy of STNet is evaluated on the ImageNet 2015 validation set containing 50,000 images of variable sizes. The shortest side of each image is reduced to the size of the STNet input layer. A single center crop of the size of the input layer is then extracted and sent to STNet for bounding box prediction. In order to remain comparable with previous experimental setups for the weakly supervised localization task [3,30], the ground truth label is provided to initiate the TD processing. A localization prediction is considered correct if the Intersection-over-Union (IoU) of the predicted bounding box with the ground truth is over 0.5.
Quantitative results: STNet's localization performance surpasses previous works under a comparable testing protocol on the ImageNet dataset. For all three BU architectures, Table 2 indicates that STNet quantitatively outperforms the state-of-the-art results [24,31,3,30]. The results imply not only that the localization accuracy has improved but also that significantly fewer nodes are active in the TD processing stream, while all the previous approaches densely seek a locally optimal state of the TD structure.
Comparison with Previous Works: One of the factors distinguishing STNet from other approaches is the selective nature of the TD processing. In gradient-based approaches such as [24], the gradient signals, which are computed with respect to the input image rather than the weight parameters, are deployed densely downward to the input layer. Deconvnet [29] was proposed to reverse the same type and extent of processing as the feedforward pass, originally for visualization purposes. The Feedback model [3] similarly defines a dense feedback structure that is iteratively optimized using a secondary loss function to maintain the label predictability of the entire network. The recent MWP model [30] likewise remains faithful to all the aforementioned models with respect to the extent of the TD processing. In contrast to all of these, the TD structure of STNet remains fully inactive except for a small portion that leads to the attended region of the input image. We empirically verify that on average less than 0.3% of the TD structure is active, while the localization accuracy improves. This implies that comparable localization results can be obtained with faster speed and less wasted computation in the TD processing stream. Furthermore, it is worth noting that ST-AlexNet's localization performance is very close to that of the two other high-capacity models, despite the shallow depth and simplicity of the network architecture.
Qualitative Analysis: The qualitative results provide insights into the strengths and weaknesses of STNet, as illustrated in Fig. 4. Investigating the failed cases, we are able to identify two extreme scenarios: under-selection and over-selection. The under-selection scenario is caused by an inappropriate learned representation or an improper configuration of the TD processing, while the over-selection scenario is mainly due to either multi-instance cases or what we call Correlated Accompanying Object (CAO) cases. A large bounding box enclosing multiple objects is proposed as a result of over-selection. Neither stream of STNet is tuned to systematically deal with these extreme scenarios.
Class Hypothesis Visualization
We show that gating node activities can be further processed to visualize the salient regions of input images for an activated category label. Following an experimental setup similar to that of the localization task given in Table 1, an attention-driven Class Hypothesis (CH) map is created from the transformed thresholded attention map. We simply increment by one the pixel values inside the accumulated-RF box centered at each non-zero pixel of the attention map. Once iterated over all non-zero pixels, the CH map is smoothed using a Gaussian filter with standard deviation σ = 6. Fig. 5 qualitatively illustrates the performance of STNet in highlighting the salient parts of the input image once the TD processing stream is initiated with the ground truth category label. Further details regarding the visualization experimental setup are given in the supplementary document.
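A sketch of the CH-map construction (the attention points are assumed to be already transformed to input coordinates, and the RF size is a placeholder):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def class_hypothesis_map(points, rf, in_size):
    """Increment pixels inside the accumulated-RF box centered on each active
    attention location, then smooth with a Gaussian of sigma = 6."""
    ch = np.zeros((in_size, in_size))
    half = rf // 2
    for y, x in points:
        ch[max(y - half, 0):y + half, max(x - half, 0):x + half] += 1.0
    return gaussian_filter(ch, sigma=6)

ch = class_hypothesis_map([(100, 100), (104, 108), (96, 112)], rf=19, in_size=227)
print(ch.max())
```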
Comparison of convolutional networks: We observed in Sec. 4.2 that the localization performance of ST-GoogleNet surpasses both ST-AlexNet and ST-VGGNet. The qualitative experimental results using CH maps in Fig. 5 shed further light on the inherent nature of this discrepancy. Both AlexNet and VGGNet benefit from coherently increasing RF sizes along the visual hierarchy, such that at each layer all hidden nodes have a similar RF size. Consequently, the scale at which features are extracted changes coherently from layer to layer. GoogleNet, on the other hand, always takes advantage of intermixed multi-scale feature extraction at each layer. Additionally, 1x1 convolutional layers act as high-capacity parametrized modules by which any affine mixture of features, free from the spatial domain, can be computed. In the TD processing, such layers are treated as fully-connected layers in all experiments.

Figure 6. The critical role of the second stage of selection is illustrated using CH visualization. In the top row of each section, images are presented with boxes for the ground truth (blue), full-STNet predictions (green), and second-stage-disabled predictions (red). In the second and third rows of each section, CH maps from the full and partly disabled STNet are given, respectively.
Context Interference: The learned representation of convolutional networks relies strongly on the background context over which the category instances are superimposed for the category label prediction of the input image. This is expected, since the learning algorithm does not impose any form of spatial regularization during the training phase. Fig. 6 depicts the results of the experiment in which we purposefully deactivated the second stage of the selection process. Furthermore, among the set of winners at the end of the first stage, only the one with the highest PS activity is kept in the set and the rest are excluded. In this way, there is always one winner at each FC layer. Deactivating the second stage on the convolutional layers deteriorates the capability of STNet to sharply highlight the salient regions relevant to objects in the TD processing stream. The results imply that the learned representation relies heavily on features collected across the entire image, regardless of the ground truth. The SC mode of the second stage helps STNet visualize coherent and sharply localized confident regions. The CH visualization demonstrates the essential role of the second stage in dealing with redundant and disturbing context noise in the localization task.
Correlated Accompanying Objects: The other shortcoming of the learned representation emphasized by CH visualization is that the BU processing puts high confidence on features collected from regions belonging to correlated accompanying objects. These happen to co-occur extremely frequently with the ground truth objects in the training set on which the convolutional networks are pre-trained. Similar to the previous experiment, the modified version of the first stage is used for the FC layers, while the convolutional layers benefit from the original two-stage selection process. Fig. 7 reveals how STNet misleadingly localizes, with the highest confidence, the accompanying object that highly correlates with the ground truth object. As soon as the visual representation confidently relates the correlated accompanying object to the true category label, over-selection in the bounding box prediction becomes inevitable. CAO, in addition to the multi-instance scenario, accounts for most of the over-selection phenomenon in the localization task. We attribute these two sources of over-selection to the pre-trained representation obtained from the backpropagation learning algorithm.

Figure 7. We demonstrate using ST-VGGNet the confident region of the accompanying object that highly correlates with the true object category. In each section, the top row contains images with the ground truth (blue) and predicted (red) boxes. In the bottom rows, CH maps highlight the most confident salient regions.
Figure 1. STNet consists of both BU and TD processing streams.
Figure 2. Schematic illustration of the sequence of interactions between the BU and TD processing. Three stages of the attentive selection process are illustrated.
Figure 3. Modular diagram of the interactions between various blocks of processing in both the BU and TD streams.
1: NEG(PS) = {s | s ∈ PS(g), s ≤ 0}
2: POS(PS) = {s | s ∈ PS(g), s > 0}
3: SUM(NEG) = Σ_{n_i ∈ NEG(PS)} n_i
4: buffer = SUM(NEG)
5: i = 0
6: while i ≤ |POS(PS)| and buffer < ε do
7:   buffer += SORT(POS(PS))[i]
8:   i += 1
9: end while
10: return SORT(POS(PS))[i − 1]
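A runnable Python transcription of this routine is given below (our own reconstruction, not the authors' code; the ascending sort order and the stopping threshold `eps`, whose symbol was lost in the extraction above, are assumptions). The routine accumulates the negative mass of the non-positive scores and then adds sorted positive scores until the running sum reaches the threshold, returning the last score added.

```python
def selection_threshold(ps, eps=0.0):
    """Compute the activity threshold for the selection stage.

    `ps` is an iterable of PS activities; scores strictly above the
    returned value survive the stage. `eps` is the (assumed) stopping
    threshold for the accumulated sum.
    """
    neg = [s for s in ps if s <= 0]
    pos = sorted(s for s in ps if s > 0)  # ascending order assumed
    buffer, i = sum(neg), 0
    while i < len(pos) and buffer < eps:
        buffer += pos[i]
        i += 1
    return pos[i - 1] if i > 0 else None  # None: no positive score suffices

# Example: the positive scores must outweigh the negative mass (-0.7)
print(selection_threshold([-0.5, -0.2, 0.1, 0.3, 0.6]))  # -> 0.6
```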
Figure 4. Illustration of the predicted bounding boxes in comparison to the ground truth for ImageNet images. In the top section, STNet is successful in localizing the ground truth objects. The bottom section, on the other hand, demonstrates the failed cases. The top, middle, and bottom rows of each section depict the bounding boxes from the ground truth, ST-VGGNet, and ST-GoogleNet, respectively.
Figure 5. Demonstration of the attention-driven class hypothesis maps for ImageNet images. In each top or bottom section, the rows from top to bottom show ground truth boxes on RGB images, the CH map from ST-VGGNet, and the CH map from ST-GoogleNet, respectively.
Conclusion

We proposed an innovative framework consisting of Bottom-Up and Top-Down streams of information processing for the task of object localization. We formulated the Top-Down processing as a cascading series of local attentive selection processes, each consisting of three stages: first, inference reduction; second, similarity grouping; and third, attention signal propagation. We demonstrated experimentally the efficiency, power, and speed of STNet in localizing objects on the ImageNet dataset, supported by significantly improved quantitative results. Class Hypothesis maps were introduced to qualitatively visualize attention-driven, class-dependent salient regions. Having investigated the difficulties of STNet in object localization, we believe the visual representation of the Bottom-Up stream is one of the shortcomings of this framework. The significant role of the selective Top-Down processing in STNet could be foreseen as a promising approach applicable in a similar fashion to other challenging computer vision tasks.
| [] |
[
"Mixed, Multi-color, and Bipartite Ramsey Numbers Involving Trees of Small Diameter",
"Mixed, Multi-color, and Bipartite Ramsey Numbers Involving Trees of Small Diameter"
] | [
"Jeremy F Alm ",
"Nicholas Hommowun ",
"Aaron Schneider "
] | [] | [] | In this paper we study Ramsey numbers for trees of diameter 3 (bistars) vs., respectively, trees of diameter 2 (stars), complete graphs, and many complete graphs. In the case of bistars vs. many complete graphs, we determine this number exactly as a function of the Ramsey number for the complete graphs. We also determine the order of growth of the bipartite k-color Ramsey number for a bistar. | null | [
"https://arxiv.org/pdf/1403.0273v2.pdf"
] | 119,585,994 | 1403.0273 | dbb3029a42480c0ce70cba2fd1766fd3519f0060 |
Mixed, Multi-color, and Bipartite Ramsey Numbers Involving Trees of Small Diameter
Jeremy F Alm
Nicholas Hommowun
Aaron Schneider
Mixed, Multi-color, and Bipartite Ramsey Numbers Involving Trees of Small Diameter
Keywords: Ramsey numbers of trees · bipartite Ramsey numbers · stars · bistars
In this paper we study Ramsey numbers for trees of diameter 3 (bistars) vs., respectively, trees of diameter 2 (stars), complete graphs, and many complete graphs. In the case of bistars vs. many complete graphs, we determine this number exactly as a function of the Ramsey number for the complete graphs. We also determine the order of growth of the bipartite k-color Ramsey number for a bistar.
Introduction
Background
In this paper we investigate Ramsey numbers, both classical and bipartite, for trees vs. other graphs. Trees have been studied less than other graphs, although there have been a number of papers in the last few years. Some general results applying to all trees are known, such as the following result of Gyárfás and Tuza [4].

Theorem 1. Let T_n be a tree with n edges. Then R_k(T_n) ≤ (n − 1)(k + √(k(k − 1))) + 2.
More recently, various researchers have studied particular trees of small diameter. Burr and Roberts [3] completely determine the Ramsey number R(S_{n_1}, ..., S_{n_i}) for any number of stars, i.e., trees of diameter 2. Boza et al. [2] determine R(S_{n_1}, ..., S_{n_i}, K_{m_1}, ..., K_{m_j}) exactly as a function of R(K_{m_1}, ..., K_{m_j}). Bahls and Spencer [1] study R(C, C), where C is a caterpillar, i.e., a tree whose non-leaf vertices form a path. They prove a general lower bound, and prove exact results in several cases, including "regular" caterpillars, in which all non-leaf vertices have the same degree.
We will study bistars (i.e. trees of diameter 3) vs. stars and bistars vs. complete graphs in Section 2, bistars vs. many complete graphs in Section 3, and bistars vs. bistars in bipartite graphs in Section 4.
Notation
For graphs G_1, ..., G_n, let R(G_1, ..., G_n) denote the least integer N such that any edge-coloring of K_N in n colors must contain, for some 1 ≤ i ≤ n, a monochromatic G_i in the i-th color. Let S_n denote the (n + 1)-vertex graph consisting of a vertex v of degree n and n vertices of degree 1 (a star). Let B_{k,m} denote the (k + m)-vertex graph with a vertex v of degree k, a vertex w incident to v of degree m, and k + m − 2 vertices of degree 1 (a bistar). We will call the edge vw the spine of B_{k,m}. (Note that some authors refer to the set of vertices {v, w} as the spine.) We will depict the spine of a bistar with a double-struck edge; see Figure 1. For a graph G whose edges are colored red and blue, and for vertices v and w, if v and w are incident by a red edge, we will say (for the sake of brevity) that w is a "red neighbor" of v. Let deg_red(v) denote the number of red neighbors of v, and let

∆_red(G) = max{deg_red(v) : v ∈ G} and δ_red(G) = min{deg_red(v) : v ∈ G}.
In Section 2 we will make use of cyclic colorings. Let K_N have vertex set {0, 1, 2, ..., N − 1}, and let R ⊆ Z_N \ {0} with R = −R, i.e., R is closed under additive inverse. Define a coloring of K_N by: uv is colored red if u − v ∈ R, and blue otherwise.
Cyclic colorings are computationally nice. For instance, it is not hard to show that if R ⊆ R + R, then any two vertices v and w incident by a red edge must share a red neighbor. We will need this fact in the proof of Theorem 3.
Mixed 2-Color Ramsey Numbers
First we consider bistars vs. stars. We have the following easy upper bound.
Theorem 2. R(B_{k,m}, S_n) ≤ k + m + n − 1.
Proof. Let N = k + m + n − 1, and let the edges of K_N be colored in red and blue. Suppose this coloring contains no blue S_n. Then every red edge is the spine of a red B_{k,m}, as follows. If there is no blue S_n, then ∆_blue ≤ n − 1, and hence δ_red ≥ (N − 1) − (n − 1) = k + m − 1. Let the edge uv be colored red. Then both u and v have (k − 1) + (m − 1) red neighbors besides each other. Even if these sets of neighbors coincide, we may select k − 1 leaves for u and m − 1 leaves for v, giving a red B_{k,m}.
The following lower bound uses some cyclic colorings.
Theorem 3. R(B_{k,m}, S_n) > ⌊(k + m)/2⌋ + n for k, m ≥ 4.
Proof. First let k + m be odd, and let N = ⌊(k + m)/2⌋ + n. Let G be any (n − 1)-regular graph on N vertices. Consider the edges of G to be the blue edges, and replace all non-edges of G with red edges, so that the resulting K_N is ⌊(k + m)/2⌋-regular for red. Clearly, this coloring admits no blue S_n. Consider the red edge set. If an edge uv is colored red, then u and v combined have at most k + m − 3 red neighbors besides each other, which is not enough to supply the needed k − 1 red leaves for u and m − 1 red leaves for v.

Now let k + m be even, and N = (k + m)/2 + n. We seek a subset R ⊆ Z_N that is symmetric (R = −R), of size (k + m)/2, and satisfies R ⊆ R + R. Thus each vertex will have red degree (k + m)/2, but no red edge uv can be the spine of a red B_{k,m}, since u and v will always have a common red neighbor. There are two cases:

Case (i): (k + m)/2 is even. Let R′ = {2} ∪ {2ℓ + 1 : 1 ≤ ℓ ≤ (k + m − 4)/4}, and let R := R′ ∪ (−R′). It is easy to check that R ⊆ R + R. Setting B = Z_N \ (R ∪ {0}), we have |B| = n − 1, and so the cyclic coloring of K_N induced by R and B has no red B_{k,m} and no blue S_n.

Case (ii): (k + m)/2 is odd. Let R′ = {2} ∪ {2ℓ + 1 : 1 ≤ ℓ ≤ (k + m − 6)/4}, and set R := R′ ∪ {(k + m)/2} ∪ (−R′). Again, set B = Z_N \ (R ∪ {0}), and the cyclic coloring of K_N induced by R and B has the desired properties.
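For readers who want to experiment with this construction, the following Python snippet (our own illustration, not part of the paper) builds R for case (i) and checks the three properties used in the proof: R = −R, R ⊆ R + R, and |B| = n − 1.

```python
def cyclic_red_set(k, m, n):
    """Case (i) of Theorem 3: (k+m)/2 even. Returns (N, R, B)."""
    assert (k + m) % 2 == 0 and ((k + m) // 2) % 2 == 0
    N = (k + m) // 2 + n
    Rp = {2} | {2 * l + 1 for l in range(1, (k + m - 4) // 4 + 1)}
    R = {r % N for r in Rp} | {(-r) % N for r in Rp}
    B = set(range(1, N)) - R
    return N, R, B

# Example: k = m = 6 (so (k+m)/2 = 6 is even), n = 5.
N, R, B = cyclic_red_set(6, 6, 5)
assert R == {(-r) % N for r in R}                  # R = -R
assert R <= {(r + s) % N for r in R for s in R}    # R is a subset of R + R
assert len(B) == 5 - 1                             # |B| = n - 1
print(N, sorted(R))                                # -> 11 [2, 3, 5, 6, 8, 9]
```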
Corollary 4. R(B_{n,n}, S_n) > 2n for n ≥ 4.
We conjecture that the lower bound in Corollary 4 is tight; that is, that R(B_{n,n}, S_n) = 2n + 1 for n ≥ 4. We show that this result obtains for n = 4 (but not for n = 3).

Theorem 5. R(B_{3,3}, S_3) = 6.

Proof. For the lower bound, see the critical coloring of K_5 in Figure 2. For the upper bound, suppose there exists a 2-coloring of K_6 with no blue S_3. Then δ_red ≥ 3, so consider the red subgraph G. If G has a vertex of degree 5, the existence of a B_{3,3} is immediate. If G has a vertex of degree 4, then it must have two such vertices u and v. If u ≁ v, then G must look like Figure 3, and one may use any edge incident to u or v as a spine. If u ∼ v, and u and v do not share all three remaining neighbors, then the existence of a B_{3,3} is immediate. So suppose u and v have common neighbors x, y, and z. The only way for G to have degree sequence (4, 4, 3, 3, 3, 3) is for the remaining vertex w to be adjacent to x, y, and z. Then we have a B_{3,3} as indicated in Figure 4. Finally, suppose G is 3-regular. If there is no B_{3,3}, then any two adjacent vertices share a neighbor. It is not hard to see that adjacent vertices cannot share two neighbors in a 3-regular graph on 6 vertices. If any two adjacent vertices share exactly one neighbor, then G can be partitioned into edge-disjoint triangles. But any vertex in such a graph must have even degree, since its degree is twice the number of triangles in which it participates. This is a contradiction, so there must be some adjacent vertices u and v that have no common neighbor. But u and v each have degree three, immediately yielding a B_{3,3}.

Fig. 4. A red B_{3,3}.

Theorem 6. R(B_{4,4}, S_4) = 9.

Proof. The lower bound is given by Theorem 3. For the upper bound, suppose a 2-coloring of K_9 contains no blue S_4. Then δ_red ≥ 5. Let G be the red subgraph. Since G has odd order, there must be at least one vertex v of degree ≥ 6. Suppose v ∼ w. It is easy to see that v and w must have at least two neighbors in common; call them y_1 and y_2. Now v is adjacent to 3 other vertices; call them x_1, x_2, and x_3. There are two remaining vertices, x_4 and x_5. If w is adjacent to either of them, we are done. So suppose w is adjacent to x_1 and x_2. If either x_4 or x_5 is adjacent to v, we are done, so suppose neither x_4 nor x_5 is adjacent to v or to w. Then x_4 (in order to have degree ≥ 5) must be adjacent to y_1 or to y_2. Suppose it is y_1. There are two cases:
1. y_1 ∼ x_5. Then we have a red B_{4,4} as indicated in Figure 5.
2. y_1 ≁ x_5. Then, since deg(y_1) ≥ 5, y_1 ∼ x_i for some i ∈ {1, 2, 3}, and we have a red B_{4,4} as indicated in Figure 6, where |{i, k, ℓ}| = 3.

Consideration of R(B_{5,5}, S_5) leads into rather unpleasant case analysis when trying to reduce the upper bound from that given by Theorem 2. Now we consider bistars vs. complete graphs.
Theorem 7. R(B_{k,m}, K_3) = 2(k + m − 1) + 1.
Proof. For the lower bound, let V_1 and V_2 be two red cliques, each of size k + m − 1, and let every edge between V_1 and V_2 be colored blue.

For the upper bound, let N = 2(k + m − 1) + 1, and give K_N an edge-coloring in red and blue. Suppose there is a vertex v with blue degree at least k + m. If any edge in N_blue(v) is blue, we have a blue triangle. If not, then N_blue(v) is a red clique of size at least k + m, so it contains a red B_{k,m}.

So then suppose that ∆_blue < k + m. It follows that δ_red ≥ k + m − 1. Then every red edge is the spine of some red B_{k,m}. To see this, let uv be colored red. Both u and v each have at least k + m − 2 other red neighbors. Even if these red neighborhoods coincide, there are still k − 1 red leaves for u and m − 1 red leaves for v. Now we extend to arbitrary K_n.
Theorem 8. R(B_{k,m}, K_n) = (k + m − 1)(n − 1) + 1.
Proof. We proceed by induction on n. Theorem 7 provides the base case n = 3.

So assume n > 3, and let R(B_{k,m}, K_{n−1}) ≤ (k + m − 1)(n − 2) + 1. Let N = (k + m − 1)(n − 1) + 1, and consider any edge-coloring of K_N in red and blue. If δ_red ≥ k + m − 1, then every red edge is the spine of a red B_{k,m}, so suppose δ_red ≤ k + m − 2. Then there is a vertex v with blue degree at least (k + m − 1)(n − 2) + 1. By the induction hypothesis, the subgraph induced by N_blue(v) contains either a red B_{k,m} or a blue K_{n−1}. In the latter case, the blue K_{n−1} along with v forms a blue K_n.

For the lower bound, let V_1, ..., V_{n−1} be vertex-disjoint red cliques, each of size k + m − 1. Color all edges among the V_i's blue. Clearly there are no red B_{k,m}'s. Since the blue subgraph forms a Turán graph, there are no blue K_n's.
Mixed Multi-color Ramsey Numbers
In [2], the authors determine R(S_{k_1}, ..., S_{k_i}, K_{n_1}, ..., K_{n_ℓ}) exactly as a function of R(K_{n_1}, ..., K_{n_ℓ}). In [6], Omidi and Raeisi give a shorter proof of this result via the following lemma, whose proof is straight from The Book.
Lemma 9. Let G_1, ..., G_m be connected graphs, let r = R(G_1, ..., G_m) and r′ = R(K_{n_1}, ..., K_{n_ℓ}). If, for n ≥ 2, R(G_1, ..., G_m, K_n) = (r − 1)(n − 1) + 1, then R(G_1, ..., G_m, K_{n_1}, ..., K_{n_ℓ}) = (r − 1)(r′ − 1) + 1.
Proof. Let R = R(G_1, ..., G_m, K_{n_1}, ..., K_{n_ℓ}). For the lower bound, give K_{r′−1} an edge-coloring in colors β_1, ..., β_ℓ that has no copy of K_{n_i} in color β_i. Replace each vertex of K_{r′−1} by a complete graph of order r − 1 whose edges are colored with colors α_1, ..., α_m so that no copy of G_i appears in color α_i. Each edge in the original graph K_{r′−1} expands to a copy of K_{r−1,r−1}, with each edge the same color as the original edge. This shows that R > (r − 1)(r′ − 1).

For the upper bound, let N = (r − 1)(r′ − 1) + 1, and color the edges of K_N in colors α_1, ..., α_m, β_1, ..., β_ℓ. Recolor the edges colored β_1, ..., β_ℓ with a single new color α. Since R(G_1, ..., G_m, K_{r′}) = (r − 1)(r′ − 1) + 1 = N, K_N contains a copy of G_i in color α_i or a copy of K_{r′} in color α. In the former case we are done, so assume the latter obtains. Then consider the clique K_{r′} which is colored α, and return to the original coloring in colors β_1, ..., β_ℓ. Since R(K_{n_1}, ..., K_{n_ℓ}) = r′, some color class β_i contains a copy of K_{n_i}. This concludes the proof.
We will now make use of Lemma 9 to determine R(B_{k,m}, K_{n_1}, ..., K_{n_ℓ}) as a function of R(K_{n_1}, ..., K_{n_ℓ}).
Theorem 10. R(B_{k,m}, K_{n_1}, ..., K_{n_ℓ}) = (k + m − 1)[R(K_{n_1}, ..., K_{n_ℓ}) − 1] + 1.
Proof. From Theorem 8 we have that R(B_{k,m}, K_n) = (k + m − 1)(n − 1) + 1. Note that R(B_{k,m}, K_2) = k + m, so that

R(B_{k,m}, K_2, K_n) = R(B_{k,m}, K_n) = [R(B_{k,m}, K_2) − 1](n − 1) + 1.

Hence we may apply Lemma 9 to get R(B_{k,m}, K_{n_1}, ..., K_{n_ℓ}) = (k + m − 1)[R(K_{n_1}, ..., K_{n_ℓ}) − 1] + 1.
The authors are unsure whether a similar result can be proved for multiple bistars; we leave this as an open problem.
Bipartite Ramsey Numbers
Let G_1 and G_2 be bipartite graphs. Then BR(G_1, G_2) is the least integer N so that any 2-coloring of the edges of K_{N,N} contains either a red G_1 or a blue G_2. In [5], Hattingh and Joubert determine the bipartite Ramsey number for certain bistars:
Theorem 11. Let k, n ≥ 2. Then BR(B_{k,k}, B_{n,n}) = k + n − 1.
We generalize this result slightly.
Theorem 12. Let k ≥ m ≥ 2 and n ≥ ℓ ≥ 2. Then BR(B_{k,m}, B_{n,ℓ}) = k + n − 1.
Proof. The upper bound follows immediately from Theorem 11. The lower bound construction given in Theorem 1 of Hattingh–Joubert for BR(B_{s,s}, B_{t,t}) does not work for us; we need the following construction instead. Let L and R be the partite sets, and let N = k + n − 2 = (k − 1) + (n − 1). Let L = {v_0, v_1, ..., v_{N−1}} and R = {w_0, w_1, ..., w_{N−1}}. Color v_i w_j red if (i − j) mod N ∈ {0, 1, ..., k − 2}, and blue if (i − j) mod N ∈ {k − 1, ..., N − 1}. Then the red subgraph is (k − 1)-regular, hence there is no red B_{k,m}, and the blue subgraph is (n − 1)-regular, hence no blue B_{n,ℓ}.
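The regularity claim in this construction is easy to verify computationally; the short Python check below (our own sanity check, not from the paper) builds the 2-coloring of K_{N,N} and counts red degrees on the left side (the right side is symmetric).

```python
def bipartite_red_degrees(k, n):
    """2-coloring of K_{N,N} from the proof of Theorem 12:
    v_i w_j is red iff (i - j) mod N lies in {0, ..., k - 2}."""
    N = (k - 1) + (n - 1)
    return [sum(1 for j in range(N) if (i - j) % N <= k - 2)
            for i in range(N)]

# Example: k = 4, n = 5. Every left vertex has red degree k - 1 = 3,
# so no vertex can serve as the degree-k end of a red B_{k,m} spine.
print(bipartite_red_degrees(4, 5))  # -> [3, 3, 3, 3, 3, 3, 3]
```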
Corollary 13. Let T_m (resp., T_n) be a tree of diameter at most 3 with maximum degree m (resp., n). Then BR(T_m, T_n) = m + n − 1.
Hattingh and Joubert also prove the following k-color upper bound.
Fig. 1. A bistar, with spine indicated.

Fig. 2. Critical coloring of K_5.

Fig. 3. Configuration of the red subgraph G.

Fig. 5. A red B_{4,4}.

Fig. 6. A red B_{4,4}.
Theorem 14. For k ≥ 2 and m ≥ 3, we have

BR_k(B_{m,m}) = BR(B_{m,m}, ..., B_{m,m}) ≤ k(m − 1) + √((m − 1)²(k² − k) − k(2m − 4)).

Hence BR_k(B_{m,m}) = O(k). We provide a lower bound to get the following result.

Theorem 15. Fix m ≥ 3. Then BR_k(B_{m,m}) = Θ(k).

Proof. We show that BR_k(B_{m,m}) > k(m − 1). Let N = k(m − 1), and consider the following k-coloring of the edges of K_{N,N} in colors c_0, ..., c_{k−1}. Let the partite sets be L = {v_0, ..., v_{N−1}} and R = {w_0, ..., w_{N−1}}. Color edge v_i w_j with color c_ℓ if and only if ℓ ≡ (i − j) mod k. Then each c_ℓ-subgraph is (m − 1)-regular, hence there can be no monochromatic B_{m,m}.
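The same kind of check applies to the k-coloring above; the snippet below (again our own illustration) confirms that every color class is (m − 1)-regular, so no color class contains a vertex of degree m.

```python
from collections import Counter

def color_class_degrees(k, m):
    """k-coloring of K_{N,N} from Theorem 15: edge v_i w_j gets color
    (i - j) mod k. Return, for each left vertex, its per-color degrees."""
    N = k * (m - 1)
    return [Counter((i - j) % k for j in range(N)) for i in range(N)]

# Example: k = 3 colors, m = 4. Every left vertex sees each of the
# three colors exactly m - 1 = 3 times, so no monochromatic B_{4,4}.
print(color_class_degrees(3, 4)[0])  # each color appears 3 times
```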
[1] Bahls, P., Spencer, T.S.: On the Ramsey numbers of trees with small diameter. Graphs Combin. 29(1), 39–44 (2013). DOI 10.1007/s00373-011-1098-y

[2] Boza, L., Cera, M., García-Vázquez, P., Revuelta, M.P.: On the Ramsey numbers for stars versus complete graphs. European J. Combin. 31(7), 1680–1688 (2010). DOI 10.1016/j.ejc.2010.03.009

[3] Burr, S.A., Roberts, J.A.: On Ramsey numbers for stars. Utilitas Math. 4, 217–220 (1973)

[4] Gyárfás, A., Tuza, Z.: An upper bound on the Ramsey number of trees. Discrete Math. 66(3), 309–310 (1987). DOI 10.1016/0012-365X(87)90107-5

[5] Hattingh, J.H., Joubert, E.J.: Some bistar bipartite Ramsey numbers. Graphs Combin. 26(1), 1–7 (2013)

[6] Omidi, G.R., Raeisi, G.: A note on the Ramsey number of stars-complete graphs. European J. Combin. 32(4), 598–599 (2011). DOI 10.1016/j.ejc.2011.01.007
| [] |
[
"Spectrally Adapted Physics-Informed Neural Networks for Solving Unbounded Domain Problems",
"Spectrally Adapted Physics-Informed Neural Networks for Solving Unbounded Domain Problems"
] | [
"Mingtao Xia [email protected] \nDept. of Mathematics\nUCLA\n90095-1555Los AngelesCAUSA\n",
"Lucas Böttcher [email protected] \nDept. of Computational Science and Philosophy\nSchool of Finance and Management\n60322Frankfurt, Frankfurt am MainGermany\n",
"Tom Chou [email protected] \nDept. of Mathematics\nUCLA\n90095-1555Los AngelesCAUSA\n"
] | [
"Dept. of Mathematics\nUCLA\n90095-1555Los AngelesCAUSA",
"Dept. of Computational Science and Philosophy\nSchool of Finance and Management\n60322Frankfurt, Frankfurt am MainGermany",
"Dept. of Mathematics\nUCLA\n90095-1555Los AngelesCAUSA"
] | [] | Solving analytically intractable partial differential equations (PDEs) that involve at least one variable defined on an unbounded domain arises in numerous physical applications. Accurately solving unbounded domain PDEs requires efficient numerical methods that can resolve the dependence of the PDE on the unbounded variable over at least several orders of magnitude. We propose a solution to such problems by combining two classes of numerical methods: (i) adaptive spectral methods and (ii) physics-informed neural networks (PINNs). The numerical approach that we develop takes advantage of the ability of physics-informed neural networks to easily implement high-order numerical schemes to efficiently solve PDEs and extrapolate numerical solutions at any point in space and time. We then show how recently introduced adaptive techniques for spectral methods can be integrated into PINN-based PDE solvers to obtain numerical solutions of unbounded domain problems that cannot be efficiently approximated by standard PINNs. Through a number of examples, we demonstrate the advantages of the proposed spectrally adapted PINNs in solving PDEs and estimating model parameters from noisy observations in unbounded domains. | 10.1088/2632-2153/acd0a1 | [
"https://export.arxiv.org/pdf/2202.02710v2.pdf"
] | 246,634,214 | 2202.02710 | 5e8fb2f94af04a63eb7816ce9bc2985b347cdde8 |
Spectrally Adapted Physics-Informed Neural Networks for Solving Unbounded Domain Problems
28 Feb 2023
Mingtao Xia [email protected]
Dept. of Mathematics
UCLA
90095-1555 Los Angeles, CA, USA
Lucas Böttcher [email protected]
Dept. of Computational Science and Philosophy
School of Finance and Management
60322 Frankfurt am Main, Germany
Tom Chou [email protected]
Dept. of Mathematics
UCLA
90095-1555 Los Angeles, CA, USA
Spectrally Adapted Physics-Informed Neural Networks for Solving Unbounded Domain Problems
Keywords: physics-informed neural networks · PDE models · spectral methods · adaptive methods · unbounded domains
Solving analytically intractable partial differential equations (PDEs) that involve at least one variable defined on an unbounded domain arises in numerous physical applications. Accurately solving unbounded domain PDEs requires efficient numerical methods that can resolve the dependence of the PDE on the unbounded variable over at least several orders of magnitude. We propose a solution to such problems by combining two classes of numerical methods: (i) adaptive spectral methods and (ii) physics-informed neural networks (PINNs). The numerical approach that we develop takes advantage of the ability of physics-informed neural networks to easily implement high-order numerical schemes to efficiently solve PDEs and extrapolate numerical solutions at any point in space and time. We then show how recently introduced adaptive techniques for spectral methods can be integrated into PINN-based PDE solvers to obtain numerical solutions of unbounded domain problems that cannot be efficiently approximated by standard PINNs. Through a number of examples, we demonstrate the advantages of the proposed spectrally adapted PINNs in solving PDEs and estimating model parameters from noisy observations in unbounded domains.
Introduction
The use of neural networks as universal function approximators [1,2] has led to various applications in simulating [3,4] and controlling [5,6,7,8] physical, biological, and engineering systems. Training neural networks in function-approximation tasks is typically realized in two steps. In the first step, an observable u_s associated with each distinct sample or measurement point (x, t)_s ≡ (x_s, t_s), s = 1, 2, ..., n, is used to construct the corresponding loss function (e.g., the mean squared loss) in order to find representations for the constraint u_s ≡ u(x_s, t_s) or to infer the equations u obeys. In many physical settings, the variables x and t denote the space and time variables, respectively. Thus, the data points (x, t)_s can in many cases be classified into two groups, {x_s} and {t_s}, and the information they contain may manifest differently in an optimization process. In the second step, the loss function is minimized by backpropagating gradients to adjust the neural network parameters Θ. If the number of observations n is limited, additional constraints may help to make the training process more effective [9].
To learn and represent the dynamics of physical systems, the constraints used in physicsinformed neural networks (PINNs) [3,4] provide one possible option of an inductive bias in the training process. The key idea underlying PINN-based training is that the constraints imposed by the known equations of motion for some parts of the system are embedded in the loss function. Terms in the loss function associated with the differential equation can be evaluated using a neural network, which could be trained via backpropagation and automatic differentiation. In accordance with the distinction between Lagrangian and Hamiltonian formulations of the equations of motion in classical mechanics, physics-informed neural networks can be also divided into these two categories [10,11,12]. Another formulation of PINNs uses variational principles [13] in the loss function to further constrain the types of functions used. Such variational PINNs rely on finite element (FE) methods to discretize partial differential equation (PDE)-type constraints.
Many other PINN-based numerical algorithms have been recently proposed. A spacetime domain decomposition PINN method was proposed for solving nonlinear PDEs [14]. In other variants, physics-informed Fourier neural operators have also been proposed to learn the underlying PDE models [15]. In general, PINNs link modern neural network methods with traditional complex physical models and allow algorithms to efficiently use higher-order numerical schemes to (i) solve complex physical problems with high accuracy, (ii) infer model parameters, and (iii) reconstruct physical models in data-driven inverse problems [3]. Therefore, PINNs have become increasingly popular as they can avoid certain computational difficulties encountered when using traditional FE/FD methods to find solutions to physics models.
The broad utility of PINNs is reflected in their application to aerodynamics [16], surface physics [17], power systems [18], cardiology [19], and soft biological tissues [20]. When implementing PINN algorithms to find functions in an unbounded system, the unbounded variables cannot be simply normalized, precluding reconstruction of solutions outside the range of data. Nonetheless, many problems in nature are associated with long-ranged potentials [21,22] (i.e., unbounded spatial domains) and processes that are subject to algebraic damping [23] (i.e., unbounded temporal domains), and thus need to be solved in unbounded domains. For example, to capture the oscillatory and decaying behavior at infinity of the solution to Schrödinger's equation, efficient numerical methods are required in the unbounded domain R [24]. As another example, in structured cellular proliferation models in mathematical biology, efficient unbounded domain numerical methods are required to detect and better resolve possible blow-up in mean cell size [25,26]. Finally, in solid-state physics, long-range interactions [27,28] require algorithms tailored for unbounded domain problems to accurately simulate particle interactions over long distances.
Solving unbounded domain problems is thus a key challenge in various fields that cannot be addressed with standard PINN-based solvers. To efficiently solve PDEs in unbounded domains, we will treat the information carried by the x_s data using spectral decompositions of u in the x variable. Typically, a spatial initial condition of the desired solution is given, and some spatial regularity is assumed from the underlying physical process. As a consequence, we suppose that we can use a spectral expansion in x to record spatial information. On the other hand, a solution's behavior in time t is unknown, and one still has to numerically step forward in time to obtain the solution. Thus, we combine PINNs with spectral methods and propose a spectrally adapted PINN (s-PINN) method that can also utilize recently developed adaptive function expansion techniques [29,30].
In contrast to traditional numerical spectral schemes that can only furnish solutions at discrete, predetermined timesteps, our approach uses time t as an input variable into the neural network combined with the PINN method to define a loss function, which enables (i) easy implementation of high-order Runge-Kutta schemes to relax the constraint on timesteps and (ii) easy extrapolation of the numerical solution at any time. However, our approach is distinct from that taken in standard PINN, variational-PINN, or physics-informed neural operator approaches. We do not input x into the network or try to learn u(x) as a composition of, e.g., Fourier neural operators; instead, we assume that the function can be approximated by a spectral expansion in x with appropriate basis functions. Rather than learning the explicit spatial dependence directly, we train the neural network to learn the time-dependent expansion coefficients. Our main contributions include (i) integrating spectral methods into multi-output neural networks to approximate the spectral expansions of functions when partial information is available, (ii) incorporating recently developed adaptive spectral methods in our s-PINNs, and (iii) presenting explicit examples illustrating how s-PINNs can be used to solve unbounded domain problems, recover spectral convergence, and more easily solve inverse-type PDE inference problems. We show how s-PINNs provide a unified, easyto-implement method for solving PDEs and performing parameter-inference given noisy observation data and how complementary adaptive spectral techniques can further improve efficiency, especially for solving problems in unbounded domains.
In Sec. 2, we show how neural networks can be combined with modern adaptive spectral methods to outperform standard neural networks in function approximation tasks. As a first application, we show in Sec. 3 how efficient PDE solvers can be derived from spectral PINN methods. In Sec. 4, we discuss another application that focuses on reconstructing underlying physical models and inferring model parameters given observational data. In Sec. 5, we summarize our work and discuss possible directions for future research.
A summary of the main variables and parameters used in this study is given in Table 1. Our source codes are publicly available at https://gitlab.com/ComputationalScience/spectrally-adapted-pinns.
Combining Spectral Methods with Neural Networks
Table 1. Overview of the main variables and parameters used in this study.

φ^β_{i,x_L}(x) ≔ φ_i(β(x − x_L)): scaled and translated basis function
x_L: translation of the basis functions φ^β_{i,x_L} ≔ φ_i(β(x − x_L))
u^β_{N,x_L}: spectral expansion of order N generated by the neural network, u^β_{N,x_L} = Σ_{i=0}^N w^β_{i,x_L} φ_i(β(x − x_L))
F(u^β_{N,x_L}): frequency indicator for the spectral expansion u^β_{N,x_L}
Ĥ^β_{i,x_L}: generalized Hermite function of order i, scaling factor β, and translation x_L
P^β_{N,x_L}: function space spanned by the first N + 1 generalized Hermite functions, P^β_{N,x_L} ≔ span{Ĥ^β_{i,x_L}}_{i=0}^N
q: scaling factor (β) adjustment ratio
ν: threshold for adjusting the scaling factor β
ρ, ρ_0: thresholds for increasing (decreasing) the expansion order N
γ: ratio for adjusting ρ

In this section, we first introduce the basic features of function approximators that rely on neural networks and spectral methods designed to handle variables that are defined in unbounded domains. In a dataset (x_s, t_s, u_s), s ∈ {1, ..., n}, the x_s are values of the sampled "spatial" variable x, which can be defined in an unbounded domain. We will also assume that our problem is defined within a finite time horizon, so that the t_s are time points restricted to a bounded domain and are thus normalizable. One central goal is to approximate the constraint u_s ≔ u(x_s, t_s) by computing the function u(x, t) and the equation it obeys. Our key assumption is that the solution's behavior in x can be represented by a spectral decomposition, while u's behavior in t remains unknown and is to be learned from the neural network. This is achieved by isolating the possibly unbounded spatial variables x from the bounded variables t by expressing u in terms of suitable basis functions in x with time-dependent weights. As indicated in Fig. 1(a), we approximate u_s using

u_s ≔ u(x_s, t_s) ≈ u_N(x_s, t_s) ≔ Σ_{i=0}^N w_i(t_s) φ_i(x_s),    (1)
where {φ_i}_{i=0}^N are suitable basis functions that can be used to approximate u in an unbounded domain (see Fig. 1(b) for a schematic of a basis function φ_i(x) that decays with x). Examples of such basis functions include the generalized Laguerre functions on R⁺ and the generalized Hermite functions on R [31]. In addition to being defined on an unbounded domain, spectral expansions allow high-accuracy [32] calculations with errors that decay exponentially (spectral convergence) in space if the target function u(x, t) is smooth. Figure 1(c) shows a schematic of our proposed spectrally adapted PINN algorithm. The variable x is directly fed into the basis functions φ_i instead of being used as an input to the neural network. If one wishes to connect the output u_N(x, t; Θ) of the neural network to the solution of a PDE, one has to include derivatives of u with respect to x and t in the loss function L. Derivatives that involve the variable x can be easily and explicitly calculated by taking derivatives of the basis functions with high accuracy, while derivatives with respect to t can be obtained via automatic differentiation [33,34].
If a function u can be written in terms of a spectral expansion in some dimensions (e.g., x in Eq. 1) with appropriate spectral basis functions, we can approximate u using a multi-output neural network by solving the corresponding least-squares optimization problem

min_Θ Σ_s |u_N(x_s, t_s; Θ) − u_s|²,  u_N(x, t; Θ) = Σ_{i=0}^N w_i(t; Θ) φ_i(x),    (2)

where Θ is the hyperparameter set of a neural network that outputs the t-dependent vector of weights w_i(t; Θ). This representation will be used in the appropriate loss function depending on the application. The neural network can achieve arbitrarily high accuracy in the minimization of the loss function if it is deep enough and contains sufficiently many neurons in each layer [35]. Since the solution's spatial behavior has been approximated by a spectral expansion, which can achieve high accuracy with proper φ_i, we shall show that solving Eq. 2 can be more accurate and efficient than directly fitting u_s with a neural network that does not use a spectral expansion. As a motivating example, we compare the approximation error of a neural network which is fed both x_s and t_s with that of the s-PINN method in which only the t_s are inputted, but with the information contained in the x_s imposed on the solution via basis functions of x. We show that taking advantage of the prior knowledge of the x-data greatly improves training efficiency and accuracy. All neural networks that we use in our examples are based on fully connected linear layers with ReLU activation functions. Weights in each layer are initially distributed according to a uniform distribution U(−√a, √a), where a is the inverse of the number of input features. To normalize hidden-layer outputs, we apply the batch normalization technique [36]. Neural-network parameters are optimized using stochastic gradient descent.
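To make the architecture concrete, here is a minimal PyTorch sketch of such a multi-output network (our own illustration; the layer sizes and all names are assumptions, not taken from the paper's released code). The network maps a time t to the weight vector (w_0(t; Θ), ..., w_N(t; Θ)), and u_N is assembled from precomputed basis values φ_i(x_s).

```python
import torch
import torch.nn as nn

class SpectralMultiOutputNet(nn.Module):
    """Maps times t (shape [batch, 1]) to expansion weights w_i(t; Theta)."""
    def __init__(self, n_modes, width=10, depth=4):
        super().__init__()
        layers, d_in = [], 1
        for _ in range(depth - 1):
            layers += [nn.Linear(d_in, width), nn.BatchNorm1d(width), nn.ReLU()]
            d_in = width
        layers.append(nn.Linear(d_in, n_modes))  # one output per basis function
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        return self.net(t)  # shape [batch, n_modes]

def u_N(model, t, phi_x):
    """Assemble u_N(x_s, t_s) = sum_i w_i(t_s) phi_i(x_s).
    phi_x[s, i] = phi_i(x_s) is precomputed at the sample points."""
    w = model(t)                   # [batch, n_modes]
    return (w * phi_x).sum(dim=1)  # [batch]

# Usage sketch: fit to data (x_s, t_s, u_s) with an MSE loss (Eq. 4).
# model = SpectralMultiOutputNet(n_modes=10)
# optimizer = torch.optim.SGD(model.parameters(), lr=5e-4)
# loss = ((u_N(model, t_batch, phi_batch) - u_batch) ** 2).mean()
# loss.backward(); optimizer.step()
```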
Example 1: Function approximation
Consider approximating the function

u(x, t) = [8x sin(3x) / (x² + 4)²] t,    (3)

which decays algebraically as u(x → ∞, t) ∼ t/|x|³ when |x| → ∞. To numerically approximate Eq. 3, we choose the loss function to be the mean-squared error

MSE = (1/n) Σ_{s=1}^n |u_N(x_s, t_s) − u_s|².    (4)
A standard neural network approach is applied by inputting both x_s and t_s into a 5-layer, 10-neuron-per-layer network defined by hyperparameters Θ̃ to find a numerical approximation ũ_N(x_s, t_s) ≔ ũ(x_s, t_s; Θ̃) by minimizing Eq. 4 with respect to Θ̃ (the ũ, Θ̃ notation refers to hyperparameters in the non-spectral neural network).

To apply a spectral multi-output neural network to this problem, we need to choose an appropriate spectral representation of the spatial dependence of Eq. 3, in the form of Eq. 2. In order to capture the algebraic decay at infinity as well as the oscillatory behavior resulting from the sin(3x) term, we start from the modified mapped Gegenbauer functions (MMGFs) [37]
R^{λ,β}_i(x) = (1 + (βx)²)^{−(λ+1)/2} C^λ_i(βx/√(1 + (βx)²)),  x ∈ R,    (5)

where C^λ_i(·) is the Gegenbauer polynomial of order i. At infinity, the MMGFs decay as R^{λ,β}_i(x) ∼ sign(x)^i [(2λ)^{(i)}/i!] (1 + (βx)²)^{−(λ+1)/2}, where (2λ)^{(i)} is the i-th rising factorial of 2λ. A suitable basis φ_i needs to include functions that decay more slowly than x^{−3}. If we choose β = 1/2 and the special case λ = 0, the basis function is defined as φ_i(x) = R^{0,β}_i(x) ≡ (1 + (βx)²)^{−1/2} T_i(βx/√(1 + (βx)²)), where the T_i are the Chebyshev polynomials. We thus use

u_N(x_s, t_s; Θ) = Σ_{i=0}^{N=9} w_i(t_s; Θ) R^{0,β}_i(x_s)    (6)
in Eq. 4 and use a 4-layer neural network with 10 neurons per layer to learn the coefficients {w_i(t; Θ)}_{i=0}^9 by minimizing the MSE (Eq. 4) with respect to Θ. The total numbers of parameters for both the 4-layer spectral multi-output neural network and the normal 5-layer neural network are the same. The training set and the testing set each contain n = 200 pairs of values (x, t)_s = (x_s, t_s), where the x_s are sampled from the Cauchy distribution, x_s ∼ C(12, 0), and t_s ∼ U(0, 1). For each pair (x_s, t_s), we find u_s = u(x_s, t_s) using Eq. 3. Clearly, x_s is sampled from the unbounded domain R and cannot be normalized (the expectation and variance of the Cauchy distribution do not exist). We set the learning rate η = 5 × 10⁻⁴ and plot the training and testing MSEs (Eq. 4) as a function of the number of training epochs in Fig. 2. Figures 2(a) and (b) show that the spectral multi-output neural network yields smaller errors since it naturally and efficiently captures the oscillatory and decaying features of the underlying function u in Eq. 3. Directly fitting u ≈ ũ leads to over-fitting on the training set, which does nothing to reduce the testing error. It is therefore important to take advantage of the data structure, in this case using the spectral expansion to represent the function's known oscillations and decay as x → ∞. In this and subsequent examples, all computations are performed using Python 3.8.10 on a laptop with a 4-core Intel® i7-8550U CPU @ 1.80 GHz.

Figure 2. Example 1: Function approximation. Approximation of the target function Eq. 3 using both standard neural networks and a spectral multi-output neural network that learns the coefficients w_i(t; Θ) in the spectral expansion Eq. 1. Comparison of the approximation error using a spectral multi-output neural network (red) with the error incurred when using a standard neural-network function approximator (black). Here, both the spectral and non-spectral function approximators use the same number of parameters, but the spectral multi-output neural network converges much faster on the training set and has a smaller testing error than the standard neural network. (a) The training curve of the spectral multi-output neural network decreases much faster than that of the standard neural network. (b) Since the spectral multi-output neural network is better at fitting the data by taking advantage of the spectral expansion in x, its testing error is also much smaller and decreases faster.
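As an aside, the λ = 0 MMGF basis is straightforward to evaluate numerically; the sketch below (our own illustration, not from the paper's repository) uses scipy's Chebyshev evaluation and can precompute the matrix phi_x[s, i] = φ_i(x_s) needed by the multi-output network sketched above.

```python
import numpy as np
from scipy.special import eval_chebyt

def mmgf_basis(x, n_modes, beta=0.5):
    """Evaluate phi_i(x) = (1 + (beta x)^2)^{-1/2} T_i(beta x / sqrt(1 + (beta x)^2))
    for i = 0, ..., n_modes - 1 (the lambda = 0 MMGFs of Eq. 5)."""
    x = np.asarray(x, dtype=float)
    r = np.sqrt(1.0 + (beta * x) ** 2)
    mapped = beta * x / r  # lies in (-1, 1), even as |x| -> infinity
    return np.stack([eval_chebyt(i, mapped) / r for i in range(n_modes)], axis=-1)

# phi_i decays like 1/|x| at infinity, so products with the time-dependent
# weights can represent algebraic tails such as t/|x|^3 in Eq. 3.
phi = mmgf_basis(np.array([0.0, 1.0, 100.0]), n_modes=10)
print(phi.shape)  # -> (3, 10)
```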
Application to Solving PDEs
In this section, we show that spectrally adapted neural networks can be combined with physics-informed neural networks (PINNs) which we shall call spectrally adapted PINNs (s-PINNs). We apply s-PINNs to numerically solve PDEs, and in particular, spatiotemporal PDEs in unbounded domains for which standard PINN approaches cannot be directly applied.
Although we mainly focus on solving spatiotemporal problems, s-PINNs are also applicable to other types of PDEs.
Again, we assume that the problem is defined over a finite time horizon t, while the spatial variable x may be defined in an unbounded domain. Assuming the solution's asymptotic behavior in x is known, we approximate it by a spectral expansion in x with suitable basis functions (e.g., the MMGFs in Example 1 for describing algebraic decay at infinity). Assuming M is an operator that involves only the spatial variable x (e.g., ∂_x, ∂²_x, etc.), we can represent the solution to the spatiotemporal PDE ∂_t u = M[u](x, t) by the spectral expansion in Eq. 2, with expansion coefficients {w_i(t; Θ)} to be learned by a neural network with hyperparameters Θ. If the solution's behavior in both x and t were known and one could find proper basis functions in both the x and t directions, then one could use a spectral expansion in both x and t to solve the PDE directly, without time-stepping. However, it is often the case that the time dependence is unknown and u(x, t) needs to be solved step-by-step in time.

As in standard PINNs, we use a high-order Runge-Kutta scheme to advance time by uniform timesteps ∆t. What distinguishes our s-PINNs from standard PINNs is that only the intermediate times t_s between timesteps are defined as inputs to the neural network, while the outputs contain global spatial information (the spectral expansion coefficients), as shown in Fig. 1(c). Over a longer time scale, the optimal basis functions in the spectral expansion Eq. 2 may change; therefore, one can use the recently developed adaptive spectral methods proposed in [29,30]. Using s-PINNs to solve PDEs has the advantages that they can (i) accurately represent spatial information via spectral decomposition, (ii) convert solving a PDE into an optimization and data-fitting problem, (iii) easily implement high-order, implicit schemes to advance time with high accuracy, and (iv) allow the use of recently developed spectral-adaptive techniques that dynamically find the most suitable basis functions.
The approximated solution to the PDE ∂_t u = M[u](x, t) can be written at discrete timesteps t_{j+1} − t_j = ∆t as

u_N(x, t_{j+1}; Θ_{j+1}) = Σ_{i=0}^N w_i(t_{j+1}; Θ_{j+1}) φ_i(x),    (7)

where Θ_{j+1}, j ≥ 1, is the hyperparameter set of the neural network used in the time interval (j∆t, (j + 1)∆t). In order to advance time from t_j = j∆t to t_{j+1} = (j + 1)∆t, we can use, e.g., a K-th-order implicit Runge-Kutta scheme, with 0 < c_s < 1 (s = 1, ..., K) parameters describing different collocation points in time, and a_{sr}, b_r (r = 1, ..., K) the associated coefficients. Given u(x, t_j), the K-th-order implicit Runge-Kutta scheme aims to approximate u(x, t_j + c_s∆t) and u(x, t_j + ∆t) through

u_N(x, t_j + c_s∆t) = u(x, t_j) + ∆t Σ_{r=1}^K a_{sr} M[u_N(x, t_j + c_r∆t)],
u_N(x, t_j + ∆t) = u(x, t_j) + ∆t Σ_{r=1}^K b_r M[u_N(x, t_j + c_r∆t)].    (8)
With the starting point u_N(x, t_0; Θ_0) ≔ u_N(x, t_0) defined by the initial condition at t_0, we define the target function as the sum of squared errors

SSE_j = Σ_{s=1}^K ‖ u_N(x, t_j + c_s∆t; Θ_{j+1}) − u_N(x, t_j; Θ_j) − ∆t Σ_{r=1}^K a_{sr} M[u_N(x, t_j + c_r∆t; Θ_{j+1})] ‖²₂
      + ‖ u_N(x, t_j + ∆t; Θ_{j+1}) − u_N(x, t_j; Θ_j) − ∆t Σ_{r=1}^K b_r M[u_N(x, t_j + c_r∆t; Θ_{j+1})] ‖²₂,    (9)

where the L² norm is taken over the spatial variable x. Minimization of Eq. 9 provides a numerical solution at t_{j+1} given its value at t_j. If the coefficients in the PDE are sufficiently smooth, we can use the basis function expansion in Eq. 7 for u_N and find that the weights at the intermediate Runge-Kutta timesteps can be written as the Taylor expansion

w_i(t_j + c_r∆t; Θ_j) = Σ_{ℓ=0}^∞ [w_i^{(ℓ)}(t_j)/ℓ!] (c_r∆t)^ℓ,    (10)

where w_i^{(ℓ)}(t_j) is the ℓ-th derivative of w_i with respect to time, evaluated at t_j. Therefore, the neural network is learning the mapping t_j + c_s∆t ↦ Σ_{ℓ=0}^∞ w_i^{(ℓ)}(t_j)(c_s∆t)^ℓ/ℓ! for every i by minimizing the loss function Eq. 9.
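To illustrate how Eq. 9 is minimized in practice, here is a condensed PyTorch sketch (our own, with several simplifying assumptions: M is linear and represented by a matrix L acting on the coefficient vector, the spatial L² norm is approximated by a quadrature rule with weights quad_w, and A, b, c denote a given implicit Runge-Kutta Butcher tableau supplied as plain Python lists).

```python
import torch

def sse_step(model, w_prev, phi, L, A, b, c, t_j, dt, quad_w):
    """One-step s-PINN loss (Eq. 9). `model` maps a time to expansion
    weights; `w_prev` are the (fixed) weights at t_j; phi[s, i] = phi_i(x_s)
    at the quadrature nodes; L is the matrix of M acting on coefficients;
    A (list of lists), b, c (lists of floats) form the Butcher tableau."""
    K = len(c)
    times = torch.tensor([[t_j + ci * dt] for ci in c] + [[t_j + dt]])
    W = model(times)                  # [K+1, n_modes]: K stages + endpoint
    MW = W[:K] @ L.T                  # coefficients of M[u_N] at stage times
    loss = 0.0
    for s in range(K):
        resid = W[s] - w_prev - dt * sum(A[s][r] * MW[r] for r in range(K))
        u_res = phi @ resid           # residual evaluated at the nodes
        loss = loss + (quad_w * u_res ** 2).sum()
    resid = W[K] - w_prev - dt * sum(b[r] * MW[r] for r in range(K))
    u_res = phi @ resid
    return loss + (quad_w * u_res ** 2).sum()

# Minimizing sse_step over the parameters of `model` (e.g., with SGD or
# L-BFGS) advances the numerical solution from t_j to t_j + dt.
```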
Example 2: Solving bounded domain PDEs
Before focusing on the application of s-PINNs to PDEs whose solution is defined in an unbounded domain, we first consider the numerical solution of a PDE in a bounded domain to compare the performance of the spectral PINN method (using recently developed adaptive methods) to that of the standard PINN.
Consider the following PDE:
∂_t u = [(x + 2)/(t + 1)] ∂_x u,  x ∈ (−1, 1),  u(x, 0) = cos(x + 2),  u(1, t) = cos(3(t + 1)),    (11)

which admits the analytical solution u(x, t) = cos((t + 1)(x + 2)). In this example, we use Chebyshev polynomials T_i(x) as basis functions, together with the corresponding Chebyshev–Gauss–Lobatto quadrature collocation points and weights, so that the boundary condition u(1, t) = cos(3(t + 1)) can be imposed directly at the collocation point x = 1.
Since the solution becomes increasingly oscillatory in x over time, an ever-increasing expansion order (i.e., number of basis functions) is needed to accurately capture this behavior. Between consecutive timesteps, we employ a recently developed p-adaptive technique for tuning the expansion order [30]. This method is based on monitoring and controlling a frequency indicator F(u_N) defined by

F(u_N) = [ Σ_{i=N−⌊N/3⌋+1}^{N} γ_i (w_i)² / Σ_{i=0}^{N} γ_i (w_i)² ]^{1/2},    (12)

where γ_i ≔ ∫_{−1}^{1} T_i²(x)(1 − x²)^{−1/2} dx. The frequency indicator F(u_N) measures the proportion of high-frequency waves and serves as a lower error bound for the numerical solution u_N(x, t; Θ) ≔ Σ_{i=0}^N w_i(t; Θ) T_i(x). When F(u_N) exceeds its previous value by more than a factor ρ, the expansion order is increased by one. The indicator is then updated, and the factor ρ is also scaled by a parameter γ ≥ 1.
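A compact implementation of this indicator and the p-adaptive rule might look as follows (our own sketch; for Chebyshev polynomials, the weights in Eq. 12 are γ_0 = π and γ_i = π/2 for i ≥ 1, which we hard-code).

```python
import numpy as np

def frequency_indicator(w):
    """Frequency indicator of Eq. 12 for Chebyshev coefficients w_0..w_N."""
    N = len(w) - 1
    gamma = np.full(N + 1, np.pi / 2.0)
    gamma[0] = np.pi
    hi = slice(N - N // 3 + 1, N + 1)  # the highest floor(N/3) modes
    return np.sqrt((gamma[hi] * w[hi] ** 2).sum() / (gamma * w ** 2).sum())

def p_adapt(w, F_prev, rho, gamma_rho=1.3):
    """Increase the expansion order by one (append a zero coefficient)
    whenever F(u_N) grows by more than the factor rho; then enlarge rho."""
    F = frequency_indicator(w)
    if F > rho * F_prev:
        w = np.append(w, 0.0)          # new mode, to be trained next step
        rho *= gamma_rho
        F = frequency_indicator(w)     # updated indicator
    return w, F, rho
```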
We use a fourth-order implicit Runge-Kutta method to advance time in the SSE of Eq. 9, and, in order to adjust the expansion order in a timely way, we take ∆t = 0.01. The initial expansion order is N = 8, and the two parameters used to determine the threshold for adjusting the expansion order are set to ρ = 1.5 and γ = 1.3. A neural network with N_H = 4 layers and H = 200 neurons per layer is used in conjunction with the loss function of Eq. 9 to approximate the solution of Eq. 11. We compare the results obtained using the s-PINN method with those obtained using a fourth-order implicit Runge-Kutta scheme with ∆x = 1/256, ∆t = 0.01 in a standard PINN approach [3], also using N_H = 4 and H = 200. Figure 3 shows that s-PINNs can greatly improve accuracy because the spectral method recovers exponential convergence in space and, when combined with a high-order accurate implicit scheme in time, the overall error is small. In particular, the large error of the standard PINN shown in Fig. 3 suggests that the error of applying auto-differentiation to calculate the spatial derivative is significantly larger than that of spatial derivatives calculated using spectral methods. Moreover, when equipping spectral PINNs with the p-adaptive technique to dynamically adjust the expansion order, the frequency indicator can be controlled, leading to even smaller errors, as shown in Fig. 3(b,c).
Computationally, using our 4-core laptop on this example, the standard PINN method requires $\sim 10^6$ seconds, while the s-PINN approach with and without adaptive spectral techniques (dynamically increasing the expansion order $N$) required 1711 and 1008 seconds, respectively. Thus, s-PINN methods can be computationally more efficient than the standard PINN approach. This advantage can be better understood by noting that training standard PINNs requires time $\sim O(\sum_{i=0}^{N_H} H_i H_{i+1})$ ($H_i$ is the number of neurons in the $i$-th layer) to calculate each spatial derivative (e.g., $\partial_x u, \partial_x^2 u, \ldots$) by autodifferentiation [38]. However, in an s-PINN, since a spectral decomposition $u_N(x,t;\Theta)$ has been imposed, the computational time to calculate derivatives of all orders is $O(N)$, where $N$ is the expansion order. Since $\sum_{i=0}^{N_H} H_i H_{i+1} \geq \sum_{i=0}^{N_H} H_i$ and the total number of neurons $\sum_{i=0}^{N_H} H_i$ is usually much larger than the expansion order $N$, using s-PINNs can substantially reduce the computational cost.
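The $O(N)$ claim can be checked with standard library routines: once the Chebyshev coefficients of $u_N$ are known, derivative coefficients of any order follow from a linear recurrence, with no autodifferentiation through the network. A small NumPy illustration (with arbitrary coefficients) is:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

w = np.random.randn(9)          # Chebyshev coefficients of u_N(., t), N = 8

# Coefficient-space differentiation costs O(N) and needs no autodiff
w_x  = C.chebder(w)             # coefficients of du_N/dx
w_xx = C.chebder(w, 2)          # coefficients of d^2 u_N/dx^2

# Evaluate u_N and its derivatives at arbitrary points x
x = np.linspace(-1.0, 1.0, 5)
u, u_x, u_xx = C.chebval(x, w), C.chebval(x, w_x), C.chebval(x, w_xx)
```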
What distinguishes s-PINNs from the standard PINN framework is that the latter uses spatial and temporal variables as neural-network inputs, implicitly assuming that all variables are normalizable, especially when batch-normalization techniques are applied while training the underlying neural network. However, s-PINNs rely on spectral expansions to represent the dependence of a function $u(x,t)$ on the spatial variable $x$. Thus, $x$ can be defined in unbounded domains and does not need to be normalizable. In the following example, we shall explore how our s-PINN is applied to solving a PDE defined on $(x,t) \in \mathbb{R}^+ \times [0,T]$.
Example 3: Solving unbounded domain PDEs
Consider the following PDE, which is similar to Eq. 11 but is defined on $(x,t) \in \mathbb{R}^+ \times [0,T]$:
$$\partial_t u = -\frac{x}{t+1}\,\partial_x u, \qquad u(x,0) = e^{-x}, \qquad u(0,t) = 1. \qquad (13)$$
Equation 13 admits the analytical solution $u(x,t) = \exp[-x/(t+1)]$. In this example, we use the basis functions $\{\hat{L}_i^{\beta}(x)\} \coloneqq \{\hat{L}_i^{(0)}(\beta x)\}$, where $\hat{L}_i^{(0)}(x)$ is the generalized Laguerre function of order $i$ defined in [31]. Here, we use the Laguerre-Gauss quadrature collocation points and weights so that $x = 0$ is not included in the collocation node set. We use a fourth-order implicit Runge-Kutta method to minimize the SSE of Eq. 9 by advancing time. In order to address the boundary condition, we augment the loss function in Eq. 9 with terms that represent the cost of deviating from the boundary condition:
$$\begin{aligned}
\mathrm{SSE}_j = {} & \sum_{s=1}^{K}\Big\| u_N(x, t_j + c_s\Delta t;\Theta_{j+1}) - u_N(x, t_j;\Theta_j) - \sum_{r=1}^{K} a_{sr}\,\mathcal{M}[u_N(x, t_j + c_r\Delta t;\Theta_{j+1})] \Big\|_2^2 \\
& + \Big\| u_N(x, t_j + \Delta t;\Theta_{j+1}) - u_N(x, t_j;\Theta_j) - \sum_{r=1}^{K} b_r\,\mathcal{M}[u_N(x, t_j + c_r\Delta t;\Theta_{j+1})] \Big\|_2^2 \qquad (14) \\
& + \sum_{s=1}^{K}\big[ u_N(0, t_j + c_s\Delta t;\Theta_{j+1}) - u(0, t_j + c_s\Delta t) \big]^2 + \big[ u_N(0, t_{j+1};\Theta_{j+1}) - u(0, t_{j+1}) \big]^2,
\end{aligned}$$
where the last two terms enforce the Dirichlet boundary condition at $x = 0$ at all time points:
$$u_N(0, t_j + c_s\Delta t;\Theta_{j+1}) = u(0, t_j + c_s\Delta t), \qquad u_N(0, t_{j+1};\Theta_{j+1}) = u(0, t_{j+1}), \qquad (15)$$
where, in this example, $u(0, t_j + c_s\Delta t) = u(0, t_{j+1}) \equiv 1$.
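To illustrate how a loss of the form of Eq. 14 can be assembled, the following PyTorch sketch implements one timestep of the boundary-penalized objective for Eq. 13 with a two-stage ($K = 2$) Gauss-Legendre implicit Runge-Kutta tableau. It is a schematic illustration rather than the authors' implementation: the basis-value matrices Phi, dPhi, phi0 and the nodes x_q stand in for precomputed Laguerre-function data (random placeholders here), and the quadrature weights of the $L^2$ norm are omitted for brevity.

```python
import torch

K, N, dt = 2, 8, 0.05
# Butcher tableau of the two-stage Gauss-Legendre IRK scheme (order 4)
s3 = 3 ** 0.5
a = torch.tensor([[0.25, 0.25 - s3 / 6], [0.25 + s3 / 6, 0.25]])
b = torch.tensor([0.5, 0.5])
c = torch.tensor([0.5 - s3 / 6, 0.5 + s3 / 6])

Q = 32
x_q  = torch.rand(Q) * 10                 # placeholder quadrature nodes on R+
Phi  = torch.randn(Q, N + 1)              # placeholder: phi_i(x_q)
dPhi = torch.randn(Q, N + 1)              # placeholder: phi_i'(x_q)
phi0 = torch.randn(N + 1)                 # placeholder: phi_i(0)

net = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.Tanh(),
                          torch.nn.Linear(100, N + 1))  # t -> coefficients w(t)

def M(w, t):
    """Right-hand side of Eq. 13 at the quadrature nodes: -x/(t+1) du/dx."""
    return -x_q / (t + 1) * (dPhi @ w)

def sse_step(t_j, w_j, u_bc=1.0):
    """One-step loss of Eq. 14, including the Dirichlet penalty at x = 0."""
    stage_t = t_j + c * dt
    W = torch.stack([net(t.view(1)) for t in stage_t])   # (K, N+1) stage coefficients
    rhs = torch.stack([M(W[r], stage_t[r]) for r in range(K)])
    u_j = Phi @ w_j
    loss = torch.zeros(())
    for s in range(K):                                   # stage residuals
        loss = loss + ((Phi @ W[s] - u_j - dt * (a[s] @ rhs)) ** 2).sum()
    w_next = net(torch.tensor([t_j + dt]))               # endpoint residual
    loss = loss + ((Phi @ w_next - u_j - dt * (b @ rhs)) ** 2).sum()
    for s in range(K):                                   # Dirichlet penalty terms
        loss = loss + (phi0 @ W[s] - u_bc) ** 2
    return loss + (phi0 @ w_next - u_bc) ** 2
```

In an actual run, Phi, dPhi, and phi0 would be tabulated from the generalized Laguerre functions at the Laguerre-Gauss nodes, the residuals would carry the quadrature weights, and the loss would be minimized over the network parameters $\Theta_{j+1}$ with a gradient-based optimizer.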
Because the solution of Eq. 13 spreads out over $x$ as time increases (i.e., decays more slowly at infinity), it is necessary to decrease the scaling factor $\beta$ to allow the basis functions to decay more slowly at infinity. Between consecutive timesteps, we adjust the scaling factor by applying the scaling algorithm proposed in [29], thereby dynamically adjusting the basis functions in Eq. 1. As with the p-adaptive technique used in Example 2, the scaling technique also relies on monitoring and controlling the frequency indicator given in Eq. 12. In order to efficiently and dynamically tune the scaling factor, we set $\Delta t = 0.05$. The initial expansion order is $N = 8$, the initial scaling factor is $\beta = 2$, the scaling factor adjustment ratio is set to $q = 0.95$, and the threshold for tuning the scaling factor is set to $\nu = 1/0.95$. A neural network with 10 layers and 100 neurons per layer is used in conjunction with the loss function of Eq. 9. The neural network of the standard PINN consists of eight intermediate layers with 200 neurons per layer. Figure 4(a) shows that s-PINNs can achieve very high accuracy even when a relatively large timestep ($\Delta t = 0.05$) is used. Scaling techniques to dynamically control the frequency indicator are also successfully incorporated into s-PINNs, as shown in Figs. 4(b,c).
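A minimal sketch of the scaling step, again with our own function signature, conveys the idea (the full algorithm of [29] includes additional safeguards):

```python
def scaling_update(beta, F, F_prev, q=0.95, nu=1 / 0.95):
    """Shrink the scaling factor when the frequency indicator grows too fast.
    A smaller beta makes the basis functions L_i(beta * x) decay more slowly
    at infinity, matching a solution that spreads out over time."""
    if F > nu * F_prev:        # too much energy in high-frequency modes
        beta *= q              # q < 1: let the basis spread out
    return beta
```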
In Eq. 13, we imposed a Dirichlet boundary condition by modifying the SSE of Eq. 14 to include boundary terms. Other types of boundary conditions can be applied in s-PINNs by including the corresponding boundary constraints in the SSE, as in standard PINN approaches.
In the next example, we focus on solving a PDE with two spatial variables, x and y, each defined on an unbounded domain.
Example 4: Solving 2D unbounded domain PDEs

Consider the two-dimensional heat equation on $(x,y) \in \mathbb{R}^2$:

$$\partial_t u(x,y,t) = \Delta u(x,y,t), \qquad u(x,y,0) = \frac{1}{\sqrt{6}}\, e^{-x^2/12 - y^2/8}, \qquad (16)$$

which admits the analytical solution

$$u(x,y,t) = \frac{1}{\sqrt{(t+3)(t+2)}}\, \exp\left[-\frac{x^2}{4(t+3)} - \frac{y^2}{4(t+2)}\right]. \qquad (17)$$
Note that the solution spreads out over time in both dimensions, i.e., it decays more slowly at infinity as time increases. Therefore, we apply the scaling technique to capture the increasing spread by adjusting the scaling factors $\beta_x$ and $\beta_y$ of the generalized Hermite basis functions. Generalized Hermite functions of orders $i = 0,\ldots,N_x$ and $\ell = 0,\ldots,N_y$ are used in the $x$ and $y$ directions, respectively. In order to solve Eq. 16, we multiply it by any test function $v \in H^1(\mathbb{R}^2)$ and integrate the resulting equation by parts to convert it to the weak form $(\partial_t u, v) = -(\nabla u, \nabla v)$. Solving the weak form of Eq. 16 ensures numerical stability. When implementing the spectral method, the goal is to find
$$u_{N_x,N_y}^{\beta_x,\beta_y}(x,y,t) = \sum_{i=0}^{N_x}\sum_{\ell=0}^{N_y} w_{i,\ell}(t)\, \hat{H}_{i,0}^{\beta_x}(x)\, \hat{H}_{\ell,0}^{\beta_y}(y), \qquad (18)$$
where $\hat{H}_{i,0}^{\beta_x}$ and $\hat{H}_{\ell,0}^{\beta_y}$ are generalized Hermite functions defined in Table 1, such that $(\partial_t u, v) = -(\nabla u, \nabla v)$ for all $v \in \mathcal{P}_{N_x,0}^{\beta_x} \times \mathcal{P}_{N_y,0}^{\beta_y}$ and $t \in (t_j, t_{j+1})$. This allows one to advance time from $t_j$ to $t_{j+1}$ given $u_{N_x,N_y}^{\beta_x,\beta_y}(x,y,t_j)$.
Tuning the scaling factors $\beta_x, \beta_y$ across different timesteps is achieved by monitoring the frequency indicators in the $x$- and $y$-directions, $F_x$ and $F_y$, as detailed in [29]. We use initial expansion orders $N_x = N_y = 8$ and scaling factors $\beta_x = 0.4$, $\beta_y = 0.5$. The ratio and threshold for adjusting the scaling factors are set to $q = 0.95$ and $\nu^{-1} = 0.95$. The timestep $\Delta t = 0.1$ is used to adjust the scaling factors in both dimensions in a timely manner, and a fourth-order implicit Runge-Kutta scheme is used for numerical integration. The neural network that we use to learn $w_{i,\ell}(t)$ has 5 intermediate layers with 150 neurons in each layer.
The results depicted in Fig. 5(a) show that an s-PINN using the scaling technique can achieve high accuracy by using high-order Runge-Kutta schemes in minimizing the SSE of Eq. 9 and by properly adjusting $\beta_x$ and $\beta_y$ (shown in Fig. 5(b)) to control the frequency indicators $F_x$ and $F_y$ (shown in Fig. 5(c) and (d)). The s-PINN approach can be extended to higher spatial dimensions by calculating the numerical solution expressed in tensor product form as in Eq. 18.
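For concreteness, the small NumPy sketch below evaluates a tensor-product expansion of the form of Eq. 18 using the standard three-term recurrence for Hermite functions; the $\sqrt{\beta}$ normalization prefactor and the displacement $x_L$ are omitted, so the exact normalization convention of Table 1 may differ.

```python
import numpy as np

def hermite_functions(n_max, x):
    """Hermite functions h_0, ..., h_{n_max} at the points x, computed with the
    stable recurrence h_{n+1} = sqrt(2/(n+1)) x h_n - sqrt(n/(n+1)) h_{n-1}."""
    h = np.zeros((n_max + 1, len(x)))
    h[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)
    if n_max > 0:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(1, n_max):
        h[n + 1] = np.sqrt(2 / (n + 1)) * x * h[n] - np.sqrt(n / (n + 1)) * h[n - 1]
    return h

def eval_2d(w, beta_x, beta_y, x, y):
    """u(x, y) = sum_{i,l} w[i, l] H_i(beta_x x) H_l(beta_y y), cf. Eq. 18."""
    Hx = hermite_functions(w.shape[0] - 1, beta_x * x)   # (Nx+1, len(x))
    Hy = hermite_functions(w.shape[1] - 1, beta_y * y)   # (Ny+1, len(y))
    return Hx.T @ w @ Hy                                  # values on the x-y grid
```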
Since our method outputs spectral expansion coefficients, using the full tensor product in the spatial spectral decomposition leads to a number of outputs that increases exponentially with dimensionality. The very wide neural networks needed for such high-dimensional problems result in less efficient training. However, unlike other recent machine-learning-based PDE solvers or PDE learning methods [39, 40] that explicitly rely on a spatial discretization of grids or meshes, the curse of dimensionality can be partially mitigated in our s-PINN method. By using a hyperbolic cross space [41], we can effectively reduce the number of coefficients needed to accurately reconstruct the numerical solution. In the next example, we solve a 3D parabolic spatiotemporal PDE, similar to that in Example 4, but we demonstrate how implementing a hyperbolic cross space can reduce the number of outputs and boost training efficiency.

Example 5: Applying a hyperbolic cross space to a (3+1)-dimensional PDE

Consider the (3+1)-dimensional heat equation
$$\partial_t u(x,y,z,t) = \Delta u(x,y,z,t), \qquad u(x,y,z,0) = \frac{1}{\sqrt{6}}\, e^{-x^2/12 - y^2/8 - z^2/4}, \qquad (19)$$
which admits the analytical solution

$$u(x,y,z,t) = \frac{1}{\sqrt{(t+3)(t+2)(t+1)}}\, \exp\left[-\frac{x^2}{4(t+3)} - \frac{y^2}{4(t+2)} - \frac{z^2}{4(t+1)}\right] \qquad (20)$$
for $(x,y,z) \in \mathbb{R}^3$. If we use the full tensor product of spectral expansions with expansion orders $N_x = N_y = N_z = 9$, we need to output $10^3 = 1000$ expansion coefficients, and in turn, a relatively wide neural network with many parameters is needed to generate the corresponding weights, as shown in Fig. 1(c). Training such wide networks can be inefficient. However, many of the spectral expansion coefficients are close to zero and can be eliminated without compromising accuracy. One way to select expansion coefficients is to use the hyperbolic cross space technique [41] to output coefficients of the generalized Hermite basis functions only in the space
$$V_{N,\gamma_\times}^{\beta,\vec{x}_0} \coloneqq \mathrm{span}\Big\{ \hat{H}_{n_1}(\beta_1 x)\,\hat{H}_{n_2}(\beta_2 y)\,\hat{H}_{n_3}(\beta_3 z) : |\vec{n}|_{\mathrm{mix}}\, \|\vec{n}\|_\infty^{-\gamma_\times} \leq N^{1-\gamma_\times} \Big\}, \quad \vec{n} \coloneqq (n_1,n_2,n_3), \quad |\vec{n}|_{\mathrm{mix}} \coloneqq \max\{n_1,1\}\max\{n_2,1\}\max\{n_3,1\}, \qquad (21)$$
where the hyperbolic space index $\gamma_\times \in (-\infty, 1)$. Taking $\gamma_\times = -\infty$ in Eq. 21 corresponds to the full tensor product with $N+1$ basis functions in each dimension. For fixed $N$ in Eq. 21, the total number of basis functions tends to decrease with increasing $\gamma_\times$. We set $N = 9$ in Eq. 21 and use the initial scaling factors $\beta_x = 0.4$, $\beta_y = 0.5$, $\beta_z = 0.7$. Using a fourth-order implicit Runge-Kutta scheme with timestep $\Delta t = 0.2$, the ratio and threshold for adjusting the scaling factors are set to $q = 0.95$ and $\nu^{-1} = 0.95$ in each dimension.
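The index set of Eq. 21 is easy to enumerate directly; the short script below (our own illustration) counts the retained basis functions for $N = 9$ and should reproduce counts consistent with those quoted in the caption of Table 2. Treating the origin $\vec{n} = (0,0,0)$ via $\max\{\|\vec{n}\|_\infty, 1\}$ is our convention.

```python
import numpy as np
from itertools import product

def hyperbolic_cross_indices(N, gamma, dim=3):
    """Multi-indices n with |n|_mix * |n|_inf^(-gamma) <= N^(1-gamma), cf. Eq. 21."""
    keep = []
    for n in product(range(N + 1), repeat=dim):
        if gamma == -np.inf:                  # full tensor product limit
            keep.append(n)
            continue
        n_mix = np.prod([max(k, 1) for k in n])
        n_inf = max(max(n), 1)                # our convention for the origin
        if n_mix * n_inf ** (-gamma) <= N ** (1 - gamma):
            keep.append(n)
    return keep

for g in (-np.inf, -1.0, 0.0, 0.5):
    print(g, len(hyperbolic_cross_indices(9, g)))
```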
To illustrate the potential numerical difficulties arising from outputting large numbers of coefficients when solving higher-dimensional spatiotemporal PDEs, we use a neural network with two hidden layers and vary the number of neurons in the intermediate layers. We also adjust $\gamma_\times$ to explore how decreasing the number of coefficients can improve training efficiency. Our results are listed in Table 2.
The results shown in Table 2 indicate that, compared to using the full tensor product ($\gamma_\times = -\infty$), implementing the hyperbolic cross space with a moderate $\gamma_\times = -1$ or $0$ significantly reduces the total number of outputs, leading to faster training and better accuracy. However, increasing the hyperbolicity to $\gamma_\times = \frac{1}{2}$ increases the error relative to using $\gamma_\times = -1$ or $0$ because some useful, nonzero coefficients are excluded. Also, comparing the results across different rows, wider layers lead to both more accurate results and faster training. The sensitivity of our s-PINN method to the number of intermediate layers in the neural network and the number of neurons in each layer is further discussed in Example 7. Overall, in higher-dimensional problems, there is a balance between computational cost and accuracy, as the number of required outputs grows quickly with dimensionality. Spectrally-adapted PINNs can easily incorporate a hyperbolic cross space so that the total number of outputs can be reduced to a manageable number for moderate-dimensional problems. Finding the optimal hyperbolicity index $\gamma_\times$ for the cross space of Eq. 21 will be problem-specific.
In the next example, we explore how s-PINNs can be used to solve Schrödinger's equation in x ∈ R. Solving this complex-valued equation poses substantial numerical difficulties as the solution exhibits diffusive, oscillatory, and convective behavior [24].
Example 6: Solving an unbounded domain Schrödinger equation
We seek to numerically solve the following Schrödinger equation defined on $x \in \mathbb{R}$:

$$\mathrm{i}\,\partial_t \psi(x,t) = -\partial_x^2 \psi(x,t), \qquad \psi(x,0) = \frac{1}{\sqrt{\zeta}} \exp\left[\mathrm{i}kx - \frac{x^2}{4\zeta}\right]. \qquad (22)$$
For reference, Eq. 22 admits the analytical solution

$$\psi(x,t) = \frac{1}{\sqrt{\zeta + \mathrm{i}t}} \exp\left[\mathrm{i}k(x - kt) - \frac{(x - 2kt)^2}{4(\zeta + \mathrm{i}t)}\right]. \qquad (23)$$
As in Example 4, we shall numerically solve Eq. 22 in the weak form

$$(\partial_t \Psi(x,t), v) + \mathrm{i}\,(\partial_x \Psi(x,t), \partial_x v) = 0, \qquad \forall v \in H^1(\mathbb{R}). \qquad (24)$$
Since the solution to Eq. 22 decays as $\sim \exp[-x^2/(4(\zeta^2 + t^2))]$ at infinity, we shall use the generalized Hermite functions as basis functions. The solution is rightward-translating for $k > 0$ and becomes increasingly oscillatory and spread out over time. Hence, as detailed in [30], we apply three additional adaptive spectral techniques to improve efficiency and accuracy: (i) a scaling technique to adjust the scaling factor $\beta$ over time in order to capture diffusive behavior, (ii) a moving technique to adjust the center of the basis functions $x_L$ to capture convective behavior, and (iii) a p-adaptive technique to increase the number of basis functions $N$ to better capture the oscillations. We set the initial parameters $\beta = 0.8$, $x_L = 0$, $N = 24$ at $t = 0$. The scaling factor adjustment ratio and the threshold for adjusting the scaling factor are $q = \nu^{-1} = 0.95$; the minimum and maximum displacements for moving the basis functions within each timestep are 0.004 and 0.1, respectively; and the threshold for moving is 1.001. Finally, the thresholds of the p-adaptive technique are set to $\rho = \rho_0 = 2$ and $\gamma = 1.4$. To numerically solve Eq. 24, a fourth-order implicit Runge-Kutta scheme is applied to advance time with timestep $\Delta t = 0.1$. The neural network underlying the s-PINN in this example contains 13 layers with 80 neurons in each layer. Figure 6(a) shows that the s-PINN with adaptive spectral techniques achieves very high accuracy, as it can properly adjust the basis functions over a longer timescale (across different timesteps), while not adapting the basis functions results in larger errors. Figures 6(b-d) show that the scaling factor $\beta$ decreases over time to match the spread of the solution, the displacement of the basis functions $x_L$ increases in time to capture the rightward movement of the solution, and the expansion order $N$ increases to capture the solution's increasingly oscillatory behavior. These results indicate that our s-PINN method can effectively utilize all three adaptive algorithms.
We now explore how the timestep and the order of the implicit Runge-Kutta method affect the approximation error, i.e., to what extent we can relax the constraint on the timestep while maintaining accuracy, and whether higher-order Runge-Kutta schemes perform better. Another feature to explore is the neural network structure, such as the number of layers and neurons per layer, and how it affects the performance of s-PINNs. In the following example, we carry out a sensitivity analysis.
Example 7: Sensitivity analysis of s-PINN
To explore how the performance of an s-PINN depends on algorithmic set-up and parameters, we apply it to solving the heat equation defined on x ∈ R,
$$\partial_t u(x,t) = \partial_x^2 u(x,t) + f(x,t), \qquad u(x,0) = e^{-x^2/4}\sin x, \qquad (25)$$

using generalized Hermite functions as basis functions. For the source $f(x,t) = [x\cos x + (t+1)\sin x]\,(t+1)^{-3/2} \exp[-\frac{x^2}{4(t+1)}]$, Eq. 25 admits the analytical solution

$$u(x,t) = \frac{\sin x}{\sqrt{t+1}} \exp\left[-\frac{x^2}{4(t+1)}\right]. \qquad (26)$$
We solve Eq. 25 in the weak form by multiplying both sides by any test function $v \in H^1(\mathbb{R})$ and integrating by parts to obtain

$$(\partial_t u, v) = -(\partial_x u, \partial_x v) + (f, v), \qquad \forall v \in H^1(\mathbb{R}). \qquad (27)$$

The solution diffusively spreads over time, requiring one to decrease the scaling factor $\beta$ of the generalized Hermite functions $\{\hat{H}_i^{\beta}(x)\}$. We shall first study how the timestep and the order of the implicit Runge-Kutta method associated with solving the minimization problem of Eq. 9 affect our results. We use a neural network with five intermediate layers and 200 neurons per layer, and set the learning rate $\eta = 5\times 10^{-4}$. The initial scaling factor is set to $\beta = 0.8$. The scaling factor adjustment ratio and threshold are set to $q = 0.98$ and $\nu = q^{-1}$, respectively. For comparison, we also apply a Crank-Nicolson scheme for numerically solving Eq. 27, i.e.,

$$\frac{U_N^{\beta}(t_{j+1}) - U_N^{\beta}(t_j)}{\Delta t} = D_N^{\beta}\, \frac{U_N^{\beta}(t_{j+1}) + U_N^{\beta}(t_j)}{2} + \frac{F_N^{\beta}(t_{j+1}) + F_N^{\beta}(t_j)}{2}, \qquad (28)$$
where $U_N^{\beta}(t)$ and $F_N^{\beta}(t)$ are the $(N+1)$-dimensional vectors of spectral expansion coefficients of the numerical solution and of the source, respectively, and $D_N^{\beta} \in \mathbb{R}^{(N+1)\times(N+1)}$ is the banded matrix representing the discretized Laplacian operator $\partial_x^2$, with nonzero entries

$$D_{i,i-2} = \beta^2\,\frac{\sqrt{(i-2)(i-1)}}{2}, \qquad D_{i,i} = -\beta^2\left(i - \frac{1}{2}\right), \qquad D_{i,i+2} = \beta^2\,\frac{\sqrt{i(i+1)}}{2},$$

and $D_{i,j} = 0$ otherwise.
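For concreteness, the following NumPy sketch assembles this matrix and advances Eq. 28 by one step; the 1-based index $i$ above is mapped to 0-based array indices, and the helper names are our own.

```python
import numpy as np

def laplacian_matrix(N, beta):
    """Matrix D of Eq. 28: nonzero entries on the main diagonal and on the two
    stride-2 off-diagonals, following the formulas above (i = 1, ..., N+1)."""
    D = np.zeros((N + 1, N + 1))
    for i in range(1, N + 2):
        D[i - 1, i - 1] = -beta ** 2 * (i - 0.5)
        if i + 2 <= N + 1:
            D[i - 1, i + 1] = beta ** 2 * np.sqrt(i * (i + 1)) / 2
        if i >= 3:
            D[i - 1, i - 3] = beta ** 2 * np.sqrt((i - 2) * (i - 1)) / 2
    return D

def crank_nicolson_step(U, F_now, F_next, D, dt):
    """One step of Eq. 28:
    (I - dt/2 D) U^{j+1} = (I + dt/2 D) U^j + dt/2 (F^j + F^{j+1})."""
    I = np.eye(len(U))
    rhs = (I + 0.5 * dt * D) @ U + 0.5 * dt * (F_now + F_next)
    return np.linalg.solve(I - 0.5 * dt * D, rhs)
```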
Table 3 shows that, since the temporal discretization error $O(\Delta t^{2K})$ is already quite small for $K \geq 4$, using a higher-order Runge-Kutta method does not significantly improve accuracy for any choice of $\Delta t$. Higher-order ($K \geq 4$) schemes also tend to require longer run times: higher orders require fitting over more data points (using the same number of parameters), leading to slower convergence when minimizing Eq. 9, which can result in larger errors. Compared to the second-order Crank-Nicolson scheme, whose error is $O(\Delta t^2)$, the errors of our s-PINN method do not grow significantly when $\Delta t$ increases. In fact, the accuracy of the Crank-Nicolson scheme using the smallest timestep $\Delta t = 0.02$ was still inferior to that of the s-PINN method using the second- or fourth-order Runge-Kutta scheme with $\Delta t = 0.2$. Moreover, the run time of our s-PINN method using a second- or fourth-order implicit Runge-Kutta scheme for the loss function is not significantly larger than that of the Crank-Nicolson scheme. Thus, compared to traditional spectral methods for numerically solving PDEs, our s-PINN method, even when incorporating lower-order Runge-Kutta schemes, can greatly improve accuracy without significantly increasing computational cost. In Table 3, the smallest run time of our s-PINN method, which occurs for $K = 2$, $\Delta t = 0.2$, is shown in blue. The smallest-error case, which arises for $K = 4$, $\Delta t = 0.02$, is shown in red. The run time always increases with the order $K$ of the implicit Runge-Kutta scheme and always decreases with $\Delta t$ due to fewer timesteps. Additionally, the error always increases with $\Delta t$ regardless of the order of the Runge-Kutta scheme. However, the expected convergence order is not observed, implying that the increase in error results from an increased lag in the adjustment of the scaling factor $\beta$ when $\Delta t$ is too large, rather than from an insufficiently small time discretization error $O(\Delta t^{2K})$. Using a fourth-order implicit Runge-Kutta scheme with $\Delta t = 0.05$ to solve Eq. 27 seems to both achieve high accuracy and avoid large computational costs.
We also investigate how the total number of parameters in the neural network and the structure of the network affect efficiency and accuracy. We use a sixth-order implicit Runge-Kutta scheme with $\Delta t = 0.1$. The learning rate is set to $\eta = 5\times 10^{-4}$ for all neural networks.
As shown in Table 4, the computational cost tends to decrease with the number of neurons $H$ in each layer, as it takes fewer epochs to converge when minimizing Eq. 9. The run time also tends to decrease with $N_H$, due to a faster convergence rate, until about $N_H = 8$. The errors for $H = 50$ are significantly larger because training terminates (after a maximum of 100000 epochs) before it converges. For $N_H = 3$, the corresponding s-PINN always fails to achieve high accuracy within 100000 epochs unless $H \gtrsim 200$. Therefore, overparametrization is indeed helpful in improving the neural network's performance, leading to faster convergence rates, in contrast to most traditional optimization methods that take longer to converge with more parameters. Similar observations have been made in other optimization tasks that involve deep neural networks [42, 43]. Consequently, our s-PINN method retains the advantages of deep and wide neural networks for improving accuracy and efficiency.
Parameter Inference and Source Reconstruction
As with standard PINN approaches, s-PINNs can also be used for parameter inference in PDE models or for reconstructing unknown sources in a physical model. Assuming observational data at uniform time intervals $t_j = j\Delta t$ associated with a partially known underlying PDE model, s-PINNs can be trained to infer model parameters $\theta$ by minimizing the sum of squared errors, weighted from both ends of the time interval $(t_j, t_{j+1})$,
$$\mathrm{SSE}_j = \mathrm{SSE}_j^{L} + \mathrm{SSE}_j^{R}, \qquad (29)$$
where
$$\begin{aligned}
\mathrm{SSE}_j^{L} &= \sum_{s=1}^{K}\Big\| u(x, t_j + c_s\Delta t;\theta_{j+1};\Theta_{j+1}) - u(x, t_j;\theta_j) - \sum_{r=1}^{K} a_{sr}\,\mathcal{M}[u(x, t_j + c_r\Delta t;\theta_{j+1};\Theta_{j+1})] \Big\|_2^2, \\
\mathrm{SSE}_j^{R} &= \sum_{s=1}^{K}\Big\| u(x, t_j + c_s\Delta t;\theta_{j+1};\Theta_{j+1}) - u(x, t_{j+1};\theta_{j+1}) - \sum_{r=1}^{K} (a_{sr} - b_r)\,\mathcal{M}[u(x, t_j + c_r\Delta t;\theta_{j+1};\Theta_{j+1})] \Big\|_2^2. \qquad (30)
\end{aligned}$$
Here, $\theta_{j+1}$ is the model parameter to be found using the sample points $c_s\Delta t$ between $t_j$ and $t_{j+1}$. The most obvious advantage of s-PINNs over standard PINN methods is that they can deal with models defined on unbounded domains, extending PINN-based methods that are typically applied to finite domains. Given observations over a certain time interval, one may wish to both infer the parameters $\theta_j$ in the underlying physical model and reconstruct the solution $u$ at any given time. Here, we provide an example in which a model parameter is inferred and the numerical solution is obtained simultaneously.
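In an implementation, the unknown parameter can simply be exposed to the same optimizer that trains the network. The following PyTorch fragment is a schematic sketch: the window loss sse_window, standing for $\mathrm{SSE}_j^L + \mathrm{SSE}_j^R$ of Eq. 30, is only a placeholder here, and the positivity parametrization of the parameter is our own choice.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 100), torch.nn.Tanh(),
                          torch.nn.Linear(100, 9))   # t -> expansion coefficients
log_theta = torch.nn.Parameter(torch.zeros(1))       # e.g., theta = kappa > 0

def sse_window(net, theta):
    """Placeholder for the window loss of Eq. 30; a real implementation would
    build the Runge-Kutta residuals from the stage coefficients net(t)."""
    t = torch.linspace(0.0, 0.1, 4).unsqueeze(1)
    return ((net(t) * theta) ** 2).mean()

opt = torch.optim.Adam(list(net.parameters()) + [log_theta], lr=5e-4)
for epoch in range(1000):
    opt.zero_grad()
    theta = torch.exp(log_theta)        # current estimate of the model parameter
    loss = sse_window(net, theta)
    loss.backward()
    opt.step()
```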
Example 8: Parameter (diffusivity) inference

As a starting point for a parameter-inference problem, we consider diffusion with a source defined on $x \in \mathbb{R}$,
$$\partial_t u(x,t) = \kappa\,\partial_x^2 u(x,t) + f(x,t), \qquad u(x,0) = e^{-x^2/4}\sin x, \qquad (31)$$
where the constant parameter κ is the thermal conductivity (or diffusion coefficient) in the entire domain. In this example, we set κ = 2 as a reference and assume the source
$$f(x,t) = \left[\frac{2\,(x\cos x + (t+1)\sin x)}{(t+1)^{3/2}} - \frac{x^2\sin x}{4(t+1)^{5/2}} + \frac{\sin x}{2(t+1)^{3/2}}\right] \exp\left[-\frac{x^2}{4(t+1)}\right]. \qquad (32)$$
In this case, the analytical solution to Eq. 31 is given by Eq. 26. We numerically solve Eq. 31 in the weak form of Eq. 27. If the form of the spatiotemporal heat equation is known (such as Eq. 31) but some parameters such as $\kappa$ are unknown, reconstructing them from measurements is usually performed by defining and minimizing a loss function, as was done in [44]. It can also be shown that $\kappa = \kappa(t)$ in Eq. 31 can be uniquely determined by the observed solution $u(x,t)$ [45, 46, 47] under certain conditions. Here, however, we assume that observations are taken at discrete time points $t_j = j\Delta t$ and seek to reconstruct both the parameter $\kappa$ and the numerical solution at $t_j + c_s\Delta t$ (defined in Eq. 30) by minimizing Eq. 29. We use a neural network with 13 layers and 100 neurons per layer with a sixth-order implicit Runge-Kutta scheme, and the timestep is $\Delta t = 0.1$. At each timestep, we draw the function values from
$$u(x, t_j) = \frac{\sin x}{\sqrt{t_j + 1}} \exp\left[-\frac{x^2}{4(t_j+1)}\right] + \xi(x, t_j), \qquad (33)$$
where $\xi(x,t)$ is a noise term that is both spatially and temporally uncorrelated, with $\xi(x,t) \sim \mathcal{N}(0,\sigma^2)$, where $\mathcal{N}(0,\sigma^2)$ is the normal distribution with mean 0 and variance $\sigma^2$ (i.e., $\langle\xi(x,t)\,\xi(y,s)\rangle = \sigma^2 \delta_{x,y}\delta_{t,s}$). For different levels of noise $\sigma^2$, we take one trajectory of the measured noisy solution $u(x,t_j)$ to reconstruct the parameter $\kappa$, which is presumed to be constant on $[t_j, t_{j+1})$, and simultaneously obtain the numerical solutions at the intermediate time points $t_j + c_s\Delta t$. We are interested in how different levels of noise and the increasing spread of the solution affect the SSE and the reconstructed parameter $\hat{\kappa}$. Figure 7 shows the deviation of the reconstructed $\hat{\kappa}$ from its true value, $|\hat{\kappa} - 2|$, the SSE, the scaling factor, and the frequency indicator as functions of time for different noise levels. Figure 7(a) shows that the larger the noise, the less accurate the reconstructed $\kappa$. Moreover, as the function becomes more spread out (even when $\sigma^2 = 0$), the error in both the reconstructed diffusivity and the SSE increases over time, as shown in Fig. 7(b). This behavior suggests that a diffusive solution that decays more slowly at infinity can give rise to inaccuracies in the numerical computation of the intermediate-timestep solutions and in reconstructing model parameters. Finally, as indicated in Fig. 7(c,d), larger noise variances impede the scaling process, since the frequency indicator cannot be as easily controlled: larger noise variance usually corresponds to high-frequency, oscillatory components of a solution.
In Example 8, both the parameter and the unknown solution were inferred. Apart from reconstructing the coefficients in a given physical model, in certain applications, we may also wish to reconstruct the underlying physical model by inferring, e.g., the heat source f (x, t). Source recovery from observational data commonly arises and has been the subject of many previous studies [48,49,50]. We now discuss how the s-PINN methods presented here can also be used for this purpose. For example, in Eq. 25 or Eq. 31, we may wish to reconstruct an unknown source f (x, t) by also approximating it with a spectral decomposition
$$f(x,t) \approx f_N(x,t) = \sum_{i=0}^{N} h_i(t)\,\phi_{i,x_L}^{\beta}(x), \qquad (34)$$
and minimizing an SSE that is augmented by a penalty on the coefficients $h_i$, $i = 0,\ldots,N$. We learn the expansion coefficients $h_i$ within $[t_j, t_{j+1}]$ by minimizing
$$\begin{aligned}
\mathrm{SSE}_j = {} & \mathrm{SSE}_j^{L} + \mathrm{SSE}_j^{R} + \lambda \sum_{s=1}^{K} \big\| \vec{h}_N(t_j + c_s\Delta t) \big\|_2^2, \qquad \lambda \geq 0, \\
\mathrm{SSE}_j^{L} = {} & \sum_{s=1}^{K}\Big\| u(x, t_j + c_s\Delta t) - u(x, t_j) - \sum_{r=1}^{K} a_{sr}\big[\partial_{xx} u(x, t_j + c_r\Delta t) + f_N(x, t_j + c_r\Delta t;\Theta_{j+1})\big] \Big\|_2^2, \qquad (35) \\
\mathrm{SSE}_j^{R} = {} & \sum_{s=1}^{K}\Big\| u(x, t_j + c_s\Delta t) - u(x, t_{j+1}) - \sum_{r=1}^{K} (a_{sr} - b_r)\big[\partial_{xx} u(x, t_j + c_r\Delta t) + f_N(x, t_j + c_r\Delta t;\Theta_{j+1})\big] \Big\|_2^2,
\end{aligned}$$
where $\vec{h}_N(t_j + c_s\Delta t) \equiv (h_1(t_j + c_s\Delta t), \ldots, h_N(t_j + c_s\Delta t))$ and $u$ (or the spectral expansion coefficients $w_i$ of $u$) is assumed known at all intermediate time points $c_s\Delta t$ in $(t_j, t_{j+1})$. The last term in Eq. 35 adds an $L^2$ penalty on the coefficients of $f$, which tends to reconstruct smoother and smaller-magnitude sources as $\lambda$ is increased. Other forms of regularization, such as $L^1$, can also be considered [51]. In the presence of noise, an $L^1$ regularization further drives small expansion weights to zero, yielding an inferred source $f_N$ described by fewer nonzero weights.
Since the reconstructed heat source $f_N$ is expressed in terms of the spectral expansion of Eq. 34, and since minimizing the loss function of Eq. 35 depends on global information about the observation $u$, the reconstructed $f$ at any location $x$ also contains global information intrinsic to $u$. In other words, for such inverse problems, the s-PINN approach extracts global spatial information and is thus able to reconstruct global quantities. We consider an explicit case in the next example.
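The penalty in Eq. 35 amounts to a ridge term on the network outputs; the fragment below (our own sketch) shows how the $L^2$ penalty, or the $L^1$ variant mentioned above, would be added to the data-fidelity terms.

```python
import torch

def penalized_loss(sse_l, sse_r, h_stages, lam, kind="l2"):
    """Eq. 35: data-fidelity terms plus a penalty on the source coefficients.
    h_stages has shape (K, N+1) and holds h_i(t_j + c_s dt); lam = 0 recovers
    the unpenalized objective, and kind="l1" promotes sparse coefficients."""
    penalty = (h_stages ** 2).sum() if kind == "l2" else h_stages.abs().sum()
    return sse_l + sse_r + lam * penalty
```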
Example 9: Source recovery

Consider the canonical source reconstruction problem [52, 53, 54] of finding $f(x,t)$ in the heat equation model of Eq. 25, for which observational data are given by Eq. 33 but evaluated at $t_j + c_s\Delta t$. A physical interpretation of the reconstruction problem is identifying the heat source $f(x,t)$ using measurement data in conjunction with Eq. 25. As in Example 7, we numerically solve the weak form of Eq. 27. To study how the $L^2$ penalty term in Eq. 35 affects source recovery, and whether increasing the regularization $\lambda$ makes the inference of $f$ more robust against noise, we minimize Eq. 35 for different values of $\lambda$ and $\sigma^2$. We use a neural network with 13 layers and 100 neurons per layer to reconstruct the $h_i(t)$ in the decomposition of Eq. 34 with $N = 16$, i.e., the neural network outputs the coefficients $h_i$ at the intermediate timesteps $t_j + c_s\Delta t$. The basis functions $\phi_{i,x_L}^{\beta}(x)$ are chosen to be Hermite functions $\hat{H}_{i,x_L}^{\beta}(x)$. For simplicity, we consider the problem only within the first time interval $[0, 0.2]$, with a fixed scaling factor $\beta = 0.8$ and a fixed displacement $x_L = 0$.
In Table 5, we record the $L^2$ error of the reconstructed source,

$$\Big\| f(x,t) - \sum_{i=0}^{16} h_i(t)\,\hat{H}_{i,x_L}^{\beta}(x) \Big\|_2, \qquad (36)$$

in the lower-left of each entry and the $\mathrm{SSE}_0$ in the upper-right. Observe that as the variance of the noise increases, the reconstruction of $f$ via the spectral expansion becomes increasingly inaccurate. In the noise-free case, taking $\lambda = 0$ in Eq. 35 achieves both the smallest $\mathrm{SSE}_0$ and the smallest reconstruction error. However, with increasing noise $\sigma^2$, using an $L^2$ regularization term in Eq. 35 can prevent over-fitting of the data, although $\mathrm{SSE}_0$ increases with the regularization strength $\lambda$. When $\sigma = 10^{-3}$, taking $\lambda = 10^{-2}$ achieves the smallest reconstruction error of Eq. 36; when $\sigma = 10^{-2}, 10^{-1}$, $\lambda = 10^{-1}$ achieves the smallest reconstruction error. However, if $\lambda$ is too large, the coefficients of the spectral approximation to $f$ are pushed toward zero. Thus, it is important to choose an intermediate $\lambda$ so that the reconstruction of the source is robust to noise. In Fig. 8, we plot the norm of the reconstructed heat source $\|\vec{h}_N\|_2$ against the "error" $\mathrm{SSE}_0$ as $\lambda$ changes, for different $\sigma$. When $\lambda$ is large, the norm of the reconstructed heat source $\|\vec{h}_N\|_2$ tends to decrease while the "error" $\mathrm{SSE}_0$ tends to increase. When $\lambda = 10^{-1}$, $\|\vec{h}_N\|_2$ is small and $\mathrm{SSE}_0$ is large. A moderate $\lambda \in [10^{-3}, 10^{-2}]$ reduces the error $\mathrm{SSE}_0$, compared to using a large $\lambda$, while also generating a heat source with smaller $\|\vec{h}_N\|_2$.
Summary and Conclusions
In this paper, we propose an approach that blends standard PINN algorithms with adaptive spectral methods and show through examples that this hybrid approach can be applied to a wide variety of data-driven problems, including function approximation, solving PDEs, parameter inference, and model selection. The underlying feature that we exploit is the physical difference across classes of data. For example, by understanding the difference between space and time variables in a PDE model, we can describe the spatial dependence in terms of basis functions, obviating the need to normalize spatial data. Thus, s-PINNs are ideal for solving problems in unbounded domains. The only additional "prior" needed is an assumption on the asymptotic spatial behavior and an appropriate choice of basis functions. Additionally, adaptive techniques have recently been developed to further improve efficiency and accuracy, making spectral decompositions especially suitable for unbounded-domain problems that the standard PINN cannot easily address. We applied s-PINNs (exploiting adaptive spectral methods) across a number of examples and showed that they can outperform simple neural networks for function approximation and existing PINNs for solving certain PDEs. Three major advantages are that s-PINNs can be applied to unbounded-domain problems, are more accurate by recovering spectral convergence in space, and are more efficient as a result of faster evaluation of spatial derivatives of all orders compared to standard PINNs that use autodifferentiation. These advantages are rooted in separated data structures, allowing for spectral computation and high-accuracy numerics. Straightforward implementation of s-PINNs retains most of the advantageous features of deep neural networks in PINNs, making s-PINNs ideal for data-driven inference problems. However, in the context of solving higher-dimensional PDEs, a tradeoff is necessary when using s-PINNs instead of PINNs. For s-PINNs, the network structure needs to be significantly widened to output an exponentially increasing (with dimensionality) number of expansion coefficients, while in standard PINNs, the network structure remains largely preserved but an exponentially larger number of trajectories is needed for sufficient training. We found that by restricting the spatial domain to a hyperbolic cross space, the number of outputs required for s-PINNs can be appreciably decreased for problems of moderate dimensions. While using a hyperbolic cross space cannot reduce the number of outputs sufficiently to make s-PINNs effective for very high-dimensional problems, the standard PINN approach to problems in very high dimensions could require an unattainable number of samples for sufficient training.
In Table 6, we compare the advantages and disadvantages of the standard PINN and s-PINN methods. Potential improvements and extensions include applying techniques for selecting basis functions that best characterize the expected underlying process, spatial or otherwise, and inferring forms of the underlying model PDEs [55, 56]. While standard PINN methods deal with local information (e.g., $\partial_x u$, $\partial_x^2 u$), spectral decompositions capture global information, making them a natural choice for also efficiently learning and approximating nonlocal terms such as convolutions and integral kernels. Potential future exploration using our s-PINN method may include adapting it to solve higher-dimensional problems by more systematically choosing a proper hyperbolic space or by using other techniques for reducing the number of coefficients (outputs of the neural network). Also, recent Gaussian-process-based smoothing techniques [57] can be considered to improve the robustness of our s-PINN method against noise/errors in measurements, and noise-aware physics-informed machine learning techniques [58] can be incorporated when applying our s-PINN to inverse-type PDE discovery problems. Finally, one can incorporate a recently proposed Bayesian PINN (B-PINN) [59] method into our s-PINN method to quantify uncertainty when solving inverse problems under noisy data.

Table 6. Advantages and disadvantages of traditional and PINN-based numerical solvers. We use "+" and "−" signs to indicate advantages and disadvantages, respectively. Finite-difference (FD), finite-element (FE), and spectral methods can be used in a traditional sense without relying on neural networks. This table provides an overview of the advantages and disadvantages associated with the corresponding methods and solvers.
Non-spectral methods, traditional solvers:
+ leverages existing numerical methods
+ low-order FD/FE schemes easily implemented
+ efficient evaluation of function and derivatives
− mainly restricted to bounded domains
− complicated time-extrapolation
− complicated implementation of higher-order schemes
− algebraic convergence, less accurate
− more complicated inverse-type problems
− more complicated temporal and spatial extrapolation
− requires understanding of problem to choose suitable discretization

Non-spectral methods, PINN solvers:
+ easy implementation
+ efficient deep-neural-network training
+ easy extrapolation
+ easily handles inverse-type problems
− mainly restricted to bounded domains
− less accurate
− less interpretable spatial derivatives
− limited control of spatial discretization
− expensive evaluation of neural networks
− incompatible with existing numerical methods

Spectral methods, traditional solvers:
+ suitable for bounded and unbounded domains
+ spectral convergence in space, more accurate
+ leverages existing numerical methods
+ efficient evaluation of function and derivatives
− information required for choosing basis functions
− more complicated inverse-type problems
− more complicated implementation
− more complicated temporal extrapolation
− usually requires a "regular" domain, e.g., a rectangle, $\mathbb{R}^d$, a ball, etc.

Spectral methods, PINN solvers (s-PINNs):
+ suitable for both bounded and unbounded domains
+ easy implementation
+ spectral convergence in space, more accurate
+ efficient deep-neural-network training
+ more interpretable derivatives of spatial variables
+ easy extrapolation
+ easily handles inverse-type problems
+ compatible with existing adaptive techniques
− requires some information to choose basis functions
− expensive evaluation of neural networks
− usually requires a "regular" domain
Figure 1. Solving unbounded-domain problems with spectrally adapted physics-informed neural networks for functions $u_N(x,t)$ that can be expressed as a spectral expansion $u_N(x,t) = \sum_{i=0}^{N} w_i(t)\phi_i(x)$. (a) An example of a function $u_N(x,t)$ plotted at three different time points. (b) Decaying behavior of a corresponding basis function element $\phi_i(x)$. (c) PDEs in unbounded domains can be solved by combining spectral decomposition with PINNs and minimizing the loss function $L$. Spatial derivatives of basis functions are explicitly defined and easily obtained.
Figure 3. Example 2: Solving Eq. 11 in a bounded domain. $L^2$ errors, frequency indicators, and expansion order associated with the numerical solution of Eq. 11 using the adaptive s-PINN method with a timestep $\Delta t = 0.01$. (a) In a bounded domain, the s-PINNs, with and without the adaptive spectral technique, have smaller errors than the standard PINN (black). Moreover, the s-PINN method combined with a p-adaptive technique that dynamically increases the number of basis functions (red) exhibits a smaller error than the non-adaptive s-PINN (blue). The higher accuracy of the adaptive s-PINN is a consequence of maintaining a small frequency indicator (Eq. 12), as shown in (b). (c) Keeping the frequency indicator at small values is realized by increasing the spectral expansion order.
Figure 4. Example 3: Solving Eq. 13 in an unbounded domain. $L^2$ error, frequency indicator, and expansion order associated with the numerical solution of Eq. 13 using the s-PINN method combined with the spectral scaling technique. (a) The s-PINN method with the scaling technique (red) has a smaller error than the s-PINN without scaling (blue). The higher accuracy of the adaptive s-PINN is a consequence of maintaining a smaller frequency indicator (Eq. 12), as shown in (b). (c) Keeping the frequency indicator at small values is realized by reducing the scaling factor so that the basis functions decay more slowly at infinity. The timestep is $\Delta t = 0.05$.
Figure 5. Example 4: Solving a higher-dimensional unbounded-domain PDE (Eq. 16). $L^2$ error, scaling factors, and frequency indicators associated with the numerical solution of Eq. 16 using s-PINNs, with and without dynamic scaling. (a) $L^2$ error as a function of time. The s-PINNs that are equipped with the scaling technique (red) achieve higher accuracy than those without (black). (b) The scaling factors $\beta_x$ (blue) and $\beta_y$ (red) as functions of time. Both scaling factors are decreased to match the spread of the solution in both the $x$ and $y$ directions. Scaling factors are adjusted to maintain small frequency indicators in the $x$-direction (c) and in the $y$-direction (d). In all computations, the timestep is $\Delta t = 0.1$.
Figure 6. Example 6: Solving the Schrödinger equation (Eq. 22) in an unbounded domain. Approximation error, scaling factor, displacement, and expansion order associated with the numerical solution of Eq. 22 using adaptive (red) and non-adaptive (black) s-PINNs. (a) Errors for numerically solving Eq. 22 with and without adaptive techniques. (b) The scaling factor, which decreases over time as the solution becomes more spread out. (c) The displacement of the basis functions $x_L$, which is increased as the solution moves rightwards. (d) The expansion order $N$, which increases over time as the solution becomes more oscillatory. A timestep $\Delta t = 0.1$ was used.
Figure 7. Example 8: Parameter (diffusivity) inference. The parameter $\kappa$ inferred within successive time windows of $\Delta t = 0.1$, the SSE error (Eq. 29), the scaling factor, and the frequency indicators associated with solving Eq. 31, for different noise levels $\sigma^2$. Here, the SSE was minimized to find the estimate $\hat{\theta} \equiv \hat{\kappa}$ and the solutions $u_N$ at intermediate timesteps $t_j + c_s\Delta t$. (a, b) Smaller $\sigma^2$ leads to a smaller SSE (Eq. 30) and a more accurate reconstruction of $\hat{\kappa}$. When the function has spread out significantly at long times, the reconstructed $\hat{\kappa}$ becomes less accurate, suggesting that unboundedness and small function values render the problem susceptible to numerical difficulties. (c, d) Noisy data result in a larger proportion of high-frequency waves and thus a large frequency indicator, impeding proper scaling.
Figure 8. Example 9: Source recovery. $\mathrm{SSE}_0$ plotted against the norm of the reconstructed heat source $\|\vec{h}_N\|_2$ as given by Eq. 35, as a function of $\lambda$ for various values of $\sigma^2$ (an "L-curve").
Table 1. Overview of variables. Definitions of the main variables and parameters used in this paper.

Symbol | Definition
n      | number of observations
N      | spectral expansion order
N_H    | number of intermediate layers in the neural network
H      | number of neurons per layer
η      | learning rate of stochastic gradient descent
Θ      | neural network hyperparameters
K      | order of the Runge-Kutta scheme
β      | scaling factor of basis functions
Table 2. Example 5: Applying the hyperbolic cross space and s-PINNs to the (3+1)-dimensional PDE of Eq. 19. Applying the hyperbolic cross space (Eq. 21), each cell lists the $L^2$ error followed by the training time in seconds. The number of coefficients (outputs of the neural network) for $\gamma_\times = -\infty, -1, 0, \frac{1}{2}$ are 1000, 205, 141, and 110, respectively. Using $\gamma_\times = -1$ or $0$ leads to the most accurate results. The training time tends to increase with the number of outputs (a smaller $\gamma_\times$ corresponds to more outputs). Comparing the results in different rows for the same column shows that more outputs require a wider neural network for training.

H    | γ× = −∞           | γ× = −1          | γ× = 0           | γ× = 1/2
200  | 2.217e-03 / 22911 | 1.651e-04 / 4309 | 5.356e-05 / 2886 | 3.173e-04 / 3956
400  | 1.072e-03 / 26725 | 2.970e-05 / 7014 | 5.356e-05 / 3309 | 3.173e-04 / 2356
700  | 2.276e-03 / 43923 | 2.900e-05 / 3133 | 5.356e-05 / 3229 | 3.173e-04 / 2098
1000 | 7.871e-05 / 55880 | 2.901e-05 / 3002 | 5.356e-05 / 2016 | 3.173e-04 / 1894
Table 3. Example 7: Sensitivity analysis of the s-PINN. Computational run time (in seconds), error, and final scaling factor for different timesteps $\Delta t$, different implicit order-$K$ Runge-Kutta schemes, and the traditional Crank-Nicolson (C-N) scheme. In each cell, the run time (in seconds), the SSE, and the final scaling factor are listed from left to right. The results associated with the smallest error are highlighted in red, while those associated with the shortest run time of our s-PINN method are indicated in blue.

Δt   | C-N scheme           | K = 2                | K = 4                | K = 6                 | K = 10
0.02 | 12, 8.252e-06, 0.545 | 27, 4.011e-08, 0.545 | 54, 1.368e-08, 0.545 | 279, 2.545e-07, 0.545 | 7071, 6.358e-05, 0.695
0.05 | 5, 5.157e-05, 0.545  | 12, 2.799e-08, 0.545 | 23, 1.651e-08, 0.545 | 105, 2.566e-07, 0.545 | 3172, 1.052e-06, 0.545
0.1  | 3, 2.239e-04, 0.695  | 6, 1.331e-06, 0.695  | 10, 1.314e-06, 0.695 | 72, 1.346e-06, 0.695  | 1788, 2.782e-06, 0.695
0.2  | 2, 9.308e-04, 0.695  | 3, 3.760e-06, 0.695  | 9, 2.087e-06, 0.695  | 317, 2.107e-06, 0.695 | 1310, 1.925e-03, 0.753
Table 4. Example 7: Sensitivity analysis of the s-PINN. Computational run time (in seconds), error, and final scaling factor for different numbers of intermediate layers $N_H$ and neurons per layer $H$. In each cell, the run time (in seconds), the SSE, and the final scaling factor are listed from left to right. Results associated with the smallest error are marked in red, while those associated with the shortest run time are highlighted in blue.

H   | N_H = 3                | N_H = 5               | N_H = 8               | N_H = 13
50  | 1348, 6.317e-04, 0.738 | 798, 9.984e-05, 0.695 | 995, 1.891e-04, 0.579 | 778, 4.022e-04, 0.695
80  | 784, 7.164e-04, 0.654  | 234, 1.349e-06, 0.695 | 216, 1.345e-06, 0.695 | 376, 1.982e-06, 0.695
100 | 1080, 8.804e-05, 0.695 | 114, 1.344e-06, 0.695 | 102, 1.346e-06, 0.695 | 145, 1.348e-06, 0.695
200 | 219, 1.349e-06, 0.695  | 72, 1.346e-06, 0.695  | 43, 1.347e-06, 0.695  | 64, 1.345e-06, 0.695
Table 5. The error $\mathrm{SSE}_0$ from Eq. 30 and the error of the reconstructed source (Eq. 36) under different strengths of data noise and regularization coefficients $\lambda$. In each cell, the error of the reconstructed source (Eq. 36) is listed first and the $\mathrm{SSE}_0$ second.

σ     | λ = 0               | λ = 10⁻³            | λ = 10⁻²            | λ = 10⁻¹
0     | 0.1370 / 1.543e-08  | 0.1370 / 1.368e-05  | 0.1477 / 0.00132    | 0.3228 / 0.0888
10⁻³  | 0.1821 / 2.837e-06  | 0.1818 / 2.736e-05  | 0.1702 / 1.387e-03  | 0.3222 / 0.08964
10⁻²  | 1.0497 / 0.001517   | 1.0383 / 1.579e-03  | 0.8031 / 6.078e-03  | 0.3434 / 0.1168
10⁻¹  | 11.505 / 0.2976     | 11.458 / 0.3032     | 8.2961 / 0.6905     | 1.3018 / 2.9330
Acknowledgments

LB acknowledges financial support from the Swiss National Fund (grant number P2EZP2 191888). The authors also acknowledge support from the US Army Research Office (W911NF-18-1-0345) and the National Science Foundation (DMS-1814364).
References

[1] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.
[2] Sejun Park, Chulhee Yun, Jaeho Lee, and Jinwoo Shin. Minimum width for universal approximation. In International Conference on Learning Representations, 2020.
[3] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
[4] George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3(6):422-440, 2021.
[5] Thomas Asikis, Lucas Böttcher, and Nino Antulov-Fantulin. Neural ordinary differential equation control of dynamics on graphs. Physical Review Research (in press), 2022.
[6] Lucas Böttcher, Nino Antulov-Fantulin, and Thomas Asikis. AI Pontryagin or how neural networks learn to control dynamical systems. Nature Communications, 2021.
[7] Lucas Böttcher and Thomas Asikis. Near-optimal control of dynamical systems with neural ordinary differential equations. Machine Learning: Science and Technology, 3(4):045004, 2022.
[8] FW Lewis, Suresh Jagannathan, and Aydin Yesildirak. Neural network control of robot manipulators and non-linear systems. CRC Press, 2020.
[9] Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy. arXiv preprint arXiv:1710.10686, 2017.
[10] M Lutter, C Ritter, and Jan Peters. Deep Lagrangian networks: Using physics as model prior for deep learning. In International Conference on Learning Representations. OpenReview.net, 2019.
[11] Manuel A Roehrl, Thomas A Runkler, Veronika Brandtstetter, Michel Tokic, and Stefan Obermayer. Modeling system dynamics with physics-informed neural networks based on Lagrangian mechanics. IFAC-PapersOnLine, 53(2):9195-9200, 2020.
[12] Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Symplectic ODE-net: Learning Hamiltonian dynamics with control. In International Conference on Learning Representations, 2019.
[13] Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. Variational physics-informed neural networks for solving partial differential equations. arXiv preprint arXiv:1912.00873, 2019.
[14] Ameya D Jagtap and George Em Karniadakis. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Communications in Computational Physics, 28(5):2002-2041, 2020.
[15] Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. arXiv preprint arXiv:2111.03794, 2021.
[16] Zhiping Mao, Ameya D Jagtap, and George Em Karniadakis. Physics-informed neural networks for high-speed flows. Computer Methods in Applied Mechanics and Engineering, 360:112789, 2020.
[17] Zhiwei Fang and Justin Zhan. A physics-informed neural network framework for PDEs on 3D surfaces: Time independent problems. IEEE Access, 8:26328-26335, 2019.
[18] George S Misyris, Andreas Venzke, and Spyros Chatzivasileiadis. Physics-informed neural networks for power systems. In 2020 IEEE Power & Energy Society General Meeting (PESGM), pages 1-5. IEEE, 2020.
[19] Francisco Sahli Costabal, Yibo Yang, Paris Perdikaris, Daniel E Hurtado, and Ellen Kuhl. Physics-informed neural networks for cardiac activation mapping. Frontiers in Physics, 8:42, 2020.
[20] Minliang Liu, Liang Liang, and Wei Sun. A generic physics-informed neural network-based constitutive model for soft biological tissues. Computer Methods in Applied Mechanics and Engineering, 372:113402, 2020.
[21] Lucas Böttcher and Hans J Herrmann. Computational Statistical Physics. Cambridge University Press, 2021.
[22] Stefan H Strub and Lucas Böttcher. Modeling deformed transmission lines for continuous strain sensing applications. Measurement Science and Technology, 31(3):035109, 2019.
[23] Julien Barré, Alain Olivetti, and Yoshiyuki Y Yamaguchi. Algebraic damping in the one-dimensional Vlasov equation. Journal of Physics A: Mathematical and Theoretical, 44(40):405502, 2011.
[24] Buyang Li, Jiwei Zhang, and Chunxiong Zheng. Stability and error analysis for a second-order fast approximation of the one-dimensional Schrödinger equation under absorbing boundary conditions. SIAM Journal on Scientific Computing, 40(6):A4083-A4104, 2018.
[25] Mingtao Xia, Chris D Greenman, and Tom Chou. PDE models of adder mechanisms in cellular proliferation. SIAM Journal on Applied Mathematics, 80(3):1307-1335, 2020.
[26] Mingtao Xia and Tom Chou. Kinetic theory for structured populations: application to stochastic sizer-timer models of cell proliferation. Journal of Physics A: Mathematical and Theoretical, 2021.
[27] Elena Mengotti, Laura J Heyderman, Arantxa Fraile Rodríguez, Frithjof Nolting, Remo V Hügli, and Hans-Benjamin Braun. Real-space observation of emergent magnetic monopoles and associated Dirac strings in artificial Kagomé spin ice. Nature Physics, 7(1):68-74, 2011.
[28] RV Hügli, G Duff, B O'Conchuir, E Mengotti, A Fraile Rodríguez, F Nolting, LJ Heyderman, and HB Braun. Artificial Kagomé spin ice: dimensional reduction, avalanche control and emergent magnetic monopoles. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1981):5767-5782, 2012.
[29] Mingtao Xia, Sihong Shao, and Tom Chou. Efficient scaling and moving techniques for spectral methods in unbounded domains. SIAM Journal on Scientific Computing, 43(5):A3244-A3268, 2021.
[30] Mingtao Xia, Sihong Shao, and Tom Chou. A frequency-dependent p-adaptive technique for spectral methods. Journal of Computational Physics, 446:110627, 2021.
[31] Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011.
[32] Lloyd N Trefethen. Spectral methods in MATLAB. SIAM, 2000.
[33] Seppo Linnainmaa. Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2):146-160, 1976.
[34] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
[35] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
[36] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR, 2015.
[37] Tao Tang, Li-Lian Wang, Huifang Yuan, and Tao Zhou. Rational spectral methods for PDEs involving fractional Laplacian in unbounded domains. SIAM Journal on Scientific Computing, 42(2):A585-A611, 2020.
[38] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18, 2018.
[39] Johannes Brandstetter, Daniel E Worrall, and Max Welling. Message passing neural PDE solvers. In International Conference on Learning Representations, 2021.
[40] Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar, et al. Fourier neural operator for parametric partial differential equations. In International Conference on Learning Representations, 2020.
[41] Jie Shen and Li-Lian Wang. Sparse spectral approximations of high-dimensional problems based on hyperbolic cross. SIAM Journal on Numerical Analysis, 48(3):1087-1109, 2010.
[42] Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 244-253, 2018.
[43] Zixiang Chen, Yuan Cao, Difan Zou, and Quanquan Gu. How much over-parameterization is sufficient to learn deep ReLU networks? In International Conference on Learning Representations, 2020.
[44] Mousa J. Huntul. Identification of the timewise thermal conductivity in a 2D heat equation from local heat flux conditions. Inverse Problems in Science and Engineering, 29(7):903-919, 2021.
[45] NI Ivanchov. Inverse problems for the heat-conduction equation with nonlocal boundary conditions. Ukrainian Mathematical Journal, 45(8):1186-1192, 1993.
[46] B Frank Jones Jr. The determination of a coefficient in a parabolic differential equation: Part I. Existence and uniqueness. Journal of Mathematics and Mechanics, pages 907-918, 1962.
[47] N. Ya. Beznoshchenko. On finding a coefficient in a parabolic equation. Differential Equations, 10:24-35, 1974.
[48] Liang Yan, Feng-Lian Yang, and Chu-Li Fu. A meshless method for solving an inverse spacewise-dependent heat source problem. Journal of Computational Physics, 228(1):123-136, 2009.
Inverse problem of time-dependent heat sources numerical reconstruction. Liu Yang, Mehdi Dehghan, Jian-Ning Yu, Guan-Wei Luo, Mathematics and Computers in Simulation. 818Liu Yang, Mehdi Dehghan, Jian-Ning Yu, and Guan-Wei Luo. Inverse problem of time-dependent heat sources numerical reconstruction. Mathematics and Computers in Simulation, 81(8):1656-1672, 2011.
A simplified Tikhonov regularization method for determining the heat source. Fan Yang, Chu-Li Fu, Applied Mathematical Modelling. 3411Fan Yang and Chu-Li Fu. A simplified Tikhonov regularization method for determining the heat source. Applied Mathematical Modelling, 34(11):3286-3299, 2010.
Toward an artificial intelligence physicist for unsupervised learning. Tailin Wu, Max Tegmark, Physical Review E. 100333311Tailin Wu and Max Tegmark. Toward an artificial intelligence physicist for unsupervised learning. Physical Review E, 100(3):033311, 2019.
Determination of an unknown heat source from overspecified boundary data. John Rozier Cannon, SIAM Journal on Numerical Analysis. 52John Rozier Cannon. Determination of an unknown heat source from overspecified boundary data. SIAM Journal on Numerical Analysis, 5(2):275-286, 1968.
A variational method for identifying a spacewise-dependent heat source. Tomas Johansson, Daniel Lesnic, IMA Journal of Applied Mathematics. 726B Tomas Johansson and Daniel Lesnic. A variational method for identifying a spacewise-dependent heat source. IMA Journal of Applied Mathematics, 72(6):748-760, 2007.
A unified approach to identifying an unknown spacewise dependent source in a variable coefficient parabolic equation from final and integral overdeterminations. Alemdar Hasanov, Burhan Pektacc, Applied Numerical Mathematics. 78Alemdar Hasanov and Burhan Pektacc. A unified approach to identifying an unknown spacewise dependent source in a variable coefficient parabolic equation from final and integral overdeterminations. Applied Numerical Mathematics, 78:49-67, 2014.
PDE-net: Learning PDEs from data. Zichao Long, Yiping Lu, Xianzhong Ma, Bin Dong, International Conference on Machine Learning. PMLRZichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-net: Learning PDEs from data. In International Conference on Machine Learning, pages 3208-3216. PMLR, 2018.
Deep hidden physics models: Deep learning of nonlinear partial differential equations. Maziar Raissi, Journal of Machine Learning Research. 191Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. Journal of Machine Learning Research, 19(1):932-955, 2018.
Recipes for when physics fails: Recovering robust learning of physics informed neural networks. Chandrajit Bajaj, Luke Mclennan, Timothy Andeen, Avik Roy, Machine Learning: Science and Technology. Chandrajit Bajaj, Luke McLennan, Timothy Andeen, and Avik Roy. Recipes for when physics fails: Recovering robust learning of physics informed neural networks. Machine Learning: Science and Technology, 2023.
Noise-aware physicsinformed machine learning for robust pde discovery. Pongpisit Thanasutives, Takashi Morita, Masayuki Numao, Ken-Ichi Fukui, Machine Learning: Science and Technology. 415009Pongpisit Thanasutives, Takashi Morita, Masayuki Numao, and Ken-ichi Fukui. Noise-aware physics- informed machine learning for robust pde discovery. Machine Learning: Science and Technology, 4:015009, 2022.
B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. Liu Yang, Xuhui Meng, George Em Karniadakis, Journal of Computational Physics. 425109913Liu Yang, Xuhui Meng, and George Em Karniadakis. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. Journal of Computational Physics, 425:109913, 2021.
| [] |
[
"NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty",
"NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty"
] | [
"Jonathan Balloch \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Zhiyu Lin \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Mustafa Hussain \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Aarun Srinivas \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Robert Wright \nGeorgia Tech Research Institute Atlanta\nGAUSA\n",
"Xiangyu Peng \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Julia Kim \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Mark Riedl \nGeorgia Institute of Technology Atlanta\nGAUSA\n",
"Balloch@gatech Edu "
] | [
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Tech Research Institute Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA",
"Georgia Institute of Technology Atlanta\nGAUSA"
] | [] | A robust body of reinforcement learning techniques have been developed to solve complex sequential decision making problems. However, these methods assume that train and evaluation tasks come from similarly or identically distributed environments. This assumption does not hold in real life where small novel changes to the environment can make a previously learned policy fail or introduce simpler solutions that might never be found. To that end we explore the concept of novelty, defined in this work as the sudden change to the mechanics or properties of environment. We provide an ontology of for novelties most relevant to sequential decision making, which distinguishes between novelties that affect objects versus actions, unary properties versus non-unary relations, and the distribution of solutions to a task. We introduce NOVGRID, a novelty generation framework built on Mini-Grid, acting as a toolkit for rapidly developing and evaluating novelty-adaptation-enabled reinforcement learning techniques. Along with the core NOVGRIDwe provide exemplar novelties aligned with our ontology and instantiate them as novelty templates that can be applied to many MiniGridcompliant environments. Finally, we present a set of metrics built into our framework for the evaluation of noveltyadaptation-enabled machine-learning techniques, and show characteristics of a baseline RL model using these metrics. | 10.48550/arxiv.2203.12117 | [
"https://arxiv.org/pdf/2203.12117v1.pdf"
] | 247,172,246 | 2203.12117 | cf6fddeddafaa59a6df21379f7b77deb0ca1e2f1 |
NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty
Jonathan Balloch
Georgia Institute of Technology, Atlanta
GA, USA
Zhiyu Lin
Georgia Institute of Technology, Atlanta
GA, USA
Mustafa Hussain
Georgia Institute of Technology, Atlanta
GA, USA
Aarun Srinivas
Georgia Institute of Technology, Atlanta
GA, USA
Robert Wright
Georgia Tech Research Institute, Atlanta
GA, USA
Xiangyu Peng
Georgia Institute of Technology, Atlanta
GA, USA
Julia Kim
Georgia Institute of Technology, Atlanta
GA, USA
Mark Riedl
Georgia Institute of Technology, Atlanta
GA, USA
balloch@gatech.edu
NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty
A robust body of reinforcement learning techniques have been developed to solve complex sequential decision making problems. However, these methods assume that train and evaluation tasks come from similarly or identically distributed environments. This assumption does not hold in real life, where small novel changes to the environment can make a previously learned policy fail or introduce simpler solutions that might never be found. To that end we explore the concept of novelty, defined in this work as a sudden change to the mechanics or properties of the environment. We provide an ontology of novelties most relevant to sequential decision making, which distinguishes between novelties that affect objects versus actions, unary properties versus non-unary relations, and the distribution of solutions to a task. We introduce NOVGRID, a novelty generation framework built on MiniGrid, acting as a toolkit for rapidly developing and evaluating novelty-adaptation-enabled reinforcement learning techniques. Along with the core NOVGRID we provide exemplar novelties aligned with our ontology and instantiate them as novelty templates that can be applied to many MiniGrid-compliant environments. Finally, we present a set of metrics built into our framework for the evaluation of novelty-adaptation-enabled machine-learning techniques, and show characteristics of a baseline RL model using these metrics.
Introduction
There exists a robust body of machine learning techniques, including but not limited to imitation learning and reinforcement learning, that can be used to form learning models of agent behavior in complex sequential decision making environments. These techniques can be generally applied to find an optimal policy that solves nearly any problem that can be modeled as a Markov Decision Process, and the policies can be anything from simple look-up tables to Gaussian Processes and Deep Neural Networks (Engel, Mannor, and Meir 2005; Sutton and Barto 2018).
However, success in these learning methods shares a common assumption: the stochastic process used to model the environment is equivalent in both training and evaluation. While this train-test similarity assumption holds in some settings, in most real-world cases environments cannot ever be guaranteed to function the same forever. Whether in environments related to health care, the industry which the United States Bureau of Labor Statistics projects to have the greatest labor demand over the next decade (BLS 2021), or freight driving, which is an integral part of the modern supply chain, agents encounter and need to respond to novelty. To eventually meet these real-world challenges, the reinforcement learning research community needs to analyze how well different agents respond to a wide variety of novelties.
We provide three contributions. First, we propose an ontology of novelty for sequential decision making that distinguishes between object novelties (new or changed properties of objects) and action novelties (changes in how the agent's actions work). Our ontology also relates novelties to goal-seeking performance, categorizing novelties as to whether they hinder or facilitate future expected reward. Second, we introduce NOVELTY MINIGRID (NOVGRID), an extension of the MiniGrid environment (Chevalier-Boisvert, Willems, and Pal 2018) that allows the world properties and dynamics to change according to a generalized novelty generator. The MiniGrid environment is a grid world that facilitates reinforcement learning algorithm development with low environment integration overhead, which allows for rapid iteration and testing. NOVGRID extends the MiniGrid environment by expanding the way the grid world and the agent interact to allow novelties to be injected into the environment. Specifically, this is done by expanding the functionality of the actionable objects (doors, keys, lava, etc.) already in MiniGrid and creating a general environment wrapper that injects novelty at a certain point in the training process. We provide a number of example novelties aligned with different dimensions of our novelty ontology, in addition to allowing developers to create their own novelties. Third, we propose a set of metrics for evaluating how agents adapt to novelty. With these components NOVGRID will enable more rapid research progress on agent novelty adaptation.
In this paper we will first provide background on novelty adaptation in sequential decision making problems and its relationship to prior work. We will then discuss the ontology of novelties organized by the characteristics we understand as most important to sequential decision making agents. After this, we detail how the NOVGRID environment was designed and how the provided novelty implementations enable research on each part of this ontology. Finally, we describe the metrics used to test novelty adaptation and sample performance of a baseline agent.
Novelty Background and Related Work
In this section we discuss some of the background and motivation for novelty adaptation as a research challenge.
Novelty can be described in the context of differences in an environment over a period of time and the associated capabilities to detect and respond to those changes (Langley 2020). Boult et al. (2021) seek to unify the study of novelty in both sequential decision making and traditional machine learning domains, categorizing novelties broadly as world novelties, agent novelties, and observation novelties. By Boult et al.'s definition, world novelties are changes in the objects and dynamics of the world external to the agent, as well as the agent's abilities to effect change on the world. Agent novelties are those where the agent's state does not align with the agent's prior understanding of the world and/or can be classified by the agent as a novelty. Observation novelties are those where a sense-making tool external to the agent (e.g., a sensor) is subject to changes in the environment, such as when a camera or radar signal experiences deterioration from use or unexpected interference.
It is important to note concepts and research areas that should not be confused with or included in novelty. Novelty and novelty adaptation are not equivalent to methods related to outliers, such as outlier detection or rejection: outlier detection assumes a priori some correct or expected distribution, where the points or behaviors that lie outside that distribution are aberrant. By contrast, novelty is a sudden change that alters the environment distribution unexpectedly (Chandola, Banerjee, and Kumar 2007).
Novelty adaptation is also different from continual learning or lifelong learning. In continual and lifelong learning there exists a sequence of "tasks", each of which could be a different environment, dataset, or set of novel classes. These tasks can be overlapping, task boundaries don't have to be well-defined, and tasks can include a mixture of supervised and unsupervised data, but in most cases the tasks are disjoint and task boundaries are known and discrete (Parisi et al. 2019; Silver, Yang, and Li 2013; Smith et al. 2021). Most distinctly, in continual learning the model is trained on only one task at a time but validated on that task and all prior tasks. Novelty adaptation, on the other hand, only requires the agent to perform well at the task at hand, so the agent is evaluated only on the pre-novelty world before the novelty is introduced and only on the post-novelty world after the novelty is introduced.
Novelty adaptation is a superset of domain shift and domain adaptation. Domain adaptation in sequential decision making problems specifically addresses problems where training and deployment have the same feature space but different distributions over that feature space (Zhang et al. 2019; Sun, Shi, and Wu 2015). Additionally, this research domain usually assumes that the agent must adapt extremely quickly (few-shot domain adaptation) or simply be robust to these changes (zero-shot domain adaptation). By this definition, domain shift and domain adaptation are one aspect of novelty and novelty adaptation, but novelty goes beyond this by also including variation of the fundamental dynamics of an environment and the agent interactions as well.
The closest analog to novelty adaptation is transfer learning. Given a set of source or training tasks and a target task, transfer learning aims to learn an optimal policy for the target domain leveraging what it learned from the source domains as well as what it has access to in the target domain (Zhu, Lin, and Zhou 2021). While there are some works that focus on variants like zero-shot transfer learning, most notably research focused on transferring from a simulation to the real world, for the most part transfer learning studies the mechanics of model reuse and fine-tuning (Higgins et al. 2017). That is to say: given a model already trained on a certain task, how would one reuse this model in a new task, where this new task can range from being a subtask of the original task to being completely unrelated? In novelty adaptation, on the other hand, the pre-novelty and post-novelty tasks are always related by a (usually realistic) transformation. While transfer learning optimizes the reuse of a learned model with no constraints on the relationship between source and target tasks and by any means necessary, novelty adaptation optimizes for adaptation performance and efficiency online given the knowledge that a realistic transformation between source and target exists.
We are not the first to investigate the characterization and evaluation of novelty adaptation in sequential decision making problems. Pimentel et al. (2014) conducted a comprehensive survey of novelty detection that characterized novelty types. There has also been a small amount of research that studies how agents might more effectively adapt. Approaches range from adaptive mixed continuous-discrete planning to knowledge graphs used in combination with actor-critic reinforcement learning techniques to improve both detection and adaptation (Klenk et al. 2020; Peng, Balloch, and Riedl 2021). Recently there have even been strong efforts to formulate a unified theory of novelty detection and novelty characterization, and to conceive a metric with which the degree of all novelties can be measured (Boult et al. 2021; Langley 2020; Alspector 2021). There has also been work on problems extremely relevant to the space of novelty and novelty adaptation without being framed that way, such as work on adaptation to "hidden" domain shifts (Chen et al. 2021).
Most similar to this work, there have been recent environments for novelty detection, characterization, and adaptation in sequential decision making problems. The NovelGridworlds environment implements a Minecraft-inspired grid world to study novelties in the context of agent-driven object-object interaction (Goel et al. 2021). The GNOME Monopoly environment examines multi-agent gameplay in a long-term, multi-faceted strategic context, while Science Birds examines both observational and world novelties in episodes with a smaller number of timesteps (Kejriwal and Thomas 2021; Gamage et al. 2021). In both cases, an implementation is provided for outside users. These works set a firm foundation for studying novelty, but what NOVGRID uniquely provides are highly extendable implementations that can be applied to any MiniGrid environment with an OpenAI Gym interface, together with a standard set of novelties shipped with the implementations, making it easy for researchers to benchmark progress through time.
An Ontology of Novelty for Sequential Decision Making
We model sequential decision making problems as having two fundamentally different types of entities: agents and the environment, which we model as interacting in a stochastic game, a multi-agent generalization of a Markov Decision Process (MDP). The injection of novelty constitutes a transformation from the original game or MDP M to a new game or MDP M′. Given this model of environments, we consider all aspects of the problem except an agent's decision making model to be properties of the environment. This includes agent morphology, sensors, and action preconditions and effects. As a result, the ontology we lay out here can be considered a specification of Boult et al.'s world novelties in the context of sequential decision making problems. With this model of the environment we assume that each agent's observation space and action space remains consistent before and after novelty is injected. That is, the number of actions and the size and shape of observations are consistent throughout each experiment. That said, the manifestation of these fixed sets may change; actions that initially have some specific effect or no effect pre-novelty can take on different effects post-novelty. Likewise, there may be observations and states that never occur pre-novelty that start to occur post-novelty. This is consistent with a robotics perspective on MDPs where actions and observations are governed by an underlying physics of the real world, even though we experiment within grid worlds and games (Kejriwal and Thomas 2021; Gamage et al. 2021).
We assume that the agent's mission T is consistent before and after the novelty, meaning that we do not consider changes to the extrinsic agent rewards. While changes to the goal and reward structure of an agent are indeed important in the real world, this is closely related to continual lifelong learning and multitask learning. The integration of novelty adaptation with these fields is left to future work.
We characterize novelties along three dimensions. The first dimension is object vs. action novelties. Objects are any component of the environment that is not controllable, including keys, doors, balls, etc. Object novelties involve changes to, or the introduction or removal of, objects or classes of objects. Actions are the ways in which the world is affected by controllable entities. Action novelties involve changes in the dynamics of actions through which the state of the world is affected. Action novelties can involve changes in the preconditions of actions (the applicability criteria of actions) or in action effects (the way in which the world is changed when an action is executed).
Second, novelties can be expressed as changes to unary predicates or non-unary (n-ary where n > 1) relations. Unary object novelties can be thought of as additions, removals, or changes to intrinsic properties of objects like mass, volume, or shape. Non-unary object novelties are changes in the relationships between objects, which is to say properties of objects that are necessarily defined in the context of other entities. Unary and non-unary action novelties involve (a) the addition, removal, or change of properties of objects required for action applicability, or (b) changes to how actions affect the properties of objects or the relationships between objects.
Third, we observe that novelties can be categorized according to how they change the distribution of solutions to a task:
• Barrier novelty: the optima in the solution distribution are longer after novelty than before novelty. For example, a door the agent must pass through to achieve a goal requires one key pre-novelty but requires the agent to possess two keys simultaneously post-novelty.
• Shortcut novelty: the optima in the solution distribution are on average shorter after novelty than before novelty. For example, a door that required a key pre-novelty does not require any keys post-novelty.
• Delta novelty: the optima in the solution distribution are the same before and after novelty injection. For example, a door that required one key pre-novelty requires a different key post-novelty.

Novelty MiniGrid

NOVGRID is built around an OpenAI Gym Wrapper and can wrap any environment that follows the MiniGrid interface. This means that NOVGRID additionally works as a jumping-off point to evaluate any of the many third-party environments based on MiniGrid. This novelty adaptation package has three fundamental components: a novelty injection mechanism built into the core wrapper class, new and modified objects and entities to work with the novelty ontology as we described, and the novelty generator as well as the sample novelties we provide to exemplify our ontology. The core novelty injection system is designed to be simple so that it is applicable to as many MiniGrid environments as possible. The wrapper wraps the environment, and no arguments are required besides the environment, but users can also specify the novelty injection episode, the episode in which novelty is injected. Given a model in train mode, MiniGrid resets its grid at the beginning of every episode with the reset function being called against the environment, which in turn calls the function gen_grid. Our novelty injection wrapper monitors the training cycle, and when the novelty injection episode is reached the wrapper class switches to using alternatives for the reset and gen_grid functions. Specifically, after the novelty injection episode, the system uses post_novelty_reset and post_novelty_gen_grid. This allows the wrapper to quickly and easily load in and overwrite the old environment with the new one. A minimal sketch of this mechanism is given below.
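The following Python sketch illustrates the injection mechanism just described. It is a minimal illustration rather than NovGrid's actual source: the class and method names (NoveltyWrapper, post_novelty_reset, post_novelty_gen_grid, novelty_episode) mirror the paper's description, but the surrounding details are our own assumptions.

import gym


class NoveltyWrapper(gym.Wrapper):
    """Minimal sketch of the NovGrid injection wrapper described above."""

    def __init__(self, env, novelty_episode=1000):
        super().__init__(env)
        self.novelty_episode = novelty_episode  # episode at which novelty appears
        self.num_episodes = 0

    def reset(self, **kwargs):
        self.num_episodes += 1
        if self.num_episodes >= self.novelty_episode:
            return self.post_novelty_reset(**kwargs)
        return self.env.reset(**kwargs)

    def post_novelty_reset(self, **kwargs):
        # Shadow the wrapped environment's grid generator (named _gen_grid
        # in gym-minigrid) with the post-novelty version, then reset so the
        # new grid takes effect.
        self.env.unwrapped._gen_grid = self.post_novelty_gen_grid
        return self.env.reset(**kwargs)

    def post_novelty_gen_grid(self, width, height):
        # Abstract template: each concrete novelty overrides this method.
        raise NotImplementedError

The real wrapper may additionally restore the original generator or handle seeding; those details are omitted here.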
As each novelty is different, the post_novelty_gen_grid method in the base NoveltyWrapper class is only an abstract method that acts as a template. Each implementation of a novelty for testing adaptation requires a class that inherits from NoveltyWrapper and implements the post_novelty_gen_grid method (a sketch of this subclassing pattern is given after the list of novelties below). To exemplify both this process and the novelty ontology described in the preceding section, we have built 11 exemplar novelties that together cover all of the categories of our ontology. This way all researchers using NOVGRID can test their agent's adaptation sensitivity to different parts of the novelty ontology. The novelties delivered with NOVGRID, and how the respective objects usually work in MiniGrid, are:
• GoalLocationChange: This novelty changes the location of the goal object. In MiniGrid the Goal object is usually at a fixed location.
• DoorLockToggle: This novelty makes a door that is assumed to always be locked instead always unlocked, and vice versa. In MiniGrid this is usually a static property. If a door that was unlocked before novelty injection is locked and requires a certain key after novelty injection, the policy learned before novelty injection is likely to fail. On the other hand, if novelty injection makes a previously locked door unlocked, an agent that does not explore after novelty injection may still seek out a key for a door that does not need it.
• DoorKeyChange: This novelty changes which key opens a locked door. In MiniGrid doors are always unlocked by keys of the same color as the door. This means that if key and door colors do not match after novelty injection, agents will have to find another key to open the door. This may cause a previously learned policy to fail until the agent learns to start using the other key. This novelty is illustrated in Figure 1.
• DoorNumKeys: This novelty changes the number of keys needed to unlock a door. The default number of keys is one; this novelty tends to make policies fail because of the extra step of getting a second key.
• ImperviousToLava: Lava becomes non-harmful, whereas in Minigrid lava always immediately ends the episode with no reward. This may result in new routes to the goal that potentially bypass doors.
• ActionRepetition: This novelty changes the number of sequential timesteps an action will have to be repeated for it to occur. In MiniGrid it is usually assumed that for an action to occur it only needs to be issued once. So if an agent needed to command the pick-up action twice before novelty but only once afterwards, to reach its most efficient policy it would need to learn to not command pickup twice.
• ForwardMovementSpeed: This novelty modifies the number of steps an agent takes each time the forward command is issued. In MiniGrid agents only move one grid square per time step. As a result, if the agent gets faster after novelty injection, the original policy may have a harder time controlling the agent, and the agent will need to learn to embrace this change, which could let it reach the goal in fewer steps.
• ActionRadius: This novelty is an example of a change to the relational preconditions of an action by changing the radius around the agent within which an action works. In MiniGrid this is usually assumed to be a distance of only one or zero, depending on the object. If an agent can pick up objects after novelty injection without being right next to them, it will have to realize this if it is to reach the optimal solution.
• ColorRestriction: This novelty restricts the objects one can interact with by color. In MiniGrid it is usually assumed that all objects can be interacted with. If an agent is trained with no blue interactions before novelty and then isn't allowed to interact with yellow objects after novelty, the agent will have to learn to pay attention to the color of objects.
• Burdening: This novelty changes the effect of actions based on whether the agent has any items in the inventory. In MiniGrid it is usually assumed that the inventory has no effect on actions. An agent experiencing this novelty, for example, might move twice as fast as usual when their inventory is empty, but half as fast as usual when in possession of the item, which it will have to compensate for strategically.
• TransitionDeterminism: This novelty changes the likelihood with which actions selected by the agent occur. In MiniGrid it is usually assumed that all actions are deterministic. If an agent is trained with deterministic transitions before novelty injection and then experiences stochastic transitions afterwards, it will need to learn to take safe routes to the goal or its policy will fail more often.
In Table 1 we map each of the exemplar novelties to dimensions in our novelty ontology. To implement these novelties we also had to design custom versions of different standard MiniGrid objects, and these custom objects are also all included with NOVGRID.
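To make the subclassing pattern mentioned above concrete, the sketch below shows what a novelty like DoorKeyChange might look like. The class body is hypothetical (the shipped implementation may differ); it builds on the NoveltyWrapper sketch from earlier and assumes gym-minigrid's grid API, in which a door is opened by the key of matching color.

class DoorKeyChange(NoveltyWrapper):
    """Hypothetical sketch: after injection, every door is recolored so
    that the other key in the environment is the one that opens it."""

    def post_novelty_gen_grid(self, width, height):
        # Call the class's original generator directly: the instance
        # attribute has been shadowed by this method, so going through
        # the class avoids infinite recursion.
        type(self.env.unwrapped)._gen_grid(self.env.unwrapped, width, height)
        # MiniGrid doors unlock with the key of matching color, so
        # changing the door color changes which key opens it.
        for obj in self.env.unwrapped.grid.grid:
            if obj is not None and obj.type == "door":
                obj.color = "blue"  # was yellow pre-novelty in our example

A wrapped environment would then be created as, e.g., DoorKeyChange(gym.make("MiniGrid-DoorKey-6x6-v0"), novelty_episode=2000), with the environment name and episode count purely illustrative.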
Evaluation and Baseline
In novelty adaptation, the core considerations by which we measure whether an agent adapted successfully involve not only performance on the task, but also the way the agent reacts to the novelty and the speed with which it recovers. To that end, we built the following metrics into NOVGRID (a sketch of how they can be computed from logged episode returns follows the list):
• Resilience: the difference in performance between a random agent and a pre-novelty agent when evaluated on the post-novelty domain without any adaptation. This represents the drop-off in performance when novelty is injected, relative to the performance of a random agent. A resilient agent may not encounter a significant decrease in performance, as in the case where the novelty is a goal location change and the agent has been trained on randomized grids. Barrier novelties may cause performance to drop to theoretical minimums. Shortcut novelties may result in no performance drop-off, though a random agent may experience greater reward.
• Asymptotic adaptive performance: final converged performance post novelty above random. This is what would simply be considered the converged performance of the agent in an environment with no novelty.
• Adaptive efficiency: the number of environment interactions required for post-novelty convergence.

Figure 2: Several of the evaluation metrics illustrated against a notional performance curve for an agent.
• One-shot adaptive performance: the performance of the agent post-novelty after only one episode of interaction with the environment.
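The sketch below shows one way these metrics can be computed from logged per-episode returns. It is a minimal illustration under stated assumptions, not NovGrid's built-in implementation; the function names, the tail/window sizes, and the moving-average convergence criterion are our own choices.

import numpy as np


def resilience(frozen_returns, random_returns):
    """Mean post-novelty return of the frozen pre-novelty policy minus
    that of a random policy, both evaluated without any adaptation."""
    return float(np.mean(frozen_returns) - np.mean(random_returns))


def asymptotic_adaptive_performance(post_returns, random_returns, tail=100):
    """Converged post-novelty performance, measured above a random agent."""
    return float(np.mean(post_returns[-tail:]) - np.mean(random_returns))


def one_shot_adaptive_performance(post_returns):
    """Return achieved after a single post-novelty episode of adaptation."""
    return float(post_returns[0])


def adaptive_efficiency(post_returns, threshold, window=100):
    """First post-novelty episode at which the moving average of returns
    reaches `threshold`; None if the agent never converges."""
    avg = np.convolve(post_returns, np.ones(window) / window, mode="valid")
    above = np.nonzero(avg >= threshold)[0]
    return int(above[0]) + window - 1 if above.size else None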
To demonstrate the way in which NOVGRID can be applied to the activity of sequential decision makers, we trained a reinforcement learning agent in the presence of novelty and measured its performance based on the metrics listed above. Specifically, we used the DoorKeyChange novelty in an environment with 2 keys and 1 door on a 6x6 grid. It is set up so that before novelty injection the door is opened with one key and after injection it is opened with the other. The agent was trained with proximal policy optimization (PPO) using a convolutional neural net feature extractor and two fully connected output networks, one to estimate the value function and one to serve as the policy.
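For concreteness, a training script in this style might look like the sketch below. This is our own hedged reconstruction, not the authors' code: it assumes stable-baselines3 and gym-minigrid are installed, reuses the hypothetical DoorKeyChange wrapper sketched earlier, and substitutes an MLP policy on flattened observations for simplicity (the paper uses a CNN feature extractor, which would need input sizes compatible with MiniGrid's small images).

import gym
import gym_minigrid  # importing registers the MiniGrid-* environments
from gym_minigrid.wrappers import FlatObsWrapper
from stable_baselines3 import PPO

# Wrap a standard MiniGrid task with the novelty wrapper so that the
# key/door mapping changes after roughly 2000 training episodes.
env = gym.make("MiniGrid-DoorKey-6x6-v0")
env = DoorKeyChange(env, novelty_episode=2000)  # hypothetical, sketched above
env = FlatObsWrapper(env)  # flatten observations for the MLP policy

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)  # ~500k pre- and 500k post-novelty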
The agent was allowed to train for 500k time steps over just shy of 2000 episodes, at which point novelty was injected. As we can see in Figure 3, there is a precipitous drop in performance demarcating the novelty injection at this point. The agent then trains for a further 500k time steps, yielding a more complete picture of the novelty adaptation of this agent. This baseline agent has no novelty adaptability other than to continue learning using the extrinsic rewards provided by the environment. As a result, it represents a lower bound against which to compare future novelty-adaptive agents. As illustrated in Figure 3, progress manifests itself as faster restoration of the asymptotic maximum.
The results shown in Figure 3 plot the progression of the agent learning from scratch in the original environment, experiencing the novelty, and then adapting to the novelty of the changed key. The random agent to which this agent is compared only ever receives zero reward for this environment and task. Examining the drop in performance where the novelty was injected at the 500k timestep (indicated by the vertical red dotted line), we can see the low resilience of the baseline agent, with a resilience value of only 0.0531. Not visible on this plot is the one-shot performance, or performance after one update over one full episode, which is actually reasonable at 0.22. This tells us that the baseline reinforcement learning agent, while not efficient at getting to optimal post-novelty performance, has some promise as a starting point. Looking at the yellow line we can see the adaptive efficiency, which only converges around 300k timesteps after the novelty injection, while the adaptive performance converges to a lower 0.8 reward. From this we can tell that the agent is not effective at adapting, which is expected as this baseline has no means of adaptation besides simply continuing to learn.

Figure 3: PPO baseline in NOVGRID using the DoorKeyChange novelty. The plot shows the progression of the agent learning, experiencing the novelty, and then adapting to the novelty. The blue line indicates the learning process before novelty. The novelty was injected at the 500k timestep, as indicated by the vertical red dotted line. The yellow line shows adaptation to the novelty, which only converges around 300k timesteps after the novelty injection. This is expected as this baseline has no means of adaptation besides simply continuing to learn.
Given that PPO is an algorithm that is near state of the art in many reinforcement learning tasks, this experiment serves as an important demonstration of how much more needs to be researched in novelty adaptation. Future solutions to novelty adaptation can seek better resilience, adaptive efficiency, asymptotic adaptive performance, or even one-shot adaptive performance, as there is much room for improvement in all of these areas.
Future Work
Novelty in sequential decision making is a rich space that promises to enhance the robustness of agents in virtual worlds in anticipation of operation in the real world. There are a number of ways that the ontology, and the way it manifests itself in NOVGRID, can be enhanced in future iterations as research in this space matures. When it comes to novelties, the two major axes of novelty unaddressed by this work are (a) the local or global application of novelties and (b) populations of agents, including the behaviors of external agents. When we differentiate local and global novelties, we mean that novelty can affect individual entities or instances as well as all entities or instances of a certain type or class; agents will react differently if a novelty changes the way all doors operate as opposed to the way one door operates. However, we have not yet factored that dimension into NOVGRID.
Observation of other agents performing the same or related tasks can have implications for novelty adaptation. When agents are acting in the presence of other agents, this has a powerful effect on the long-term performance of the agent as well as its learning ability. For example, if agents are competing for the same resources to reach the same goal, this affects the strategy agents will take to reach that goal, and agents can effectively use other agents as a source of exploration when external agents do something the agent originally thought was not possible. Indeed, this may be a significant way in which novelty-adaptive agents detect and adapt to shortcut novelties.
Another way that other agents factor into novelty is in adversarial settings where the novelty may be a change in the behavior or strategy of the adversary, up to and including adversaries becoming cooperative or vice versa.
Beyond an expanded novelty ontology, additional measurement and quantification of novelty is an important future direction for NOVGRID. Measuring the difference between pre- and post-novelty distributions is key among these measures, as it allows comparison between different novelties. There are many ways to quantify differences in distribution, common among them divergences such as the KL and Jensen-Shannon divergences, and metrics like the earth-mover's distance. Additionally, very recent work has examined using fixed agent baselines to characterize differences in distribution, as well as metrics of mutual information and edit distance. Integrating metrics like these would be an extremely valuable addition to NOVGRID, as it would enable researchers not only to compare novelties, but also to set expectations of novelty adaptation based on the distribution differences. Along this same thread, integrating metrics of novelty detection and characterization into NOVGRID may be of great interest to those who want to study these subproblems in the context of sequential decision making problems.
Figure 1: Illustrative example of NOVGRID. Initially the yellow key opens the door so the agent (red triangle) can get to the goal (green box). The agent learns a converged policy. At a certain time, the yellow key stops working and the blue key opens the door. The agent's performance drops off and recovers (bottom). The blue and red lines are notional learning curves for agents with and without novelty adaptation, respectively.
Acknowledgements

This research is sponsored in part by the Defense Advanced Research Projects Agency (DARPA), under contract number W911NF-20-2-0008. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the US Department of Defense or the United States Government. We gratefully acknowledge David Aha, Katarina Doctor, and members of the DARPA SAIL-ON Working Group on Metrics.
Alspector, J. 2021. Representation Edit Distance as a Measure of Novelty. arXiv preprint arXiv:2111.02770.
BLS, U. 2021. Employment and output by industry. URL https://www.bls.gov/emp/tables/industry-employment-and-output.htm.
Boult, T.; Grabowicz, P.; Prijatelj, D.; Stern, R.; Holder, L.; Alspector, J.; Jafarzadeh, M. M.; Ahmad, T.; Dhamija, A.; Li, C.; et al. 2021. Towards a Unifying Framework for Formal Theories of Novelty. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 15047-15052.
Chandola, V.; Banerjee, A.; and Kumar, V. 2007. Outlier detection: A survey. ACM Computing Surveys 14: 15.
Chen, Y.; Luo, H.; Ma, T.; and Zhang, C. 2021. Active Online Learning with Hidden Shifting Domains.
Chevalier-Boisvert, M.; Willems, L.; and Pal, S. 2018. Minimalistic Gridworld Environment for OpenAI Gym. https://github.com/maximecb/gym-minigrid.
Engel, Y.; Mannor, S.; and Meir, R. 2005. Reinforcement learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, 201-208.
Gamage, C.; Pinto, V.; Xue, C.; Stephenson, M.; Zhang, P.; and Renz, J. 2021. Novelty Generation Framework for AI Agents in Angry Birds Style Physics Games. In Conference on Games.
Goel, S.; Tatiya, G.; Scheutz, M.; and Sinapov, J. 2021. NovelGridworlds: A Benchmark Environment for Detecting and Adapting to Novelties in Open Worlds. In Adaptive and Learning Agents Workshop at AAMAS 2021.
Higgins, I.; Pal, A.; Rusu, A.; Matthey, L.; Burgess, C.; Pritzel, A.; Botvinick, M.; Blundell, C.; and Lerchner, A. 2017. DARLA: Improving zero-shot transfer in reinforcement learning. In International Conference on Machine Learning, 1480-1490. PMLR.
Kejriwal, M.; and Thomas, S. 2021. A multi-agent simulator for generating novelty in monopoly. Simulation Modelling Practice and Theory 102364.
Klenk, M.; Piotrowski, W.; Stern, R.; Mohan, S.; and de Kleer, J. 2020. Model-Based Novelty Adaptation for Open-World AI. In International Workshop on Principles of Diagnosis (DX).
Langley, P. 2020. Open-world learning for radically autonomous agents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 13539-13543.
Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks 113: 54-71.
Peng, X.; Balloch, J. C.; and Riedl, M. O. 2021. Detecting and Adapting to Novelty in Games. In AAAI Workshop on Reinforcement Learning in Games.
Pimentel, M. A.; Clifton, D. A.; Clifton, L.; and Tarassenko, L. 2014. A review of novelty detection. Signal Processing 99: 215-249.
Silver, D. L.; Yang, Q.; and Li, L. 2013. Lifelong machine learning systems: Beyond learning algorithms. In 2013 AAAI Spring Symposium Series.
Smith, J.; Balloch, J.; Hsu, Y.-C.; and Kira, Z. 2021. Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer. arXiv preprint arXiv:2101.09536.
Sun, S.; Shi, H.; and Wu, Y. 2015. A survey of multi-source domain adaptation. Information Fusion 24: 84-92.
Sutton, R. S.; and Barto, A. G. 2018. Reinforcement Learning: An Introduction. MIT Press.
Zhang, J.; Tai, L.; Yun, P.; Xiong, Y.; Liu, M.; Boedecker, J.; and Burgard, W. 2019. VR-Goggles for robots: Real-to-sim domain adaptation for visual control. IEEE Robotics and Automation Letters 4(2): 1148-1155.
Zhu, Z.; Lin, K.; and Zhou, J. 2021. Transfer Learning in Deep Reinforcement Learning: A Survey.
| [] |
[
"RESPONSIBLE-AI-BY-DESIGN: A PATTERN COLLECTION FOR DESIGNING RESPONSIBLE AI SYSTEMS",
"RESPONSIBLE-AI-BY-DESIGN: A PATTERN COLLECTION FOR DESIGNING RESPONSIBLE AI SYSTEMS"
] | [
"Qinghua Lu ",
"Liming Zhu ",
"Xiwei Xu ",
"Jon Whittle Data61 ",
"Csiro ",
"Australia "
] | [] | [] | Although AI has significant potential to transform society, there are serious concerns about its ability to behave and make decisions responsibly. Many ethical regulations, principles, and guidelines for responsible AI have been issued recently. However, these principles are high-level and difficult to put into practice. In the meantime much effort has been put into responsible AI from the algorithm perspective, but they are limited to a small subset of ethical principles amenable to mathematical analysis. Responsible AI issues go beyond data and algorithms and are often at the system-level crosscutting many system components and the entire software engineering lifecycle. Based on the result of a systematic literature review, this paper identifies one missing element as the system-level guidance -how to design the architecture of responsible AI systems. We present a summary of design patterns that can be embedded into the AI systems as product features to contribute to responsible-AI-by-design. | 10.1109/ms.2022.3233582 | [
"https://export.arxiv.org/pdf/2203.00905v2.pdf"
] | 247,218,356 | 2203.00905 | da40fe10253e732460baff189e7c8cdac9e6033d |
RESPONSIBLE-AI-BY-DESIGN: A PATTERN COLLECTION FOR DESIGNING RESPONSIBLE AI SYSTEMS
September 21, 2022
Qinghua Lu
Liming Zhu
Xiwei Xu
Jon Whittle Data61
Csiro
Australia
RESPONSIBLE-AI-BY-DESIGN: A PATTERN COLLECTION FOR DESIGNING RESPONSIBLE AI SYSTEMS
September 21, 2022
Keywords: Responsible AI, ethical AI, trustworthy AI, AI engineering, software architecture, MLOps, AIOps
Although AI has significant potential to transform society, there are serious concerns about its ability to behave and make decisions responsibly. Many ethical regulations, principles, and guidelines for responsible AI have been issued recently. However, these principles are high-level and difficult to put into practice. In the meantime, much effort has been put into responsible AI from the algorithm perspective, but such efforts are limited to a small subset of ethical principles amenable to mathematical analysis. Responsible AI issues go beyond data and algorithms and are often at the system level, crosscutting many system components and the entire software engineering lifecycle. Based on the results of a systematic literature review, this paper identifies one missing element: system-level guidance on how to design the architecture of responsible AI systems. We present a summary of design patterns that can be embedded into AI systems as product features to contribute to responsible-AI-by-design.
Introduction
Although AI has significant potential and capacity to stimulate economic growth and improve productivity across a growing range of domains, there are serious concerns about AI systems' ability to behave and make decisions in a responsible manner. According to Gartner's recent report, 21% of organizations have already deployed or plan to deploy responsible AI technologies within the next 12 months 1.
Many ethical principles and guidelines have recently been issued by governments, research institutions, and companies [1]. However, these principles are high-level and can hardly be used in practice by developers. Responsible AI research has been focusing on algorithmic solutions limited to a subset of issues such as fairness [2]. Ethical issues can enter at any point of the software engineering lifecycle and are often at the system level, crosscutting many components of AI systems. To try to fill the principle-algorithm gap, some development guidelines have started to appear. However, those efforts tend to be high-level development process checklists 2,3 and ad-hoc sets lacking state-related linkages for final products [3]. Therefore, in this paper, rather than staying at the ethical-principle level or AI-algorithm level, we take a pattern-oriented approach and focus on system-level design patterns to build responsible-AI-by-design into final AI products. The design patterns are collected based on the results of a systematic literature review (SLR) and can be embedded into the design of AI systems as product features to contribute to responsible-AI-by-design. We identify the lifecycle of a provisioned AI system in which the states or state transitions are associated with design patterns to show when the design patterns can take effect. The lifecycle along with the design pattern annotations provides a responsible-AI-focused view of system interactions and a guide to effective use of design patterns to implement responsible AI from a system perspective. To the best of our knowledge, this is the first study that provides concrete and actionable system-level design guidance for architects and developers to reference.
Methodology
To operationalize responsible AI, we performed an SLR to identify design patterns that architects and developers can use during the development process. Fig. 1 illustrates the methodology. The research question is: "What solutions for responsible AI can be identified?" The research question focuses on identifying reusable patterns for responsible AI. We used "AI", "Responsible", and "Solution" as the key terms and included synonyms and abbreviations as supplementary terms to increase the search results. The main data sources are ACM Digital Library, IEEE Xplore, Science Direct, Springer Link, and Google Scholar. The study only includes the papers that present concrete design or process solutions for responsible AI, and excludes the papers that only discuss high-level frameworks. The complete SLR protocol is available as online material 4 . We use the ethical principles listed in Harvard University's mapping study [4]: Privacy, Accountability (professional responsibility is merged into accountability due to the overlapping definitions), Safety & Security, Transparency & Explainability, Fairness and Non-discrimination, Human Control of Technology, Promotion of Human Values.
Lifecycle of a provisioned AI System
Fig. 2 illustrates the lifecycle of a provisioned AI system using a state diagram and highlights the patterns associated with relevant states or transitions, which show when the design patterns could take effect. We have limited the scope to the design patterns that can be embedded into AI systems and the provisioned supply chain tool-chain as final product features. The best practices of the development process, including some patterns related to offline model training, are out of the scope of this paper. Before an AI system is provisioned, the supply chain information can be accessed through a bill of materials. The users can be required to provide verifiable ethical credentials to show their capability to operate the systems, while the users can examine the system's verifiable ethical credentials for ethical compliance checking. Once the AI system starts serving, it is important to perform system-level simulation through an ethical digital twin. An ethical sandbox can be used to physically separate AI components from non-AI components. When an AI system is requested to execute a task, decision-making is often needed before executing the task. An AI component can be activated or deactivated through the AI mode switcher to automatically make the decision or involve human experts to review the suggestion. A multi-model decision maker can use different models to make a single decision and cross-check the results. Similarly, homogeneous redundancy can be applied to the system design to enable fault tolerance. Both the behaviors and decision-making outcomes of the AI system are monitored and validated through a continuous ethical validator. Incentives for ethical behaviors can be maintained by an incentive registry. If the system fails to meet the requirements (including ethical requirements) or a near-miss is detected, the system needs to be updated. A federated learner retrains the model locally at each client to protect data privacy. A co-versioning registry can be used to track the co-evolution of AI system components or assets. An ethical knowledge base can be built to make the ethical knowledge systematically accessible and usable when developing or updating the AI system. The AI system needs to be audited regularly or when major failures or near-misses occur. An ethical black box can be designed to record the critical data that can be kept as evidence. A global-view auditor can be built on top to provide global-view accountability when multiple systems are involved in an accident. The stakeholders can determine to abandon the AI system if it no longer fulfils the requirements. A minimal state-machine sketch of this lifecycle is given below.
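To make the lifecycle concrete, the following is a minimal sketch, not taken from the paper: it encodes the Fig. 2 states and pattern-annotated transitions as an explicit state machine. The state names follow the figure, while the `AISystemLifecycle` class, its API, and the exact transition table are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): the Fig. 2 lifecycle as an
# explicit state machine, with the design patterns attached to transitions so
# that pattern hooks can be activated when the corresponding transition fires.
from enum import Enum, auto

class State(Enum):
    PROVISIONED = auto()
    SERVING = auto()
    REQUESTED = auto()
    DECISION_MAKING = auto()
    TASK_EXECUTING = auto()
    TASK_COMPLETED = auto()
    TASK_FAILED = auto()
    NEAR_MISS_DETECTED = auto()
    UPDATING = auto()
    AUDITING = auto()
    ABANDONED = auto()

# A subset of the allowed transitions, annotated with the patterns that take
# effect on each (pattern names as in the catalogue; the table is illustrative).
TRANSITIONS = {
    (State.PROVISIONED, State.SERVING): ["bill of materials", "verifiable ethical credential", "ethical digital twin"],
    (State.SERVING, State.REQUESTED): ["ethical sandbox"],
    (State.REQUESTED, State.DECISION_MAKING): ["AI mode switcher", "multi-model decision-maker"],
    (State.DECISION_MAKING, State.TASK_EXECUTING): ["homogeneous redundancy", "incentive registry"],
    (State.TASK_EXECUTING, State.TASK_COMPLETED): ["continuous ethical validator"],
    (State.TASK_EXECUTING, State.TASK_FAILED): ["continuous ethical validator"],
    (State.TASK_FAILED, State.UPDATING): ["federated learner", "co-versioning registry", "ethical knowledge base"],
    (State.SERVING, State.AUDITING): ["ethical black box", "global-view auditor"],
    (State.SERVING, State.ABANDONED): [],
}

class AISystemLifecycle:
    def __init__(self):
        self.state = State.PROVISIONED

    def transition(self, target):
        """Move to `target` if allowed; return the patterns to activate."""
        patterns = TRANSITIONS.get((self.state, target))
        if patterns is None:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        return patterns

lifecycle = AISystemLifecycle()
print(lifecycle.transition(State.SERVING))  # patterns checked on provisioning
```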
Design Patterns
To operationalize responsible AI, Fig. 3 lists a collection of patterns for responsible-AI-by-design. The full version of design patterns is available online 5 .
• Bill of materials: AI product vendors often create AI systems by assembling commercial or open-source AI and/or non-AI components from third parties. The AI users often have ethical concerns about the procured AI systems/components. Before an AI system is provisioned, the supply chain information can be accessed through a bill of materials 6 , which keeps a formal machine-readable record of the supply chain details of the components used in building an AI system, such as component name, version, supplier, dependency relationship, author, and timestamp. The purpose of a bill of materials is to provide traceability and transparency into the components that make up AI systems so that ethical issues can be tracked and addressed. There are many tools to generate SBOMs for practitioners, such as Dependency-Track 7 . To ensure traceability and integrity, an immutable data infrastructure [5] is needed to store the data of the bill of materials. For example, the manufacturers of autonomous vehicles can maintain a material registry contract on blockchain to track their components' supply chain information, e.g., the version and supplier of a third-party AI-based navigation component. • Verifiable ethical credential: Verifiable ethical credentials are cryptographically verifiable data that can be used as strong proof of ethical compliance for AI systems, components, artifacts, and stakeholders (such as developers and users). Before using the provisioned AI systems, users verify the systems' ethical credentials to check if the systems are compliant with AI ethics principles or regulations [6]. On the other hand, the users are often required to provide ethical credentials to use and operate the AI systems. A publicly accessible data infrastructure needs to be built to support the generation and verification of ethical credentials 8 . For example, before driving a vehicle, the driver is requested to scan her/his ethical credential to show she/he has the capability to drive safely, while verifying the ethical credential of the vehicle's automated driving system shown on the center console. • Ethical digital twin: Before running the provisioned AI system in a production environment, it is critical to conduct system-level simulation through an ethical digital twin running on a simulation platform to monitor the behaviors of AI systems and predict potential ethical risks. An ethical digital twin can also be designed as a component at the operation infrastructure level to examine the AI system's runtime behaviors and decisions based on an abstract simulation model using real-world data. The risk assessment results can be used by the system or users to take further actions to mitigate the potential ethical risk. For example, the manufacturers of autonomous vehicles can use an ethical digital twin to explore the limits of autonomous vehicles based on collected run-time data, using tools such as NVIDIA DRIVE Sim 9 and rFpro 10 . • Ethical sandbox: It is risky to execute the whole system, including AI components and non-AI components, in the same environment. When an AI system is being served, an ethical sandbox can be used to physically separate AI components from non-AI components by running the AI component in a self-contained emulated execution environment [7], e.g., sandboxing an unverified visual perception component. The AI components placed in the ethical sandbox have no access to the rest of the AI system. All the hardware and software functionality of the AI component is duplicated in the ethical sandbox.
Thus, the AI component can run safely under supervision before being deployed at scale. For example, the Fastcase AI Sandbox 11 provides a secure AI execution platform for analysing data safely in a secure environment. A maximal tolerable probability should be set as an ethical margin for the sandbox against the ethical requirements. A watchdog can be added to restrict the execution time of the AI component to avoid potential ethical risk, e.g., only executing the visual perception component for 10 minutes on roads designed especially for autonomous vehicles. • AI mode switcher: When to activate AI is a major architectural design decision when designing a software system. When an AI system is making a decision, the AI mode switcher enables efficient invocation and dismissal mechanisms for activating or stopping the AI component when needed. A kill switch is a special type of invocation mechanism which immediately turns off the AI component and terminates its negative effects, e.g., switching off the autopilot functionality 12 and its internet connection. The AI component can make decisions automatically or provide suggestions to human experts in high-risk situations. The decisions can be approved or overridden by a human expert (e.g., skipping the path suggested by the navigation system). If the system state after acting on an AI decision is not as expected by human experts, a fallback can be triggered to reverse the system back to the previous state. A built-in guard ensures that the AI component is only being used under the predefined risk categories. • Multi-model decision-maker: The reliability of traditional software depends on the design of software components. One of the practices in the reliability community is redundancy, which can be applied to AI components. When decisions are being made by an AI system, a multi-model decision-maker can run different models to make a single decision [8], e.g., using different algorithms for visual perception (a minimal sketch is given after this pattern list). Reliability can be improved by using different models under different contexts (e.g., different user groups or regions). In addition, fault tolerance can be enabled by cross-checking the results given by multiple models (e.g., only accepting identical results from the deployed models). IBM Watson Natural Language Understanding makes predictions using an ensemble learning framework that includes multiple emotion detection models 13 . • Homogeneous redundancy: Ethical failures in AI systems can cause serious damage to humans or the environment. N-version programming is a design pattern for dealing with reliability issues of traditional software. This concept can be adapted and applied to AI system design. Homogeneous redundancy (e.g., two brake control components) can be applied to tolerate highly uncertain AI system components that can make unethical decisions, or adversary hardware components that produce malicious data or behave unethically [9]. When an AI system is executing a task, a cross-check can be performed on the outputs given by multiple redundant components of a single type.
8 https://securekey.com 9 https://developer.nvidia.com/drive/drive-sim 10 https://rfpro.com 11 https://www.fastcase.com/sandbox/ 12 https://www.tesla.com/autopilot 13 https://www.ibm.com/au-en/cloud/watson-natural-language-understanding
• Incentive registry: Incentives are effective in motivating AI systems to execute tasks in a responsible manner.
When executing a task, an incentive registry records the rewards that are given for the decisions and behaviors of AI systems [10], e.g., rewards for a recommended path without safety risk. There are different ways to enforce the incentive mechanism, e.g., designing the incentive mechanism on a blockchain-based data infrastructure that is publicly accessible, or using reinforcement learning. However, it is challenging to design such mechanisms in the responsible AI context, since it is difficult to measure the ethical impact of the decisions and behaviors of AI systems against some ethical principles (such as human values). Besides, consensus needs to be reached on the incentive mechanism by all the stakeholders. Additionally, in some cases, ethical principles conflict with each other, making the design of the incentive mechanism harder. FLoBC 14 is a tool that uses blockchain to incentivize training contributions for federated learning.
• Continuous ethical validator: AI systems often need to conduct continual learning when data drift or unethical behavior is detected in production. When an AI system executes tasks, a continuous ethical validator monitors and validates the outcomes of the AI system (e.g., the path suggested by the navigation system) against the ethical requirements. The outcomes of AI systems are the consequences of the decisions and behaviors of the systems, i.e., whether the AI system behaves ethically or provides the promised benefits in a given situation. The time and frequency of validation can be predefined within the continuous validator. Version-based feedback and rebuild alerts can be sent when the ethical requirements are met or breached. An incentive registry can be used to reward or punish the ethical/unethical behaviors or decisions of AI systems.
• Ethical knowledge base: AI systems involve broad ethical knowledge, including AI ethics principles, regulations, unethical use cases, etc. Unfortunately, such ethical knowledge is scattered across different documents (e.g., AI incident reports) and is usually implicit or even unknown to developers, who primarily focus on the technical aspects of AI systems and do not have an ethics background. This results in negligence or ad-hoc use of relevant ethical knowledge in AI system development. An ethical knowledge base is built upon a knowledge graph to make meaningful entities, concepts, and their rich semantic relationships explicit and traceable across heterogeneous documents, so that the ethical knowledge can be systematically accessed, analysed, and used when developing or updating AI systems [11]. For example, an ethical knowledge base can be used to support continuous ethical risk assessment. An ethical knowledge base can be built from the AI ethics principles, frameworks, and actual AI use cases discussed in the existing papers.
• Co-versioning registry: AI systems involve different levels of dependencies and need frequent evolution when data drift or unethical behavior occurs. Co-versioning of the components of AI systems or the AI assets generated in AI pipelines provides provenance guarantees across the entire lifecycle of AI systems. There are many version control tools for managing the co-versioning of data and models, such as DVC 15 . When updating an AI system, a co-versioning registry can track the co-evolution of components or AI assets. There are different levels of co-versioning: co-versioning of AI components and non-AI components, and co-versioning of the assets within the AI components (i.e., co-versioning of data, model, code, and configurations). A publicly accessible data infrastructure can be used to maintain the co-versioning registry to provide a trustworthy trace of dependencies. For example, a co-versioning registry contract can be built on blockchain to manage different versions of visual perception models and the corresponding training datasets.
• Federated learner: Despite the widely deployed mobile and IoT devices generating massive amounts of data, data hungriness is still a challenge given the increasing concern about data privacy. When learning or updating AI models, a federated learner preserves data privacy by performing the model training locally on the client devices and formulating a global model on a central server based on the local model updates 16 , e.g., training the visual perception model locally in each vehicle. Decentralized learning is an alternative to federated learning, which uses blockchain to remove the single point of failure and coordinate the learning process in a fully decentralized way. In the event of negative outcomes, the responsible humans can be traced and identified through an ethical black box for accountability.
• Ethical black box: The black box was introduced for aircraft several decades ago for recording critical flight data. The purpose of embedding an ethical black box in an AI system is to audit the system and investigate why and how it caused an accident or a near miss. The ethical black box continuously records sensor data, internal status data, decisions, behaviors (of both system and operator), and effects [12]. For example, an ethical black box could be built into an automated driving system to record the behaviors of the system and driver and their effects. Design decisions need to be made on what data should be recorded and where the data should be stored (e.g., using a blockchain-based immutable log or cloud-based data storage). • Global-view auditor: There can be more than one AI system involved in an ethical incident (e.g., multiple autonomous vehicles in a car accident). During auditing, it is often challenging to identify liability, as the data collected from each of the involved systems can conflict with each other. A global-view auditor can enable accountability by analysing the data discrepancies between the involved AI systems and identifying the liability for the ethical incident [13]. This pattern can also be applied to improve the reliability of an AI system by taking in data from other systems. For example, an autonomous vehicle can increase its visibility using the perception data collected from the other vehicles. All the historical data of AI systems can be recorded by an immutable log for third-party auditing.
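As a concrete illustration of the multi-model decision-maker and its cross-check (referenced in the pattern list above), the following is a minimal sketch; the quorum rule, the `multi_model_decide` function, and the stand-in models are illustrative assumptions, not an implementation from the surveyed literature.

```python
# Minimal sketch (illustrative assumptions throughout): a multi-model
# decision-maker that cross-checks redundant models and defers to a human
# expert when the models disagree beyond a quorum threshold.
from collections import Counter

def multi_model_decide(models, x, quorum=0.66):
    """`models`: list of callables returning a label for input `x`."""
    votes = Counter(m(x) for m in models)
    label, count = votes.most_common(1)[0]
    if count / len(models) >= quorum:
        return label, "auto"          # cross-check passed: decide automatically
    return label, "human_review"      # disagreement: escalate to a human expert

# Usage with three stand-in "models" (hypothetical placeholders):
models = [lambda x: x > 0, lambda x: x >= 0, lambda x: x > 1]
print(multi_model_decide(models, 0.5))  # -> (True, 'auto'): two of three agree
```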
Conclusion
To operationalize responsible AI, we take a pattern-oriented approach and collect a set of product design patterns that can be embedded into an AI system as product features to enable responsible-AI-by-design. The patterns are associated with the states or state transitions of a provisioned AI system, serving as effective guidance for architects and developers to design a responsible AI system. We are currently building up a responsible AI pattern catalogue that includes multi-level governance patterns, trustworthy process patterns (i.e., best practices and techniques), and responsible-AI-by-design product patterns.
Figure 1: Methodology.
Figure 3: Operationalized design patterns for responsible AI systems.
Figure 2: Lifecycle of a provisioned AI system. [State diagram: states Provisioned, Requested, Serving, Decision-making, Task-executing, Task-completed, Task-failed, Near-miss-detected, Updating, Auditing, Abandoned; the design patterns (bill of materials, verifiable ethical credential, ethical digital twin, ethical sandbox, AI mode switcher, multi-model decision maker, homogeneous redundancy, incentive registry, continuous ethical validator, federated learner, co-versioning registry, ethical knowledge base, ethical black box, global-view auditor) are attached as enablers to the corresponding states and transitions.]
4 https://drive.google.com/file/d/1Ty4Cpj_GzePzxwov5jGKJZS5AvKzAy3Q/view?usp=sharing
5 https://drive.google.com/file/d/1SBuqkdx91hzcxiGjtxxMyt_55JzlVBK6/view?usp=sharing
6 https://www.ntia.doc.gov/files/ntia/publications/sbom_minimum_elements_report.pdf
7 https://dependencytrack.org
14 https://github.com/Oschart/FLoBC
15 https://dvc.org/
16 https://leaf.cmu.edu
[1] A. Jobin, M. Ienca, and E. Vayena, "The global landscape of AI ethics guidelines," Nature Machine Intelligence, vol. 1, no. 9, pp. 389-399, 2019.
[2] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning," CSUR, vol. 54, no. 6, pp. 1-35, 2021.
[3] Q. Lu, L. Zhu, X. Xu, J. Whittle, D. Douglas, and C. Sanderson, "Software engineering for responsible AI: An empirical study and operationalised patterns," in ICSE-SEIP'22. IEEE, 2022, pp. 241-242.
[4] J. Fjeld et al., "Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI," Berkman Klein Center Research Publication, no. 2020-1, 2020.
[5] I. Barclay, A. Preece, I. Taylor, S. K. Radha, and J. Nabrzyski, "Providing assurance and scrutability on shared data and machine learning models with verifiable credentials," CCPE, p. e6997, 2022.
[6] W. Chu, "A decentralized approach towards responsible AI in social ecosystems," 2021.
[7] A. Lavaei, B. Zhong, M. Caccamo, and M. Zamani, "Towards trustworthy AI: safe-visor architecture for uncertified controllers in stochastic cyber-physical systems," in Proceedings of CAADCPS'21, 2021, pp. 7-8.
[8] NeuroAILab, "TFUtils multi-model training for TensorFlow," 2018. [Online]. Available: http://neuroailab.stanford.edu/tfutils/fundamentals/multimodel.html
[9] L. N. Tidjon and F. Khomh, "Threat assessment in machine learning based systems," arXiv preprint arXiv:2207.00091, 2022.
[10] J. Weng, J. Weng, J. Zhang, M. Li, Y. Zhang, and W. Luo, "DeepChain: Auditable and privacy-preserving deep learning with blockchain-based incentive," IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 5, pp. 2438-2455, 2021.
[11] I. Naja, M. Markovic, P. Edwards, and C. Cottrill, "A semantic framework to support AI system accountability and audit," in The Semantic Web, 2021, pp. 160-176.
[12] G. Falco and J. E. Siegel, "A distributed 'black box' audit trail design specification for connected and automated vehicle data and software assurance," arXiv preprint arXiv:2002.02780, 2020.
[13] B. S. Miguel, A. Naseer, and H. Inakoshi, "Putting accountability of AI systems into practice," in IJCAI'21, 2021, pp. 5276-5278.
| [
"https://github.com/Oschart/FLoBC"
] |
[
"XXX-X-XXXX-XXXX-X/XX/$XX.00 ©20XX IEEE ETO Meets Scheduling: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem",
"XXX-X-XXXX-XXXX-X/XX/$XX.00 ©20XX IEEE ETO Meets Scheduling: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem"
] | [
"Wendi Xu ",
"Xianpeng Wang [email protected] ",
"\nCollege of Information Science and Engineering\nNortheastern University\n\n",
"\nMinistry of Education\nKey Laboratory of Data Analytics and Optimization for Smart Industry\nChina\n"
] | [
"College of Information Science and Engineering\nNortheastern University\n",
"Ministry of Education\nKey Laboratory of Data Analytics and Optimization for Smart Industry\nChina"
] | [] | Evolutionary transfer optimization (ETO) serves as "a new frontier in evolutionary computation research", which avoids the zero reuse of experience and knowledge from solved problems that characterizes traditional evolutionary computation. In scheduling applications via ETO, a highly competitive "meeting" framework between the two could be constituted towards both intelligent scheduling and green scheduling, especially for carbon neutrality within the context of China. To the best of our knowledge, our study on scheduling here is the first work of ETO for complex optimization in which a multi-objective problem "meets" single-objective problems in the combinatorial case (not multitasking optimization). More specifically, key knowledge such as clustered positional building blocks can be learned and transferred for the permutation flow shop scheduling problem (PFSP). Empirical studies on well-studied benchmarks validate the relatively firm effectiveness and great potential of our proposed ETO-PFSP framework. | 10.1109/cac53003.2021.9727579 | [
"https://arxiv.org/pdf/2206.12902v1.pdf"
] | 247,461,252 | 2206.12902 | 341105561d52ffd63a3b07625f307b0eeff2aaf7 |
ETO Meets Scheduling: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem
Wendi Xu
Xianpeng Wang [email protected]
College of Information Science and Engineering
Northeastern University
Ministry of Education
Key Laboratory of Data Analytics and Optimization for Smart Industry
China
ETO Meets Scheduling: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem
evolutionary transfer optimization, green scheduling, transfer learning, data analytics, system optimization, carbon neutrality
Evolutionary transfer optimization (ETO) serves as "a new frontier in evolutionary computation research", which avoids the zero reuse of experience and knowledge from solved problems that characterizes traditional evolutionary computation. In scheduling applications via ETO, a highly competitive "meeting" framework between the two could be constituted towards both intelligent scheduling and green scheduling, especially for carbon neutrality within the context of China. To the best of our knowledge, our study on scheduling here is the first work of ETO for complex optimization in which a multi-objective problem "meets" single-objective problems in the combinatorial case (not multitasking optimization). More specifically, key knowledge such as clustered positional building blocks can be learned and transferred for the permutation flow shop scheduling problem (PFSP). Empirical studies on well-studied benchmarks validate the relatively firm effectiveness and great potential of our proposed ETO-PFSP framework.
INTRODUCTION
A. Evolutionary transfer optimization(ETO) meets scheduling
Relation is ubiquitous, whereas isolation is seldom; this is the philosophy behind evolutionary transfer optimization (ETO), "a paradigm that integrates EA solvers with knowledge learning and transfer across related domains to achieve better optimization efficiency and performance" [3]. ETO has emerged as a new frontier in evolutionary computation, with connections to transfer learning [5], deep learning [4] [15], and evolutionary algorithms.
In manufacturing and service industries, scheduling is a decision-making process that allocates resources to tasks in given periods to optimize one or more management objectives.
We hope to explore the relatively rarely studied, new solution framework of ETO meeting scheduling, especially in production management [2] and system engineering, towards intelligent scheduling [2] [16] and green scheduling [2].
B. ETO meets scheduling for complex optimization: from single-objective problems to multi-objective problem
According to the types of problems solved, ETO can be categorized as follows in existing studies [3]: (1) ETO for optimization in uncertain environments, (2) ETO for multitask optimization (MTO), (3) ETO for multi-/many-objective optimization, (4) ETO for machine learning applications, (5) ETO for complex optimization. When ETO meets scheduling, particularly for complex optimization, we can narrow the setting of the problem pair to a scenario from single-objective problems to a multi-objective problem [10].
Single-objective problems (SOPs) are simpler, related problems for transferring knowledge into the corresponding multi-objective problem (MOP), which greatly reduces the computational complexity of scheduling problems like the permutation flow shop scheduling problem (PFSP), which is often NP-hard.
C. Learning/Transferring specific key knowledge: building blocks between PFSPs
In this paper, we focus on genetic algorithm (GA) based memetic algorithms (MAs). The building block (BB) is the main object that is implicitly or explicitly manipulated for the success of GAs and MAs [7] [8]. The exploitation of BBs, especially tight ones, is considered the "holy grail of GA optimization" [8]. (From now on, "BB" and "block" are used interchangeably in this paper.)
The definition of BBs or genetic linkage is tightly intertwined with the crossover operator in [10], for both biological systems and GAs. The BBs for machine scheduling, especially PFSP, have been identified in [9] [13]. That is, engineering practice has proved the existence of blocks and the successful manipulation of blocks via non-learning tools [9]. However, the manipulation of blocks via learning tools [13] is relatively rare (not only in machine/shop scheduling problems, but also in other combinatorial problems like traveling salesman problems, vehicle routing problems, and so on). The key knowledge we recommend for ETO and learning is, accordingly, the BBs shared between PFSPs.
D. Related works and main contributions
Related works. The most related work to ours is [10]. To the best of their knowledge, theirs is the first work to enhance evolutionary multi-objective optimization via knowledge transferred from corresponding single-objective problems. Their problems are continuous, not combinatorial; for example, the traveling salesman problem (TSP), quadratic assignment problem (QAP), linear ordering problem (LOP), and job shop scheduling problem (JSP) are combinatorial. In our work, we follow roughly the same basic framework as [10], i.e., transferring experience by injecting external populations from the source tasks (all single-objective settings) into their corresponding multi-objective problem (the target task) every G generations, where G is the gap. Fundamental differences between the continuous and combinatorial cases prevent us from following their design of a mapping/connection between the populations of the source and target tasks directly via a denoising autoencoder: such a connection lacks physical meaning for the job permutation representation in PFSP. Instead, that connection is constructed in our work as follows. We add intra-task learning by clustering, which is not applied in their work, to choose potential elite solutions with good positional BBs, and then inject the external population (possibly including those elites) from the source task into the target one.
In the following, we present another four important related works. First, ETO for multitask optimization in the combinatorial case of TSP, QAP, LOP, and JSP: a multifactorial evolutionary algorithm has been designed to explore the potential power of evolutionary multitasking (EMT), which can serve as the engine to simultaneously optimize multiple permutation-based combinatorial optimization problems in supply chain networks [11]; to gain adaptive ability, a unified representation design and selection operator are applied. Second, ETO for multitask optimization in the combinatorial case of routing: in [12], a memetic computing paradigm is introduced, which can learn and evolve knowledge memes that traverse two different but related domains to enhance evolutionary search performance; for the combinatorial case, a realization is studied on two NP-hard routing domains, i.e., the capacitated vehicle routing problem and the capacitated arc routing problem. Third, machine learning based intelligent optimization (MA) for PFSP: in [13], a machine-learning based memetic algorithm, called ML-MOMA, is developed for PFSP in the multi-objective setting; one of its main features is the improvement of local search via machine learning. The algorithm utilizes the historical data produced during the optimization of PFSP: a clustering method is introduced to select better, representative solutions for refinement during the local search phase, so that duplicated searches on similar individual solutions are effectively avoided. Besides MA, another method in the family of intelligent optimization, simulated annealing (SA), can also solve PFSP, as follows. Lastly, residual learning based intelligent optimization (SA) for PFSP: to solve PFSP, [14] introduces an improved SA armed with residual learning. The neighborhood of the PFSP is defined and its key blocks are identified; residual learning, in a supervised learning style, is applied to extract and train the features of those blocks, and the trained/learned/fitted parameters are then stored in the SA for greater search efficiency.
The main contributions are as follows: 1. An ETO framework for scheduling applications is proposed; to the best of our knowledge, it may be the first attempt to extend ETO for complex optimization in scheduling (i.e., combinatorial) cases from single-objective to multi-objective (not MTO). 2. A step towards green scheduling and carbon neutrality via a devotion to avoiding scheduling operations from scratch. 3. We develop a simple yet effective algorithm, an MA for PFSP, as a powerful data analytics method for system optimization in smart industry [2].
II. PRELIMINARY: LOW DISCREPANCY, OPTIMIZATION OBJECTIVES, TRANSFERRED KNOWLEDGE
A. Low discrepancy: which problem pair
Firstly, the setting of which problem pair to use mainly concerns the similarity or discrepancy between the two problems in nature. The goal of low discrepancy thus motivates our setting here. The SOPs "naturally share great similarity with the given MOP, which thus could yield useful traits for enhancing the problem-solving of the MOP" [10].
B. Optimization objectives: makespan objective dominates other objectives
Makespan (Cmax), the minimization of the maximum completion time, is the core and most powerful optimization objective/driving force among the objectives in multi-objective and single-objective scheduling problems. It dominates total flow time (TFT), total tardiness, maximum lateness, and so on. In other words, Cmax should be put forward first; a better Cmax is often also correlated with better values of the other objectives.
C. Transferred knowledge: positional BBs dominate other BBs for makespan
For combinatorial problems, especially shop scheduling or PFSP, those blocks remain unclear, and some clues may help us figure them out. Positional, precedence, and adjacency information units are believed to exist in combinatorial problems. For TSP, adjacency structures dominate positional ones, while positional ones matter more than adjacency ones for the optimization of Cmax in PFSP [1]. Based on the observations above, we propose the assumption that there may exist three kinds of BBs: positional, precedence, and adjacency ones. In the later experiments, we focus on the first kind. Considering both II.B and the above, we can summarize that positional BBs help improve Cmax and thereby inherently improve the other objectives as well; the significance of positional BBs is therefore highlighted here.
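To make the notion of positional BBs concrete, the following is a minimal sketch under our own illustrative assumptions (it is one plausible mining procedure, not necessarily the exact one used in our experiments): positional building-block statistics are collected from elite permutations as a job-position frequency matrix, whose high-frequency entries indicate candidate positional BBs.

```python
# Minimal sketch (one plausible mining procedure, assumed for illustration):
# estimate positional building blocks by the frequency with which each job
# occupies each position among elite permutations.
import numpy as np

def positional_bb_frequency(elites):
    """`elites`: list of permutations, each a list of job ids 0..n-1."""
    n = len(elites[0])
    freq = np.zeros((n, n))                 # freq[job, position]
    for perm in elites:
        for pos, job in enumerate(perm):
            freq[job, pos] += 1.0
    return freq / len(elites)               # high entries = candidate BBs

elites = [[0, 1, 2, 3], [0, 1, 3, 2], [1, 0, 2, 3]]
print(positional_bb_frequency(elites))      # job 0 sits at position 0 in 2/3 elites
```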
III. TEST PROBLEM: PFSP
PFSP is formulated as follows: each job is to be processed sequentially on the machines, given the processing time of each operation. Each machine in PFSP can process at most one job at a time, and each job can be processed on at most one machine at a time. The sequence of jobs is the same on each machine. In this paper, we choose both Cmax and TFT as objectives.
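For concreteness, the following is a minimal sketch of the standard PFSP objective evaluation for a given permutation; the data layout `p[j][k]` (processing time of job j on machine k) and the toy data are assumptions for illustration.

```python
# Minimal sketch of standard PFSP objective evaluation for one permutation;
# the data layout p[j][k] = processing time of job j on machine k is assumed.
def evaluate_pfsp(perm, p):
    n_machines = len(p[0])
    completion = [0.0] * n_machines       # completion time of the last
                                          # scheduled job on each machine
    total_flow_time = 0.0
    for job in perm:
        for k in range(n_machines):
            # a job starts on machine k once the machine is free and the job
            # has finished on machine k-1
            prev = completion[k - 1] if k > 0 else 0.0
            completion[k] = max(completion[k], prev) + p[job][k]
        total_flow_time += completion[-1]  # flow time of this job
    cmax = completion[-1]                  # makespan
    return cmax, total_flow_time

perm = [2, 0, 1]
p = [[3, 2], [1, 4], [2, 2]]               # 3 jobs, 2 machines (toy data)
print(evaluate_pfsp(perm, p))              # -> (11.0, 22.0)
```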
IV. THE FRAMEWORK ACROSS TASKS: ETO_PFSP
A. Three frameworks: transfer learning, transfer optimization, and our ETO_PFSP
The classical transfer learning [8] framework (F1) includes two learning tasks [5] [6]. However, the framework (F2) of transfer optimization [3] (at least under the current definition) only emphasizes the transfer action between two evolutionary tasks; a learning component inside each evolutionary task is not required. Our framework ETO_PFSP (F3) sits between F1 and F2: it contains two evolutionary tasks, each of which applies intra-task learning components. In other words, F3 is of course an instance of F2 and, furthermore, shares richer features with F1 which are lacking in F2. We tend to believe that F2 is an evolving definition, with a lot of room for theoretical refinement and practical enrichment.
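The following is a skeleton sketch of the F3 loop under illustrative assumptions: the `Task` class, its operators (a simple swap mutation standing in for the full W-X-L toolkit), and the scalar objectives used for selection (NSGA-II would be used in the actual multi-objective target task) are all hypothetical placeholders.

```python
# Skeleton sketch of the F3 loop (all components are illustrative placeholders:
# a swap mutation stands in for the W-X-L toolkit, and a scalar objective
# stands in for NSGA-II selection in the multi-objective target task).
import random

class Task:
    """Hypothetical stand-in for one evolutionary task over job permutations."""
    def __init__(self, n_jobs, objective, pop_size=10):
        self.pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
        self.objective = objective
        self.pop_size = pop_size

    def evolve_one_generation(self, injected=()):
        self.pop.extend(list(injected))              # transfer by injection
        children = []
        for perm in self.pop:
            child = perm[:]                          # placeholder variation:
            i, j = random.sample(range(len(child)), 2)   # swap two positions
            child[i], child[j] = child[j], child[i]
            children.append(child)
        self.pop = sorted(self.pop + children, key=self.objective)[: self.pop_size]

    def select_elites(self, k=2):                    # intra-task learning would
        return sorted(self.pop, key=self.objective)[:k]  # pick these by clustering

def eto_loop(sources, target, n_gen=100, G=2):
    for gen in range(n_gen):
        for s in sources:
            s.evolve_one_generation()
        injected = []
        if gen % G == 0:                             # transfer every G generations
            for s in sources:
                injected.extend(s.select_elites())
        target.evolve_one_generation(injected=injected)
    return target.pop

# Toy usage: two single-objective sources feed one (scalarized) target task.
obj1 = lambda perm: perm.index(0)                    # toy stand-in objectives
obj2 = lambda perm: perm.index(1)
sources = [Task(6, obj1), Task(6, obj2)]
target = Task(6, lambda perm: obj1(perm) + obj2(perm))
print(eto_loop(sources, target)[:3])
```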
B. Eight tasks, two groups: group 1, t1_wc (t_wc 1.0, t_wc 1.1), t2_wc and t2e_wc; group 2, t1_nc (t_nc 1.0, t_nc 1.1), t2_nc and t2e_nc
Overview (in Figure 2). In F3, we set two task groups. Group 1 comprises 4 tasks, namely t1_wc, which includes two sub-tasks (t_wc 1.0, t_wc 1.1), t2_wc, and t2e_wc, where "wc" means "with clustering" and "e" means "external" transferring from t1_wc; they share the same toolkit of W-X-L (only the probabilities vary in X; more details are in V.A). All of the above also holds for group 2, consisting of t1_nc (t_nc 1.0, t_nc 1.1), t2_nc, and t2e_nc, except that no clustering (hence "nc") is used in W. W-X-L contains a special operator that chooses who become the parents (W), a special crossover (X), and a local search (L) operator. It is worth mentioning that the task family above shares the same random initial (I) population for fair comparison.
Phase of selection (S) differs. In S, we use NSGA-II, sorting methods by Cmax or TFT, and so on. Therefore, the many shared parts above, at both the problem and algorithm levels, are elaborately constituted as a harmonious test bed towards a well-defined F3.
Fig. 2. Overview. From parent P_i to offspring P_i, at generation i.
Details of W (in Figure 3). With C and S? in W, we choose who become the parents P_i. Density peak based clustering (DPBC) [6] is in C (in Figure 4). DPBC is based on the observation that cluster centers are characterized by a relatively higher density than the points in their neighborhoods and by a relatively long distance from points that have higher densities. We implement it via the Hamming distance, which is widely used in evolutionary computation. To our surprise, it also focuses on the mining of the positional building block: quite obviously and intuitively, the measures of Hamming distance (dissimilarity) and positional BB (similarity) work from opposite sides towards the same characterization.
Details of X and L (in Figure 5), especially X. More on X is in Figure 5. L is an ordinary insertion operator.
Details of SS in S (in Figure 6). t1_wc/nc and t2_wc/nc have no P_i^0; only t2e_wc/nc needs P_i^0 every G generations. t_wc/nc 1.0 and 1.1 construct the selection pressure via a single objective, and t2_wc/nc and t2e_wc/nc consider both objectives via NSGA-II.
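The following is a minimal sketch of DPBC's ρ-δ quantities computed over a population of permutations via the Hamming distance, following Rodriguez and Laio [6]; the toy population, the cutoff d_c, and the center-selection rule are illustrative assumptions.

```python
# Minimal sketch of DPBC over permutations via the Hamming distance, following
# Rodriguez and Laio [6]; population, cutoff d_c, and the decision rule for
# picking centers are illustrative assumptions.
import numpy as np

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dpbc_rho_delta(pop, d_c=3):
    n = len(pop)
    d = np.array([[hamming(pop[i], pop[j]) for j in range(n)] for i in range(n)])
    rho = (d < d_c).sum(axis=1) - 1     # local density (exclude the point itself)
    delta = np.zeros(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        # distance to the nearest point of higher density;
        # the global density peak gets the maximum distance instead
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return rho, delta                   # centers: large rho AND large delta

pop = [[0, 1, 2, 3], [0, 1, 3, 2], [3, 2, 1, 0], [3, 2, 0, 1], [1, 0, 2, 3]]
rho, delta = dpbc_rho_delta(pop)        # plot rho vs delta to pick cluster centers
```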
V. EXPERIMENTAL STUDIES AND COMPARISONS
A. Experimental setting
To test the validity of F3, we carry out simulations on instances from well-studied benchmarks, including tai01 (20x5), tai42 (50x10), and VFR100_20_1 (100x20), where, e.g., 20x5 means 20 jobs and 5 machines. In our simulation, F3 is coded in Python 3.7.0 (3.8.8 also works) and is executed on servers.
The following parameters are set: N is 100, and the number of generations is 100. In t1_wc/nc 1.0 and 1.1, [px1, px2/m] are [0.3, 0.7] and [0.1, 0.9], respectively. For t2_wc/nc and t2e_wc/nc, it is [0.2, 0.8]. For reference points, tai01 takes the range (2500, 1000) to normalize Cmax and (25000, 10000) to normalize TFT; tai42 uses (4200, 2500) and (120000, 80000); and instance 3 uses (10000, 5000) and (550000, 350000). The gap G is 2. The baseline size of P_i^2 is 50, modified by a factor K1. For P_i^1, the baseline size is 20+H, where 20 is also adapted by K2 and H may be 0, 1, 2, or 3, depending on the number of solutions at equal distance at the cutting distance.
Varying [K1, K2] over [1, 0.6], [0.6, 0.6], and [1, 1] for each instance, we obtain 9 cases (in Figure 7); each case calculates the Hamming distance from the 15th job to the last job (to test the distribution of positional BBs) and comprises 20 independent runs. In each run, we perform 8 tasks, namely the 2 task groups described in IV.B.
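Since the comparisons below are reported in terms of hypervolume over the normalized (Cmax, TFT) objectives, the following is a minimal sketch of a 2-D hypervolume computation; the normalization convention (objectives mapped into [0, 1] via the reference ranges above, reference point (1, 1), minimization) is our assumption for illustration.

```python
# Minimal sketch (illustrative): 2-D hypervolume of normalized (Cmax, TFT)
# points under minimization, with reference point (1, 1).
def hypervolume_2d(points, ref=(1.0, 1.0)):
    # keep the non-dominated front, sorted by the first objective
    pts = sorted(points)
    front, best2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best2:
            front.append((f1, f2))
            best2 = f2
    # sum slab areas between consecutive front points and the reference point
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(0.2, 0.8), (0.5, 0.4), (0.7, 0.3)]))  # -> 0.39
```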
B. Simulation results and comparisons
In each case/result, both t2_wc and t2e_wc run with clustering, and both t2_nc and t2e_nc run without clustering. For each case, we evolve overall 100 (generations per task, seen on the x axis) x 4 (tasks in a group) x 2 (wc/nc) x 20 (runs for averaging) = 16000 generations. Furthermore, the size of the solution space is factorial (e.g., for 20x5 there are 20! solutions), imposing a great computational challenge to obtain those 9 cases.
VI. DISCUSSION
A. Why t2_nc and t2e_nc are better than t2_wc and t2e_wc: the philosophy behind K1 and K2
Why are t2_nc and t2e_nc better than t2_wc and t2e_wc in terms of hypervolume? In the paper [10], the transferred task is likewise beaten by NSGA-II on the DTLZ1 test function. At times, there may exist some deceptive positional BBs. When worse solutions dominate in P_i^0, less confidence should be assigned, i.e., K1 should be decreased, and vice versa. The commonly used injection of exact solutions sounds simple yet effective, with inherent robustness; so controlling K2 seems valuable.
The mining and learning of positional BBs should be improved in the future. Because the building block is the "holy grail of GA optimization" [8], the inherent difficulty of mining and learning blocks lies on our future road for improvement.
B. Transferring between the four task-2 variants
It should be highlighted that the knowledge/experience transferring between two tasks in F3 includes both negative and positive knowledge, and both ordinary knowledge and the specific knowledge of positional BBs. Between t2_wc and t2e_wc, there is always obvious positive transferring in Figure 7, which tends to validate the effectiveness part of our ETO_PFSP (F3).
For t2_nc and t2e_nc, by contrast, in cases 1 to 7 nearly no transferring exists, that is, the early and later stages nearly overlap and only small negative transferring appears in the middle stages. For cases 8 and 9, slight negative transferring happens, which tends to validate the relative-ineffectiveness part of our F3 (many cases of negative or nearly no transferring also occur on the continuous benchmark functions in [10]), leaving room for potential improvement.
VII. CONCLUSION AND FUTURE WORK
Our framework ETO_PFSP works as a common framework at the macro level. Considering this common property, it is similar to a common crossover operator, but the two are obviously different, in that at least the former is at the macro level and the latter is at the micro level.
Transferring from one task to another does not necessarily lead to state-of-the-art (SOTA) performance on the latter task (as usually observed in the transfer learning community). Transfer optimization (TO) is new, and the shift towards understanding the relationship between TO and SOTA is not easy. A solver that is already strong enough and armed with proper TO (like our framework) will be more likely to achieve SOTA; a weak or mild optimization engine merely equipped with TO cannot guarantee SOTA. Again, we construct a framework, a new framework (for the first time, in the combinatorial case, within the single-objective-to-multi-objective setting, not MTO).
ETO_PFSP attempts to avoid scheduling production operations from scratch, not only contributing to "China's industrial upgrading and transformation" [2] [16], but also heading towards China's carbon neutrality pledge.
In the future, many directions are attractive and inspiring. First, an extensive study of [K1, K2] may contribute to the philosophy in VI.A and the philosophy of ETO in I.A. Then, extending to other problem pairs in scheduling is also quite anticipated, for example, from a PFSP to a JSP. At last, disentangling the kinds of different knowledge learned/transferred (maybe via fitness landscapes, local optima networks, ...) is also important, which is somewhat like the disentanglement of different features/representations [5] [15] in deep learning/transfer learning [5] towards the interpretability [17] [18] [19] [20] [21] of AI.
Fig. 1. An example of building blocks in PFSP [9].
Fig. 3. Details of W. Choose parents P_i. DPBC is in C.
Fig. 4. Details of C. A typical example of the ρ-δ (rho-delta) graph in our simulation, where ρ is the local density and δ is the minimum distance between a sample point and any other point with higher density.
Fig. 5. Details of X. M helps overcome premature convergence.
Fig. 6. Details of SS.
Fig. 7. Cases 1, 2, ..., 9 are shown above, respectively. Notes: (1) "stat" means statistics of the hypervolume. (2) Actually, t2_nc is the baseline NSGA-II, without both clustering and transferring.
ACKNOWLEDGMENT
The authors thank Teacher Xiangman Song (also in our group of DAO, NEU) for his useful support and kind help. Student Xu (one of the authors) also thanks PhD student Zuocheng Li (also in our group) for useful discussions on building block theory. At last, student Xu expresses his thanks to Prof. Yingping Chen (from National Yang Ming Chiao Tung University) for his insights on linkage learning genetic algorithms, which helped him understand building block theory better.
[1] C. Cotta and A. J. Fernández, "Memetic algorithms in planning, scheduling, and timetabling," Springer Berlin Heidelberg, 2007.
[2] L. Tang and Y. Meng, "Data analytics and optimization for smart industry," Frontiers of Engineering Management, 2021.
[3] K. C. Tan, L. Feng, and M. Jiang, "Evolutionary transfer optimization - a new frontier in evolutionary computation research," IEEE Computational Intelligence Magazine, 2021.
[4] W. Xu and M. Zhang, "Towards WARSHIP: combining brain-inspired computing of RSH for image super resolution," IEEE Cloud Computing and Intelligent Systems, 2018.
[5] Y. Bengio, "Deep learning of representations for unsupervised and transfer learning," Workshop on Unsupervised & Transfer Learning, 2011.
[6] A. Rodriguez and A. Laio, "Clustering by fast search and find of density peaks," Science, 2014.
[7] G. R. Harik, "Learning gene linkage to efficiently solve problems of bounded difficulty using genetic algorithms," PhD Thesis, The University of Michigan, 1997.
[8] Y. Chen, "Extending the scalability of linkage learning genetic algorithms," Studies in Fuzziness and Soft Computing, 2006.
[9] P. C. Chang, W. H. Huang, J. L. Wu, and T. C. E. Cheng, "A block mining and recombination enhanced genetic algorithm for the permutation flow-shop scheduling problem," International Journal of Production Economics, 2013.
[10] L. Huang, L. Feng, H. Wang, Y. Hou, K. Liu, and C. Chen, "A preliminary study of improving evolutionary multi-objective optimization via knowledge transfer from single-objective problems," IEEE International Conference on Systems, Man, and Cybernetics, 2020.
[11] Y. Yuan, Y. S. Ong, A. Gupta, P. S. Tan, and H. Xu, "Evolutionary multitasking in permutation-based combinatorial optimization problems: realization with TSP, QAP, LOP, and JSP," IEEE Region 10 Conference, 2016.
[12] L. Feng, Y. Ong, M. Lim, and I. W. Tsang, "Memetic search with interdomain learning: a realization between CVRP and CARP," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 644-658, Oct. 2015, doi: 10.1109/TEVC.2014.2362558.
[13] X. Wang and L. Tang, "A machine-learning based memetic algorithm for the multi-objective permutation flowshop scheduling problem," Computers & Operations Research, 2017.
[14] Y. Li, C. Wang, L. Gao, Y. Song, and X. Li, "An improved simulated annealing algorithm based on residual network for permutation flow shop scheduling," Complex & Intelligent Systems, 2020.
[15] W. Xu and M. Zhang, "Theory of generative deep learning: probe landscape of empirical error via norm based capacity control," IEEE Cloud Computing and Intelligent Systems, 2018.
[16] L. Wang, J. Wang, and C. Wu, "Advances in green shop scheduling," Control and Decision, 2018.
[17] N. Lei, Z. Luo, S. T. Yau, and D. X. Gu, "Geometric understanding of deep learning," arXiv:1805.10451, 2018.
[18] M. Bronstein, J. Bruna, T. Cohen, and P. Velikovi, "Geometric deep learning: grids, groups, graphs, geodesics, and gauges," 2021.
[19] Y. Zhu, T. Gao, L. Fan, S. Huang, M. Edmonds, H. Liu, F. Gao, C. Zhang, S. Qi, Y. Wu, J. Tenenbaum, and S. Zhu, "Dark, beyond deep: a paradigm shift to cognitive AI with humanlike common sense," arXiv:2004.09044, 2020.
[20] W. E, "A proposal on machine learning via dynamical systems," Communications in Mathematics and Statistics, 2017.
[21] D. Li, "Ten questions for the new generation of artificial intelligence," CAAI Transactions on Intelligent Systems, 2020.
| [] |
[
"SMOOTHNESS OF SUBRIEMANNIAN ISOMETRIES",
"SMOOTHNESS OF SUBRIEMANNIAN ISOMETRIES"
] | [
"Luca Capogna ",
"Enrico Le Donne "
] | [] | [] | We show that the group of isometries (i.e., distance-preserving homeomorphisms) of an equiregular subRiemannian manifold is a finite-dimensional Lie group of smooth transformations. The proof is based on a new PDE argument, in the spirit of harmonic coordinates, establishing that in an arbitrary subRiemannian manifold there exists an open dense subset where all isometries are smooth. | 10.1353/ajm.2016.0043 | [
"https://arxiv.org/pdf/1305.5286v2.pdf"
] | 53,979,503 | 1305.5286 | 8ef230223cc85457817ca577cb0aa086dd57b59a |
SMOOTHNESS OF SUBRIEMANNIAN ISOMETRIES
22 May 2013
Luca Capogna
Enrico Le Donne
SMOOTHNESS OF SUBRIEMANNIAN ISOMETRIES
22 May 2013
We show that the group of isometries (i.e., distance-preserving homeomorphisms) of an equiregular subRiemannian manifold is a finite-dimensional Lie group of smooth transformations. The proof is based on a new PDE argument, in the spirit of harmonic coordinates, establishing that in an arbitrary subRiemannian manifold there exists an open dense subset where all isometries are smooth.
Introduction
A classical result from 1939, due to Myers and Steenrod [MS39], states that any distance preserving homeomorphism (henceforth called isometry) between open subsets of smooth Riemannian manifolds is a smooth diffeomorphism. This regularity opens the way for the use of analytical tools in the study of metric properties of manifolds. Since 1939 several different proofs for this result have been proposed for Riemannian structures with varying degrees of smoothness. We refer the reader to the work of Palais [Pal57], Calabi and Hartman [CH70], and Taylor [Tay06]. The latter introduced a novel streamlined approach to the regularity issue, which (unlike many of the previous arguments) is not based on the study of the action of isometries on geodesics but rather relies on PDE. Namely, in [Tay06], Taylor notes that isometries preserve harmonic functions. Consequently, the composition of the isometry with harmonic coordinates must be harmonic and as smooth as the ambient geometry.
Since the distance preserving property transfers easily to Gromov-Hausdorff limits for sequences of pairs of Riemannian manifolds, in many applications it becomes relevant to study isometries between metric spaces obtained this way. In particular, in various settings including Mostow rigidity theory and the study of analysis and geometry on boundaries of strictly pseudo-convex domains in C n , the limit spaces fail to be Riemannian manifolds but have instead a natural subRiemannian structure. This gives rise to the problem of establishing regularity of isometries in this broader setting. Since subRiemannian distance functions are generally not smooth with respect to the underlying differential structure, this regularity problem is quite different from its Riemannian counterpart.
Throughout this paper we define a subRiemannian manifold as a triplet (M, ∆, g) where M is a connected, smooth manifold of dimension n ∈ N, ∆ denotes a subbundle of the tangent bundle T M that bracket generates T M, and g is a positive definite, smooth, bilinear form defined on ∆, see [Mon02]. Analogously to the Riemannian setting, one can endow (M, ∆, g) with a metric space structure by defining the Carnot-Caratheodory (CC) control distance: for any pair x, y ∈ M set d(x, y) = inf{δ > 0 : there exists a curve γ ∈ C^∞([0, 1]; M) with endpoints x, y such that γ̇ ∈ ∆(γ) and |γ̇|_g ≤ δ}.
Curves whose velocity vector lies in ∆ are called horizontal; their length is defined in an obvious way. A horizontal curve is a geodesic if it is locally distance minimizing. A geodesic is normal if it satisfies a subRiemannian analogue of the geodesic equation. One of the striking features of subRiemannian geometry is that not all geodesics are normal (see [Mon02] and references therein).
The first systematic study of distance preserving homeomorphisms 1 in the subRiemannian setting goes back to the groundbreaking papers by Strichartz [Str86, Str89]. Theorem 1.1 ([LDO12]). Let G 1 , G 2 be Carnot groups and for i = 1, 2 consider Ω i ⊂ G i open sets. If f : Ω 1 → Ω 2 is an isometry then G 1 is isomorphic to G 2 and f is the restriction to Ω 1 of the composition of a group translation with a Lie group automorphism.
Smoothness of isometries in this setting, where not all geodesics are normal, was settled for globally defined isometries by Kishimoto, using a theorem of Montgomery and Zippin [MZ74, page 208, Theorem 2], see also the detailed proof in [LDO12, Corollary 2.12], while in the local case (of mappings between open subsets of Carnot groups) it follows from the study of 1-quasiconformal mappings by Cowling and the first named author [CC06].
The main results of the present paper represent an extension of the work mentioned above to the general subRiemannian setting, without the extra assumptions on the presence of homogeneous structures or on the absence of abnormal geodesics.
Theorem 1.2. Isometries between subRiemannian manifolds are smooth in any open sets where the subRiemannian structure is equiregular. In particular, there exists an open dense subset in which every isometry is smooth.
The proof of Theorem 1.2 proceeds along two steps of independent interest: Theorem 1.3 and Theorem 1.4.
1 Henceforth we will refer to distance preserving homeomorphisms as isometries and will specify if they are Riemannian or subRiemannian only when it is not completely clear from the context. Theorem 1.3. Let F : M → N be an isometry between two subRiemannian manifolds. If there exist two C ∞ volume forms vol M and vol N such that F * vol M = vol N , then F is a C ∞ diffeomorphism.
A C ∞ volume form on an n-dimensional manifold is the measure associated to a C ∞ nonvanishing n-form on the manifold. When F : M → N is a continuous map, the measure F * vol M is defined by F * vol M (A) := vol M (F −1 (A)), for all measurable sets A ⊆ N .
The proof of Theorem 1.3 is similar in spirit to Taylor's approach to the regularity of Riemannian isometries via harmonic coordinates [Tay06]. However, it appears very difficult at present to establish existence of harmonic coordinates in the subRiemannian setting. To overcome this difficulty we observe that while isometries preserve harmonic functions (critical points of the Dirichlet energy (2.10)), they also preserve weak solutions of nonhomogeneous PDE in divergence form built by polarization from the Dirichlet energy, see Corollary 2.14. Using this observation we show that the pull-back of any system of coordinates in the target manifold satisfies a subelliptic linear PDE for which appropriate L p estimates hold. Regularity then follows through a bootstrap argument.
Next we turn to the hypothesis of Theorem 1.3. The existence of smooth volume forms that are preserved by (metric) isometries is far from trivial. The natural choice for an isometric invariant measure on a manifold is the top-dimensional Hausdorff measure H (or spherical Hausdorff measure S). In the Riemannian setting H and S coincide with the Riemannian volume measure and hence they readily provide smooth measures that are isometric invariant. A similar argument carries out to the Carnot group setting, where the Hausdorff measures are Haar measures and hence equal up to a constant to any leftinvariant Riemannian measure. However, for general equiregular subRiemannian manifolds (see Definition 2.2) the counterpart of this statement is false. In [Mit85], Mitchell shows that if Q denotes the local homogeneous dimension (see (2.3)), then the density of Lebesgue measure with respect to the Q-dimensional Hausdorff volume is bounded from above and below by positive constants. In their recent paper [ABB12], Agrachev, Barillari, and Boscain have studied the density function of any smooth volume form with respect to the spherical Hausdorff volume S in equiregular subRiemannian manifolds and have proved that it is always continuous. Moreover, if the topological dimension of M is less or equal to 4 this function is smooth. They also showed that for dimensions 5 and higher this is no longer true and exhibit examples where the density is C 3 but not C 5 .
In view of this obstacle one has to find a different approach. Namely, we consider a canonical smooth volume form that is defined only in terms of the subRiemannian structure, the so-called Popp measure (see [Mon02]), and show the following result. Theorem 1.4. Let F : M → N be an isometry between two equiregular subRiemannian manifolds. If vol M and vol N denote the Popp measures on M and N respectively, then F * vol M = vol N . For smooth isometries a proof of the above result can be found in [BR13]. The novelty in Theorem 1.4 consists in the fact that we are considering isometries with no smoothness assumptions.
The proof of Theorem 1.4 is based on the nilpotent tangent space approximation of the subRiemannian structure (see Section 2) and on the detailed analysis of the induced map among such tangent spaces. Ultimately, the argument rests on Theorem 1.1 and on Theorem 2.6, which is a representation formula for Hausdorff measures recently established in [ABB12] and [GJ13, Section 3.2, Remark 2].
Corollary 1.5. Every isometry F : M → N between equiregular subRiemannian manifolds is a C ∞ diffeomorphism.
Theorem 1.2 follows immediately from Corollary 1.5 once we notice that, if M is a sub-Riemannian manifold, then there exists an open dense subset U ⊂ M such that the sub-Riemannian structure restricted to any connected component of U is equiregular.
We conclude this introduction with some consequences of our regularity result, which in fact were our initial motivations. Note that one can always extend an equiregular subRiemannian structure to a Riemannian one, i.e., there exists a Riemannian tensor on T M that coincides with g when restricted to ∆. Riemannian extensions are also called Riemannian metrics that tame the subRiemannian metric, [Mon02, Definition 1.9.1]. Hence, suppose that F : M → N is an isometry between two equiregular subRiemannian manifolds. For any Riemannian extension g M , the push-forward g N := F * g M is well-defined because of Corollary 1.5 and tames the subRiemannian structure on N . By construction, the map
F : (M, g M ) → (N, g N ) is also a Riemannian isometry.
It is less clear that one can find a single Riemannian extension for which a subRiemannian self-isometry is also a Riemannian isometry. This is the content of our main application: a structure theorem on the group of subRiemannian self-isometries Isom(M ) and its correspondence to the group of Riemannian isometries associated to a Riemannian metric that tames the subRiemannian one.
Theorem 1.6. If M is an equiregular subRiemannian manifold, then (i) Isom(M ) admits a structure of finite-dimensional Lie group; (ii) for all compact subgroup K < Isom(M ), there exists a Riemannian extension g K on M such that K < Isom(M, g K ).
In particular, since the group of isometries that fixes a point p ∈ M, denoted by Isom_p(M), is compact (in view of Ascoli-Arzelà's theorem), we have the following

Corollary 1.7. If M is an equiregular subRiemannian manifold, then for all p ∈ M there exists a Riemannian extension ĝ on M such that Isom_p(M) < Isom(M, ĝ).

Acknowledgments. The authors would like to thank E. Breuillard, U. Boscain, G. Citti, P. Koskela, U. Hamenstädt and F. Jean for useful feedback and advice. Most of the work in this paper was developed while the authors were guests of the program Interactions between Analysis and Geometry at IPAM in the Spring 2013. The authors are very grateful to the program organizers M. Bonk, J. Garnett, U. Hamenstädt, P. Koskela and E. Saksman, as well as to IPAM for their support.
2. Basic definitions and preliminary results
Consider a subRiemannian manifold (M, ∆, g) and iteratively set ∆^1 := ∆ and ∆^{i+1} := ∆^i + [∆^i, ∆] for i ∈ N.
The bracket generating condition (also called Hörmander's finite rank hypothesis) is expressed by the existence of m ∈ N such that, for all p ∈ M, one has
\[
(2.1) \qquad \Delta^m_p = T_pM.
\]

Definition 2.2. A subRiemannian manifold (M, ∆, g) is equiregular if, for all i ∈ N, the dimension of ∆^i_p is constant in p ∈ M.
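To see how equiregularity can fail, here is a standard illustrative example, added for convenience (it appears, e.g., in [Mon02]):

```latex
% Martinet distribution on R^3: \Delta = span\{X_1, X_2\} with
%   X_1 = \partial_x, \qquad X_2 = \partial_y + \tfrac{x^2}{2}\,\partial_z .
% Then [X_1, X_2] = x\,\partial_z, so
\[
\dim \Delta^2_p =
\begin{cases}
3, & x \neq 0,\\
2, & x = 0,
\end{cases}
\]
% and the structure is bracket generating (with m = 3 on \{x = 0\},
% since [X_1,[X_1,X_2]] = \partial_z), but it is not equiregular.
```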
In this case, the homogeneous dimension is
\[
(2.3) \qquad Q := \sum_{i=0}^{m-1} (i+1)\,\bigl[\dim(\Delta^{i+1}_p) - \dim(\Delta^i_p)\bigr], \qquad \Delta^0_p := \{0\}.
\]
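As a concrete check of (2.3), the following computation for the first Heisenberg group, the simplest equiregular example, is a standard illustration added here for the reader's convenience:

```latex
% First Heisenberg group H^1: n = 3, m = 2, \dim\Delta^1_p = 2, \dim\Delta^2_p = 3 at every p.
\[
Q \;=\; 1\cdot\bigl(\dim\Delta^1_p - 0\bigr) + 2\cdot\bigl(\dim\Delta^2_p - \dim\Delta^1_p\bigr)
  \;=\; 1\cdot 2 + 2\cdot 1 \;=\; 4 \;>\; 3 \;=\; \dim_{\mathrm{top}} H^1 .
\]
```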
Example 2.4. Carnot groups are important examples of equiregular subRiemannian manifolds. Classical references are [Fol75, RS76, FS82]. These are simply connected analytic Lie groups G whose Lie algebra g has a stratification g = V_1 ⊕ … ⊕ V_m with [V_1, V_j] = V_{1+j} and [V_j, V_m] = 0 for all j = 1, …, m. The corresponding subRiemannian structures are given by setting ∆ to be the left-invariant subbundle associated to V_1 and by any choice of left-invariant metric g on ∆.

2.1. Tangent cones and measures. The tangent cone of the metric space (M, d) at a point p ∈ M is the Gromov-Hausdorff limit N_p(M) := lim_{t→0} (M, d/t, p). In view of Mitchell's work [Mit85] the metric space N_p(M) is described by the nilpotent approximation associated to the spaces ∆^i_p. In particular, N_p(M) is a Carnot group. For the definition of the Popp measure on a subRiemannian manifold (M, ∆, g) we refer the reader to [Mon02, Section 10.6]. Here we only recall that if U ⊂ M is an open set upon which the subRiemannian structure is equiregular, then the Popp measure is a smooth volume form on U. Following [ABB12], we recall that, given any C^∞ volume form vol_M on M, e.g., the Popp measure, we have a C^∞ volume form induced by vol_M on the nilpotent approximation N_p(M), which we denote by N_p(vol_M). Moreover, if vol_M is the Popp measure on M and vol_{N_p(M)} is the Popp measure on N_p(M), then
\[
(2.5) \qquad N_p(\mathrm{vol}_M) = \mathrm{vol}_{N_p(M)}.
\]

Consider the metric space (M, d) where (M, ∆, g) is an equiregular subRiemannian manifold and d is the corresponding control metric. As a consequence of the Chow-Rashevsky Theorem such a distance is always finite and induces on M the original topology. As a result of Mitchell [Mit85], the Hausdorff dimension of (M, d) coincides with (2.3). We shall use in a very crucial way the following representation formula for smooth volume measures. Here we denote by B_{N_p(M)}(e, 1) the unit ball in the metric space N_p(M) with center the identity element.

Theorem 2.6 ([ABB12, pages 358-359], [GJ13, Section 3.2]). Denote by Q the Hausdorff dimension of an equiregular subRiemannian manifold M and by S^Q_M the spherical Hausdorff measure on M. Any C^∞ volume form is related to S^Q_M by
\[
(2.7) \qquad d\,\mathrm{vol}_M = 2^{-Q}\, N_p(\mathrm{vol}_M)\bigl(B_{N_p(M)}(e, 1)\bigr)\, dS^Q_M.
\]
Remark 2.8. When read in charts, smooth volume forms are absolutely continuous with respect to the Lebesgue measure. The volume density is smooth and bounded away from zero and infinity on compact sets. Namely, for any smooth volume form vol, one has d vol = ηdL n , with η a smooth, strictly positive density function. Here and hereafter, by L n we denote the Lebesgue measure. In particular, there is no ambiguity to say that some property holds almost everywhere without specifying with respect to which of the two possible measures: a volume form vol or Lebesgue measure.
2.2. Dirichlet energy. Let X = {X_1, …, X_r} be an orthonormal frame of ∆ in a subRiemannian manifold (M, ∆, g), and vol a smooth volume measure on M. Denote by Lip(M) the space of all Lipschitz functions with respect to the subRiemannian distance and recall the definition of the horizontal gradient
\[
(2.9) \qquad \nabla_H u := (X_1 u)X_1 + \cdots + (X_r u)X_r,
\]
defined a.e., as an L^∞_loc(M) function, for u ∈ Lip(M) (see [GN98, Theorem 1.3]). Given a compact set K ⊆ M, we consider the (horizontal Dirichlet) energy (on K and with respect to vol)
\[
(2.10) \qquad E(u) := E^{\mathrm{vol}}_{H,K}(u) := \int_K \|\nabla_H u\|^2 \, d\,\mathrm{vol},
\]
and the associated quadratic form defined, for all u, v ∈ Lip(M), as
\[
(2.11) \qquad E(u, v) := E^{\mathrm{vol}}_{H,K}(u, v) := \int_K \langle \nabla_H u, \nabla_H v\rangle \, d\,\mathrm{vol}.
\]
Note that 2E(u, v) = E(u + v) − E(u) − E(v).
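For the reader's convenience, here is the one-line verification of the polarization identity just quoted (elementary, and not part of the original text):

```latex
\[
E(u+v) - E(u) - E(v)
= \int_K \bigl(\|\nabla_H u\|^2 + 2\langle \nabla_H u, \nabla_H v\rangle + \|\nabla_H v\|^2\bigr)\, d\,\mathrm{vol}
 \;-\; E(u) \;-\; E(v)
= 2E(u, v).
\]
```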
The definitions above extend immediately to the larger class W^{1,2}_{H,loc}, as defined in the next section. Regarding the next result and an introduction to the notion of (minimal) upper gradients, we refer the reader to Hajlasz and Koskela's monograph.

Lemma 2.12 ([HK00, page 51 and Section 11.2]). If u ∈ Lip(M), then ‖∇_H u‖ coincides almost everywhere with the minimal upper gradient of u.

Proposition 2.13. Let F : M → N be an isometry between two subRiemannian manifolds. If vol_M and vol_N are two C^∞ volume forms such that F_* vol_M = vol_N then, for all compact sets K ⊆ M,
(i) E^{vol_M}_{H,K}(u ∘ F) = E^{vol_N}_{H,F(K)}(u), for all u ∈ Lip(N);
(ii) E^{vol_M}_{H,K}(u ∘ F, v ∘ F) = E^{vol_N}_{H,F(K)}(u, v), for all u, v ∈ Lip(N).
Proof. Since F is distance and volume preserving, it is an isomorphism between the metric measure spaces (M, d_M, vol_M) and (N, d_N, vol_N). In particular, since minimal upper gradients are a purely metric measure notion, the map F induces a one-to-one correspondence between minimal upper gradients of the two spaces. Hence, by Lemma 2.12, the function ‖∇_H u‖ ∘ F, being the pull-back to M of the minimal upper gradient of u in (N, d_N, vol_N), coincides a.e. with ‖∇_H(u ∘ F)‖, i.e., with the minimal upper gradient of u ∘ F in (M, d_M, vol_M). Consequently,
\[
\int_K \|\nabla_H(u \circ F)\|^2 \, d\,\mathrm{vol}_M
= \int_K \bigl(\|\nabla_H u\|^2 \circ F\bigr)\, d\,\mathrm{vol}_M
= \int_{F(K)} \|\nabla_H u\|^2 \, d\,\mathrm{vol}_N .
\]
Since E(·) determines E(·, ·), the proof is concluded.

We denote by Lip_loc(M) the functions on M that are Lipschitz on bounded sets, and by Lip_c(M) the Lipschitz functions on M that have compact support.
Corollary 2.14. If u ∈ Lip_loc(N) and g ∈ L^2(N) solve
\[
\int_N \langle \nabla_H u, \nabla_H v\rangle \, d\,\mathrm{vol}_N = \int_N g\, v \, d\,\mathrm{vol}_N, \qquad \forall v \in \mathrm{Lip}_c(N),
\]
then the functions ũ := u ∘ F and g̃ := g ∘ F solve
\[
\int_M \langle \nabla_H \tilde u, \nabla_H v\rangle \, d\,\mathrm{vol}_M = \int_M \tilde g\, v \, d\,\mathrm{vol}_M, \qquad \forall v \in \mathrm{Lip}_c(M).
\]
Proof. Note that, since F preserves compact sets, it gives a one-to-one correspondence v ↦ v ∘ F between Lip_c(N) and Lip_c(M). Hence, any map in Lip_c(M) is of the form v ∘ F. Let F(K) be a compact set containing the support of v. Then
\[
\int_M \langle \nabla_H \tilde u, \nabla_H (v \circ F)\rangle \, d\,\mathrm{vol}_M
= E^{\mathrm{vol}_M}_{H,K}(u \circ F, v \circ F)
= E^{\mathrm{vol}_N}_{H,F(K)}(u, v)
= \int_N \langle \nabla_H u, \nabla_H v\rangle\, d\,\mathrm{vol}_N
= \int_N g\, v\, d\,\mathrm{vol}_N
= \int_M (g \circ F)(v \circ F)\, d\,\mathrm{vol}_M .
\]
2.3. L^p estimates for subLaplacians. The regularity of isometries in Theorem 1.3 is based on a bootstrap argument, which eventually rests on certain L^p regularity estimates established by Rothschild and Stein (see [RS76, Theorem 18]). In this section we recall these estimates in Theorem 2.19, along with the basic definitions of horizontal Sobolev spaces.
Definition 2.15. Let X = {X 1 , ..., X r } be a system of smooth vector fields in R n satisfying Hörmander's finite rank hypothesis (2.1) and denote by L n the Lebesgue measure. For k ∈ N and for any multi-index I = (i 1 , ..., i k ) ∈ {1, ..., r} k we define |I| = k and X I u = X i 1 ...X i k u. For p ∈ [1, ∞) we define the horizontal Sobolev space W k,p H (R n , L n ) associated to X to be the space of all u ∈ L p (R n , L n ) whose distributional derivatives X I u are also in L p (R n , L n ) for all multi-indexes |I| ≤ k. This space can also be defined as the closure of the space of C ∞ c (R n ) functions with compact support with respect to the norm
\[
(2.16) \qquad \|u\|^p_{W^{k,p}_H} := \|u\|^p_{L^p(\mathbb{R}^n, L^n)} + \int_{\mathbb{R}^n} \Bigl[\, \sum_{|I|=1}^{k} (X_I u)^2 \Bigr]^{p/2} dL^n,
\]
see [GN98], [FSSC96] and references therein. A function u ∈ L p (R n , L n ) is in the local Sobolev space W k,p H,loc (R n , L n ) if, for any φ ∈ C ∞ c (R n ), one has uφ ∈ W k,p H (R n , L n ).
The definition above extends verbatim to a subRiemannian manifold (M, ∆, g) endowed with a smooth volume measure vol M . The W k,p H Sobolev norm in this setting is defined through a frame X = {X 1 , ..., X r } of ∆ by
\[
(2.17) \qquad \|u\|^p_{W^{k,p}_H} := \|u\|^p_{L^p(M, \mathrm{vol}_M)} + \int_M \Bigl[\, \sum_{|I|=1}^{k} (X_I u)^2 \Bigr]^{p/2} d\,\mathrm{vol}_M .
\]
Remark 2.18. Although the Sobolev norm (2.17) depends on the specific frame and smooth volume form chosen, the class W k,p H,loc (M, vol) does not. This is easily seen noticing that different choices of frames and smooth volume forms give rise to equivalent norms (on compact sets). In particular, one can check whether a function is in W k,p H,loc through a smooth coordinate chart and using the Lebesgue measure, as in (2.16).
Theorem 2.19 ([RS76, Theorem 18]). Let X 0 , X 1 , ..., X r be a system of smooth vector fields in R n satisfying Hörmander's finite rank hypothesis (2.1). Let
\[
(2.20) \qquad L := \sum_{i=1}^{r} X_i^2 + X_0
\]
and consider a distributional solution to the equation
Lu = f in R n . For every k ∈ N ∪ {0} and 1 < p < ∞, if f ∈ W k,p H (R n , L n ) then u ∈ W k+2,p H,loc (R n , L n ).
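For orientation, a standard example of an operator covered by Theorem 2.19, added here as an illustration, is the Kohn sublaplacian on the first Heisenberg group:

```latex
% Heisenberg vector fields on R^3 with coordinates (x, y, t):
%   X_1 = \partial_x - \tfrac{y}{2}\,\partial_t, \qquad X_2 = \partial_y + \tfrac{x}{2}\,\partial_t,
% which satisfy [X_1, X_2] = \partial_t, so Hörmander's condition (2.1) holds with m = 2.
% The corresponding operator (2.20) with X_0 = 0 is
\[
L \;=\; X_1^2 + X_2^2
  \;=\; \Bigl(\partial_x - \tfrac{y}{2}\,\partial_t\Bigr)^{2} + \Bigl(\partial_y + \tfrac{x}{2}\,\partial_t\Bigr)^{2}.
\]
```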
Remark 2.21. If u ∈ Lip_loc(R^n) then u ∈ W^{1,p}_{H,loc}(R^n, L^n) for all p ∈ [1, ∞). In fact, by virtue of [GN98, Theorem 1.5] one has that the distributional derivatives X_i u are in L^∞_loc(R^n).

A standard argument, involving coordinate charts and cut-off functions, leads to a local version of the Rothschild-Stein estimates in any subRiemannian manifold.

Corollary 2.22. Let M be a subRiemannian manifold equipped with a C^∞ volume form vol_M and let k ∈ N and 1 ≤ p < ∞. If u ∈ W^{k,p}_{H,loc}(M, vol_M) ∩ Lip_loc(M) is a solution of
\[
(2.23) \qquad \int_M \langle \nabla_H u, \nabla_H v\rangle\, d\,\mathrm{vol}_M = \int_M g\, v\, d\,\mathrm{vol}_M, \qquad \forall v \in \mathrm{Lip}_c(M),
\]
for some g ∈ W^{k,p}_{H,loc}(M, vol_M), then u ∈ W^{k+1,p}_{H,loc}(M, vol_M).

Smoothness of Sobolev functions is guaranteed by a Morrey-Campanato type embedding in an appropriate Hölder class.

Definition 2.24. Let X = {X_1, …, X_r} be a system of smooth vector fields in R^n satisfying Hörmander's finite rank hypothesis (2.1), inducing a control distance d. For α ∈ (0, 1) define the (Folland-Stein) Hölder class
\[
C^{\alpha}_{loc}(\mathbb{R}^n) := \bigl\{\, u : \mathbb{R}^n \to \mathbb{R} \;\big|\; \forall \text{ compact } K, \ \exists C > 0 : \ \forall x, y \in K, \ |u(x) - u(y)| \le C\, d^{\alpha}(x, y) \,\bigr\}.
\]
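As a point of orientation (an aside added here, using the standard ball-box estimates): the control distance can be much larger than the Euclidean one along non-horizontal directions, so this Hölder class is weaker than its Euclidean analogue there. On the first Heisenberg group, for example:

```latex
% Along the center of the first Heisenberg group the control distance satisfies
%   d\bigl((x,y,t),\,(x,y,t+s)\bigr) \asymp |s|^{1/2},
% so membership u \in C^\alpha_{loc} with respect to d only imposes, on compact sets,
\[
|u(x,y,t+s) - u(x,y,t)| \;\le\; C\,|s|^{\alpha/2},
\]
% i.e., Euclidean Hölder continuity of order \alpha/2 in the t-variable.
```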
For any system of smooth vector fields satisfying Hörmander's finite rank hypothesis and for any bounded open set Ω ⊂ R^n, one can find constants Q = Q(Ω, X) > 0, the so-called local homogeneous dimension relative to X and Ω, C = C(Ω, X) > 0 and R = R(Ω, X) > 0, such that, for all metric balls centered in Ω with radius less than R, one has a lower bound on the Lebesgue volume of these balls in terms of a power of their radius, i.e., L^n(B) ≥ C diam(B)^Q. The quantity Q arises out of the celebrated doubling property established by Nagel, Stein and Wainger in [NSW85] and plays an important role in the theory of Sobolev spaces in the subRiemannian setting. In case X is an equiregular distribution then Q does not depend on Ω and coincides with the homogeneous dimension defined earlier. By virtue of Jerison's Poincaré inequality [Jer86] one has that, if u ∈ W^{1,p}_{H,loc}(R^n) and p > Q > 1, then, for any d-metric ball B ⊂ R^n, one has
\[
(2.25) \qquad \frac{1}{L^n(B)} \int_B |u - u_B| \, dL^n \;\le\; C\, \mathrm{diam}(B)^{\frac{p-Q}{p}},
\]
where u_B denotes the average of u over B and C > 0 is a constant depending only on p, Q, X, and the L^p(B) norm of ∇_H u. Then a simple argument, see for instance [Lu98] and [Cap99, Theorem 6.9], starting from (2.25) leads to the following Morrey-Campanato type embedding.
Proposition 2.26. Let X = {X 1 , ..., X r } be a system of smooth vector fields in R n satisfying Hörmander's finite rank hypothesis (2.1).
For every bounded open set Ω ⊂ R^n there exists a positive number Q depending only on n, X and Ω such that, if p > Q, then
\[
W^{1,p}_{H,loc}(\Omega, L^n) \subset C^{\alpha}_{loc}(\Omega) \quad \text{with } \alpha = (p-Q)/p.
\]

3.1. Proof of Theorem 1.4. We begin by establishing the result in the special case of Carnot groups. Assume that M, N are two Carnot groups and let G : M → N be an isometry fixing the identity. In view of Theorem 1.1 the map G is a group isomorphism. Since on Carnot groups the Popp measures are left-invariant, G_*(vol_M) is a Haar measure, hence it is a constant multiple of vol_N. Since G is an isometry, the constant is one.

Let us go back to the general case of equiregular subRiemannian manifolds. Recall the definition of the nilpotent approximations N_p(M) from Section 2.1. We observe that, for every p ∈ M, since (M, p) and (N, F(p)) are isometric as pointed metric spaces, then, by uniqueness of Gromov-Hausdorff limits, there exists a (not necessarily unique) isometry G_p : N_p(M) → N_{F(p)}(N), fixing the identity. In particular, the unit balls of the tangents are the same, i.e.,
\[
(3.1) \qquad G_p\bigl(B_{N_p(M)}(e, 1)\bigr) = B_{N_{F(p)}(N)}(e, 1).
\]
In view of the argument at the beginning of the proof, we have that the Popp measures are the same, i.e.,
\[
(3.2) \qquad (G_p)_*\bigl(\mathrm{vol}_{N_p(M)}\bigr) = \mathrm{vol}_{N_{F(p)}(N)} .
\]
In addition, the spherical Hausdorff measures are preserved by isometries. Namely, we have
\[
(3.3) \qquad F_*\, S^Q_M = S^Q_N .
\]
Therefore, for all measurable sets A ⊆ N , we have
\begin{align*}
F_*\mathrm{vol}_M(A) &= \mathrm{vol}_M(F^{-1}(A)) \\
&= \frac{1}{2^Q} \int_{F^{-1}(A)} N_p(\mathrm{vol}_M)\bigl(B_{N_p(M)}(e,1)\bigr)\, dS^Q_M(p) && \text{(from (2.7))}\\
&= \frac{1}{2^Q} \int_{F^{-1}(A)} N_p(\mathrm{vol}_M)\bigl(G_p^{-1}(B_{N_{F(p)}(N)}(e,1))\bigr)\, dS^Q_M(p) && \text{(from (3.1))}\\
&= \frac{1}{2^Q} \int_{F^{-1}(A)} (G_p)_*\, N_p(\mathrm{vol}_M)\bigl(B_{N_{F(p)}(N)}(e,1)\bigr)\, dS^Q_M(p) \\
&= \frac{1}{2^Q} \int_{F^{-1}(A)} (G_p)_*\, \mathrm{vol}_{N_p(M)}\bigl(B_{N_{F(p)}(N)}(e,1)\bigr)\, dS^Q_M(p) && \text{(from (2.5))}\\
&= \frac{1}{2^Q} \int_{F^{-1}(A)} \mathrm{vol}_{N_{F(p)}(N)}\bigl(B_{N_{F(p)}(N)}(e,1)\bigr)\, dS^Q_M(p) && \text{(from (3.2))}\\
&= \frac{1}{2^Q} \int_{A} \mathrm{vol}_{N_q(N)}\bigl(B_{N_q(N)}(e,1)\bigr)\, dS^Q_N(q) && (q = F(p)\text{ and from (3.3))}\\
&= \frac{1}{2^Q} \int_{A} N_q(\mathrm{vol}_N)\bigl(B_{N_q(N)}(e,1)\bigr)\, dS^Q_N(q) && \text{(from (2.5))}\\
&= \mathrm{vol}_N(A). && \text{(from (2.7))}
\end{align*}
The proof is concluded.
Remark 3.4. We observe that, in view of Margulis-Mostow's version of Rademacher Theorem [MM95], in the proof above there is a natural choice for the maps G p for almost every p.
3.2. The iteration step. Theorem 1.3 will be proved by bootstrapping the following two results.
Proposition 3.5. Let M and N be two subRiemannian manifolds equipped with C^∞ volume measures vol_M and vol_N. Let F : M → N be an isometry such that F_* vol_M = vol_N. Let g ∈ C^∞(N). Assume that
(i) g ∘ F ∈ W^{k,p}_{H,loc}(M, vol_M), for some k ∈ N and some 1 ≤ p < ∞, and
(ii) the function u ∈ Lip(N, R), with u ∘ F ∈ W^{k,p}_{H,loc}(M, vol_M), is a solution of
\[
\int_N \langle \nabla_H u, \nabla_H v\rangle\, d\,\mathrm{vol}_N = \int_N g\, v\, d\,\mathrm{vol}_N, \qquad \forall v \in \mathrm{Lip}_c(N).
\]
Then u ∘ F ∈ W^{k+1,p}_{H,loc}(M, vol_M) and solves
\[
\int_M \langle \nabla_H (u \circ F), \nabla_H v\rangle\, d\,\mathrm{vol}_M = \int_M (g \circ F)\, v\, d\,\mathrm{vol}_M, \qquad \forall v \in \mathrm{Lip}_c(M).
\]
Proof. The last statement is an immediate consequence of Corollary 2.14. Since g ∘ F ∈ W^{k,p}_{H,loc}(M, vol_M), the conclusion follows from Corollary 2.22.

Proposition 3.6. Let U and V be two open sets of R^n and X = {X_1, …, X_r} a frame of smooth Hörmander vector fields in U. For k ∈ N ∪ {0} and 1 ≤ p < ∞, denote by W^{k,p}_{H,loc}(U, L^n) the corresponding Sobolev space. Let F = (F_1, …, F_n) : U → V be a map such that, for all i = 1, …, n, one has F_i ∈ W^{k,p}_{H,loc}(U, L^n). If g ∈ C^∞(V), then g ∘ F ∈ W^{k,p}_{H,loc}(U, L^n).
Proof. For all i = 1, …, n and j = 1, …, r, we have X_j F_i ∈ W^{k−1,p}_{H,loc}(U, L^n). Using the scalar chain rule for Sobolev functions one obtains
\[
(3.7) \qquad X_j(g \circ F) = X_j\bigl(g(F_1, \ldots, F_n)\bigr) = \partial_1 g\, X_j F_1 + \cdots + \partial_n g\, X_j F_n, \qquad \text{a.e. in } U.
\]
Since ∂_i g ∈ C^∞, the right hand side of (3.7) is also in W^{k−1,p}_{H,loc}(U, L^n), concluding the proof.
3.3. Proof of Theorem 1.3. We work in (smooth) coordinates, so that the isometry can be considered as a map F : U → V with U and V two open sets of R n . Denote by x 1 , . . . , x n the coordinate functions in V . The volume form of the second manifold can be written as d vol N = ηdL n , for some C ∞ function η : V → R bounded away from zero on compact sets.
For each i = 1, …, n, set g_i := η^{−1} ∇_H^*(η ∇_H x_i). We have g_i ∈ C^∞(V) and
\[
\int_V \langle \nabla_H x_i, \nabla_H v\rangle\, \eta\, dL^n = \int_V g_i\, v\, \eta\, dL^n, \qquad \forall v \in \mathrm{Lip}_c(V).
\]
Notice that, for all i = 1, …, n, the functions x_i ∘ F and g_i ∘ F are Lipschitz with respect to the control distance. Recalling Remark 2.21, we observe that as a consequence x_i ∘ F, g_i ∘ F ∈ W^{1,p}_{H,loc}(U), for all 1 ≤ p < ∞. Using Proposition 3.5 with k = 1, we have that F_i := x_i ∘ F ∈ W^{2,p}_{H,loc}(U, L^n). By Proposition 3.6, it follows that g ∘ F ∈ W^{2,p}_{H,loc}(U, L^n). Iterating Proposition 3.5 and Proposition 3.6, one obtains F_i ∈ W^{k,p}_{H,loc}(U, L^n) for all k ∈ N. Invoking Proposition 2.26 we finally conclude F_i ∈ C^∞(U), for all i = 1, …, n.

3.4. Proof of Theorem 1.6. Since Isom(M) is locally compact by Ascoli-Arzelà's theorem, from a result of Montgomery and Zippin [MZ74, page 208, Theorem 2], we have (i). To show (ii), we fix an auxiliary Riemannian extension g̃ on M and define
\[
\bar g = \int_H F^* \tilde g \; d\mu_H(F),
\]
where µ_H is a probability Haar measure on H. Notice that G := {F^* g̃ : F ∈ H} is a compact set of Riemannian tensors extending the subRiemannian structure on M. In particular, in local coordinates, all g′ ∈ G can be represented by a matrix with uniformly bounded eigenvalues. Hence, ḡ is a Riemannian tensor, which is H-invariant by linearity of integrals.
and by Hamenstädt [Ham90]. In particular, it follows from Hamenstädt's results that, assuming that all geodesics are normal, isometries are smooth [Ham90, Theorem 6.2]. If the underlying structure is a Carnot group (see Example 2.4), then by virtue of the work of Hamenstädt and of Kishimoto [Ham90, Corollary 8.4], [Kis03, Theorem 4.2], one has a stronger result: global isometries are compositions of group translations and group isomorphisms. More recently, Ottazzi and the second named author [LDO12, Theorem 1.1] have established a local version of this result, which applies to isometries between open subsets of Carnot groups and does not rely on the Hamenstädt-Kishimoto arguments. Pansu's differentiability theorem [Pan89] allows us to summarize the previous results as follows. Theorem 1.1 ([
References

[ABB12] A. Agrachev, D. Barilari, and U. Boscain, On the Hausdorff volume in sub-Riemannian geometry, Calc. Var. Partial Differential Equations 43 (2012), no. 3-4, 355-388.
[BR13] D. Barilari and L. Rizzi, A formula for Popp's volume in sub-Riemannian geometry, to appear in Analysis and Geometry in Metric Spaces (2013).
[Cap99] L. Capogna, Regularity for quasilinear equations and 1-quasiconformal maps in Carnot groups, Math. Ann. 313 (1999), no. 2, 263-295.
[CC06] L. Capogna and M. Cowling, Conformality and Q-harmonicity in Carnot groups, Duke Math. J. 135 (2006), no. 3, 455-479.
[CH70] E. Calabi and P. Hartman, On the smoothness of isometries, Duke Math. J. 37 (1970), 741-750.
[Fol75] G. B. Folland, Subelliptic estimates and function spaces on nilpotent Lie groups, Ark. Mat. 13 (1975), no. 2, 161-207.
[FS82] G. B. Folland and E. M. Stein, Hardy spaces on homogeneous groups, Mathematical Notes, vol. 28, Princeton University Press, Princeton, N.J., 1982.
[FSSC96] B. Franchi, R. Serapioni, and F. Serra Cassano, Meyers-Serrin type theorems and relaxation of variational integrals depending on vector fields, Houston J. Math. 22 (1996), no. 4, 859-890.
[GJ13] R. Ghezzi and F. Jean, Hausdorff measures and dimensions in non equiregular sub-Riemannian manifolds, arXiv:1301.3682 (2013), 1-13.
[GN98] N. Garofalo and D.-M. Nhieu, Lipschitz continuity, global smooth approximations and extension theorems for Sobolev functions in Carnot-Carathéodory spaces, J. Anal. Math. 74 (1998), 67-97.
[Ham90] U. Hamenstädt, Some regularity theorems for Carnot-Carathéodory metrics, J. Differential Geom. 32 (1990), no. 3, 819-850.
[HK00] P. Hajlasz and P. Koskela, Sobolev met Poincaré, Mem. Amer. Math. Soc. 145 (2000), no. 688, x+101.
[Jer86] D. Jerison, The Poincaré inequality for vector fields satisfying Hörmander's condition, Duke Math. J. 53 (1986), no. 2, 503-523.
[Kis03] I. Kishimoto, Geodesics and isometries of Carnot groups, J. Math. Kyoto Univ. 43 (2003), no. 3, 509-522.
[LDO12] E. Le Donne and A. Ottazzi, Isometries between open sets of Carnot groups and global isometries of subFinsler homogeneous manifolds, preprint, submitted (2012).
[Lu98] G. Lu, Embedding theorems on Campanato-Morrey spaces for vector fields of Hörmander type, Approx. Theory Appl. (N.S.) 14 (1998), no. 1, 69-80.
[Mit85] J. Mitchell, On Carnot-Carathéodory metrics, J. Differential Geom. 21 (1985), no. 1, 35-45.
[MM95] G. A. Margulis and G. D. Mostow, The differential of a quasi-conformal mapping of a Carnot-Carathéodory space, Geom. Funct. Anal. 5 (1995), no. 2, 402-433.
[Mon02] R. Montgomery, A tour of subriemannian geometries, their geodesics and applications, Mathematical Surveys and Monographs, vol. 91, American Mathematical Society, Providence, RI, 2002.
[MS39] S. B. Myers and N. E. Steenrod, The group of isometries of a Riemannian manifold, Ann. of Math. (2) 40 (1939), no. 2, 400-416.
[MZ74] D. Montgomery and L. Zippin, Topological transformation groups, Robert E. Krieger Publishing Co., Huntington, N.Y., 1974. Reprint of the 1955 original.
[NSW85] A. Nagel, E. M. Stein, and S. Wainger, Balls and metrics defined by vector fields. I. Basic properties, Acta Math. 155 (1985), no. 1-2, 103-147.
[Pal57] R. S. Palais, On the differentiability of isometries, Proc. Amer. Math. Soc. 8 (1957), 805-807.
[Pan89] P. Pansu, Métriques de Carnot-Carathéodory et quasiisométries des espaces symétriques de rang un, Ann. of Math. (2) 129 (1989), no. 1, 1-60.
[RS76] L. P. Rothschild and E. M. Stein, Hypoelliptic differential operators and nilpotent groups, Acta Math. 137 (1976), no. 3-4, 247-320.
[Str86] R. S. Strichartz, Sub-Riemannian geometry, J. Differential Geom. 24 (1986), no. 2, 221-263.
[Str89] R. S. Strichartz, Corrections to "Sub-Riemannian geometry", J. Differential Geom. 30 (1989), no. 2, 595-596.
[Tay06] M. Taylor, Existence and regularity of isometries, Trans. Amer. Math. Soc. 358 (2006), no. 6, 2415-2423 (electronic).
DOI: 10.1016/j.jmmm.2010.02.047 | arXiv: 1006.3733 | https://export.arxiv.org/pdf/1006.3733v1.pdf | corpusid: 118469131 | sha: d555810101c963a3b9c9362dc639bb1a34a4d7b2
Probing the magnetic state by linear and non linear ac magnetic susceptibility measurements in under doped manganite Nd0.8Sr0.2MnO3

S. Kundu and T. K. Nath
Department of Physics and Meteorology, Indian Institute of Technology, Kharagpur 721302, West Bengal, India

* Corresponding author: [email protected] (Area code: 721302, INDIA)
Keywords: Manganites; ac susceptibility; dipolar interaction
PACS: 75.47.Lx; 75.50.Lk
We have thoroughly investigated the entire magnetic state of the under doped ferromagnetic insulating manganite Nd0.8Sr0.2MnO3 through temperature dependent linear and non linear complex ac magnetic susceptibility measurements. This ferromagnetic insulating manganite is found to have a frequency independent ferromagnetic to paramagnetic transition temperature at around 140 K. At around 90 K (≈ Tƒ) the sample shows a second, frequency dependent, re-entrant magnetic transition as explored through complex ac susceptibility measurements. Non linear ac susceptibility measurements (higher harmonics of ac susceptibility) have also been performed (with and without the superposition of a dc magnetic field) to further investigate the origin of this frequency dependence (dynamic behavior at this re-entrant magnetic transition). Divergence of the 3rd order susceptibility in the limit of zero exciting field indicates a spin glass like freezing phenomenon. However, the large value of the spin relaxation time (τ0 = 10^-8 s) and the small value of the coercivity (~22 Oe) obtained at low temperature (below Tƒ) from the critical slowing down model and dc magnetic measurements, respectively, are in contrast with what is generally observed in a canonical spin glass (τ0 = 10^-12 - 10^-14 s and a very large value of coercivity below the freezing temperature). We have attributed our observation to the formation of finite size ferromagnetic clusters which are formed as a consequence of intrinsic phase separation and undergo cluster glass like freezing below a certain temperature in this under doped manganite. The results are supported by the electronic- and magneto-transport data.
Introduction
The perovskite rare earth manganites with the general formula R1−xAxMnO3 (R = rare earth element, A = divalent element) have been studied quite intensely since the discovery of their colossal magnetoresistance (CMR) property, as this opened a potential route to applications of such materials as magnetic field sensors and memory devices [1]. As a matter of fact, it is not only their potential for applications but also the fascinating physics that these manganites involve that has made them an interesting family of materials for scientific study. It is already known that manganites are strongly correlated materials having a close interplay between their spin, charge, orbital and lattice degrees of freedom [2,3]. Due to this strong correlation, manganites show some unusual electronic and magnetic properties. Around an optimal doping level (x ≈ 0.3) the double exchange mechanism plays the key role behind the observation of ferromagnetism and the CMR property. It is this CMR property and ferromagnetism which has attracted most of the research interest in optimally doped manganites [4]. The half doped manganites have also been studied quite intensively due to their interesting property of simultaneous charge and orbital ordering [5].
The overdoped and underdoped manganites are comparatively less explored. In fact, the dominance of the antiferromagnetic superexchange interaction competing with the double exchange interaction, together with a strong Jahn-Teller interaction, results in a complex magnetic state in these systems. A detailed understanding of the underlying physics of this complex magnetic state could be interesting in such manganites. We have investigated the magnetic properties of the under doped manganite Nd0.8Sr0.2MnO3 through linear and non linear ac susceptibility measurements. According to the reported bulk phase diagram this system shows a ferromagnetic insulating to paramagnetic insulating phase transition around 150 K [1]. Our susceptibility and resistivity data show the same features. Moreover, we have observed a second, frequency dependent, magnetic transition at lower temperature originating most likely from cluster glass type freezing. In some Mn site doped manganites such frequency dependent re-entrant transitions were observed and most of them were claimed to have glassy character [6-8].
Some rare earth cobaltites are also known to have glassy character [9,10]. The glassiness in these cases was attributed to intrinsic inhomogeneity and phase separation in the samples. It has also been observed that A-site disorder has a major role to play in the observed glassy behavior of some manganites [11]. However, in a manganite without Mn-site doping, like ours, such a re-entrant frequency dependent transition is not common in the literature. N. Rama et al. [12] have reported a similar magnetic transition in Pr0.8Sr0.2MnO3. In this investigation, we report, for the first time to the best of our knowledge, a detailed linear and non linear complex ac susceptibility study along with the effect of dc field superposition, and explore the cluster glass like freezing of ferromagnetic clusters in the Nd0.8Sr0.2MnO3 sample.
Experimental details
The manganite Nd0.8Sr0.2MnO3 was synthesized employing a chemical pyrophoric reaction route. The requisite amount of TEA (triethanolamine) was mixed with a stoichiometric solution of high purity Nd2O3, Sr(NO3)2, and Mn(CH3COO)2. The solution was then heated at 180 °C and stirred continuously. The finally obtained black fluffy powder was ground and sintered at 1150 °C in air for 8 h to produce polycrystalline Nd0.8Sr0.2MnO3.
The XRD (X-ray diffraction) pattern shown in Fig. 1(a) confirms the perovskite structure with no observable impurity phase present. From the FESEM (field emission scanning electron microscopy) image (Fig. 1(b)) it is evident that the sample is composed of nearly micrometer size particles.
The complex ac magnetic susceptibility measurements (both linear and non linear) on this sample were carried out employing an ac susceptibility set up developed indigenously by us. In this set up the sample is placed inside one of two oppositely wound pickup coils while the excitation ac magnetic field (h) is applied by a solenoid coaxial with the pickup coils. The signal is measured employing a lock-in amplifier (model SR830). We have also measured the higher order non linear ac magnetic susceptibilities simply by detecting the signal at frequencies that are integral multiples of the exciting frequency. The temperature was monitored by a temperature controller (Lakeshore, model 325). A calibrated Cernox sensor (Lakeshore made) was used to sense the temperature with a resolution of 0.05 K. The dc magnetization was measured employing a homemade vibrating sample magnetometer having a sensitivity better than 10^-3 emu. Electronic equipment similar to that used in the ac susceptibility set up was also employed for detecting the signal and monitoring the temperature in the magnetometer.
The magnetization m of a specimen excited by a small ac magnetic field h can be expressed as [13]

m = m0 + χ1h + χ2h² + χ3h³ + …,  (1)

where m0 is the spontaneous magnetization. Here, χ1 is the linear susceptibility and χ2, χ3, etc. are the higher order or nonlinear susceptibilities.
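The harmonic content generated by the nonlinear terms in equation (1) follows from standard trigonometric identities, recalled here for convenience:

```latex
\[
\sin^2\omega t = \tfrac{1}{2}\bigl(1-\cos 2\omega t\bigr), \qquad
\sin^3\omega t = \tfrac{1}{4}\bigl(3\sin\omega t - \sin 3\omega t\bigr),
\]
% so the \chi_2 h^2 term feeds the 2nd harmonic, while the \chi_3 h^3 term
% contributes both to the fundamental (factor 3/4) and to the 3rd harmonic (factor 1/4).
```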
For h = h0 sin(ωt), the induced voltage in the secondary pickup coils can be written as [10]

V(t) = Kω [(χ1h0 + (3/4)χ3h0³ + …) cos(ωt) + (χ2h0² + …) sin(2ωt) − ((3/4)χ3h0³ + …) cos(3ωt) + …].  (2)

Here, K is the constant of the set up, which is to be determined by calibration of the set up with a known sample. We have used the high moment paramagnetic salt Gd2O3 to calibrate our system. In the low field limit (small h0) the series in the brackets rapidly converge, so the different terms in equation (2) can be written as

|Vω| = Kω |χ1| h0,  (3)
|V2ω| = Kω |χ2| h0²,  (4)
|V3ω| = (3/4) Kω |χ3| h0³,  (5)

where Vω, V2ω, V3ω are the voltages measured by the lock-in amplifier at frequencies ω, 2ω and 3ω.
By properly manipulating the voltages and knowing the calibration constant K the linear and the non linear susceptibilities can be determined. Each of these susceptibilities has a real and imaginary component. For example, the linear susceptibility χ 1 can be expressed as,
χ1 = χ1R − jχ1I,  (6)
where χ1R is the real component and χ1I is the imaginary component of the linear susceptibility. With a lock-in amplifier both components can be measured, by measuring the signal in phase with the exciting field (imaginary part) and out of phase with the exciting field (real part). It is to be noted that a small enough ac magnetic field (h) is required, as mentioned above, to make all the approximations applicable. Moreover, a high exciting field can destroy any critical features to be observed [14], especially when measuring the higher order susceptibilities. Application of a high field can itself smear out the magnetic phase transition, and contributions from domain wall movement/rotation may appear. A very low ac magnetic field enables us to measure the true spin susceptibility of a magnetic system. The lower limit of this field is in turn determined by the sensitivity of the set up itself: for a small applied field the signal strength also becomes weak, so a compromise has to be made. In our case the applied ac field (h) is always kept at a very small value (below 7 Oe rms).
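As a cross-check of equations (2)-(5), the following short numerical sketch (added here; it is not part of the original measurement chain, and the susceptibility values are arbitrary illustrative numbers) synthesizes m(t) from equation (1) for h = h0 sin(ωt), differentiates it to obtain the pickup voltage, and extracts the harmonic amplitudes by lock-in style projection, reproducing the Kω|χ1|h0, Kω|χ2|h0² and (3/4)Kω|χ3|h0³ scalings.

```python
import numpy as np

# Illustrative (hypothetical) parameters; not values from the experiment.
chi1, chi2, chi3 = 1.0, 0.05, 0.002          # linear and nonlinear susceptibilities
h0 = 2.0                                     # ac field amplitude (arb. units)
omega = 2 * np.pi * 131.0                    # angular frequency (131 Hz drive)
K = 1.0                                      # pickup-coil calibration constant

t = np.linspace(0.0, 1.0, 2 ** 18, endpoint=False)  # 1 s of signal, integer cycles
h = h0 * np.sin(omega * t)
m = chi1 * h + chi2 * h ** 2 + chi3 * h ** 3         # equation (1); m0 omitted

V = K * np.gradient(m, t)                    # induced voltage V = K dm/dt

def harmonic_amplitude(signal, n):
    """Lock-in style projection: amplitude of the n-th harmonic of omega."""
    c = (signal * np.cos(n * omega * t)).mean()
    s = (signal * np.sin(n * omega * t)).mean()
    return 2.0 * np.hypot(c, s)

expected = {1: K * omega * (chi1 * h0 + 0.75 * chi3 * h0 ** 3),  # eq. (3) plus 3/4 correction
            2: K * omega * chi2 * h0 ** 2,                       # eq. (4)
            3: 0.75 * K * omega * chi3 * h0 ** 3}                # eq. (5)
for n in (1, 2, 3):
    print(n, harmonic_amplitude(V, n), expected[n])
```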
Results and discussion
To probe the magnetic behavior and investigate the relaxation process of the system we have performed ac susceptibility measurements over a wide range of frequency (three decades). The temperature variation of the real part of the linear ac susceptibility (χ1R) at an ac field of h = 2 Oe is shown in Fig. 2. The inset of Fig. 2 shows the imaginary part, which represents the loss or dissipation in the sample. The temperature variation of the real part shows two transitions, with a high temperature transition occurring around 140 K and a low temperature cusp (Tƒ) around 90 K. The sharp drop of susceptibility (χ1R) at around 140 K can be attributed to the ferromagnetic (FM) to paramagnetic (PM) phase transition. The bulk phase diagram also shows a FM to PM transition at almost the same temperature. At around 90 K the susceptibility decreases rapidly with the decrease of temperature, showing a second re-entrant transition around this temperature. The imaginary part of ac susceptibility (χ1I) shows distinct peaks at these two transition regions. The height of the low temperature peak is greater than that at the higher temperature. This clearly indicates that the loss in the system is quite high around this low temperature transition region compared to that at the FM-PM transition. The presence of an appreciable peak in χ1I around the second transition in the low temperature region indicates that this transition is a genuine magnetic transition and not an experimental artifact. Most interestingly, the magnitude of the imaginary part is non zero in the FM region. It is quite evident that while the high temperature FM-PM transition is not frequency dependent, the transition (cusp) at the lower temperature is strongly frequency dependent. This transition temperature shifts towards higher temperature with the increase of frequency. The magnitude of susceptibility is slightly suppressed with the increase of frequency at this transition region and the susceptibility curves become completely separated from each other at the lower temperatures.
The peak of χ1I of the sample increases in magnitude and shifts towards the higher temperature side with the increase of frequency (not shown). All the features described above regarding the second transition at lower temperature seem similar to those observed in the case of the freezing process of a spin glass around the freezing temperature. However, we want to mention here that the frequency dependence of susceptibility is not unique to a spin freezing process and can also be observed in a superparamagnet and even in complex ferromagnetic systems.
In order to probe the complex magnetic state of the system, the ac susceptibility with a superimposed dc magnetic field has also been investigated in detail. If a dc magnetic field Hdc is applied along with the ac field, all the susceptibilities (linear and non linear) are modified. From Fig. 3 we can observe that with the increase of the magnitude of the dc field the FM-PM transition is suppressed considerably. With the increase of dc field the transition shifts to lower temperature with an overall suppression of χ1R in the ferromagnetic regime. It is generally observed for a common ferromagnet that the susceptibility maximum (Hopkinson peak) just below the FM-PM transition is slightly suppressed on application of a dc magnetic field [15,16]. In our case we have observed an enhanced suppression of χ1R over a wide temperature range on application of a small dc magnetic field (45 Oe). It is also observed that the low temperature transition is only slightly suppressed compared to the FM-PM transition region on application of a maximum 45 Oe dc field whereas, in the lowest temperature regime, the susceptibility is almost independent of the dc field.
To further investigate the magnetic state of our sample we have studied the non linear or higher harmonic response of the ac susceptibility, which is much more sensitive around the phase transitions than its linear counterpart and in some cases more effective in analyzing a complex magnetic system. The even harmonic χ2 can be observed if there is a presence of spontaneous magnetization or a symmetry breaking internal field [15]. The odd harmonic χ3 has been efficiently used to characterize a spin glass or a superparamagnet and also to differentiate between them. The 3rd harmonic (χ3) shows a very sharp peak at the freezing temperature of a spin glass and a broad peak at the blocking temperature of a superparamagnet [17]. For a spin glass, χ3 is divergent in nature in the limit h → 0 and ƒ → 0 around the freezing temperature. For other magnetic systems such a diverging nature is not observed around magnetic transitions. In this way, measurement of χ3 gives a high level of confidence in determining glassiness in a system.

The real part of the 2nd harmonic of ac susceptibility (χ2R), measured with the application of a dc field (Hdc), is shown in the inset of Fig. 3. The temperature variation of χ2R shows a sharp peak at the FM-PM transition region. This is due to the onset of spontaneous magnetization in the ferromagnetic regime. On application of a small superimposed dc field the peak height of χ2R increases. However, we note that the peak height (at 140 K) shows a nonmonotonic dependence on Hdc in the dc field range 0 < Hdc < 15 Oe. Further increase of Hdc suppresses the peak. The peak position is found to shift towards lower temperature with the increase of dc field.

The 3rd harmonic susceptibility (χ3R) shows a broad peak at the low temperature transition region (T ~ 90 K), as evident from Fig. 4. Moreover, the measured 3rd harmonic shows a divergent nature as the magnitude of the ac field (h) is decreased towards zero. The maximum values of the 3rd harmonic (|χ3R|max) are plotted as a function of h to clarify this behavior (inset of Fig. 4). It is clear that the value of |χ3R|max diverges in the low field regime in the limit h → 0 whereas it shows a saturating nature as the field is increased. This indicates that the magnetic phase transition in the low temperature region occurs due to a cooperative freezing phenomenon, whether it constitutes a classical spin glass or the behavior is due to dipolar interaction among the magnetic clusters. The log-log plot of χ3R vs. reduced temperature (T − T*)/T* (T* is the peak temperature of χ3R) is shown in the inset of Fig. 4. A saturating nature of the quantity log(χ3R) is clear as T approaches T*. This behavior is a characteristic of a spin glass system [18]. It was experimentally found that the application of a dc magnetic field (~45 Oe) does not have any effect on χ3 around the low temperature re-entrant transition region.

The dc magnetization of the sample was recorded at 500 Oe field and is shown in the inset of Fig. 5. Magnetization of the sample initially rises with temperature and attains a maximum at around 73 K. With further increase of temperature the magnetization decreases slowly and follows a sharp fall at around 140 K. The transition at 140 K is indicative of the FM-PM transition, as obtained from the ac susceptibility data. The magnetic hysteresis (M vs. H) of the sample was also registered at different temperatures in the low field regime. Fig. 5 shows the M vs. H curves at different temperatures. The maximum dc field applied was 500 Oe. The magnetization of the sample at all temperatures shows a non saturating behavior as the applied field is very small. Even with the application of a small field, one can observe a clear evolution of the M-H behavior of the sample with the change of temperature. The maximum value of magnetization gradually decreases with the increase of temperature and the M-H curve becomes almost linear near the FM-PM transition temperature. Our main objective of this measurement was to have an idea about the coercivity of the sample, an important issue in systems where glassy behavior is evident. One noticeable fact is that the coercivity (HC) of the sample is very small (~22 Oe) even at the lowest measured temperature. A large value of coercivity (even a few hundred Oe) is a hallmark of a spin glass system. In this regard, our system does not match a canonical spin glass. The inset of Fig. 5 shows the variation of HC with temperature. HC is highest at the lowest measured temperature, falls sharply at around 90 K, and then decreases slowly with the increase of temperature up to the FM-PM transition region, where it becomes almost zero.
temperature plot is shown in Fig. 6. The sample is insulating in the whole measured temperature range. Moreover, resistivity increases very sharply just below 100 K. We note that this is the temperature regime (around 90 K) where the ac susceptibility data show a transition and frequency dependence. On application of a 5 T magnetic field a pronounced suppression in the resistivity can be observed just below 90 K. The magnetoresistance (MR) vs. T data (Inset of We have carefully analyzed our experimental observation as follows. We have first employed the thermal activation model and analyzed the low temperature frequency dependent transition by applying the Arrhenius law given by τ = τ 0 exp (E a /k B T ƒ ) [19]. Here, E a is the activation energy. The freezing temperature T ƒ is taken as the temperature corresponding to the peak in χ 1 I in the low temperature region as the position of the exact observed transition point from χ 1 R is not prominent. τ is taken as the experimental time scale 1/ƒ. We first note that the variation of T ƒ is not linear in ƒ as evident from the inset of Fig. 7. The variation of T ƒ with frequency is very sharp in the low frequency regime whereas, a slow variation T ƒ is observable in the high frequency regime. To verify the validity of the Arrhenius law in this system ln (τ) is plotted as a function of T ƒ . The best fit results of this plot give τ 0 = 10 -50 s and E a = 1224000 K.
Both these values are unphysical. So, we can rule out the possibility of any thermally activated process and hence the possibility of a superparamagnetic blocking at this temperature regime.
Moreover, the M vs. H/T plots at different temperatures (not shown) do not overlap and indicates
that the frequency dependence of susceptibility is not due to blocking of magnetic moments. The large grain size (micrometer order) of this sample also indicates that superparamagnetic behavior is not favorable in this manganite. Another way of analyzing the frequency dependent peak in acχ is calculating the shift in peak with decade change in frequency. The quantity ∆T f / T f ∆(logω)
gives a value of 0.06. This value is well within the regime observed for insulating spin glasses [19]. In case of a spin glass the frequency dependence is very often analyzed by employing the conventional critical slowing down model. According to this theory the dynamic scaling law of the form τ = τ 0 (T f /T g -1) -zν is very commonly used [19,20]. Here, τ 0 is the spin-relaxation time, zν is the critical exponent and T g is the spin glass transition temperature. The fitting procedure in this case remains the same. The fitting of this equation to our experimental data (Fig. 7) is quite good with the obtained fitting parameters given by, T g = 73 K, zν = 9 and τ 0 = 10 -8 sec. For a conventional spin glass system typical value of τ 0 is of the order of 10 -12 -10 -14 s. We have obtained a higher value of τ 0 compared to that of a conventional spin glass system whereas, the value of zν is well within the regime of a spin glass [19]. Interestingly, we note that the value of T g (≈ 73 K) is the temperature corresponding to the peak position of the dc magnetization vs.
temperature plot. In fact, T g is equivalent to the freezing temperature of a spin glass in the limit of zero frequency. The low temperature transition is very similar to a transition to a cluster glass phase as the obtained value of the characteristic time is quite higher compared to that of a spin glass. A high value of spin relaxation time was obtained by B. Ray et al. [21] in Nd 0.7 Sr 0.3 MnO 3 and the phase was designated as a cluster glass. The peak around T ƒ in the temperature dependent χ 3 R arises due to this cluster glass type freezing process. The peak is broad in nature. For a canonical spin glass a very sharp divergence at the freezing temperature can be observed. The dc field dependence of ac -χ is quite remarkable in our system. The FM-PM transition is found to be suppressed on application of dc magnetic field whereas, at low temperature transition and below that the system is very robust against dc field. For a conventional spin glass system the sharp susceptibility peak around the freezing temperature is strongly suppressed and gets rounded off on application of dc magnetic field [19]. The broad peak in χ 3 around the freezing temperature, robustness of the low temperature transition against dc magnetic field together with the obtained high value of relaxation time and low value of coercivity differentiates our system from a conventional spin glass. The peak temperature in the χ 2 can be taken as the exact value of the FM-PM transition temperature or T C , the Curie temperature of the system. Reduction of T C and suppression of peak height in χ 2 on application of dc field directly indicates the reduction of FM exchange interaction and hence the spontaneous magnetization in the system under dc field.
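To make the two fitting procedures discussed above concrete, here is a minimal sketch of how they can be carried out in practice; the (ƒ, Tƒ) pairs below are hypothetical placeholders standing in for the measured peak positions, so the fitted numbers will differ from the values quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (f, T_f) pairs standing in for the measured chi1I peak positions.
f = np.array([13.0, 131.0, 1310.0, 13100.0])   # Hz (illustrative)
Tf = np.array([88.0, 90.5, 93.0, 95.5])        # K  (illustrative)
tau = 1.0 / f                                  # experimental time scale

# Arrhenius law: tau = tau0 * exp(Ea / (kB * Tf)), linear in 1/Tf after taking logs.
slope, intercept = np.polyfit(1.0 / Tf, np.log(tau), 1)
print("Arrhenius: tau0 =", np.exp(intercept), "s, Ea/kB =", slope, "K")

# Critical slowing down: tau = tau0 * (Tf/Tg - 1)**(-z*nu), fitted on log(tau).
def scaling(T, log_tau0, Tg, znu):
    return log_tau0 - znu * np.log(T / Tg - 1.0)

p0 = (np.log(1e-8), 80.0, 8.0)                 # initial guesses (Tg below min(Tf))
popt, _ = curve_fit(scaling, Tf, np.log(tau), p0=p0)
print("Scaling: tau0 =", np.exp(popt[0]), "s, Tg =", popt[1], "K, z*nu =", popt[2])
```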
The behavior of the system can be understood through the concept of the formation of finite size ferromagnetic clusters and the interaction among them. The formation of such short range ferromagnetism is a consequence of the phase separation phenomenon which is very commonly observed in manganites. The coexistence of large carrier-rich FM clusters with smaller carrier-poor FM clusters has been reported earlier [10,22]. This indirectly indicates the so-called electronic phase separation in manganites. The larger clusters will, understandably, favor ferromagnetism in the sample whereas, according to this hypothesis, the smaller ones will enhance glassiness. We also propose that the ferromagnetic region of the sample does not have long ranged order but is formed of small ferromagnetic clusters. The formation of FM clusters in manganites has been reported earlier. The formation of FM clusters in La0.85Ca0.15Mn0.95Fe0.05O3 was attributed to the random field created by the PM clusters present in the ferromagnetic regime by H. U. Khan et al. [6]. In La0.5Gd0.2Sr0.3MnO3 the formation of FM clusters was identified by P. Dey et al. [23]. The AFM interaction between Gd and Mn moments was thought to have promoted the formation of FM clusters. In our system, it is quite understandable that the AFM interaction in the Mn3+-O-Mn3+ bonds will be appreciable beside double exchange due to the abundance of Mn3+ ions in this system. This AFM interaction resists the formation of long ranged ferromagnetic order throughout the system and only allows the formation of finite size FM clusters. The ferromagnetic clusters may coexist with the non ferromagnetic regions and the whole system can be thought of as a magnetically phase segregated state of a manganite. This kind of phase separation, in terms of the existence of ferromagnetic regions inside an antiferromagnetic matrix, was theoretically predicted by M. Yu. Kagan et al. in low doped manganites [24]. Rivadulla et al. [25] experimentally showed the possibility of the formation of such a state in manganites, where frustration and glassiness may arise due to intercluster interaction. They argued that the finite size of the ferromagnetic clusters, which restrains the divergence of the magnetic correlation length ξ, has a role to play in the system behavior. They also showed that a dc field can modulate the cluster size/concentration, which in turn modifies the frustration and glassiness in the system. From the experimental evidence, it seems that a dc field induced size modulation of the ferromagnetic clusters can take place in our system. On application of a dc field, the cluster size probably decreases, limiting the value of ξ and hence the spontaneous magnetization of the system. This is reflected in the suppression of TC and of the absolute values of χ2 and χ1 under dc field. When the applied dc field is very low a slight enhancement of χ2 as well as χ1 can be observed. We believe that this enhancement is due to the contribution from the higher order susceptibility [13] and not a genuine property of the system. This can be understood from the fact that TC decreases monotonically with the increase of dc field although the absolute value of susceptibility initially increases and then again decreases at higher applied dc field. In the low temperature regime the ferromagnetic clusters undergo random dipolar interaction between them.
This causes frustration and collective relaxation in the system, as manifested by the frequency dependence of ac-χ. The system undergoes a cluster glass type freezing due to this dipolar interaction. It is known that an increase in cluster number or cluster concentration will enhance the random dipolar interaction [26]. As a result, the frustration leading to glassy behavior will be enhanced. Probably, a reduction of cluster size, and hence an increase in cluster concentration, occurs in our system with the decrease of temperature. Around a certain temperature (90 K) we observe the glassy behavior. The enhanced frustration at around 90 K is reflected in the sharp rise in resistivity in the same temperature regime. Double exchange, which is enhanced by the alignment of spins or ferromagnetic moments, will be hindered in this temperature regime due to the increased frustration and disorder in the system. In the low temperature regime the dc field cannot further reduce the cluster size, which is already small. This could be the reason for the observed robustness of the system against the application of dc field in the low temperature regime. The MR of such a manganite sample is known to originate from the spin polarized inter-granular tunneling phenomenon and the double exchange mechanism (intrinsic MR). Intrinsic MR is appreciable only around TC. The most significant spin polarized tunneling MR (MRSPT) is proportional to M² and 1/T [27]. The increase in negative MR with the decrease of temperature is qualitatively consistent with the dc magnetization data (magnetization increases with the decrease of temperature). We observe that there is a sharp decrease in the MR with the decrease of temperature in the lowest temperature region. The freezing of the spins/clusters might be the reason for this reduced MR.
Lastly, we want to mention the possibility of an intra-cluster contribution leading to glassiness, which may originate from random ferromagnetic and antiferromagnetic bonds within each cluster of this sample. We believe that a system having random interactions within a single cluster would have more similarities with a real spin glass and would give rise to more spin glass like properties in this system. Our present study shows that this system displays many properties which are not commonly observed in a canonical spin glass. So, inter-cluster interaction seems the most plausible explanation for our observation of glassy behavior in the system.
Conclusions
In summary, we have investigated in detail the magnetic properties of the under doped manganite Nd0.8Sr0.2MnO3 employing ac magnetic susceptibility measurement techniques. Besides a ferromagnetic to paramagnetic phase transition, we have observed another magnetic transition in the low temperature region. This transition seems to originate from the freezing of ferromagnetic clusters. Around this transition point the ac susceptibility shows frequency dependence and the 3rd harmonic of susceptibility shows a broad peak and diverges in the zero field limit. This divergence of χ3 is a typical signature of a spin glass. However, we have observed a very low value of coercivity of the sample from the M-H curves and a large value of spin relaxation time (10^-8 s). These observations are not in accord with genuine glassy behavior. Our observations have been attributed to inter-cluster dipolar interactions. It seems that phase separation and inter-cluster interaction are responsible for the observation of some glass like behaviors in the system. Resistivity and magneto-transport measurements support the observed magnetic properties.
Figure captions
The temperature variation of the real part of the linear ac susceptibility (χ1R), measured at an ac field of h = 2 Oe, is shown in Fig. 2. The inset of Fig. 2 shows the imaginary part, which represents the loss or dissipation in the sample. The temperature variation of the real part shows two transitions: a high temperature transition occurring around 140 K and a low temperature cusp (Tf) around 90 K. The sharp drop of the susceptibility (χ1R) at around 140 K can be attributed to the ferromagnetic (FM) to paramagnetic (PM) phase transition. The bulk phase diagram also shows a FM to PM transition at almost the same temperature. At around 90 K the susceptibility decreases rapidly with decreasing temperature, signalling a second, re-entrant transition around this temperature.

χ3 is divergent in nature in the limits h → 0 and f → 0 around the freezing temperature, whereas for other magnetic transitions no such divergence is observed. In this way, the measurement of χ3 gives a high level of confidence in determining glassiness. The variation of the real part of the 2nd harmonic of the ac susceptibility (χ2R) under application of a dc field (Hdc) is shown in the inset of Fig. 3. The temperature variation of χ2R shows a sharp peak in the FM-PM transition region. This is due to the onset of spontaneous magnetization in the ferromagnetic regime. On application of a small superimposed dc field the peak height of χ2R increases. However, we note that the peak height (at 140 K) shows a nonmonotonic dependence on Hdc in the dc field range 0 < Hdc < 15 Oe. Further increase of Hdc suppresses the peak, and the peak position shifts towards lower temperature with increasing dc field. The 3rd harmonic susceptibility (χ3R) shows a broad peak in the low temperature transition region (T ~ 90 K), as evident from Fig. 4. Moreover, the measured 3rd harmonic shows a divergent nature as the magnitude of the ac field (h) is decreased towards zero. To clarify this behavior, the peak values (|χ3R|max) are plotted as a function of h (inset of Fig. 4). It is clear that |χ3R|max diverges in the low field regime in the limit h → 0, whereas it shows a saturating nature as the field is increased. This indicates that the magnetic phase transition in the low temperature region occurs due to a cooperative freezing phenomenon, whether it constitutes a classical spin glass or the behavior is due to dipolar interaction among the magnetic clusters. The log-log plot of χ3R vs. the reduced temperature (T − T*)/T* (T* is the peak temperature of χ3R) is shown in the inset of Fig. 4. A saturating nature of log(χ3R) is clear as T approaches T*. This behavior is a characteristic of a spin glass system [18]. It was experimentally found that the application of a dc magnetic field (~45 Oe) does not have any effect on χ3 around the low temperature re-entrant transition region.

The dc magnetization of the sample was recorded in a 500 Oe field and is shown in the inset of Fig. 5. The magnetization initially rises with temperature and attains a maximum at around 73 K. With further increase of temperature the magnetization decreases slowly and then falls sharply at around 140 K. The transition at 140 K is indicative of the FM-PM transition, as obtained from the ac susceptibility data. The magnetic hysteresis (M vs. H) of the sample was also recorded at different temperatures in the low field regime. Fig. 5 shows the M vs. H curves at different temperatures; the maximum applied dc field was 500 Oe. The magnetization at all temperatures shows a non-saturating behavior, as the applied field is very small. Even with the application of such a small field, one can observe a clear evolution of the M-H behavior of the sample with temperature. The maximum value of the magnetization gradually decreases with increasing temperature, and the M-H curve becomes almost linear near the FM-PM transition temperature. Our main objective in this measurement was to get an idea of the coercivity of the sample, an important issue in systems where glassy behavior is evident. One noticeable fact is that the coercivity (HC) of the sample is very small (~22 Oe) even at the lowest measured temperature, whereas a large coercivity (even a few hundred Oe) is a hallmark of a spin glass system. In this regard, our system does not match a canonical spin glass. The inset of Fig. 5 shows the variation of HC with temperature: HC is highest at the lowest measured temperature, falls sharply at around 90 K, and then decreases slowly with increasing temperature up to the FM-PM transition region, where it becomes almost zero.
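The log-log scaling test quoted above (log χ3R vs. log[(T − T*)/T*]) is easy to reproduce. The sketch below uses purely synthetic data: the exponent gamma, the amplitude A and the temperature grid are illustrative assumptions, not measured values from this sample.

```python
import numpy as np

# Synthetic chi3 data near the freezing peak, purely illustrative:
# chi3 ~ A * t^(-gamma) with reduced temperature t = (T - Tstar)/Tstar.
Tstar = 90.0                      # peak temperature of chi3 (K), from the text
T = np.linspace(91.0, 120.0, 50)  # temperatures above the peak (K), assumed grid
t_red = (T - Tstar) / Tstar       # reduced temperature

gamma, A = 1.5, 1e-4              # assumed scaling parameters (illustrative)
chi3 = A * t_red ** (-gamma)

# A spin-glass-like divergence shows up as a straight line in log-log;
# in real data it saturates as T -> Tstar (finite field, finite resolution).
slope = np.polyfit(np.log10(t_red), np.log10(chi3), 1)[0]
print(f"fitted log-log slope = {slope:.2f} (input was -{gamma})")
```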
The magnetoresistance (Fig. 6) clearly indicates this suppression by showing a minimum at this temperature. A small kink is observable at around 140 K, which is most likely due to the enhanced intrinsic MR around the Curie temperature.
long-ranged order but is formed of small ferromagnetic clusters. The formation of FM clusters in manganites has been reported earlier. The formation of FM clusters in La0.85Ca0.15Mn0.95Fe0.05O3 was attributed by H. U. Khan et al. [6] to the random field created by the PM clusters present in the ferromagnetic regime. In La0.5Gd0.2Sr0.3MnO3 the formation of FM clusters was identified by P.
throughout the system but only allows the formation of finite-size FM clusters. The ferromagnetic clusters may coexist with the non-ferromagnetic regions, and the whole system can be thought of as a magnetically phase-segregated state of a manganite. This kind of phase separation, in terms of the existence of ferromagnetic regions inside an antiferromagnetic matrix, was theoretically predicted by M. Yu. Kagan et al. in low-doped manganites [24].
Fig. 1. (a) XRD micrograph of the sample. (b) FESEM image of the sample.

Fig. 2. Variation of the real part of the linear ac susceptibility (χ1R) with temperature at different frequencies (h = 2 Oe). The inset shows the imaginary part of the ac susceptibility.

Fig. 3. (Color online) Variation of the real part of the linear ac susceptibility under different dc magnetic fields. The inset shows the real part of the 2nd harmonic of the ac susceptibility.

Fig. 4. 3rd harmonic of the ac susceptibility around the low temperature transition. The insets show |χ3R|max as a function of h and the log-log plot of χ3R vs. the reduced temperature (T − T*)/T*.

Fig. 5. (Colour online) The M-H behavior of the sample at different temperatures. The upper left inset shows the variation of the coercivity with temperature. The bottom right inset shows the variation of the magnetization of the sample with temperature, recorded in a 500 Oe field.

Fig. 6. Measured resistivity of the sample as a function of temperature without an applied magnetic field and under a 5 T field. The inset shows the magnetoresistance as a function of temperature.

Fig. 7. Plot of log(1/f) as a function of the freezing temperature Tf. The inset shows the variation of Tf with frequency.
[1] Y. Tokura (ed.), Colossal Magnetoresistive Oxides, Gordon and Breach Science, Singapore, 2000.
[2] E. Dagotto, T. Hotta, A. Moreo, Physics Reports 344 (2001) 153.
[3] E. Dagotto, Science 309 (2005) 257.
[4] S. Jin, T. H. Tiefel, M. McCormack, R. A. Fastnacht, R. Ramesh, L. H. Chen, Science 264 (1994) 413.
[5] C. N. R. Rao, B. Raveau, Colossal Magnetoresistance, Charge Ordering and Related Properties of Manganese Oxides, World Scientific, Singapore, 1998.
[6] H. U. Khan, K. Rabia, A. Mumtaz, S. K. Hasanain, J. Phys.: Condens. Matter 19 (2007) 106202.
[7] S. K. Srivastava, M. Kar, S. Ravi, J. Magn. Magn. Mater. 307 (2006) 318.
[8] J. Dho, W. S. Kim, N. H. Hur, Phys. Rev. Lett. 89 (2002) 027202.
[9] J. Wu, C. Leighton, Phys. Rev. B 67 (2003) 174408.
[10] Asish K. Kundu, P. Nordblad, C. N. R. Rao, Phys. Rev. B 72 (2005) 014423.
[11] Asish K. Kundu, P. Nordblad, C. N. R. Rao, J. Phys.: Condens. Matter 18 (2006) 4809.
[12] N. Rama, V. Sankaranarayanan, M. S. Ramachandra Rao, J. Appl. Phys. 99 (2006) 08Q315.
[13] T. Bitoh, T. Shirane, S. Chikazawa, J. Phys. Soc. Jpn. 62 (1993) 2837.
[14] T. Sato, Y. Miyako, J. Phys. Soc. Jpn. 51 (1981) 1394.
[15] A. K. Pramanik, A. Banerjee, J. Phys.: Condens. Matter 20 (2008) 275207.
[16] G. Sinha, A. K. Majumdar, J. Magn. Magn. Mater. 185 (1998) 18.
[17] A. Bajpai, A. Banerjee, Phys. Rev. B 55 (1997) 12439.
[18] S. Chikazawa, S. Taniguchi, H. Matsuyama, Y. Miyako, J. Magn. Magn. Mater. 31-34 (1983) 1355.
[19] A. J. Mydosh, Spin Glasses, Taylor and Francis, London, 1993.
[20] K. Binder, A. P. Young, Rev. Mod. Phys. 58 (1986) 801.
[21] B. Ray, A. Poddar, S. Das, J. Appl. Phys. 100 (2006) 104318.
[22] Asish K. Kundu, P. Nordblad, C. N. R. Rao, J. Solid State Chem. 179 (2006) 923.
[23] P. Dey, T. K. Nath, A. Banerjee, J. Phys.: Condens. Matter 19 (2007) 376204.
[24] M. Yu. Kagan, A. V. Klaptsov, I. V. Brodsky, K. I. Kugel, A. O. Sboychakov, A. L. Rakhmanov, J. Phys. A: Math. Gen. 36 (2003) 9155.
[25] F. Rivadulla, M. A. López-Quintela, J. Rivas, Phys. Rev. Lett. 93 (2004) 167206.
[26] P. Jönsson, T. Jonsson, J. L. García-Palacios, P. Svedlindh, J. Magn. Magn. Mater. 222 (2000) 219.
[27] H. Y. Hwang, S-W. Cheong, N. P. Ong, B. Batlogg, Phys. Rev. Lett. 77 (1996) 2041.
| [] |
[
"Fully Anisotropic String Cosmologies, Maxwell Fields and Primordial Shear",
"Fully Anisotropic String Cosmologies, Maxwell Fields and Primordial Shear"
] | [
"Massimo Giovannini 1.4*[email protected] \nInstitute of Cosmology\nDepartment of Physics and Astronomy\nTufts University\n02155MedfordMassachusettsUSA\n"
] | [
"Institute of Cosmology\nDepartment of Physics and Astronomy\nTufts University\n02155MedfordMassachusettsUSA"
] | [] | We present a class of exact cosmological solutions of the low energy string effective action in the presence of a homogeneous magnetic field. We discuss the physical properties of the obtained (fully anisotropic) cosmologies, paying particular attention to their vacuum limit and to the possible isotropization mechanisms. We argue that quadratic curvature corrections are able to isotropize fully anisotropic solutions whose scale factors describe accelerated expansion. Moreover, the degree of isotropization grows with the duration of the string phase. We follow the fate of the shear parameter in a decelerated phase where dilaton, magnetic fields and a radiation fluid are simultaneously present. In the absence of any magnetic field a long string phase immediately followed by radiation is able to erase large anisotropies. Conversely, if a short string phase is followed by a long dilaton dominated phase the anisotropies can, in principle, be present also at later times. The presence of magnetic seeds after the end of the string phase can induce further anisotropies which can be studied within the formalism reported in this paper. | 10.1103/physrevd.59.123518 | [
"https://export.arxiv.org/pdf/gr-qc/9812021v1.pdf"
] | 11,776,867 | gr-qc/9812021 | d39d181a85e0316a79e69302a65f0b92e7b7aa9a |
Fully Anisotropic String Cosmologies, Maxwell Fields and Primordial Shear
7 Dec 1998
Massimo Giovannini *[email protected]
Institute of Cosmology
Department of Physics and Astronomy
Tufts University
Medford, Massachusetts 02155, USA
We present a class of exact cosmological solutions of the low energy string effective action in the presence of a homogeneous magnetic field. We discuss the physical properties of the obtained (fully anisotropic) cosmologies, paying particular attention to their vacuum limit and to the possible isotropization mechanisms. We argue that quadratic curvature corrections are able to isotropize fully anisotropic solutions whose scale factors describe accelerated expansion. Moreover, the degree of isotropization grows with the duration of the string phase. We follow the fate of the shear parameter in a decelerated phase where dilaton, magnetic fields and a radiation fluid are simultaneously present. In the absence of any magnetic field a long string phase immediately followed by radiation is able to erase large anisotropies. Conversely, if a short string phase is followed by a long dilaton dominated phase the anisotropies can, in principle, be present also at later times. The presence of magnetic seeds after the end of the string phase can induce further anisotropies which can be studied within the formalism reported in this paper.
I. FORMULATION OF THE PROBLEM
Suppose that at some time $t_1$ the Universe becomes transparent to radiation and suppose that, at the same time, the four-dimensional background geometry (which we assume, for simplicity, to be spatially flat) has a very tiny amount of anisotropy in the scale factors associated with different spatial directions, namely
$$ds^2 = dt^2 - a^2(t)\,dx^2 - b^2(t)\,[dy^2 + dz^2], \qquad (1.1)$$
where $b(t)$, as will become clear in a moment, has to be only slightly different from $a(t)$. The electromagnetic radiation propagating along the x and y axes will have different temperatures, namely [1,2]
$$T_x(t) = T_1\,\frac{a_1}{a} = T_1\, e^{-\int H(t)\,dt}, \qquad T_y(t) = T_1\,\frac{b_1}{b} = T_1\, e^{-\int F(t)\,dt}, \qquad (1.2)$$
where $H(t)$ and $F(t)$ are the two Hubble factors associated, respectively, with $a(t)$ and $b(t)$. The temperature anisotropy associated with the electromagnetic radiation propagating along two directions with different expansion rates can be roughly estimated, in the limit $H(t) - F(t) \ll 1$, as
$$\frac{\Delta T}{T} \sim \int \left[H(t) - F(t)\right] dt = \frac{1}{2}\int \epsilon(t)\, d\log t, \qquad (1.3)$$
where, in the second equality, we assumed that the deviations from the radiation dominated expansion can be written as $F(t) \sim (1-\epsilon(t))/2t$ with $|\epsilon(t)| \ll 1$. As we will see in the following, $\epsilon$ can be connected with the shear parameter. The dynamical origin of the primordial anisotropy in the expansion [2] can be connected with the existence of a primordial magnetic field (not dynamically generated but postulated as an initial condition) or with some other source of anisotropic pressure; the possible bounds on the temperature anisotropy can therefore be translated into bounds on the time evolution of the anisotropic scale factors.
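The estimate (1.3) can be evaluated directly. The short sketch below (Python with numpy/scipy is our choice here, not anything used in the paper) integrates $H(t) - F(t)$ for a constant, illustrative $\epsilon_0$ and compares the result with the closed form $(\epsilon_0/2)\log(t_2/t_1)$:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Eq. (1.3): Delta T / T ~ int [H(t) - F(t)] dt with
# F(t) = (1 - eps)/(2t) and H(t) = 1/(2t) (radiation-dominated value).
# eps0 and the integration window are illustrative choices, not paper values.
eps0 = 1e-4
H = lambda t: 1.0 / (2.0 * t)
F = lambda t: (1.0 - eps0) / (2.0 * t)

t1, t2 = 1.0, 1e6   # arbitrary window, in units of some fiducial time
dT_over_T, _ = quad(lambda t: H(t) - F(t), t1, t2)

# Analytic check: (eps0/2) * log(t2/t1)
print(dT_over_T, 0.5 * eps0 * np.log(t2 / t1))
```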
In the context of string cosmology [4] there is no reason why the Universe should not be mildly anisotropic, as was recently pointed out [5]. Indeed, the Universe can be anisotropic as a result of the Kasner-like nature of the pre-big-bang solutions. Moreover, we could argue that the Universe has to be only mildly anisotropic. By mildly anisotropic we mean that the scale factors should be both expanding (in the String frame). The low energy beta functions can be solved for a metric (1.1) with the result [5]
$$a(t) = \left(-\frac{t}{t_1}\right)^{\alpha}, \qquad b(t) = \left(-\frac{t}{t_1}\right)^{\beta}, \qquad \phi(t) = (\alpha + 2\beta - 1)\,\log\left(-\frac{t}{t_1}\right), \qquad (1.4)$$
where $\alpha^2 + 2\beta^2 = 1$. If we want $\dot a > 0$ and $\dot b > 0$ (expanding solutions) and $\ddot a > 0$, $\ddot b > 0$ (inflationary solutions), the exponents $\alpha$ and $\beta$ have to lie in the third quadrant along the arc of the ellipse reported in Fig. 1.

FIG. 1. The axionic logarithmic energy spectra (in frequency) are reported for different dilaton-driven models with Kasner-like exponents $(\alpha, \beta)$ in the string frame. The models belonging to the third quadrant and localized on the arc of the "vacuum" ellipse $\alpha^2 + 2\beta^2 = 1$ correspond to anisotropic models with expanding and accelerated scale factors. If $\alpha$ and $\beta$ lie on the ellipse but in the first quadrant, we have solutions of the low energy beta functions which are both contracting (in the String frame). If $\alpha$ and $\beta$ lie on the ellipse but either in the second or in the fourth quadrant, we have solutions of the low energy beta functions where one of the two scale factors expands and the other contracts. Notice that the dashed (thick) line corresponds to the case of fully isotropic solutions (i.e. $\alpha = \beta = -1/\sqrt{3}$), whose intersection with the vacuum ellipse lies in the region of red spectra. Above the dot-dashed (thick) line the dilaton decreases for $t < 0$, whereas below the dot-dashed line the dilaton increases. We would be tempted to speculate that an increasing dilaton is a sufficient condition in order to have a pre-big-bang dynamics. This is not the case. Indeed, an increasing dilaton is also compatible with $\alpha$ and $\beta$ in the second quadrant, where the scale factors are not both expanding. Therefore, an increasing dilaton is not a sufficient condition for anisotropic pre-big-bang dynamics but only a necessary one.
The requirement of having, in the same class of fully anisotropic models, axion spectra decreasing at large distances (i.e. blue frequency spectra) will select a slice exactly in the same region of parameter space [5]. Therefore, general considerations seem to point towards mildly anisotropic pre-big-bang models. Since for phenomenological considerations [6] we would like to have blue (or flat) logarithmic energy spectra it seems reasonable to analyze fully anisotropic string cosmological models. Recently it was pointed out that the collapse of a stochastic collection of dilatonic waves (in the Einstein frame) does also lead to an anisotropic pre-big-bang phase [7].
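The geometry of the allowed exponents can be explored with a few lines of code. This is a minimal sketch (the trigonometric parametrization of the ellipse is an assumption of convenience, not taken from the paper):

```python
import numpy as np

# Scan the "vacuum" ellipse alpha^2 + 2*beta^2 = 1 of Eq. (1.4) and flag
# the branch with both scale factors expanding and accelerated for t < 0,
# i.e. alpha < 0 and beta < 0 (third quadrant of Fig. 1).
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
alpha = np.cos(theta)
beta = np.sin(theta) / np.sqrt(2.0)   # enforces alpha^2 + 2 beta^2 = 1

third_quadrant = (alpha < 0) & (beta < 0)
print(f"fraction of the ellipse in the pre-big-bang branch: "
      f"{third_quadrant.mean():.2f}")

# The fully isotropic point alpha = beta = -1/sqrt(3) lies on the ellipse:
a = b = -1.0 / np.sqrt(3.0)
print("isotropic point on ellipse:", np.isclose(a**2 + 2*b**2, 1.0))
```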
In the framework of an anisotropic pre-big-bang phase it is plausible, from a theoretical point of view, to investigate the role played by a homogeneous magnetic field which could represent a further source of anisotropy. Thus, the first technical point we want to investigate is the possible generalization of the pre-big-bang solutions to the case of a homogeneous magnetic field. This generalization is not so straightforward since, in the low energy limit, the dilaton field is directly coupled to the kinetic term of the Maxwell field
$$S = -\frac{1}{2\lambda_s^2}\int d^4x \sqrt{-g}\; e^{-\phi}\left[R + g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi - \frac{1}{12}H_{\mu\nu\alpha}H^{\mu\nu\alpha} + \frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}\right] + \ldots \qquad (1.5)$$
where $F_{\alpha\beta} = \nabla_{[\alpha}A_{\beta]}$ is the Maxwell field strength and $\nabla_\mu$ is the covariant derivative with respect to the String frame metric $g_{\mu\nu}$. Notice that $H_{\mu\nu\alpha}$ is the antisymmetric field strength. In Eq. (1.5) the ellipses stand for a possible (non-perturbative) dilaton potential and for the string tension corrections parameterized by $\alpha' = \lambda_s^2$ (notice that $\lambda_s$ is the string scale). In Eq. (1.5) $F_{\mu\nu}$ can be thought of as the Maxwell field associated with a $U(1)$ subgroup of $E_8 \times E_8$. One might think that by going to the Einstein frame the dilaton can be decoupled from the kinetic term of the Maxwell fields. This is not the case, as we discuss in greater detail in Appendix A.
In addressing the possible occurrence of an anisotropic phase in the life of the Universe there is an immediate concern: we want to make sure that the amount of anisotropy encoded in the initial conditions will eventually be washed out by the subsequent evolution. It can be shown, in this respect, that an interesting role can be played by the string tension corrections [5]. In fact, by adding the first $\alpha'$ correction to the tree-level action reported in Eq. (1.5), two interesting things happen. On one hand the curvature invariants get regularized and, on the other hand, an anisotropic solution with expanding scale factors gets isotropized. This statement can be made more precise by looking simultaneously at Fig. 1 and Fig. 2. In Fig. 2 we plot the shear parameter $r(t) = 3[H(t) - F(t)]/[H(t) + 2F(t)]$ for the particular vacuum solution leading to flat axionic spectra in four anisotropic dimensions, namely the case $\alpha = -7/9$, $\beta = -4/9$. This case corresponds in Fig. 1 to the intersection between the full line and the vacuum ellipse in the third quadrant. We can clearly see that for $t \to -\infty$ the solution is anisotropic, since $r(t)$ is of order one. The crucial question for the purpose of the present paper is by how much the quadratic corrections are able to reduce the degree of anisotropy.
For our future convenience we define the degree of isotropization as the logarithm (in base ten) of the absolute value of the shear parameter:
$$I(t) = \log_{10}|r(t)|. \qquad (1.6)$$
The reason for the absolute value in the last equation is that $r(t)$ can change sign depending upon the relative magnitude of $H$ and $F$. In Fig. 2 we plot the degree of isotropization (left) and the shear parameter itself (right). We see that the degree of isotropization is a function of the duration of the string phase. Thus, if we have a very long string phase we can have $I(t) \ll -7$. On the other hand, if the duration of the string phase is not too long, then one can naturally have larger values of $r(t)$. Suppose now that the string phase stops at some time $t$ and is replaced by a radiation dominated phase. Then, by the continuity of the scale factors at the transition time, the anisotropy in the expansion is translated into a temperature anisotropy according to Eq. (1.1).

FIG. 2. We plot the evolution of the degree of anisotropy (left) and of the shear parameter itself (right) in the case of a mildly anisotropic solution with $\alpha = -7/9$ and $\beta = -4/9$, corresponding to the case of flat axionic spectra [5] (i.e. the intersection of the thick full line with the vacuum ellipse in the third quadrant of Fig. 1). As we can see, for $t \to -\infty$, $r(t) \to [-7/9 + 4/9]/[-7/9 - 4/9] = 3/11$. For $t \to +\infty$ the shear parameter and the degree of anisotropy get reduced as the result of the addition of higher order string tension corrections (see also Section IV). We notice that the degree of isotropization crucially depends upon the duration of the string phase (left). In a short string phase $I(t)$ can be of the order of $-3$, $-4$. If the string phase is very long, the degree of anisotropy can be much smaller than $-7$. Notice that, by only looking at the right picture, we might (wrongly) guess that the shear parameter tends (for $t \to +\infty$) to a constant (small) value. The crucial point we want to stress is that this is not the case, as we can understand by looking at $I(t)$, which is always a decreasing function of the cosmic time.
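For later reference, the two diagnostics $r(t)$ and $I(t)$ are straightforward to code. This is a small helper sketch, not anything provided by the authors; the numerical example at the end is illustrative:

```python
import numpy as np

def shear_parameter(H, F):
    """Shear parameter r = 3(H - F)/(H + 2F) for axisymmetric metrics."""
    return 3.0 * (H - F) / (H + 2.0 * F)

def degree_of_isotropization(H, F):
    """I = log10 |r|, Eq. (1.6); more negative means more isotropic."""
    return np.log10(np.abs(shear_parameter(H, F)))

# Example: two nearly equal expansion rates give a small |r| and negative I.
print(shear_parameter(0.50, 0.49), degree_of_isotropization(0.50, 0.49))
```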
Therefore, depending upon the duration of the string phase, it could happen that either $r(t) \gtrsim 10^{-5}$ or $r(t) \lesssim 10^{-5}$ right after the end of the string phase (under the assumption that radiation sets in instantaneously). Many questions arise at this point. Will the radiation evolution completely erase the primordial anisotropy? If this is not the case, should we exclude, on phenomenological grounds, too short string phases, since they might potentially conflict with the observed value of the large scale anisotropy? What is the role of a primordial magnetic field in this picture? We want to recall, in fact, that even if we did not include a magnetic field in the pre-big-bang phase, such a field will anyway be dynamically generated by parametric amplification of the electromagnetic (vacuum) fluctuations [8], and it will evolve in the radiation dominated (post-big-bang) phase, affecting the evolution of $r(t)$ [1]. Certainly one can argue that the magnetic field generated from the electromagnetic vacuum fluctuations is stochastic in nature, as are all the other fields (gravitons, scalar fields, ...) produced as the result of parametric amplification of quantum mechanical initial conditions [9,10]. Therefore, we could conjecture that the obtained magnetic field does not have any preferred direction. We want to point out that this conclusion has been reached by looking at the magnetic field generation in a completely homogeneous and isotropic dilaton-driven phase. If the background is not completely isotropic during the dilaton driven evolution, then we can argue, in analogy with graviton production in anisotropic backgrounds of Bianchi type I, that the produced background has different properties in different spatial directions [11].
Finally, the considerations and the various questions we just stated hold in the case of a transition from the string phase to the radiation dominated phase. However, it could happen that, before reaching the radiation phase, the background evolves towards a state of decreasing dilaton; we should then address the same issues in the case of a background with decreasing dilaton coupling. These are the problems we want to investigate in the second part of the present paper.

Our plan is the following. In Section II we will discuss the magnetic solutions of the low energy beta functions, whereas in Section III we will discuss their physical interpretation. In Section IV we will investigate the role of the $\alpha'$ corrections and we will try to follow the evolution of the degree of anisotropy all along the string phase. In Section V we will study the fate of the anisotropy in a decelerated phase, and we will address the issue of possible bounds on the duration of the string phase from too large temperature anisotropies. Section VI contains our concluding remarks. Some useful technical results concerning the description of anisotropies and of magnetic fields in the String and in the Einstein frames are collected in Appendix A.
II. MAGNETIC SOLUTIONS
Consider a spatially flat background configuration with vanishing antisymmetric field strength ($H_{\mu\nu\alpha} = 0$) and vanishing dilaton potential. The dilaton depends only on time, and the metric will be taken fully anisotropic, since we want to study possible solutions with a homogeneous magnetic field, which is expected to break the isotropy of the background:
$$ds^2 = g_{\mu\nu}dx^\mu dx^\nu = dt^2 - a^2(t)\,dx^2 - b^2(t)\,dy^2 - c^2(t)\,dz^2. \qquad (2.1)$$
This choice of the metric corresponds to a synchronous coordinate system in which the time parameter coincides with the usual cosmic time (i.e. $g_{00} = 1$ and $g_{0i} = 0$). By varying the effective action with respect to $\phi$, $g_{\mu\nu}$ and the vector potential $A_\mu$ we get, respectively,
$$R - g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi + 2\,g^{\alpha\beta}\nabla_\alpha\nabla_\beta\phi = -\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}, \qquad (2.2)$$
$$R_\mu^{\;\nu} - \frac{1}{2}\delta_\mu^{\;\nu} R = \frac{1}{2}\left(\frac{1}{4}\delta_\mu^{\;\nu} F_{\alpha\beta}F^{\alpha\beta} - F_{\mu\beta}F^{\nu\beta}\right) - \frac{1}{2}\delta_\mu^{\;\nu}\, g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi - \nabla_\mu\nabla^\nu\phi + \delta_\mu^{\;\nu}\,\Box\phi, \qquad (2.3)$$
$$\nabla_\mu\left(e^{-\phi}F^{\mu\nu}\right) = 0, \qquad (2.4)$$
where $\nabla_\mu$ denotes covariant differentiation with respect to the metric of Eq. (2.1). Combining Eqs. (2.2) and (2.3), the equations of motion can be written in the compact form
$$R_\mu^{\;\nu} + \nabla_\mu\nabla^\nu\phi + \frac{1}{2}F_{\mu\alpha}F^{\nu\alpha} = 0. \qquad (2.5)$$
Consider now a homogeneous magnetic field directed along the x axis. The generalized Maxwell equations (2.4) and the associated Bianchi identities can then be trivially solved by the field strength $F_{yz} = -F_{zy}$. The resulting system of equations (2.2), (2.5) can then be written, in the metric of Eq. (2.1), as
$$2\ddot\phi + 2(H+F+G)\,\dot\phi - \dot\phi^2 - 2\left(HF + HG + FG + H^2 + F^2 + G^2 + \dot H + \dot F + \dot G\right) + \frac{B^2}{2\,b^2 c^2} = 0, \qquad (2.6)$$
$$\ddot\phi = H^2 + F^2 + G^2 + \dot H + \dot F + \dot G, \qquad (2.7)$$
$$H\dot\phi = HF + HG + H^2 + \dot H, \qquad (2.8)$$
$$F\dot\phi = HF + FG + F^2 + \dot F - \frac{B^2}{2\,b^2 c^2}, \qquad (2.9)$$
$$G\dot\phi = HG + FG + G^2 + \dot G - \frac{B^2}{2\,b^2 c^2}, \qquad (2.10)$$
where Eq. (2.6) is the dilaton equation and Eqs. (2.7)-(2.10) correspond, respectively, to the (00) and (ii) components of Eq. (2.5). Notice that $B$ is the magnetic field intensity. The inclusion of a time-dependent $B$ is in principle possible, but in order to be consistent with the Bianchi identities we would then need to include also an electric field (not necessarily homogeneous). Notice that in the limit $B \to 0$ we recover the well known form of the low energy beta functions in a fully anisotropic metric.
In order to give a general solution of the previous system it is convenient to define a generalized "conformal" time, namely
$$e^{-\overline\phi}\, d\eta = dt, \qquad \overline\phi = \phi - \log\sqrt{-g}, \qquad (2.11)$$
where $\overline\phi$ is the "shifted" dilaton, which is invariant for duality related vacuum solutions. In terms of $\eta$, Eqs. (2.2)-(2.10) become, respectively,
$$2\phi'' + \phi'^2 - 2\Sigma' - 2\phi'\Sigma + \Lambda = -\frac{B^2}{2}\,a^2 e^{-2\phi}, \qquad (2.12)$$
$$\phi'' + \phi'^2 - 2\Sigma\phi' + \Lambda - \Sigma' = 0, \qquad (2.13)$$
$$H' = 0, \qquad F' = \frac{B^2}{2}\,a^2 e^{-2\phi}, \qquad G' = \frac{B^2}{2}\,a^2 e^{-2\phi}, \qquad (2.14)$$
where
$$\Sigma = H + F + G, \qquad \Lambda = 2(HF + HG + FG), \qquad (2.15)$$
with the obvious notation $(\log a)' = H$, $(\log b)' = F$, $(\log c)' = G$ (the prime denotes differentiation with respect to $\eta$). By subtracting Eq. (2.13) from Eq. (2.12) we obtain
$$\phi'' - \Sigma' = -\frac{B^2}{2}\,a^2 e^{-2\phi}. \qquad (2.16)$$
Using the remaining equations in Eq. (2.16) in order to eliminate $\Sigma'$, we get a general form of the solution which reads
$$a(\eta) = a_0\, e^{H_0\eta}, \qquad b(\eta) = b_0\, e^{F_0\eta}\, e^{\phi}, \qquad c(\eta) = c_0\, e^{G_0\eta}\, e^{\phi}, \qquad (2.17)$$
where $\phi$ satisfies
$$\phi'' + \phi'^2 - 2H_0\phi' - \Lambda_0 = 0, \qquad \Lambda_0 = 2(H_0 F_0 + H_0 G_0 + F_0 G_0). \qquad (2.18)$$
By solving this last equation and by inserting the (trial) solution (2.17) into Eqs. (2.12)-(2.14) we get a set of consistency relations among the different integration constants, which determine the general form of the solutions in terms of the (undetermined) initial conditions. Let us notice that by repeatedly combining the equations of motion we obtain a very useful relation, namely $2\phi'' = (Ba)^2\, e^{-2\phi}$. Since the left hand side of this last equation is positive definite, we also have to demand, for physical consistency, that from Eq. (2.18) $-\phi'^2 + 2H_0\phi' + \Lambda_0 > 0$, which also implies
$$H_0 - \sqrt{H_0^2 + \Lambda_0} < \phi' < H_0 + \sqrt{H_0^2 + \Lambda_0}. \qquad (2.19)$$
Notice that this is a requirement to be satisfied in the presence of a constant magnetic field, and it changes in the presence of a constant electric field (indeed, for a constant electric field directed along the x direction, and in the absence of any associated magnetic field, we would have, in the same conformal time parameterization, $\phi'' < 0$). The wanted solution is
$$a(\eta) = a_0\, e^{H_0\eta}, \qquad b(\eta) = b_0\, e^{\phi_0}\, e^{(F_0+H_0+\Delta_0)\eta}\left[e^{-2\Delta_0\eta} + e^{\phi_1}\right], \qquad c(\eta) = c_0\, e^{\phi_0}\, e^{(G_0+H_0+\Delta_0)\eta}\left[e^{-2\Delta_0\eta} + e^{\phi_1}\right], \qquad (2.20)$$
$$\phi(\eta) = \phi_0 + (H_0 + \Delta_0)\,\eta + \log\left[e^{\phi_1} + e^{-2\Delta_0\eta}\right], \qquad (2.21)$$
where, by consistency with all the other equations, we have
$$\Delta_0 \equiv \sqrt{H_0^2 + \Lambda_0} = \frac{a_0}{2\sqrt{2}}\, e^{-(\phi_0+\phi_1)}\, B. \qquad (2.22)$$
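As a consistency check, the closed form (2.21) can be compared against a direct numerical integration of Eq. (2.18), and the bound (2.19) can be monitored along the way. This is a sketch: all parameter values below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cross-check of the closed-form dilaton (2.21) against Eq. (2.18).
H0, F0, G0 = 1.0e-3, 0.8e-3, 0.8e-3        # illustrative initial rates
Lam0 = 2.0 * (H0 * F0 + H0 * G0 + F0 * G0)
D0 = np.sqrt(H0**2 + Lam0)                 # Delta_0 of Eq. (2.22)
phi0, phi1 = -5.0, 0.0                     # illustrative integration constants

def phi_exact(eta):
    return phi0 + (H0 + D0) * eta + np.log(np.exp(phi1) + np.exp(-2*D0*eta))

def rhs(eta, y):
    # y = [phi, phi']; Eq. (2.18): phi'' = -phi'^2 + 2 H0 phi' + Lam0
    return [y[1], -y[1]**2 + 2.0 * H0 * y[1] + Lam0]

eta0, eta1 = -2000.0, 2000.0
dphi0 = (H0 + D0) - 2*D0*np.exp(-2*D0*eta0) / (np.exp(phi1) + np.exp(-2*D0*eta0))
sol = solve_ivp(rhs, (eta0, eta1), [phi_exact(eta0), dphi0],
                rtol=1e-10, atol=1e-12, dense_output=True)

grid = np.linspace(eta0, eta1, 200)
print("max |phi_num - phi_exact| =",
      np.max(np.abs(sol.sol(grid)[0] - phi_exact(grid))))
# phi' must stay inside the window (2.19): (H0 - D0, H0 + D0)
print(H0 - D0 < sol.y[1].min(), sol.y[1].max() < H0 + D0)
```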
A particular solution of the system is singled out by fixing the value of the magnetic field in string units. Before discussing the physical properties of the solution (2.20)-(2.21), we want to see in which way the vacuum solutions appear in the time parameterization defined by Eq. (2.11). The vacuum solutions are quite straightforward in terms of $\eta$: they correspond to constant Hubble factors and a linear dilaton, namely
$$H = H_0, \qquad F = F_0, \qquad G = G_0, \qquad \phi(\eta) = \phi_0 + \phi_2\,\eta, \qquad (2.23)$$
with the condition,
$$\phi_2 = H_0 + F_0 + G_0 \pm \sqrt{H_0^2 + F_0^2 + G_0^2}. \qquad (2.24)$$
It is straightforward to see that these solutions are indeed Kasner-like by working out the cosmic time picture from Eq. (2.11). Indeed, with little effort we can see that
$$a(t) = a_0\left(\frac{t}{t_0}\right)^{\alpha_1}, \qquad b(t) = b_0\left(\frac{t}{t_0}\right)^{\alpha_2}, \qquad c(t) = c_0\left(\frac{t}{t_0}\right)^{\alpha_3}, \qquad (2.25)$$
where
$$\alpha_1 = \frac{H_0}{H_0+F_0+G_0-\phi_2}, \qquad \alpha_2 = \frac{F_0}{H_0+F_0+G_0-\phi_2}, \qquad \alpha_3 = \frac{G_0}{H_0+F_0+G_0-\phi_2}. \qquad (2.26)$$
Using Eq. (2.24) we find that the three (cosmic time) exponents satisfy the Kasner-like condition $\alpha_1^2 + \alpha_2^2 + \alpha_3^2 = 1$, and the $\pm$ ambiguity of Eq. (2.24) simply refers to the two (duality related) branches; a numerical check is sketched at the end of this Section. Before concluding the present Section we want to mention that the equations of motion can also be integrated in the case where the coupling of the dilaton to the Maxwell field is slightly different [12] from the one examined in this Section, namely in the case where the action is written as
$$S = -\frac{1}{2\lambda_s^2}\int d^4x \sqrt{-g}\; e^{-\phi}\left[R + g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi + \frac{1}{4}\,e^{-q\phi}F_{\alpha\beta}F^{\alpha\beta}\right] \qquad (2.27)$$
with $q \neq 0$.
In this case the equations of motion in the generalized conformal time η can be written as
$$2\phi'' + \phi'^2 - 2\Sigma\phi' + \Lambda - 2\Sigma' = -\frac{q+1}{2}\,B^2 a^2\, e^{-(q+2)\phi}, \qquad (2.28)$$
$$\phi'' + \phi'^2 - 2\Sigma\phi' - \Sigma' + \Lambda = -\frac{q}{4}\,B^2 a^2\, e^{-(q+2)\phi}, \qquad (2.29)$$
$$H' = \frac{q}{4}\,B^2 a^2\, e^{-(q+2)\phi}, \qquad F' = \frac{q+2}{4}\,B^2 a^2\, e^{-(q+2)\phi}, \qquad G' = \frac{q+2}{4}\,B^2 a^2\, e^{-(q+2)\phi}. \qquad (2.30)$$
Notice that in this case the system changes qualitatively but it can still be integrated. In fact, by following the procedure outlined in the previous paragraphs we obtain a decoupled equation
for $\phi'$:
$$\frac{\phi''}{q+1} - \frac{q^2+2q+2}{2(q+1)^2}\,\phi'^2 + \frac{2H_0}{q+1}\,\phi' + \Lambda_0 = 0. \qquad (2.31)$$
This equation can be easily integrated, providing the solutions of the whole system in the case $q \neq 0$.
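The Kasner-like condition quoted after Eq. (2.26) can be checked numerically for random initial data. The sketch below is a simple verification; the random ranges are chosen arbitrarily:

```python
import numpy as np

# Check alpha1^2 + alpha2^2 + alpha3^2 = 1 for the vacuum exponents of
# Eqs. (2.24)-(2.26), with random H0, F0, G0 and both duality branches.
rng = np.random.default_rng(0)
for _ in range(5):
    H0, F0, G0 = rng.uniform(0.1, 1.0, size=3)
    for sign in (+1.0, -1.0):
        phi2 = H0 + F0 + G0 + sign * np.sqrt(H0**2 + F0**2 + G0**2)
        denom = H0 + F0 + G0 - phi2          # = -sign * sqrt(H0^2+F0^2+G0^2)
        a1, a2, a3 = H0 / denom, F0 / denom, G0 / denom
        assert np.isclose(a1**2 + a2**2 + a3**2, 1.0)
print("Kasner condition verified on random initial data")
```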
III. ANALYSIS OF THE MAGNETIC SOLUTIONS
In order to analyze the previous solution let us assume, for the sake of simplicity, that $b(\eta) \equiv c(\eta)$. Suppose now that our initial state is weakly coupled (i.e. $\phi_0$ sufficiently negative) and that $(H_0, F_0)$ are sufficiently small in string units. Even though $H_0$ and $F_0$ are of the same order, they have different initial values. Let us therefore analyze the role of a magnetic field which is small in string units. The discussion becomes simpler if we start by looking at the $\eta$ behavior of the solutions. From Eqs. (2.20)-(2.21) we can argue that the evolution of $\phi'(\eta)$ is quite straightforward: for $\eta \to -\infty$, $\phi' \to H_0 - \Delta_0$, whereas for $\eta \to +\infty$ we have $\phi'(\eta) \to H_0 + \Delta_0$. Moreover, as we discussed in the previous Section, $\phi'' > 0$, which implies that $\phi'$ is always an increasing function of $\eta$. As a consequence, $F(\eta)$ will also be an increasing function of $\eta$, whereas $H$ is frozen at its constant value $H_0$. According to Eq. (2.22), the magnetic field intensity in string units determines, together with $\phi_0$, the value of $\Delta_0$. Therefore, we conclude that the magnetic field intensity is essentially responsible for the amount of growth of the dilaton energy, since it turns out that $|\phi'(+\infty) - \phi'(-\infty)| = 2\Delta_0$. Thus, by increasing the magnetic field we observe an increase of the dilaton energy density. Notice, however, that this growth cannot be very large. Indeed, from Eq. (2.22) we notice that the relation between $\Delta_0$ and $B$ is suppressed by the second power of the (initial) coupling constant $g(\phi) = e^{\phi/2}$. So, with our initial conditions, $\Delta_0$ is small from the beginning, since $\phi_0$ is very negative. Notice that having a small coupling is essentially dictated by the form of our original action, which cannot be trusted if $g(\phi)$ gets of order one. This qualitative behavior of the solutions is illustrated in Figs. 3-4 for a few specific cases.

FIG. 3. In the left plot we illustrate the behavior of $H$ (dot-dashed line), $F$ (thin line) and $\phi'$ (full thick line). At the right we illustrate the behavior of the shear parameter. We notice that, in spite of appearances, the shear parameter does not decrease asymptotically but tends to a constant value of order one, in sharp contrast with what happens when quadratic corrections to the tree-level action are included (see next Section). We see that $H$, $F$ and $\phi'$ are almost constant if the magnetic field is small in String units.
As reference values for $H_0$ and $\phi_0$ we take $0.001$ and $-5$, since we want to deal with small curvatures in String units and small couplings. In Fig. 3 (left) we report the solutions in the case of a quite large magnetic field, $B \sim 1$. The dot-dashed line denotes the behavior of $H$ (constant), the full (thin) line denotes the evolution of $F(\eta)$, whereas the full (thick) line denotes the evolution of $\phi'$. Notice that in this case the growth of $\phi'$ is so minute that the dot-dashed line and the full thick line are almost indistinguishable. This is due to the fact that $\phi'$ continuously interpolates between $H_0 - \Delta_0$ and $H_0 + \Delta_0$, with $\Delta_0 \sim 0.001\, e^{-5}$. At the right we report the evolution of the shear parameter for the same solution.
It is interesting to compare this plot with the ones illustrated in Fig. 4, where (left plot) we report the solution (2.20)-(2.21) with the very same choice of initial conditions as in Fig. 3 but with a different magnetic field, namely $B = 0.001$ in String units. We see that the growth of $\phi'$ is reduced (by two orders of magnitude).
What about the evolution of the shear parameter? Let us look at the right plots reported in Figs. 3-4, where the evolution of
$$r(\eta) = \frac{3\,[H(\eta) - F(\eta)]}{H(\eta) + 2F(\eta)} \qquad (3.1)$$
is given for the same set of parameters we just discussed. In Fig. 3 (where $B \sim 0.1$) we would say that the shear parameter decreases. This is certainly true, but if we look at the numbers we see that $r(\eta)$ starts of order one and essentially remains of order one. In Fig. 4 (i.e., $B = 0.01$) $r(\eta)$ seems to decrease but, again, a more correct statement would be that $r(\eta)$ remains of order one (i.e. $r(+\infty) \sim -0.88$), up to a transition period. Thus, in spite of its smallness, the magnetic field always has the property of preserving the anisotropy. The negative sign of $r(\eta)$ is only due to the fact that $F > H$ for any $\eta$. Thus, as far as the tree-level evolution is concerned, we can say that no isotropization is observed as a consequence of the inclusion of the magnetic field. This situation should be contrasted with what happens when the string tension corrections are included. The second effect we see is that the smaller the value of the magnetic field, the milder the transition to the vacuum solutions. We said in the previous Section that the vacuum solutions are essentially given by $H = \mathrm{const}$, $F = \mathrm{const}$, $\phi' = \mathrm{const}$. For $\eta \to +\infty$ the magnetic solutions should then tend to the vacuum solutions. Thus, the influence of the magnetic field is mainly felt in the transition period from the initial state to the (asymptotic vacuum) solution.
In order to show that the large $\eta$ behavior of our solutions is Kasner-like, let us look explicitly at the cosmic time description of the system. The relation between $\eta$ and $t$ can be obtained by inserting Eqs. (2.20)-(2.21) into Eq. (2.11), whose integrated version becomes
$$\frac{t}{a_0\, b_0^2\, e^{\phi_0}} = \frac{e^{\phi_1}\, e^{[2(F_0+H_0)+\Delta_0]\eta}}{2(F_0+H_0)+\Delta_0} + \frac{e^{[2(F_0+H_0)-\Delta_0]\eta}}{2(F_0+H_0)-\Delta_0}. \qquad (3.2)$$
Notice that the only two independent quantities determining the relation between $\eta$ and $t$ are $H_0$ and $\Delta_0$ (which is related to $B$ according to Eq. (2.22)), since $F_0$ can be expressed in terms of $\Delta_0$ and $H_0$.
If we now take the large $\eta$ behavior of Eq. (3.2) and substitute back into Eqs. (2.20)-(2.21), we find that the two scale factors can be expressed, in cosmic time, as
$$a(t) \sim t^{\alpha}, \qquad b(t) \sim t^{\beta}, \qquad (3.3)$$
where
$$\alpha = \frac{H_0}{2(H_0+F_0)+\Delta_0}, \qquad \beta = \frac{F_0+H_0+\Delta_0}{2(H_0+F_0)+\Delta_0}. \qquad (3.4)$$
We notice that $\alpha^2 + 2\beta^2 = 1$. This can easily be seen by recalling the definition $\Delta_0^2 = H_0^2 + 2(2H_0F_0 + F_0^2)$, which can be deduced from Eqs. (2.22) and (2.18).
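This statement can again be verified in a couple of lines; the random $H_0$, $F_0$ below are an arbitrary choice:

```python
import numpy as np

# Check that the late-time exponents of Eq. (3.4) satisfy alpha^2+2*beta^2 = 1,
# using Delta_0^2 = H0^2 + 2(2 H0 F0 + F0^2) (Eqs. (2.18), (2.22) with G0 = F0).
rng = np.random.default_rng(1)
H0, F0 = rng.uniform(0.1, 1.0, size=2)
D0 = np.sqrt(H0**2 + 2.0 * (2.0 * H0 * F0 + F0**2))
denom = 2.0 * (H0 + F0) + D0
alpha, beta = H0 / denom, (F0 + H0 + D0) / denom
print(alpha**2 + 2.0 * beta**2)   # -> 1.0
```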
So, close to the singularity, i.e. for $\eta \to \infty$, the solutions are Kasner-like. Notice that the presence of the vector field (with the specific four-dimensional coupling required by the low energy effective action) does not lead to any type of oscillatory stage. This evolution is in sharp contrast with the one obtained in the case of a vector field (included following the Kaluza-Klein ansatz) in the Einstein frame [13]. The reason for this difference stems from the difference in the couplings. In Fig. 5 the evolution of $\dot\phi$, $H(t)$ and $F(t)$ is illustrated in the case of $B \sim 0.1$. Notice that $t \to 0$ corresponds to the large $\eta$ behavior. The singularity is located, in this example, at $t \sim 2$. In Fig. 6 we report the same solution illustrated in Fig. 5 but with a smaller magnetic field, $B = 0.01$. As one can expect, the singularity and the vacuum limit are reached earlier than in the previous case.
IV. MAGNETIC SOLUTIONS AND THE HIGH CURVATURE REGIME
In the previous two Sections we examined fully anisotropic (magnetic) cosmologies using the tree level action. In this Section we are going to extend and complement our results with the addition to the tree-level action of the first string tension correction. In the presence of the first $\alpha'$ correction the action becomes
$$S = -\frac{1}{2\lambda_s^2}\int d^4x \sqrt{-g}\; e^{-\phi}\left[R + g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi + \frac{1}{4}F_{\alpha\beta}F^{\alpha\beta} - \frac{\omega\lambda_s^2}{4}\left(R^2_{GB} - (g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi)^2\right)\right], \qquad (4.1)$$
where $R^2_{GB}$ is the Gauss-Bonnet invariant expressed in terms of the Riemann, Ricci and scalar curvature invariants
$$R^2_{GB} = R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta} - 4R_{\mu\nu}R^{\mu\nu} + R^2, \qquad (4.2)$$
and $\omega$ is a numerical constant of order 1 which can be precisely computed for each specific theory (for instance $\omega = -1/8$ for heterotic strings). In principle the magnetic field should also appear in the corrections to the tree-level action. However, it turns out that, in the terms which would be significant for the type of configurations examined in this paper, the gauge fields appear beyond the first $\alpha'$ correction. For example, terms like $F_{\alpha\beta}F_{\mu\nu}R^{\alpha\beta\mu\nu}$ appear at higher order in $\alpha'$. Moreover, possible terms like $F^{\alpha\beta}R_{\alpha\beta}$ vanish in the case of a constant and homogeneous magnetic field directed, say, along the x direction. Finally, other possibly relevant terms (involving contractions of the vector potentials with the Ricci or Riemann tensors) are forbidden by gauge invariance. Of course, the spirit of our analysis of the high curvature regime is only semi-quantitative. After all, the first curvature correction can only be thought of as illustrative since, ultimately, all the $\alpha'$ corrections will turn out to be important. The only way of turning off the possible effect of the curvature corrections is to have explicit solutions whose curvature invariants are already regularized at tree level, as in the examples presented in [14], where it was shown that there exist weakly inhomogeneous solutions of the tree level action which are regular and geodesically complete without the addition of any curvature or loop corrections. If this is not the case, the curvature corrections certainly have to be included. The inclusion of higher order curvature corrections can be studied either in the Einstein frame [15] or in the String frame [16]. Another interesting approach has been outlined in [17].
For a reasonably swift derivation of the equations of motion it is useful to write the action in terms of the relevant degrees of freedom. In order to correctly obtain the constraint equation (i.e. the (00) component of the beta functions) it is appropriate to keep the lapse function $N(t)$ in its general form
$$g_{\mu\nu} = \mathrm{diag}\left[N(t)^2,\; -e^{2\alpha(t)},\; -e^{2\beta(t)},\; -e^{2\beta(t)}\right], \qquad (4.3)$$
where we parameterized the two scale factors with an exponential notation. Only after the equations of motion have been derived will we set $N(t) = 1$, corresponding to the synchronous time gauge. In the metric (4.3) the action (4.1) becomes, after integration by parts,
$$S = \frac{1}{2\lambda_s^2}\int dt\; e^{\alpha+2\beta-\phi}\left\{\frac{1}{N}\left[-\dot\phi^2 - 2\dot\beta^2 - 4\dot\alpha\dot\beta + 2\dot\alpha\dot\phi + 4\dot\beta\dot\phi\right] - \frac{N}{2}\,B^2 e^{-4\beta(t)} + \frac{\omega\lambda_s^2}{4N^3}\left[8\dot\phi\,\dot\alpha\,\dot\beta^2 - \dot\phi^4\right]\right\}. \qquad (4.4)$$
By varying Eq. (4.4) with respect to the lapse function $N(t)$ and imposing, afterwards, the cosmic time gauge, we get the constraint
$$\dot\phi^2 + 2\dot\beta^2 + 4\dot\alpha\dot\beta - 2\dot\alpha\dot\phi - 4\dot\beta\dot\phi - \frac{1}{2}B^2 e^{-4\beta} + \frac{3}{4}\dot\phi^4 - 6\dot\alpha\dot\phi\,\dot\beta^2 = 0, \qquad (4.5)$$
where we took string units $\lambda_s = 1$ and $\omega = 1$. By varying the action with respect to $\alpha$, $\beta$ and $\phi$ we get, for $N(t) = 1$, the diagonal components of the beta functions and the dilaton equation,
$$4\ddot\beta - 2\ddot\phi + (4\dot\beta - 2\dot\phi)(\dot\alpha + 2\dot\beta - \dot\phi) - 2\left[(\dot\alpha + 2\dot\beta - \dot\phi)\,\dot\phi\,\dot\beta^2 + \ddot\phi\,\dot\beta^2 + 2\dot\beta\,\ddot\beta\,\dot\phi\right] + L(t) = 0, \qquad (4.6)$$
$$4(\ddot\beta + \ddot\alpha - \ddot\phi) + 2B^2 e^{-4\beta} + 4(\dot\alpha + \dot\beta - \dot\phi)(\dot\alpha + 2\dot\beta - \dot\phi) - 4\left[\ddot\beta\,\dot\alpha\,\dot\phi + \dot\beta\,\ddot\alpha\,\dot\phi + \dot\beta\,\dot\alpha\,\ddot\phi + \dot\beta\,\dot\alpha\,\dot\phi\,(\dot\alpha + 2\dot\beta - \dot\phi)\right] + 2L(t) = 0, \qquad (4.7)$$
$$2(\ddot\phi - \ddot\alpha - 2\ddot\beta) + 2(\dot\phi - \dot\alpha - 2\dot\beta)(\dot\alpha + 2\dot\beta - \dot\phi) - 2\dot\beta(\ddot\alpha\dot\beta + 2\dot\alpha\ddot\beta) + (2\dot\alpha\dot\beta^2 - \dot\phi^3)(\dot\alpha + 2\dot\beta - \dot\phi) - 3\dot\phi^2\ddot\phi - L(t) = 0, \qquad (4.8)$$
where we defined
$$L(t) = -\dot\phi^2 - 2\dot\beta^2 - 4\dot\alpha\dot\beta + 2\dot\alpha\dot\phi + 4\dot\beta\dot\phi - \frac{1}{2}B^2 e^{-4\beta} + \frac{1}{4}\left(8\dot\phi\,\dot\alpha\,\dot\beta^2 - \dot\phi^4\right). \qquad (4.9)$$
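When integrating Eqs. (4.6)-(4.8) numerically, the constraint (4.5) provides a useful accuracy monitor. The sketch below evaluates the left-hand side of our reconstructed Eq. (4.5) for a trial state; the trial values are illustrative and do not solve the full system:

```python
import numpy as np

# A minimal constraint monitor for the alpha'-corrected system: evaluate the
# left-hand side of the reconstructed constraint (4.5) along a trial state.
def constraint_45(da, db, dphi, B, beta):
    """LHS of Eq. (4.5); zero on physical trajectories (lambda_s = omega = 1)."""
    return (dphi**2 + 2*db**2 + 4*da*db - 2*da*dphi - 4*db*dphi
            - 0.5 * B**2 * np.exp(-4.0 * beta)
            + 0.75 * dphi**4 - 6.0 * da * dphi * db**2)

# Residual of an arbitrary (non-physical) trial state:
print(constraint_45(da=0.01, db=0.01, dphi=0.02, B=0.0, beta=0.0))
```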
We can numerically integrate this system. The technique is very similar to the one used in the case of fully anisotropic string cosmologies with zero magnetic field [5,16]. Our results are illustrated in Figs. 8 and 9. There are two physically different cases. We integrate this system by imposing, as initial conditions, small curvature and small dilaton coupling. We also impose mildly anisotropic initial conditions (corresponding to $\dot a > 0$, $\dot b > 0$ and $\ddot a > 0$, $\ddot b > 0$). One of the results stressed in the previous Section is that the tree-level solutions with magnetic field are singular. Moreover, close to the singularity no oscillatory behavior is present. Therefore, as time goes by and as the solutions approach the singularity, the vacuum solutions are recovered. One can then argue that if the $\alpha'$ corrections become relevant when the vacuum regime has already been recovered, nothing should change with respect to the case where $B = 0$. In some sense this is what happens, but not exactly.

FIG. 8. We illustrate the numerical solutions of the equations of motion with string tension corrections and we plot the shear parameter. We select expanding initial conditions (in the String frame). In this particular example (left plot) we choose the magnetic field to be $B = 0$ (dot-dashed line) and $B = 0.01$ (full line). The difference between the two cases is quite minute, and in order to show it we do not plot the full evolution of $r(t)$ (which goes to zero for $t > 0$). At the right we choose $B = 0.01$ and we change the initial condition within the vacuum solutions. The dot-dashed line is for $b(t) \sim t^{-4/9}$ and the full line for $b(t) \sim t^{-0.2}$. Also in this second case the difference is quite minute. We decided to plot only a limited range of time steps in order to stress the numerical difference between different values of the initial shear and of the primordial magnetic field. The full picture looks like the ones reported in Fig. 9.
If the magnetic field is large in string units (i.e. $B > 1$) the system evolves towards a singularity, as expected. In this case it is certainly true that the tree level solutions evolve towards their vacuum limit. However, in this limit the solutions get more and more anisotropic, so the hypothesis $B > 1$ simply contradicts the assumption of mild anisotropy. From our point of view the case $B < 1$ is more interesting. In this case mildly anisotropic initial conditions will be attracted towards an isotropic fixed point. One can understand this by looking at Eqs. (4.6)-(4.8). For $t \to 0$, $b(t)$ is monotonically increasing (if we select, as we do, expanding initial conditions). Now, the magnetic field is always suppressed by $b^{-4}(t) = e^{-4\beta(t)}$ terms in Eqs. (4.6)-(4.8). Therefore, an initially small magnetic field will become more and more sub-leading. The solutions will then be attracted towards the quasi-isotropic fixed point $\dot\alpha \sim \dot\beta \sim 0.606$ and $\dot\phi \sim 1.414$.

FIG. 9. If a magnetic field is present, the shear parameter is larger than in the case of zero magnetic field. It is of crucial importance to notice, for our purposes, that the shear parameter does not tend towards a small finite value but decreases towards 0. In order to stress this aspect we also plot the logarithm (in base ten) of the modulus of the shear parameter. We can see, as previously discussed, that the duration of the string phase is proportional to the drop in $r(t)$.
In this case the curvature invariants are finite everywhere. We say that the fixed point is quasi-isotropic since the amount of anisotropy depends upon the duration of the high curvature phase. For example, as illustrated in Fig. 9, it can well happen that for a very short stringy phase the amount of anisotropy measured by the shear parameter $r(t)$ will be of the order of $10^{-3}$.
V. THE FATE OF THE ANISOTROPIES IN THE POST-BIG-BANG EVOLUTION
In this Section we investigate the fate of the anisotropies possibly present in the dilaton-driven and in the String phase. In particular, we would like to understand in which limit these anisotropies can be either reduced to an acceptable value or completely washed out. For numerical purposes we find it useful to exploit, in the present Section, the Einstein frame picture. The evolution equations in the Einstein frame are obtained in Appendix A. By linearly combining Eqs. (A.8), (A.9) and (A.10) of Appendix A we obtain a more readable form of the system, namely
$$\dot H + H(H+2F) = \frac{\rho}{6} - \frac{w}{2}\,e^{-\phi}, \qquad (5.1)$$
$$\dot F + F(H+2F) = \frac{\rho}{6} + \frac{w}{2}\,e^{-\phi}, \qquad (5.2)$$
$$\ddot\phi + (H+2F)\,\dot\phi = w\,e^{-\phi}, \qquad (5.3)$$
$$(H+2F)^2 - (H^2+2F^2) = \frac{\dot\phi^2}{2} + w\,e^{-\phi} + \rho, \qquad (5.4)$$
$$\dot\rho + \frac{4}{3}(H+2F)\,\rho = 0, \qquad \dot w + 4Fw = 0, \qquad w = \frac{B^2}{2b^4}, \qquad (5.5)$$
where we assumed the presence of a radiation fluid $p = \rho/3$, accounting for the field modes excited, via gravitational instability, during the dilaton driven phase [18] and re-entering in the post-big-bang phase. It is in fact well known that the ultraviolet modes of a parametrically amplified field behave as a radiation fluid [19]. From Eqs. (5.1)-(5.5) we can directly obtain the evolution equations for the quantities we are interested in, namely
$$r(t) = \frac{3(H-F)}{H+2F}, \qquad n(t) = \frac{H+2F}{3}, \qquad q(t) = \frac{w(t)}{\rho(t)}, \qquad (5.6)$$
where $n(t)$ is the mean expansion parameter, $r(t)$ is the shear anisotropy parameter and $q(t)$ is the fraction of magnetic energy in units of the radiation energy. Notice that $q(t)$ does not correspond to the critical fraction of magnetic energy density since, in principle, we have to take into account the energy density of the dilaton field. By subtracting Eq. (5.2) from Eq. (5.1) and by using the constraint (5.4) together with the definitions (5.6), we get the shear evolution. By summing Eqs. (5.1) and (5.2), with similar manipulations, we get the evolution of $n$. The resulting system, equivalent to the one of Eqs. (5.1)-(5.5) but directly expressed in terms of $n$, $r$ and $q$, reads
$$\dot n\, r + \dot r\, n + 3n^2 r = -q\rho\, e^{-\phi}, \qquad (5.7)$$
$$3\dot n + 9n^2 = \frac{\rho}{2}\,(1 + q\,e^{-\phi}), \qquad \dot q - \frac{4}{3}\,n r q = 0, \qquad (5.8)$$
$$6n^2\left(1 - \frac{r^2}{9}\right) = \frac{\dot\phi^2}{2} + \rho\,(1 + q\,e^{-\phi}), \qquad \dot\rho + 4n\rho = 0, \qquad \ddot\phi + 3n\dot\phi = \rho q\, e^{-\phi}. \qquad (5.9)$$
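A direct numerical integration of the system (5.7)-(5.9) can be set up along the following lines. This is a sketch based on our reconstruction of the equations; the initial data, tolerances and time window are illustrative assumptions, with $n(t_0)$ fixed from the constraint in Eq. (5.9):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a numerical integration of the system (5.7)-(5.9).
# State: y = [n, r, q, rho, phi, dphi].
def rhs(t, y):
    n, r, q, rho, phi, dphi = y
    e = np.exp(-phi)
    dn = (0.5 * rho * (1.0 + q * e) - 9.0 * n**2) / 3.0          # Eq. (5.8)
    dr = (-q * rho * e - dn * r - 3.0 * n**2 * r) / n            # Eq. (5.7)
    dq = (4.0 / 3.0) * n * r * q                                 # Eq. (5.8)
    drho = -4.0 * n * rho                                        # Eq. (5.9)
    ddphi = rho * q * e - 3.0 * n * dphi                         # Eq. (5.9)
    return [dn, dr, dq, drho, dphi, ddphi]

# Pick r, q, rho, phi, dphi and fix n from the constraint in Eq. (5.9).
r0, q0, rho0, phi0, dphi0 = 1e-3, 1e-2, 1.0, -0.2, 0.1
n0 = np.sqrt((0.5 * dphi0**2 + rho0 * (1.0 + q0 * np.exp(-phi0)))
             / (6.0 * (1.0 - r0**2 / 9.0)))

sol = solve_ivp(rhs, (1.0, 1e3), [n0, r0, q0, rho0, phi0, dphi0],
                rtol=1e-9, atol=1e-12)
print("final shear parameter r =", sol.y[1, -1])
```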
We now want to study the combined action of the dilaton and of a radiation fluid in a post-big-bang phase. Let us start from the case of medium size anisotropies. Suppose, in other words, that after a string phase of intermediate duration the shear parameter is of the order of $10^{-3}$. As we saw in the previous Section, this value is not excluded. Before analyzing the general case, let us recall the results of the case where the dilaton is absent. In this case the system of Eqs. (5.7)-(5.9) can be simplified as follows:
$$\dot n\, r + \dot r\, n + 3n^2 r = -q\rho, \qquad (5.10)$$
$$3\dot n + 9n^2 = \frac{\rho}{2}\,(1+q), \qquad \dot q - \frac{4}{3}\,n r q = 0, \qquad (5.11)$$
$$6n^2\left(1 - \frac{r^2}{9}\right) = \rho\,(1+q), \qquad \dot\rho + 4n\rho = 0. \qquad (5.12)$$
We are interested in the case of magnetic fields which are under-critical, namely $q(t_0) < 1$, where $t_0$ is the initial integration time. Suppose, moreover, that $r(t_0) \sim 10^{-3}$. Needless to say, this is just an illustrative value for $r$, and it is not meant to be of any particular theoretical relevance. As we stated clearly in the previous Section (see for instance Fig. 8), if the string phase is long, the initial shear parameter can be much smaller. The results of the numerical integration are summarized in Fig. 10, where we report the logarithm (in base ten) of the modulus of the shear parameter.

FIG. 10. We take $r(t_0) \simeq 10^{-3}$ and we integrate the system forward in time for different values of $q(t_0)$. In the left plot we illustrate the evolution of the logarithm (in base ten) of the modulus of the shear parameter. With the dot-dashed (thick) line we represent the case $q = 0$, where the shear parameter is known to decrease sharply as $t^{-1/2}$. If, initially, the critical balance between the magnetic energy density and the radiation energy density increases, then $r$ gets attracted towards an asymptotic value, as described by Eqs. (5.13). At the right (full thick line) we report the evolution of $q$ for the case $q(t_0) = 0.01$. With the thin line we report the qualitative estimate based on Eqs. (5.13), obtained by solving, approximately, Eqs. (5.10)-(5.12) for $r < 1$ and $q < 1$.
At the bottom of the left plot, with the dot-dashed line, the case of zero magnetic field (i.e. $q(t_0) = 0$) is reported. This is the simplest case, since we can approximately solve the system of Eqs. (5.10)-(5.12) in the limit of small $r$. From Eq. (5.10) we find that a consistent solution is
$$\dot r \simeq -\frac{r}{2t} - \frac{3q}{t}, \qquad \dot q \simeq \frac{2}{3t}\, r q. \qquad (5.13)$$
So, if $q = 0$ we can clearly see that $r(t) \sim t^{-1/2}$. This is nothing but the well known result that in a radiation dominated phase the shear parameter decreases as $1/\sqrt{t}$, and it is precisely what we find numerically in the dot-dashed line of Fig. 10 (left plot). If we switch on the magnetic field we also know, from Eqs. (5.13), that the anisotropy will not decrease forever but will reach an asymptotic value which crucially depends upon the balance between the magnetic energy density and the radiation energy density. In fact, from Eqs. (5.13) we see that $\dot r = 0$ for $|r(t)| \to 6|q(t)|$, and this is precisely what we observe in the full lines of Fig. 10, where the integration of Eqs. (5.10)-(5.12) is reported for different (initial) values of $q$. We see that the asymptotic value of attraction for $r$ is roughly six times the initial value of $q$. The evolution of $q(t)$ when $\dot r(t) \to 0$ can simply be obtained by integrating once the second of Eqs. (5.13) for the case $r \sim -6q$. The result is that, taking for instance $q(t_0) = 0.01$, $q(t) \sim 1/\{4\log[t/t_0] + 100\}$. This simple curve is reported in Fig. 10 at the right (full thin line). As we can see, the full thick line (the result of the numerical integration) is practically indistinguishable from it.
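The attractor $|r| \to 6|q|$ and the logarithmic decay of $q$ can be checked by integrating the reduced system (5.13) directly; the initial data below mirror the illustrative choice $q(t_0) = 0.01$ used in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check of the small-(r, q) estimates (5.13): r is attracted to -6 q and
# q then decays as q(t) = 1 / (4 log(t/t0) + 1/q0).
q0, r0, t0 = 0.01, 1e-3, 1.0
sol = solve_ivp(lambda t, y: [-y[0]/(2*t) - 3*y[1]/t, 2*y[0]*y[1]/(3*t)],
                (t0, 1e8), [r0, q0], rtol=1e-10, atol=1e-14)
r_end, q_end = sol.y[:, -1]
q_analytic = 1.0 / (4.0 * np.log(sol.t[-1] / t0) + 1.0 / q0)

# r/q approaches -6 up to slow corrections; q follows the log decay.
print(r_end / q_end, q_end, q_analytic)
```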
The conclusion we draw for our specific case is very simple. If the string phase is not too long and if sizable anisotropies (i.e. $r \sim 10^{-3}$) are still present at the end of the stringy phase, then, in the approximation of a sudden "freeze-out" of the dilaton coupling, the leftover of the primordial anisotropy will be washed out provided the magnetic field is completely absent. If a tiny magnetic seed directed along the same direction as the anisotropy is present, then, in spite of the primordial anisotropy, the shear parameter will be attracted to a constant value essentially fixed by the size of the seed.
Let us now investigate the opposite case. Let us imagine switching off completely the radiation background. Then the only non-trivial variables determining the shear parameter and the mean expansion will be the dilaton coupling and the magnetic energy density. In this case Eqs. (5.7)-(5.9) simplify as follows:
$$\dot n\, r + \dot r\, n + 3n^2 r = -w\,e^{-\phi}, \qquad (5.14)$$
$$3\dot n + 9n^2 = \frac{w}{2}\,e^{-\phi}, \qquad \dot w + 4\left(n - \frac{nr}{3}\right)w = 0, \qquad (5.15)$$
$$6n^2\left(1 - \frac{r^2}{9}\right) = \frac{\dot\phi^2}{2} + w\,e^{-\phi}, \qquad \ddot\phi + 3n\dot\phi = w\,e^{-\phi}. \qquad (5.16)$$
As in the previous case let us assume, in order to fix ideas and to allow a swift comparison, that $r(t_0) \sim 10^{-3}$. The results of our analysis are reported in Fig. 11.

FIG. 11. We report the results of the numerical integration of Eqs. (5.14)-(5.16) for different initial values of the magnetic energy density in Planck units. As we can see (dot-dashed line), the shear parameter remains fixed at its initial value if the magnetic field is completely absent. As soon as the magnetic field increases, the shear parameter gets attracted towards a fixed point whose typical anisotropy is of order one. As in the previous cases we took $r(t_0) \sim 10^{-3}$ and we also took $\phi(t_0) \sim -0.1$.
Suppose first of all that the magnetic field is switched off; then, as we see from the dot-dashed line, the anisotropy is completely conserved. The introduction of a tiny magnetic seed will eventually make the situation even worse, in the sense that the anisotropy will grow to a constant value which depends on the magnetic field intensity. In Fig. 11 we see (full lines) that already with a magnetic energy density of the order of $10^{-3}$ (in Planck units) the increase in the anisotropy cannot be neglected. Therefore, if the transition from the string phase to the radiation dominated phase occurs through an intermediate dilaton dominated phase of decreasing coupling, we have to accept that the anisotropy will be conserved provided the magnetic field is absent. If a magnetic field is present, the anisotropy gets frozen at a constant value which can be non-negligible, depending upon the size of the magnetic seed. In very rough terms, our analysis seems to disfavor mildly anisotropic models with a short string phase and the simultaneous occurrence of a long dilaton dominated phase prior to the onset of the radiation epoch. So the combination of a short string phase, a long dilaton driven phase and a radiation phase shorter than usual might give too large a shear, say, prior to nucleosynthesis. Our conclusion can be even more problematic if a sizable magnetic field is present.
With the knowledge coming from the two previous examples we can easily analyze the general case given in Eqs. (5.7)-(5.9). The dynamics of the system will depend upon the different balance in the initial conditions. Let us suppose, firstly that the dilaton kinetic energy is slightly smaller (by two orders of magnitude) than the radiation energy density. The results of the numerical integration of the shear parameter are reported in Fig. 12. In this two plots we report the results of the integration of Eqs. (5.7)-(5.9) in the general case where magnetic field, dilaton and radiation fluid are present simultaneously. In these particular plot we assume that, initially, the dilaton energy density is smaller (by two orders of magnitude) than the radiation energy density. The left plot corresponds to the case of φ(t0) = −0.2 and the right plot corresponds to the case of φ(t0) = −2. Different initial q are reported in both cases. As in the previous cases we assumed r(t0) ∼ 10 −3 .
We can clearly see that by increasing the magnetic field the anisotropy will increase. The dilaton evolution clearly affects this process as we can argue by comparing Fig. 12 with Fig. 10 where the dialton was absent. It is also interesting to notice that the initial value of the coupling can be of some relevance. In Fig. 12 we reported the result of the integration for two values of the initial coupling, namely φ 0 = −0.1 (at the left) and φ 0 = −2 (at the right). The effect can be explained from Eqs. (5.7)-(5.9) where we can see that by tuning the initial coupling to smaller values we amplify the effect of the magnetic energy density which appears always as qe −φ .
If the radiation energy density is sub-leading with respect to the dilaton kinetic energy we can expect, on the basis of the intuition developed in the previous cases that the shear parameter will stay basically constant if the magnetic field is turned off and it will increase to a constant value. This is more or less what happens. In Fig. 13 we report the results of a numerical integration of the shear parameter in the case where the radiation energy is hundred times smaller than the dilaton kinetic energy. We report the numerical integration of the system given in Eqs. (5.7)-(5.9) for another set of parameters. In this case we assume that the radiation energy density is hundred times smaller than the dilaton energy density and we watch the relaxation of the anisotropy for different values of the initial magnetic fields. As in the previous case the left picture refers to the case where φ(t0) ∼ −0.2 whereas the right plot refers to the case where φ(t0) ∼ −2. These plots have to be compared with the ones reported in Fig. 10 and Fig. 11. We see that, unlike in the case of Fig. 10, the integration with q(t0) = 0 does not lead to an exactly constant anisotropy. For larger values of q(t0) the shear parameter saturates, as expected.
The left picture refers to the case of φ(t 0 ) ∼ −0.2 and the right figure refers to the case of φ(t 0 ) ∼ −2. In both cases we can notice that in the limit of zero magnetic field the anisotropy is not exactly constant as in the case of ρ = 0 reported in Fig. 11 but it has some mild slope and it decreases. At the same time, in contrast with Fig. 10 the asymptotic value of the anisotropy is not exactly the one we could guess on the basis of the argument illustrated in Fig. 10.
We want to stress, finally an important point. Suppose that the dilaton kinetic energy and the radiation energy are both present but the magnetic field is zero (q(t 0 ) = 0). Then we said that the anisotropy has a mild slope. The question is how mild. This point is addressed in Fig. 14 where the results of a numerical integration are reported for fixed dilaton kinetic energy and for radiation energy ten, hundred, thousand ad ten thousand smaller than the dilaton kinetic energy. FIG. 14. In this plots we assume that the magnetic field is initially zero. We also assume that the radiation energy density is smaller than the dilaton energy density which is assumed to be of order one. The plot at the left is for φ(t0) = −2 whereas the plot at the right is for the case φ(t0) = −10. Nothing changes in the two plots since, for both cases q = 0. We notice that the behavior of the anisotropy is qualitatively similar both to the behaviors reported in Fig. 10 and Fig. 9 but with crucial differences. For instance, take the case with ρ ∼ 0.1 at the left. In this case the radiation and dilaton energy densities are almost comparable. So there is some functional similarity with the plot of Fig. 9. Look anyway at the time scale and take into account that the initial value of the anisotropy is the same in both cases. The same reduction of the shear parameter obtained in 100 time steps in the case of pure radiation is now obtained in 10 6 time steps. So the dilaton slows down the decay of the anisotropy in the absence of magnetic field. If the radiation energy density is gradually removed from the system the anisotropy is ( almost) constant.
We see that, in the best case, the anisotropy decrease of two orders of magnitude in 10 6 time steps. This behavior should be compared with the sudden decrease experienced by the anisotropy in the case of Fig. 10 (dot-dashed line) where the dilaton was absent. Therefore, the presence of the dilaton in a radiation background slows down considerably, in the absence of magnetic field, the sudden fall-off of the shear parameter.
The conclusion we can draw from the analysis of the general case is that, also in the presence of the dilaton field, the magnetic field has the effect of increasing the anisotropy. Moreover, if the dilaton is larger than (or comparable with) the radiation fluid at the moment of the end of the string phase the anisotropy does decrease not so easily.
VI. CONCLUDING REMARKS
In this paper generalized the solutions of the low energy beta functions to the case where a magnetic field is present. We found that these solutions can be expressed analytically in the String frame. We investigated their limit in the vicinity of the singularity and we noticed that they approach, monotonically, the well known Kasner-like vacuum solutions.
We addressed a very simple question which can be phrased in the following way. Suppose that the dilaton-driven evolution of the string cosmological models is fully anisotropic thanks, for instance, to a primordial magnetic seed. The obvious issue is to understand how (and possibly when) these fully anisotropic solutions reach the complete isotropic stage. By complete isotropic stage we meant, more quantitatively, a value of the shear parameter smaller, say, than 10 −7 . We reached a number of conclusions.
For physical reasons we want to deal, from the very beginning, with anisotropic solutions whose scale factors are both expanding and accelerated. In this situation it is not forbidden to have large anisotropies and shear parameters of order one prior to the onset of the string phase. Now, if the string phase is sufficiently long and if it is immediately followed by a radiation dominated phase, then, any pre-existing shear coming from the initial condition is very efficiently washed out to arbitrary small values depending upon the duration of the string phase. Of course, this conclusion holds provided no magnetic field is present in the ordinary decelerated phase. If a magnetic field is present, then the amount of shear appearing in our present Universe will not be determined by the primordial shear but by the shear induced by the magnetic seed itself. The role of the magnetic field might also be of some help but only in the case where the primordial shear is really large after the string phase. In this case a very small magnetic field can effectively attract the large anisotropy towards a smaller value.
We can also have the opposite situation. Namely we can have the case where the string phase is very short and it is followed by a phase dominated by the dilaton until sufficiently small scales lasting from the end of the sting phase until the onset of radiation which should anyway occur before nucleosynthesis. In this second scenario the shear will not be erased for two reasons. First of all because the duration of the string phase is proportional to the shear reduction. Secondly, because in a dilaton dominated phase (in the absence of magnetic field) the primordial shear is conserved.
In the same way H E = (log a) · and F E = (log b) · are the Hubble factors defined in the Einstein frame. We will not denote explicitly this distinction but we would like to remind that the various quantities defined in Sections II, II, and IV are defined in the String frame whereas the discussion reported in Section V refers to the Einstein frame.
Most of the times one can check that all the "physical" quantities are invariant with respect to a change of frame. Typical examples of this property are the spectra of the fields excited by the dilaton growth in the case of the Kasnerlike "vacuum" solutions derived in Section II. Suppose in fact to compute the axionic (or gravitonic or dilatonic) spectra in the String frame. Then Suppose to redo the same exercise in the Einstein frame with the transformed solutions. The two spectra will be equal. It is curious to notice that the shear parameter is only approximately invariant under conformal rescaling. In order to show this aspect let us consider the dilaton-driven (vacuum) solutions reported in Eq. (1.4) and written in the String frame
a s (t s ) = − t s t 1 α , b(t s ) = − t s t 1 β
, φ(t s ) = (α + 2β − 1) log − t s t 1 , (A.14)
with α 2 + 2β 2 = 1. The shear parameter for this solution can be very simply computed in the string frame
r(t s ) = 3[H s (t s ) − F s (t s )] [H s (t s ) + 2F s (t s )] ≡ 3(α − β) α + 2β . (A.15)
Let us therefore do the same exercise in the Einstein frame. The scale factors and the cosmic time, transformed from the String to the Einstein frame are
a E (t E ) = − t E t 1 α−2β+1 3−α−2β , b E (t E ) = − t E t 1 1−α 3−α−2β , − t s t 1 ≃ − t E t 1 2 3−α−2β .
(A. 16) Therefore we get that
r(t E ) = 3[H E (t E ) − F E (t E )] [H E (t E ) + 2F E (t E )] ≡ 6(α − β) 3 − (α + 2β)
.
(A.17)
Now, by comparing Eqs. (A.15) and (A.17) it is clear that r s and r E go to zero in the same way and they are both proportional to α − β. In this sense we can say that r is approximately invariant under conformal rescaling. For example by taking the anisotropic solution leading to flat axionic spectrum [5], namely α = −7/9, β = −4/9 we get that |r s (t s )| ∼ 3/5 whereas |r E (t E )| ∼ 3/7. In all the cases discussed in this paper we explicitly checked that the shear parameters in one frame and in the other are of the same order of magnitude. So for practical purposes the shear parameters computed in the either in the String or in the Einstein frame have the same quantitative information provided the calculation is performed with respect to the correct cosmic time, which changes from frame to frame. More generally one would like to find a frame-independent measure of the degree of anisotropy. One could argue that these quantity is provided by the Weyl tensor and, in particular by C β µνα . Indeed it is easy to show that C β µνα = C β µνα where the calligraphic style denotes, as usual in this Appendix, the quantities computed in the Einstein frame. Notice that the index position is crucial [20] since C µναβ = e φ C µναβ . The only problem with C β µνα is that it is dimension-full. If we want to construct some dimension-less combination we run into the same ambiguity we just described since C µναβ C µναβ /R µναβ R muναβ transforms non trivially under conformal rescaling. So our conclusion is that the shear parameter is perhaps still the best quantity in order to characterize the degree of anisotropy. In our problem, moreover, we can show that the way the shear parameter goes to zero is a truly frame independent statement since, ultimately if r s (t s ) ∝ |α − β| we also have that r E (t E ) ∝ |α − β|.
The second point we would like to stress concerns the covariant conservation of the (total) energy-momentum tensor in the Einstein frame. The Bianchi identities impose Using now Eq. (A.11) we see immediately that the mixed term T 0 0 (m)e −φφ cancels. Thus, in order to satisfy the covariant conservation of the energy-momentum tensor we have to impose
FIG. 1. The axionic logarithmic energy spectra (in frequency) are reported for different dilaton-driven models with Kasner-like exponents (α, β) in the string frame. The models belonging to the third quadrant and localized on the arc of the "vacuum" ellipse α 2 + 2β 2 = 1 correspond to anisotropic models with expanding and accelerated scale factors. If α and β lie on the ellipse but in the first quadrant we have solutions of the low energy beta functions which are both contracting (in the String frame). If α and β on the ellipse but either in the second or in the fourth quadrant we have solutions of the low energy beta functions where one of the two scale factors expands and the other contracts. Notice that the dashed (thick) line does correspond to the case of fully isotropic solutions (i.e. α = β = −1/ √ 3) whose intersection with the vacuum ellipse lies in the region of red spectra. Above the dot-dashed (tick) line the dilaton decreases for t < 0 whereas below the dot-dashed line the dilaton increases. We would be tempted to speculate that to have an increasing dilaton is a sufficient condition in order to have a pre-big-bang dynamics. This is not the case. Indeed, an increasing dilaton is also compatible with α and β in the second quadrant where the scale factors are not both expanding. Therefore, an increasing dilaton is not a sufficient condition for anisotropic pre-big-bang dynamics but only a necessary condition.
FIG. 2. We plot the evolution of the degree of anisotropy (left) and of the shear parameter itself (right) in the case of a mildly anisotropic solution with α = −7/9 and β = −4/9 corresponding to the case of flat axionic spectra [5] (i.e. the intersection of the thick full line with the vacuum ellipse in the third quadrant of Fig. 1). As we can see for t → −∞, r(t) → [−7/9 + 4/9]/[−7/9 − 4/9] = 3/11. For t → +∞ the shear parameter and the degree of anisotropy gets reduced as the result of the addition of of higher order string tension corrections (see also Section IV). We notice that the degree of isotropization crucially depends upon the duration of the string phase (left). In a short string phase I(t) can be of the order of −3, −4. If the string phase is very long the degree of anisotropy can be much smaller than −7. Notice that, by only looking at the right picture we might guess (in a wrong way) that the shear parameter tends (for t → +∞) to a constant (small) value. The crucial point we want to stress is that this is not the case as we can understand by looking at I(t) which is always a decreasing function of the cosmic time.
FIG. 3 .
3We plot the magnetic solutions given in Eqs. (2.21) in the case where H0 = 0.001, B = 01, φ0 = −5.
FIG. 4 .
4We plot the same solution given inFig. 3but with a different value of the magnetic field which is now B = 0.001. As in the previous plot at the left we report the evolution of H (dot-dashed line), F (thin line)and φ ′ ( full thick line).
FIG. 5 .
5We illustrate the behavior of H(t),φ (left plot) and of F (t) (right plot) in the case of a magnetic field of the order of B ∼ 0.1 in String units.
FIG. 6. We illustrate the behavior of H(t),φ (left plot) and of F (t) (right plot) in the case of a magnetic field of the order of B ∼ 0.1 in String units.
FIG. 7 .
7We report the result of the numerical integration of Eqs.(4.8) in the case where the initial conditions are mildly anisotropic. We choose B = 0.01.
FIG. 9 .
9We report the evolution of the shear parameter in the String phase. With the dot-dashed line we have the case B = 0, with the full line the case B = 0.01.
FIG. 10 .
10We report the results of the numerical integration of Eqs. (5.10)-(5.12)
FIG. 11. We report the resluts of the numerical integration of Eqs. (5.14)-(5.16) for different initial values of the magnetic energy density in Planck units. As we can see (dot-dashed line) the shear parameter remains fixed to its initial value if the magnetic field is completely absent. As soon as the magnetic field increases the shear parameter gets attracted towards a fixed point whose typical anisotropy is of order one. As in the previous cases we took r(t0) ∼ 10 −3 and we also took φ(t0) ∼ −0.1.
FIG. 12. In this two plots we report the results of the integration of Eqs. (5.7)-(5.9) in the general case where magnetic field, dilaton and radiation fluid are present simultaneously. In these particular plot we assume that, initially, the dilaton energy density is smaller (by two orders of magnitude) than the radiation energy density. The left plot corresponds to the case of φ(t0) = −0.2 and the right plot corresponds to the case of φ(t0) = −2. Different initial q are reported in both cases. As in the previous cases we assumed r(t0) ∼ 10 −3 .
FIG. 13. We report the numerical integration of the system given in Eqs. (5.7)-(5.9) for another set of parameters. In this case we assume that the radiation energy density is hundred times smaller than the dilaton energy density and we watch the relaxation of the anisotropy for different values of the initial magnetic fields. As in the previous case the left picture refers to the case where φ(t0) ∼ −0.2 whereas the right plot refers to the case where φ(t0) ∼ −2. These plots have to be compared with the ones reported in Fig. 10 and Fig. 11. We see that, unlike in the case of Fig. 10, the integration with q(t0) = 0 does not lead to an exactly constant anisotropy. For larger values of q(t0) the shear parameter saturates, as expected.
it is easy to see that the 0 component of this equation, in the case of a magnetic field directed along the x axis, can be written as (∂ 0 T 0 0 (m) + 4F T 0 0 (m))e −φ +φφ +φ 2 (H + 2F ) − T 0 0 (m)e −φφ +ρ + (H + 2F )(ρ + p) = 0. (A.19)
∂ 0
0T 0 0 (m) + 4F T 0 0 (m) = 0,ρ + (H + 2F )(ρ + p) = 0, (A.20) which implies that T 0 0 = B 2 /(2b 4 ) (where B is a constant) as correctly reported in Eqs. (A.8)-(A.12) from the very beginning as a consequence of the solutions of the equation for the field strength.
a s (t s ), b E (t E ) = e − φ 2 b s (t s ).(A.13)
APPENDIX A: FROM THE STRING TO THE EINSTEIN FRAME In this Appendix we show explicitly how to get the Einstein frame action for our system. The transformation from the String to the Einstein frame does not remove the coupling of the dilaton to the kinetic term of the gauge fields. The String frame action is simplyIn four dimensions the transformation from the String frame metric (g µν ) to the Einstein frame metric (G µν ) simply readsTherefore,the derivatives in Eq. (A.3) are covariant with respect to the metric G µν . Thus, the Einstein frame action can be written asAgain from this equation we can derive the Equations of motion. We will just report the essential points. The Equations of motion derived from the action of Eq. (A.5) readwhere on top of the energy momentum tensors of the dilaton (i.e. T ν µ (d)) and of the Maxwell fields (i.e. T ν µ (m)) we also added the energy momentum tensor of the fluid sources (i.e. T ν µ (f )) because of the considerations reported in Section V. In the fully anisotropic metric given in Eq. (2.1) the previous equations of motion becomeConcerning these equations two technical comments are in order. First of all the over-dot denotes the derivation with respect to the cosmic time of the Einstein frame which is related to the cosmic time of the String frame (used, for instance in Section II) as dt E = e −φ/2 dt s , a E (t E ) = e − φ
. B Ya, I D Zeldovich, NovikovThe Structure and Evolution of the Universe. 2Chicago University PressYa. B. Zeldovich and I. D. NovikovThe Structure and Evolution of the Universe, Vol. 2 (Chicago University Press, Chicago 1971).
. L P Grishchuk, A G Doroshkevich, I D Novikov, Zh.Éksp. Teor. Fiz. 551210Sov. Phys. JETPL. P. Grishchuk, A. G. Doroshkevich and I. D. Novikov, Zh.Éksp. Teor. Fiz. 55, 2281 (1968) [ Sov. Phys. JETP 28, 1210 (1969)];
. . B Ya, Zeldovich, Sov. Phys. JETP. 21656Ya. B. Zeldovich, Sov. Phys. JETP 21, 656 (1965);
. Sov, Astron, 13608Sov. Astron, 13, 608 (1970).
. J Barrow, Phys. Rev. D. 557451J. Barrow, Phys. Rev. D 55, 7451 (1997).
A Simple/Short introduction to Pre-Big-Bang Physics/Cosmology, talk given at International School of Subnuclear Physics. G Veneziano, hep-th/980205735th Course: Highlights: 50 Years Later. Erice, ItalyG. Veneziano, A Simple/Short introduction to Pre-Big-Bang Physics/Cosmology, talk given at International School of Subnuclear Physics, 35th Course: Highlights: 50 Years Later, Erice, Italy, 26 Aug -4 Sep 1997 (hep-th/9802057);
. G Veneziano, Phys. Lett. B. 265G. Veneziano, Phys. Lett. B 265 (1991);
. M Gasperini, G Veneziano, Astropart. Phys. 1317M. Gasperini and G. Veneziano, Astropart. Phys. 1, 317 (1993).
Blue Spectra of Kalb-Ramond Axions and Fully Anisotropic String Cosmologies, hep-th/9809185. M Giovannini, Phys. Rev. D. to appearM. Giovannini, Blue Spectra of Kalb-Ramond Axions and Fully Anisotropic String Cosmologies, hep-th/9809185, Phys. Rev. D (to appear).
. R Durrer, M Gasperini, M Sakellariadou, G Veneziano, Phys. Lett. B. 436R. Durrer, M. Gasperini, M. Sakellariadou, and G. Veneziano, Phys. Lett. B 436, 66 1998 ;
M Gasperini, G Veneziano, hep-ph/9806327Constraints on Pre-Big-Bang Models for seeding Large Scale Anisotropy by Massive Kalb-Ramond Fields, CERN-TH-98-180. M. Gasperini and G. Veneziano, Constraints on Pre-Big-Bang Models for seeding Large Scale Anisotropy by Massive Kalb-Ramond Fields, CERN-TH-98- 180, hep-ph/9806327 .
. G Veneziano, Phys. Lett. B. 406297G. Veneziano, Phys. Lett. B 406, 297 (1997);
. J Maharana, R Onofri, G Veneziano, JHEP. 44J. Maharana, R. Onofri and G. Veneziano, JHEP 4, 4 (1998);
Pre-Big-Bang Bubbles from the Gravitational Instability of Generic String Vacua. A Buonanno, T Damour, G Veneziano, hep-th/9806230A. Buonanno, T. Damour, and G. Veneziano, Pre-Big-Bang Bubbles from the Gravitational Instability of Generic String Vacua, IHES-P- 98-44, hep-th/9806230.
. M Gasperini, M Giovannini, G Veneziano, Phys. Rev. Lett. 753796M. Gasperini, M. Giovannini, and G. Veneziano, Phys. Rev. Lett. 75 3796 (1995);
. Phys. Rev. D. 526651Phys. Rev. D 52, 6651 (1995);
. D Lemoine, M Lemoine, Phys. Rev. D. 521955D. Lemoine and M. Lemoine, Phys. Rev. D 52, 1955 (1995);
. M Giovannini, Phys. Rev. D. 563198M. Giovannini, Phys. Rev. D 56, 3198 (1997).
. L P Grishchuk, Zh.Éksp. Teor. Fiz. 67Sov. Phys. JETPL. P. Grishchuk, Zh.Éksp. Teor. Fiz. 67, 825 (1974) [Sov. Phys. JETP 40, 409 (1975)];
. L P Grishchuk, Y V Sidorov, Phys. Rev. D. 423413L. P. Grishchuk and Y. V. Sidorov, Phys. Rev. D 42, 3413.
N D Birrel, P C Davies, Quantum Fields in Curved Space. Cambridge, EnglandCambridge University PressN. D. Birrel and P. C. Davies, Quantum Fields in Curved Space (Cambridge University Press, Cambridge, England, 1982).
. B L Hu, Phys. Rev. D. 18969B. L. Hu, Phys. Rev. D 18, 969 (1978).
. D Garfinkle, G Horowitz, A Strominger, Phys. Rev. D. 433140D. Garfinkle, G. Horowitz and A. Strominger, Phys. Rev. D 43, 3140 (1991).
. V A Belinskii, I M Khalatnikov, Zh. Eksp. Teor. Fiz. 63591Sov. Phys. JETPV. A. Belinskii and I. M. Khalatnikov, Zh. Eksp. Teor. Fiz. 63, 1121 (1972) [ Sov. Phys. JETP, 36, 591 (1973)].
. M Giovannini, Phys. Rev. D. 577223M. Giovannini, Phys. Rev. D 57, 7223 (1998);
Singularity Free Dilaton-Driven Cosmologies and Pre-Little-bangs, hepth/9807049. Phys. Rev. D. to appearSingularity Free Dilaton-Driven Cosmologies and Pre-Little-bangs, hep- th/9807049, Phys. Rev. D (to appear).
. M Gasperini, M Giovannini, Phys. Lett. B. 28236M. Gasperini and M. Giovannini, Phys. Lett. B. 282, 36 (1992);
. I Antoniadis, J Rizos, K Tamvakis, Nucl. Phys. B. 415497I. Antoniadis, J. Rizos and K. Tamvakis, Nucl. Phys. B 415, 497 (1994);
. J Rizos, K Tamvakis, Phys. Lett. B. 32657J. Rizos and K. Tamvakis, Phys. Lett. B 326, 57 (1994);
. P Kanti, N E Mavromatos, J Rizos, K Tamvakis, Phys. Rev. D. 545049P. Kanti, N. E. Mavromatos, J. Rizos, and K. Tamvakis, Phys. Rev. D 54, 5049 (1995).
. K Meissner, Phys. Lett. B. 392298K. Meissner, Phys. Lett. B 392, 298 (1997);
. M Gasperini, M Maggiore, G Veneziano, Nucl. Phys. B. 494315M. Gasperini, M. Maggiore, and G. Veneziano, Nucl. Phys. B 494, 315 (1997).
. R Brandenberger, R Easther, J Maia, J.High Energy Phys. 98087R. Brandenberger, R. Easther, and J. Maia, J.High Energy Phys. 9808, 007 (1998).
. M Gasperini, M Giovannini, Phys. Lett. B. 28236M. Gasperini and M. Giovannini, Phys. Lett. B 282, 36 (1992);
. Phys.Rev.D. 471519Phys.Rev.D 47, 1519 (1993).
. M Giovannini, Phys.Rev.D. 5883504M. Giovannini, Phys.Rev.D 58, 083504 (1998).
R M Wald, General Relativity. ChicagoChicago University Press447R. M. Wald, General Relativity, ( Chicago University Press, Chicago 1984), p.447.
| [] |
[] | [] | [] | [] | 1 SU SY C osm ological M odels F .A ceves de la C ruz 1 ,J.J.R osales 2 y V .I.T kach 3 ,J.Torres A . 4 z y Instituto de F sica,U niversidad de G uanajuato Lom as delBosque 103,Lom as delC am pestre 37150 Le on,G uanajuato,M exico z Instituto de F sica,U niversidad de G uanajuato Lom as delBosque 103,Lom as delC am pestre 37150 Le on,G uanajuato M exicoIn thi s w ork w e consi der the acti on for a set of com pl ex scal ar superm ul ti pl ets i nteracti ng w i th the scal e factor i n the supersym m etri c cosm ol ogi calm odel s. W e show that the l ocalconform alsupersym m etry l eads to a scal ar el d potenti alde ned i n term s ofthe K ahl er potenti aland superpotenti al . U si ng supersym m etry breaki ng,w e are abl e to obtai n a norm al i zabl e w avefuncti on for the FRW cosm ol ogi calm odel . | null | [
"https://export.arxiv.org/pdf/hep-th/0106007v2.pdf"
] | 119,407,561 | hep-th/0106007 | ff897786197f5adeb95898e5efe9a02ca46de578 |
arXiv:hep-th/0106007v2 3 Jul 2001
1 SU SY C osm ological M odels F .A ceves de la C ruz 1 ,J.J.R osales 2 y V .I.T kach 3 ,J.Torres A . 4 z y Instituto de F sica,U niversidad de G uanajuato Lom as delBosque 103,Lom as delC am pestre 37150 Le on,G uanajuato,M exico z Instituto de F sica,U niversidad de G uanajuato Lom as delBosque 103,Lom as delC am pestre 37150 Le on,G uanajuato M exicoIn thi s w ork w e consi der the acti on for a set of com pl ex scal ar superm ul ti pl ets i nteracti ng w i th the scal e factor i n the supersym m etri c cosm ol ogi calm odel s. W e show that the l ocalconform alsupersym m etry l eads to a scal ar el d potenti alde ned i n term s ofthe K ahl er potenti aland superpotenti al . U si ng supersym m etry breaki ng,w e are abl e to obtai n a norm al i zabl e w avefuncti on for the FRW cosm ol ogi calm odel .
Introduction
T he study of supersym m etri c m i ni superspace m odel s has l ed to i m portant and i nteresti ng resul ts. To nd the physi calstates,i t i s su ci ent to sol ve the Lorentz and supersym m etri c constrai nts [ 1,2,3] . Som e ofthese resul ts have al ready been presented i n two com prehensi ve and organi zed works: a book [ 4]and an extended revi ew [ 5] . In previ ous works [ 6,7]we have proposed a new approach to the study ofsupersym m etri c quantum cosm ol ogy. T he m ai n i dea i s to extend the group ofl ocalti m e reparam etri zati on ofthe cosm ol ogi calm odel s to the n = 2 l ocal conform al ti m e supersym m etry. For thi s purpose the odd \ti m e" param eters ; were i ntroduced (w here i s the com pl ex conjugate to ),w hi ch are the superpartners ofthe usualti m e param eters.T he new functi ons,w hi ch previ ousl y were functi ons of ti m e t becom e now superfuncti ons dependi ng on (t; ; ),w hi ch are cal l ed super el ds. Fol l ow i ng the super el d procedure we have constructed the super el d acti on for the cosm ol ogi calm odel s, w hi ch i s i nvari ant under n = 2 l ocalconform alti m e supersym m etry. T he ferm i oni c superpartners ofthe scal e factor and the hom ogeneous scal ar el ds at the quantum l evelare el em ents ofthe C l i ord al gebra. W e w i l l consi der the supersym m etri c FRW m odel i nteracti ng w i th a set of n com pl ex hom ogeneous scal ar superm atter el ds. W e show that i n thi s case,the potenti alofscal arm atter el dsi sa functi on ofthe K ahl er functi on and an arbi trary param eter . T he l ocalconform alsupersym m etry cannot x the val ue of the param eter , the space-ti m e supersym m etry does. Furtherm ore, w hen = 1, the scal ar el d potenti al becom es the vacuum energy of the scal ar el ds i nteracti ng w i th the chi ralm atter m ul ti pl ets as i n the case of 1 e-m ail: ferm in@ ifug2.ugto.m x 2 e-m ail: rosales@ ifug3.ugto.m x 3 e-m ail: vladim ir@ ifug3.ugto.m x 4 e-m ail: jtorres@ ifug1.ugto.m x N = 1 supergravi ty theory, [ 8] . U si ng supersym m etry we are abl e to obtai n a wavefuncti on w hi ch depende of the K ahl er functi on.
Supersym m etric F R W m odelw ith m atter elds
Let us begi n by consi deri ng the FRW acti on
S grav = 6 8 G N Z R _ R 2 2N + 1 2 kN R + d dt R 2 _ R 2N ! ! dt;(1)
w here k = 1;0; 1 stands for a spheri cal , pl ane and hyperspheri calthree-space,respecti vel y, _ R = dR dt ;G N i s the N ew toni an gravi tati onalconstant,N (t) i sthe l apse functi on and R (t) i s the scal e factordependi ng onl y on t. In thi s work we shal lset c = h = 1.
Iti swel lknow n thatthe acti on (1)preservesthe i nvariance under the ti m e reparam etri zati on.
t 0 ! t+ a(t)(2)
i f R (t) and N (t) are transform ed as
R = a _ R ; N = (aN ) : :(3)
In order to obtai n the super el d form ul ati on ofthe acti on (1),the transform ati on ofthe ti m e reparam etri zati on (2)wereextended to the n = 2 l ocalconform alti m e supersym m etry (t; ; ) [ 6] . T hese transform ati ons can be w ri tten as
w i th the superfuncti on I L(t; ; ),de ned by
I L(t; ; )= a(t)+ i 0 (t)+ i 0 (t)+ b(t) ;(5)
w here D = @ @ + i @ @t and D = @ @ i @ @t are the supercovari antderi vati vesofthegl obalconform alsupersym m etry w i th di m ensi on [ D ]= l 1=2 , a(t) i s a l ocal ti m ereparam etri zati on param eter, 0 (t)= N 1=2 (t) i s the G rassm ann com pl ex param eter ofthe l ocalconform alSU SY transform ati ons(4)and b(t) i stheparam eter ofl ocal U (1) rotati ons on the com pl ex coordi nate .
T hesuper el d general i zati on oftheacti on (1),w hi ch i si nvari antunderthe transform ati ons(4),wasfound i n our previ ous work [ 6]and i t has the form
S grav = 6 2 Z ( I N 1 2 I R D I R D I R + p k 2 I R 2 + 1 4 D (I N 1 I R 2 D I R ) 1 4 D (I N 1 I R 2 D I R ) d d dt;(6)
w here we i ntroduce the param eter 2 = 8 G N . W e can al so see that thi s acti on i s herm i ti an for k = 0;1. T he l ast two term s i n (6) form a totalderi vati ve w hi ch are necessary w hen we consi der i nteracti on. I N (t; ; ) i s a realone-di m ensi onalgravi ty super el d w hi ch has the form I N (t; ; )= N (t)+ i 0 (t)+ i 0 (t)+ V 0 (t); (7) w here 0 (t)= N 1=2 (t); 0 (t)= N 1=2 (t) and V 0 (t)= N V + . T hi s super el d transform s as
I N = (I LI N )+ i 2 D I LD I N + i 2 D I LD I N :(8)
T he com ponents of the super el d I N (t; ; ) i n (7) arethe gauge el d ofthe one-di m ensi onaln= 2 extended supergravi ty.
T he super el d I R (t; ; ) m ay be w ri tten as
I R (t; ; )= R (t)+ i 0 (t)+ i 0 (t)+ B 0 (t); (9) w here 0 (t)= N 1= 2 p R (t); 0 (t)= N 1= 2 p R(
T he com ponent B (t) i n (9)i s an auxi l i ary degree of freedom ; (t) and (t) are the ferm i oni c superpartners ofthe scal e factor R (t). T he super el d transform ati ons (8), (10) are the general i zati on of the transform ati ons for N (t) and R (t) i n (3).
T he com pl ex m attersuperm ul ti pl ets Z A (t; ; ) and Z A (t; ; ) = (Z A ) y consi st ofa set ofspati al l y hom ogeneousm atter el ds z A (t) and z A (t)(A = 1;2;:::;n), four ferm i oni c degrees of freedom A (t); A (t); A (t) and A (t),aswel las the bosoni c auxi l i ary el ds F A (t) and F A (t).
T he com ponents ofthe m atter super el ds Z A (t; ; ) and Z A (t; ; ) m ay be w ri tten as
Z A = z A (t)+ i 0A (t)+ i 0A (t)+ F 0A (t) ; (11) Z A = z A (t)+ i 0 A (t)+ i 0 A (t)+ F 0 A (t) ; (12) w here 0A (t)= N 1=2 R 3=2 A (t); 0A (t)= N 1=2 R 3=2 A (t); F 0A (t)= N F A 1 2 R 3=2 ( A A ):
T hetransform ati on rul eforthesuper el ds Z A (t; ; ) and Z A (t; ; ) m ay be w ri tten as
Z A = I L _ Z A + i 2 D I LD Z A + i 2 D I LD Z A ;(13)Z A = I L _ Z A + i 2 D I LD Z A + i 2 D I LD Z A :(14)
So,the super el d acti on takes the form
S = Z 3 2 I N 1 I R D I R D I R + 3 2 p kI R 2 2 3 I R 3 e G 2 + 1 2 2 N 1 I R 3 G A B h D Z A D Z B + D Z B D Z A io d d dt;(15)
w here 2 = 8 G N . T he acti on (15) i s de ned i n term sofone arbi trary K ahl ersuperfuncti on G (Z A ; Z A ) w hi ch i s a speci al com bi nati on of I K (Z A ; Z A ) and
g(Z A ),i . e. G (Z; Z )= I K (Z; Z )+ l ogj g(Z )j 2 :(16)
and i s i nvari ant under the transform ati ons
g(Z ) ! g(Z )exp f(Z ); I K (Z; Z ) ! I K (Z; Z ) f(Z ) f( Z );(17)
w i th the K ahl erpotenti alI K (Z; Z ) de ned by the compl ex super el d Z A rel ated to the G (Z; Z ) from (16).
T he superfuncti on G (Z; Z ) and thei r transform ati ons are the general i zati onsofthe K ahl erfuncti on G (z; z)= I K (z; z)+ l ogj g(z)j 2 de ned on the com pl ex m ani fol d. D eri vati ves of K ahl er functi on are denoted by
@G @ z A = G ;A G A ; @G @ z A = G ; A G A ; @ n G
@z A @z B @ z C :::@ z D = G ;A B C ::: D G A B C ::: D and the K ahl erm etri c i s
G A B = G B A = K A B ,the i nverse K ahl er m etri c G A B ,such as G A B G B D = A D can be used to de ne G A G A B G B and G B G A G A B .
T he acti on (15) i s i nvari ant under the l ocal n = 2 conform alsupersym m etry transform ati ons (4) i fthe super el ds are transform ed as (8),(10), (13) and (14). T he acti on (15) corresponds to FRW i n the m i ni superspace sector of supergravi ty coupl ed to com pl ex scal ar el ds [ 8] . A fter the i ntegrati on over the G rassm ann vari abl es ; the acti on (15)becom esa com ponent acti on w i th the auxi l i ary el ds B (t);F A (t) and F A (t). T hese el ds m ay be determ i ned from the com ponent acti on by taki ng the vari ati on w i th respect to them . T he equati ons for these el ds are al gebrai cal and thei r sol uti ons are
B = 18R 2 + p k + 1 4 R 2 G A B ( A B + B A ) R 2 e G =2 ; F D = 2R 3 ( D D ) 1 R 3 G D A G A B C C B + 2 G D A (e G =2 ) ; A :
A fter substi tuti ng them agai n i nto the com ponent acti on we get the fol l ow i ng acti on: (6), (15)and (18),asi susual l y the case, but negati ve. T hi s i s due to the fact that the parti cl el i ke uctuati ons do not correspond to the scal ar factor R (t) [ 9] . B esi des,the potenti alterm U (R ;z; z) reads
S = Z 3 2 R (D R ) 2 N N R 3 U (R ;z; z)+ 2i 3 D + N p k 3R N e G =2 + p k p R ( ) + R 3 N 2 G A B D z A D z B + i 2 D z B ( G A B A + G A B A ) + i 2 D z A ( G A B B + G A B B ) i 2 G A B ( AD B + AD B ) N 2 R 3 R A B C D A B C D i 4 p R 3 ( )G A B ( A B + B A )+ 3N 16 2 R 3 h G A B ( A B + B A ) i 2 + 3 p k 2 2 R G A B ( A B + B A ) 3N 2 3 e G =2 G A B ( A B + B A ) 2N 3 (e G =2 ) ;A B A B 2 3 N (e G =2 ) ; A B A B 2 3 N (e G =2 ) ; A B ( A B + B A ) N 2 h (e G =2 ) ;A A + (e G =2 ) ; A A i + N 2 h (e G =2 ) ;A A + (e G =2 ) ; A A i p R 3 2 ( )e G =2 + p R 3 3 (e G =2 ) ;A ( A A ) + p R 3 3 (e G =2 ) ; A ( A A ) ) dt;(18)w here D R = _ R 6 p R ( + );D z A = _ z A i 2 p R 3 ( A + A );D B = _ B i 2 V B , D B = _ B + i 2 V B , D = _ + i 2 V ,D B = D B + B C D _ z C D ,D B = D B + B C D _ z C D , R A B C D iU (R ;z; z)= 3k 2 R 2 + 6 p k 3 R e G =2 + V eff (z; z);(19)
w here the e ecti ve potenti alofthe scal ar m atter el ds i s
V eff = 4 4 (e G =2 ) ; A G A D (e G =2 ) ;D 3 4 e G = e G 4 [ G A G A 3] :(20)
In the acti on (18),as i n the e ecti ve potenti al ,the K ahl er functi on i s a functi on of scal ar el ds G (z; z). From (19) we can see that w hen k = 0, U (R ;z; z) = V eff (z; z).
In order to di scuss the i m pl i cati ons ofspontaneous supersym m etry breaki ng we need to di spl ay the potenti al(20) i n term s ofthe auxi l i ary el ds
V eff (z; z)= F A G A B F B 2 3B 2 R 2 ;(21)
w here the auxul i ary el ds B and F A now read
B = R 2 e G =2 ;(22)F A = 1 e G =2 G A :(23)
T he supersym m etry i s spontaneousl y broken,i fthe auxi l i ary el ds (23) of the m atter superm ul ti pl ets get nonvani shi ng vacuum expectati on val ues.T he potenti al (20, 21) consi sts of two term s; the rst of them i s the potenti alfor the scal ar el ds i n the case of gl obalsupersym m etry.Indeed thi ssuperpotenti ali snotposi ti ve sem i -de ni te i n contrast w i th the standard supersymm etri c quantum m echani cs case. T he gl obalsupersymm etry [ 10]i s unbroken w hen the energy i s zero due to F A = 0. B esi des,the energy pl ays the rol e ofthe order param eter i n thi s case. For the l ocalsym m etry,the energy ceasesto pl ay the rol e ofthe orderparam eterw hen gravi ty i staken i nto account [ 8]i n otherwords,thespontaneousbreaki ng ofsupersym m etry i n ourm odel ,al l ow s usto descri be the generalphysi calsi tuati on fordi erent energi es,i ncl udi ng the case w hen the energy i s zero.
N ow wecan seethatatthem i ni m um i n (21) V eff (z A 0 ; z A 0 ) = 0,but F A 6 = 0,then the supersym m etry i s broken w hen the vacuum energy i s zero. T he m easure of thi s breakdow n i s the term ( 1 e 2G (z A ; z A ) ) i n the acti on (18). B esi des,we can i denti fy
m 3=2 = 1 e G 2 (z A 0 ; z A 0 ) ;(24)
as the gravi ti no m ass i n the e ecti ve supergravi ty theory [ 8] . H ence,we can see thati n ourm odelthe conform alti m e supersym m etry (4), bei ng a subgroup of the space-ti m e SU SY ,gi vesusa m echani sm ofspontaneous breaki ng ofthi s SU SY [ 8] .
W ave function of the U niverse
T he G rassm ann com ponents of the vacuum con gurati on w i th the FRW m etri c m ay be obtai ned by decomposi ti on ofthe R ari ta-Schw i nger el d and ofthe spi nor el d i n the fol l ow i ng way [ 11]com m uti ng covari antconstant spi nors (x i ) and _ (x i ) are xed on the congurati on space, and an the other hand, ti m e-l i ke dependi ng G rassm ann vari abl esare notspi nors.T hen the ti m e-l i ke com ponentsofthe R ari ta-Schw i nger el d m ay be w ri tten as
0 (x i ;t)= (x i ) (t):(25)
T hespati alcom ponentsoftheR ari ta-Schw i nger el d have the fol l ow i ng representati on correspondi ng to the di rectproductti m e-subspace on the 3-space ofthe xed spati alcon gurati on (i n ourcase i ti sa pl ane ora three sphere). Expl i ci tl y,we get
H can = N H + 1 2 S 1 2 S + 1 2 V F ;(27)
w here H i s the H am i l toni an of the system , S and S are supercharges and F i s the U (1) rotati on generator. T he form of the canoni cal H am i l toni an (27) expl ai ns the fact,that N ; ; and V are Lagrange m ulti pl i ers w hi ch enforce onl y the rst-cl ass constrai nts, H = 0;S = 0; S = 0 and F = 0; w hi ch express the i nvari ance ofthe conform aln = 2 supersym m etri c transform ati ons. A s usualw i th the G rassm ann vari abl es we have the second-cl ass constrai nts, w hi ch can be el i m inated by the D i rac procedure. In the usualcanoni cal quanti zati on the even canoni calvari abl eschange by operators
R ! R ; R = i @ @R ; Z A ! Z A ; A = i @ @Z A (28)
and the odd vari abl es ; ; A ; A ; A and A after quanti zati on becom e anti conm utators.
W e can w ri te ; ; A ; A ; A and A i n the form ofthe di rect product 1 + 2n;2 2 m atri ces. W e then obtai n a m atri x real i zati on for the case of n com pl ex m ater superm ul ti pl ets To obtai n the quantum expressi on for the H am i ltoni an H and for the supercharges S and S + we m ust sol vethe operatororderi ng am bi gui ty.Such am bi gui ti es al ways ari se w hen,as i n our case,the operator expressi on contai ns the product ofnon-com m uti ng operators R ; R ;Z A and A . T hen we m ust i ntegrate w i th m easure R 1 2 (detG A ) 1=2 dR d A z d A Z i n the i nnerproductoftwo states. In thi s m easure the m om enta R = i @ @R i s non-H erm i ti an w i th + R = R 1=2 R R 1 2 ;however,the combi nati on (R 1=2 R ) + = + R R 1=2 = R 1=2 R i sH erm iti an. T he canoni calm om enta + A ,H erm i ti an-conjugate to A = i @ @Z A , have the form ( A ) + = g 1=2 ( A )g
w here f ; + g = 3 2 .T hen the equati on m ay be w ri tten i n the form
= 1 + ; A = 1 ( A ) + ; A = 1 ( A ) + :(33)
In order to have consi stency w i th expressi ons (32) and (33) i t i s necessary that the operator possess the fol l ow i ng properti es ( + = ):
+ = + ;( A ) + = ( A ) + ; ( A ) + = ( A ) + :(34)
T he operator ; A and A w i l lbe conjugate to operators ; A and A underi nnerproductoftwo states 1 and 2
< 1 ; 2 > q = Z 1 j j 2 R 1=2 g 1=2 dR d n zd n z;(35)
w hi ch i n generali s non-posi ti ve. In the m atri x real i zati on the operator has the form =
So,forthe superchangeoperator S we can construct conjugati on (33)under the operator S w i th the hel p of the fol l ow i ng equati on N ote that the superal gebra (31) does not de ne posi ti ve-de ni te H am i l toni an i n a ful l agreem ent w i th the ci rcunstance,that the potenti al V eff (z; z) ofscal ar el ds (20, 21) i s not posi ti ve sem i -de ni te i n contrast w i th the standard supersym m etri c quantum m echani cs. In thi s case the norm al i zabl e sol uti on to the quantum constrai nts
S = 1 S + :(37)S = 0; S = 0(39)
i sthewavefuncti on i n thesupersym m etry breaki ng state w i th zero energy. W i th the conform alal gebra gi ven by (31) we need to sol ve onl y these two quantum constrai nts i n order to search our sol uti ons. U si ng the m atri x representati on (29)to sol ve(39)one 2 2n + 1 com ponent can havethe ri ght behavi our w hen R ! 1 ,we have a norm al i zabl e sol uti on.
In thecaseofa m i ni m um thepotenti alV eff (z 0 ; z 0 )= 0 and k = 0,then usi ng (29) we get
2 1+ 2n (R )=C 0 R 3=4 e 2m 3= 2 M 2 p l R 3(42)
w here we have thus
1 =C 0 Z 1 0 R 3=2 e 4m 3= 2 M 2 plR 3 R 1=2 dR ;(43)
thenorm al i zati on constanthasthe fol l ow i ng val ueC 0 = (12m 3=2 M 2 pl ) 1 2 .
(R )dR j 2 2l+ 1 j 2 R 1=2 dR ;
w hi ch gi ve us the probabi l i ty to nd the U ni verse w i th scal efactorbetween R and R + dR ,asusuali n quantum m echani cs. T hen,the probabi l i ty (al so cal l ed di stri buti on functi on) ofhavi ng a U ni verse w i th scal e factor R i s
P (R ) = Z R 0 j 2 2l+ 1 j 2 R 1=2 dR ; = 1 e 4m 3= 2 M 2 p l R 3 :(45)
C onclusions
T he speci c quantum supersym m etri c m echani cs correspondi ng to quantum l evel i n our m odel s de nes the structure w hi ch perm i ts the fundam entalstates i nvariant under the n = 2 l ocal conform al supersym m etry i n N = 1 supergravi ty i nteracti ng w i th a set ofm atter el ds [ 8] . In ourcase the constrai ntsand the wave functi on ofthe uni verse perm i t the exi stence ofnon-tri vi al sol uti ons.
A cknow ledgem ent T hi s work was parti al l y suported by C O N A C yT ,grant N o. 28454E.
t) and B 0 (t)= N B 6 p R ( ). T he transform ati on rul e forthe realscal ar super el d I R (t; ; ) i s I R = I L _ I R + i 2 D I LD I R + i 2 D I LD I R :
s the curvature tensor of the K ahl erm ani fol d de ned by the coordi nates z A , z B w i th the m etri c G A B ,and B C D = G B A G A C D are the C hri sto el sym bol s i n the de ni ti on of the covari ant deri vati ves and thei r com pl ex conjugate. T he ki neti c energy term ofthe scal ar factor R (t) i s not posi ti ve i n the acti on,(1),
mm
(x i ;t)= e ( ) m _ ( ) _ (x i ) (t); (x i ;t) are the tetrads for the FRW m etri c. T hose representati ons are sol uti ons ofthe supergravi ty equati ons. W e have the cl assi calcanoni calH am i l toni an
1 ; 2 ; and 3 bei ng the Paul iM atri ces. In the m atri x real i zati on the operators ; A and A on the wave functi on = (R ;Z A ; Z A ; ; ; ) are 2 2n + 1 com ponent col um ns i (R ;Z A ; Z A ); (i = 1;:::;2 2n + 1 ):In the quantum theory the rstcl assconstrai nts associ ated w i th the i nvari ance of acti on (18) becom e condi ti ons on the wave functi ons . T herefore any physi cal l y al l owed states m ust obey the quantum constrai nts the rst equati on i n (30) i s the so-cal l ed W heel er D e W i tt equati on for m i ni superspace m odel s.
g = detG A B . T he quantum generators H ;S; S and F form a cl osed superal gebra of the supersym m etri c quantum m echani cs fS; Sg = 2H ; [ S;H ]= [ S;H ]= 0; S 2 = S 2 = 0 [ F ;S]= S; [ F ; S]= S; [ F ;H ]= 0: (31) A s we can see from H am i l toni an,the energy ofthe scal e factori snegati ve.T hi si sre ected i n the factthat the anti com m utator val ue f ; g = 3=2 ofsuperpartners and ofthe scal e factori snegati ve,unl i ke anticom m utati on rel ati ons for A , B and A ; B ,w hi ch are posi ti ve. A nti com m utati on rel ati ons m ay be sati sed under the condi ti ons. = + ;( A ) + = A ;( A ) + = A ;
1 1 2
1::: 1 2n + 1
W e can see thatthe anti com m utatorsofsupercharge S and thei rconjugate S underourconjugate operati on has the formfS; Sg = 1 fS; S + g = fS; Sg(38)and i t i s sel f-conjugate operator.A s a consequence of al gebra (31) we obtai n that the H am i l toni an H i s a sel f-conjugate operator H = 1 H + = H and i ts val ue i s real .
. B S , Phys. Rev. 1601143B . S. D e W i tt, Phys. Rev. 160, 1143 (1967);
J A Topol Ogy ; C . D E W I Tt, B D E W I Tt, G , ; M P Yan, \H am i l toni an C osm ol ogy. Spri nger-Verl agJ. A . W heel er, \ R el ati vi ty G roups and Topol ogy", eds. C . D e W i tt and B . D e W i tt, G ordon and B reach, 1969;M . P.R yan,\H am i l toni an C osm ol ogy",Spri nger- Verl ag,1971.
C l assQ uantum G rav. A , O , M P Yan, Jr , 41477A .M ac as,O .O breg on and M . P.R yan,Jr. ,C l assQ uan- tum G rav.4,1477 (1987);
. P D , D I Ughes, Phys.Lett.B. 214498P. D .D ' Eath and D . I.H ughes, Phys.Lett.B 214,498 (1988).
. R Raham, Phy.Rev.Lett. 671381R .G raham ,Phy.Rev.Lett.67,1381 (1991).
P D D ' Eath, C am bri dge U ni v. PressP. D .D ' Eath,\Supersym m etri c Q uantum C osm ol ogy", C am bri dge U ni v.Press,1996.
. P V , Int.J.M od.Phys.A. 114321P. V .M oni z,Int.J.M od.Phys.A 11, 4321 (1996).
. O Breg On, J J , V I Kach, Phys. Rev. D. 531750O . O breg on, J. J. R osal es and V . I. T kach, Phys. Rev. D 53, R 1750 (1996);
V I Kach, J J , O Breg On, C l ass.Q uantum G rav. 132349V . I. T kach, J. J. R osal es and O . O breg on,C l ass.Q uantum G rav.13,2349 (1996).
R osal es,C l ass.Q uantum G rav. V I Kach, O , J J , 14339V . I.T kach,O .O breg on and J. J.R osal es,C l ass.Q uan- tum G rav.14,339 (1997);
V I Kach, J J , J , C l ass. Q uantum G rav. 153755V . I.T kach,J. J.R osal es and J. M art nez, C l ass. Q uantum G rav., 15, 3755 (1998);
C l ass. Q uantum G rav. V I Kach, J J , J Socorro, 16797V . I.T kach, J. J. R osal es and J.Socorro, C l ass. Q uan- tum G rav. 16 797 (1999);
O Bre On, J J Es, J Socorro, V I Kach, C l ass.Q uantum G rav. 162861O . O bre on, J. J. R osal es, J. Socorro and V . I.T kach,C l ass.Q uantum G rav.16, 2861 (1999).
. E Rem M Er, B Jul I A, J Scherk, S Ferrara, L , P Van N I Ew Enhui Zen, Phys. B. 147105E. C rem m er, B . Jul i a, J. Scherk, S. Ferrara, L. G i - rardel l o and P. van N i ew enhui zen, N ucl . Phys. B 147, 105 (1979);
. V S , J Loui, Phys. Lett. B. 306269V . S. K apl unovsky and J. Loui s, Phys. Lett. B 306. 269 (1993);
. P Ath, R Rnow I Tt, C Ham Seddi Ne, W orl d Sci enti c. \A ppl i ed N = 1 Supergravi tyP. N ath, R . A rnow i tt and C . C ham seddi ne, \A ppl i ed N = 1 Supergravi ty", W orl d Sci enti c,1984.
. H B , S W Ng, Phys. Rev.D. 282960H . B .H artl e and S. W .H aw ki ng,Phys. Rev.D 28,2960 (1983);
. A D Li Nde, Sov.Phys.JET P. 60211A . D .Li nde,Sov.Phys.JET P 60,211 (1984).
. F Ooper, A Hare, U Sukhatm, Phys.Rep. 251267F.C ooper,A .K hare and U .Sukhatm e,Phys.Rep.251, 267 (1995);
. C V Sul, J.Phys.A. 182917C . V .Sul um an,J.Phys.A 18,2917 (1985);
. P , J W Van H Ol Ten, Phys.B. 196509P.Sal om onson and J.W .Van H ol ten,N ucl .Phys.B 196 509 (1982).
. E W I Tten, Phys.B. 186412E.W i tten,N ucl .Phys.B 186,412 (1981);
. C R Andjbar-D Oem I, A , J Strathdee, Phys.Lett. B. D . P.Soroki n,V . I.T kach and D . V .214301Phys.BC .R andjbar- D oem i ,A .Sal am and J.Strathdee,N ucl .Phys.B 214, 419 (1983).D . P.Soroki n,V . I.T kach and D . V .Vol kov, Phys.Lett. B 161,301 (1985).
| [] |
[
"On The Effects Of Data Normalisation For Domain Adaptation On EEG Data",
"On The Effects Of Data Normalisation For Domain Adaptation On EEG Data"
] | [
"Andrea Apicella \nDepartment of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly\n\nLaboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n\n",
"Francesco Isgrò \nDepartment of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly\n\nLaboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n\n",
"Andrea Pollastro \nDepartment of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly\n\nLaboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n\n",
"Roberto Prevete \nDepartment of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly\n\nLaboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n\n"
] | [
"Department of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly",
"Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n",
"Department of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly",
"Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n",
"Department of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly",
"Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n",
"Department of Electrical Engineering and Information Technology\nUniversity of Naples Federico II\nNaplesItaly",
"Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)\n"
] | [] | In the Machine Learning (ML) literature, a well-known problem is the Dataset Shift problem where, differently from the ML standard hypothesis, the data in the training and test sets can follow different probability distributions, leading ML systems toward poor generalisation performances. This problem is intensely felt in the Brain-Computer Interface (BCI) context, where bio-signals as Electroencephalographic (EEG) are often used. In fact, EEG signals are highly non-stationary both over time and between different subjects. To overcome this problem, several proposed solutions are based on recent transfer learning approaches such as Domain Adaption (DA). In several cases, however, the actual causes of the improvements remain ambiguous. This paper focuses on the impact of data normalisation, or standardisation strategies applied together with DA methods. In particular, using SEED, DEAP, and BCI Competition IV 2a EEG datasets, we experimentally evaluated the impact of different normalization strategies applied with and without several well-known DA methods, comparing the obtained performances. It results that the choice of the normalisation strategy plays a key role on the classifier performances in DA scenarios, and interestingly, in several cases, the use of only an appropriate normalisation schema outperforms the DA technique. | 10.1016/j.engappai.2023.106205 | [
"https://export.arxiv.org/pdf/2210.01081v2.pdf"
] | 252,683,758 | 2210.01081 | d09ee7d348f42edf60ad3b2642a6cf5db5fb9209 |
On The Effects Of Data Normalisation For Domain Adaptation On EEG Data
Andrea Apicella
Department of Electrical Engineering and Information Technology
University of Naples Federico II
NaplesItaly
Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)
Francesco Isgrò
Department of Electrical Engineering and Information Technology
University of Naples Federico II
NaplesItaly
Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)
Andrea Pollastro
Department of Electrical Engineering and Information Technology
University of Naples Federico II
NaplesItaly
Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)
Roberto Prevete
Department of Electrical Engineering and Information Technology
University of Naples Federico II
NaplesItaly
Laboratory of Augmented Reality for Health Monitoring (ARHeMLab)
On The Effects Of Data Normalisation For Domain Adaptation On EEG Data
BCI · EEG · domain shift · normalization · scaling · pre- processing
In the Machine Learning (ML) literature, a well-known problem is the Dataset Shift problem where, differently from the ML standard hypothesis, the data in the training and test sets can follow different probability distributions, leading ML systems toward poor generalisation performances. This problem is intensely felt in the Brain-Computer Interface (BCI) context, where bio-signals as Electroencephalographic (EEG) are often used. In fact, EEG signals are highly non-stationary both over time and between different subjects. To overcome this problem, several proposed solutions are based on recent transfer learning approaches such as Domain Adaption (DA). In several cases, however, the actual causes of the improvements remain ambiguous. This paper focuses on the impact of data normalisation, or standardisation strategies applied together with DA methods. In particular, using SEED, DEAP, and BCI Competition IV 2a EEG datasets, we experimentally evaluated the impact of different normalization strategies applied with and without several well-known DA methods, comparing the obtained performances. It results that the choice of the normalisation strategy plays a key role on the classifier performances in DA scenarios, and interestingly, in several cases, the use of only an appropriate normalisation schema outperforms the DA technique.
Introduction
In recent years, Brain-Computer Interfaces (BCIs) have been emerging as technology allowing the human brain to communicate with external devices without the use of peripheral nerves and muscles, enhancing the interaction capability of the user with the environment. BCI applications go from severely disabled persons for rehabilitation purposes to healthy subjects for devising new types of applications [1]. In particular, BCI has a growing interest in the scientific community thanks to its implication in several medical fields, such as assisting [2], monitoring [3], enhancing [4], or diagnosing patients' emotional or physical arXiv:2210.01081v2 [cs.LG] 22 Mar 2023 states [5,6]. Current literature reports that patients subjected to BCI-based Rehabilitation methods show benefit and improvement in their injured capacities [7]. Currently, several methods exist to allow the interaction between humans and machines. In particular, several proposals for BCI methods based on Electroencephalographic (EEG) signals are made. This is because measuring and monitoring the brain's electrical activity can provide important information related to the brain's physiological, functional, and pathological status. EEG signals are particularly suitable for this aim thanks to their essential qualities, such as non-invasiveness and high temporal resolution.
Modern Machine Learning (ML) methods such as Deep Neural Networks (DNNs) are mainly used to process acquired EEG signals for several tasks, such as emotion classification, engagement and attention detection. In general, a supervised ML model learns from human classified data to generalise to new unknown data. The standard pipeline to develop an ML system consists in i) data acquisition, ii) data preprocessing, iii) feature extraction, iv) model learning v) model validation. However, the performance obtained using classical ML methods in EEG-related tasks is often poor [8]. This is mainly because the EEG signal is highly non-stationary [9], substantial differences across the EEG acquired at different times or from different subjects exist, even with the same affect felt. More in detail, the starting hypothesis of the traditional ML methods states that all the used data, whether used in the training process or not, come from the same probability distribution. This assumption results are not always verified in the case of EEG signals. In the ML literature, this is an instance of the Dataset Shift problem [10]. In a nutshell, a Dataset Shift arises when the starting ML assumption is not valid, so the distribution of the training data differs from the data distribution used outside of the training stage. In other words, a model trained on a set of EEG data acquired from a given subject at a specific time (or during a specific session) should not work as expected in classifying EEG signals acquired from a different subject at different times. In other words, the model has poor generalisation performance. A first attempt to mitigate this problem is training specific models for each subject (Subject-Dependent models) to reduce the performance gap due to using the same ML system on different users. However, non-stationary signal problems related to the different user's physical and psychological conditions at different times remain. Furthermore, a Subject-Dependent model is valid only for the subject providing training data acquisition, making these models expensive and not very versatile and uncomfortable to the user, who will be tied to initial acquisition sessions before it can actually use the system for real classifications.
For these reasons, more recent studies [11,12] have tried to overcome the limits imposed by Dataset Shift, taking into account the difference between the data probability distributions (domains) acquired at different times and from different subjects. Several proposed solutions are based on Transfer Learning (TL) [13], a set of approaches aiming to transfer the knowledge learned by one system to improve another. TL approaches can be categorised into several subfamilies, one of the best known being the Domain Adaptation (DA) family [12]. DA approaches start from the hypothesis that unlabelled data from the target domain are also available during the training stage. For example, in EEG-based emotion recognition, class-labelled data can be acquired in an initial session and labelled using a standardised protocol (e.g., questionnaires administered during the task), while class-unlabelled data can be acquired in a later session. DA provides several methods exploiting both labelled and unlabelled data to build an ML model that minimises the discrepancy between the two data distributions, leading to better classification performances on the unlabelled data. Performance improvements are thus often reported when DA methods are used in EEG-based classification studies. However, from a methodological point of view, it is essential to note that the pipeline used to develop and evaluate an ML model consists of several steps which can influence each other [14]. Consequently, in several cases [15] the causes of the improvements can remain ambiguous. This paper focuses on the impact of data normalisation, or standardisation, strategies applied together with DA methods.
Moreover, DA methods assume that all the class-labelled data used during training come from the same source probability distribution (source domain), i.e. that all the labelled data belong to a single domain. This assumption is often neglected in several EEG-based works [16,17], which consider all the labelled data together during the training stage. Indeed, in several cross-subject/cross-session studies adopting DA strategies, it is not hard to find attempts to generalise toward an unseen domain (a subject or a session) using source data acquired from several different sessions/subjects without considering their different probability distributions, i.e. treating them as belonging to the same domain. Despite this, performance improvements are often reported when DA methods are used in EEG-based classification studies. We hypothesise that these improvements may be caused not by the DA method but by data normalisation or standardisation strategies applied beforehand.
More in detail, in ML applications, normalisation functions [18] are often applied to preprocess the input features before they are fed to the ML system. Normalisation functions are typically adopted to scale or transform the features so that each feature contributes uniformly to the ML pipeline. In [18] it is shown that a given normalisation function may or may not impact the final classification performance, depending on the features and the properties of the data. Over the years, several studies involving EEG and ML methods have applied well-known normalisation functions (such as Z-score normalisation [18]) to the input features (for example, [19]); in many of these studies, the normalisation function is a de-facto standard step of an EEG ML pipeline. In particular, one of the most used strategies is Z-score normalisation, consisting of a translation and a scaling of the data with respect to its mean and variance. For instance, in [20,21,22,23,24] it is shown that the use of a normalisation function can affect cross-subject performances. Notably, the translation with respect to the mean can already be seen as a simple form of domain adaptation.
This study aims to investigate if and how some normalisation strategies affect the performance of DA methods applied to EEG signal classification. The main contribution of this work is to show that, in several EEG classification problems, the largest reduction of the domain shift seems to be due mainly to the data normalisation stage rather than to the application of several DA methods commonly used in the literature.
The paper is organised as follows: Section 2 reviews some of the best-known DA methods; Section 4 describes the DA framework and states our hypothesis; Section 5 reports the experimental assessment and the obtained results; Section 7 discusses the results. Finally, Section 8 is left to the final remarks.
Related works
Since in this work we investigate the impact of input normalisation strategies on DA methods, we first discuss DA approaches; then, we present the main standard data normalisation techniques in this context; finally, we highlight differences and similarities with related research studies.
Transfer Learning (TL) methods have recently been receiving strong attention from the scientific community. TL methods are based on the concept of Domain. Following the survey of Pan et al. [13], a Domain can be defined as a pair $\mathcal{D} = \{F, P(X)\}$, where $F$ is a feature space and $P(X)$ is the marginal probability distribution of a specific dataset $X = \{x_1, x_2, \dots, x_n\} \in F$. Domain Adaptation methods start from the hypothesis that data sampled from two different domains are available, called Source Domain and Target Domain, respectively. The main difference between Source and Target is that, while both data and labels $S_{Source} = \{(x_i, y_i)\}_{i=1}^{n}$ can be sampled from the Source domain, only feature data points $X_{Target} = \{x_j\}_{j=1}^{m} \in F_{Target}$ sampled from the Target Domain are available during the training stage, with no knowledge (unsupervised DA) or minimal knowledge (semi-supervised DA) of their real labels. DA methods are receiving a great deal of attention in the scientific community in different contexts, such as image classification and voice recognition, and several proposals have been made over the years. One trend in the literature is to adapt DA methods originally proposed in one context (e.g., image classification) to another (e.g., EEG emotion recognition). For example, in [25] methods to adapt DA strategies from the image classification context to EEG emotion classification are proposed. However, each context has its own characteristics and peculiarities, making it non-trivial to adapt a DA method from one task to another. The scientific community has attempted to adapt well-established DA methods to tasks involving EEG signal processing in the emotion recognition field.
In [15], DA methods are divided into two main categories: i) shallow DA methods, where a representation function projecting the source and the target data is given a priori, and ii) deep DA methods, where the data representation is learned as part of the DA strategy.
For instance, one of the most known shallow DA methods is Transfer Component Analysis (TCA, [26]). TCA searches for a data transformation based on the Maximum Mean Discrepancy (MMD, [27]). MMD was proposed to test the similarity between two probability distributions. An empirical estimation of MMD is given by
$$\mathrm{MMD}(X_S, X_T) = \left\| \frac{1}{|X_S|} \sum_{i=1}^{|X_S|} \phi\left(x_S^{(i)}\right) - \frac{1}{|X_T|} \sum_{i=1}^{|X_T|} \phi\left(x_T^{(i)}\right) \right\|_{\mathcal{H}}^2$$

where $X_S = \{x_S^{(i)}\}_{i=1}^{M}$ and $X_T = \{x_T^{(i)}\}_{i=1}^{N}$ are data sampled from the source and the target domain respectively, while $\phi(\cdot)$ is an appropriate feature mapping.
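As a concrete illustration, the squared MMD can be evaluated with the kernel trick, expanding the norm into kernel averages. The following NumPy sketch (our own illustration, assuming an RBF kernel and arrays of shape (n_samples, n_features); it is not the implementation used in the cited works) shows the computation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def mmd_squared(X_s, X_t, gamma=1.0):
    # Kernel-trick expansion of the squared MMD:
    # mean(K_ss) - 2 * mean(K_st) + mean(K_tt).
    return (rbf_kernel(X_s, X_s, gamma).mean()
            - 2.0 * rbf_kernel(X_s, X_t, gamma).mean()
            + rbf_kernel(X_t, X_t, gamma).mean())
```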
Starting from the hypothesis that the data are sampled from two different domains, TCA searches for a transformation of the data such that the data variance is maximally preserved while, at the same time, the MMD discrepancy between the domain distributions is reduced.
An evaluation of TCA on EEG data for emotion recognition was made in [16]. While not specifically proposed for Domain Adaptation, Kernel-PCA (KPCA, [28]) can be viewed as another shallow-DA strategy: in a nutshell, KPCA uses the kernel trick to project the data into a kernel-induced feature space and then applies PCA to the projected data.
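As an example of how such a shallow pipeline can be assembled in practice, the sketch below (a minimal illustration with placeholder hyperparameters, built on scikit-learn) fits KPCA on the union of source and target features, without using target labels, and trains an SVM on the projected source data:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def kpca_svm(X_source, y_source, X_target, n_components=50, gamma=1e-3):
    # Fit the kernel projection on source and target features together
    # (target labels are never used), then classify in the projected space.
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    kpca.fit(np.vstack([X_source, X_target]))
    clf = SVC(kernel="rbf").fit(kpca.transform(X_source), y_source)
    return clf.predict(kpca.transform(X_target))
```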
On another side, many modern deep DA strategies rely on Domain Adversarial Learning approaches, proposed in [15,29,30]. In a nutshell, these proposals learn a DNN feature representation considering both the desired task and the discrepancy between the Source and the Target domain, the goal being to make the data distributions indistinguishable for an ad-hoc domain discriminator. The final model is a deep neural network (Domain Adversarial Neural Network, DANN) predicting, for each input, both the corresponding class and the domain it belongs to. The feature mapping is therefore learned so as to maximise the class prediction performance while also maximising the domain classification loss, making the feature distributions as similar as possible. Adversarial Discriminative Domain Adaptation (ADDA) is another domain adversarial learning strategy, proposed in [31]. Differently from DANN, ADDA learns two encoders, $E_S$ and $E_T$, to represent the Source and the Target domains, respectively. $E_S$ is first trained together with a classifier $C$, exploiting the available Source domain labelled data. Then, through an adversarial learning procedure, $E_T$ is trained to map the Target domain data to the space of the $E_S$ outputs. Finally, the target data encoded by $E_T$ can be classified by $C$.
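In practice, the adversarial coupling of DANN is commonly implemented through a gradient reversal layer, which is the identity in the forward pass and flips the sign of the gradient in the backward pass. A minimal PyTorch sketch of this mechanism (our own illustration, not the original authors' code) is:

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the gradient by -lambda
    # in the backward pass, so the feature extractor is trained to
    # *maximise* the domain-classification loss.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage inside a DANN-style forward pass:
#   features = feature_extractor(x)
#   class_logits = label_predictor(features)
#   domain_logits = domain_classifier(grad_reverse(features, lambd))
```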
Domain adversarial learning methods are widely used in several studies for EEG data recognition, for example, in [31,32,33].
All the methods mentioned above only consider two domains: the Source and the Target one.
However, simpler methods used to reduce gaps between different data rely on data normalisation schemes, such as min-max or z-score normalisation, where data are transformed using simple functions that leverage statistics computed on the data itself. For instance, in [20,21,22,23,24] it is shown that a proper normalisation scheme applied to preprocess the EEG data can by itself affect cross-subject performances.
In [20] several normalisation schemes were applied following two different schemas: i) All-subjects, where the whole dataset is normalised at once, and ii) Single-subject, where the normalisation is performed individually for each subject. The All-subjects schema is the most common method used to mitigate the impact of each data value on the entire dataset; Single-subject, instead, considers each subject individually, applying the normalisation subject by subject. The authors empirically showed that Single-subject Z-score performs better in an EEG emotion recognition problem than other normalisation schemes such as min-max normalisation.
In [21] single-subject Z-score normalisation is effectively used to improve the cross-subject performance of a student engagement detection problem, while [23] scaled the range of each subject's features using the means of the subject's features across the classes.
[22] applies single-subject Z-score normalisation after each neural network layer (Stratified Normalisation).
In [24] a simple transformation of the original data is proposed for better classification performance: binary indicator features composed of 0s and 1s, depending on whether the original feature is lower or higher than the median feature value. This leads to a more effective reduction of the subject-dependent part of the EEG signal.
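In NumPy terms, this transformation can be sketched as follows (our own per-subject illustration, assuming features along the columns; [24] should be consulted for the exact procedure):

```python
import numpy as np

def median_binarize(X_subject):
    # 1 if the feature value exceeds the subject's median for that
    # feature, 0 otherwise; the subject-specific offset is discarded.
    return (X_subject > np.median(X_subject, axis=0)).astype(np.int8)
```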
In [34] the effect of different normalisation strategies is evaluated on DAN and on a newly proposed domain adaptation method (MS-MDA) in an emotion recognition context. The reported results show that the normalisation scheme can significantly impact the final classification performance.
Notation
In the remainder of this work we adopt the following notation: let $X \in \mathbb{R}^{n \times m}$ be an EEG dataset having $n$ samples and $m$ features per sample, acquired from $N$ subjects, and let $X_s \in \mathbb{R}^{n_s \times m}$ be the subset of $X$ containing only the $n_s$ samples related to a subject $s$. We denote by $\mu$ and $\sigma$ the mean and standard deviation estimates computed on $X$, and by $\mu_s$ and $\sigma_s$ the mean and standard deviation estimates computed on $X_s$.
Problem description
Dealing with the non-stationarity of EEG signals is among the major challenges of BCI research [35,36,37]. Non-stationarity of EEG signals over time can be observed in conjunction with changes in the behaviours and mental states of the observed subjects. From a statistical point of view, it corresponds to a continuous change in a class definition causing a change in the data distributions [38], thus implying a high variability of the signals among different experimental sessions for each subject. Moreover, high signal variability can also be seen among different subjects, due to individual differences expressed through EEGs [39].
In the context of a classification task for a set of subjects, two scenarios are mainly explored: building a subject-independent model shared by all the subjects, where the goal is a unique model usable on any subject with high performance, or fitting subject-dependent models, where a specific model is built for each subject. Because of the high variability of EEG signals, in both scenarios the hypothesis that the training set comes from the same probability distribution as the test set can be violated.
In the context of ML, due to the problems related to the high variability of EEG signals, subjects and/or sessions can be considered as belonging to different domains affected by a distribution shift [40]. For this reason, in several works on EEG signal classification, Domain Adaptation techniques improved classification performances (e.g., [40,41,42]).
In this paper, we aim to investigate the hypothesis that the classifier performance improvements reported by several established DA methods may be strongly conditioned by data normalisation strategies rather than by the DA techniques themselves. For instance, Chen et al. in [20] already highlighted the impact of representing each domain via z-score on the classification of the signals, but without analysing its impact on classical DA methods. We recall that the z-score $Z$ of a set of data $X$ can be computed as:
$$Z(X, \mu, \sigma) = \frac{X - \mu}{\sigma}$$

where $\mu$ and $\sigma$ are usually the mean and the standard deviation computed over the features of $X$. In fact, the authors emphasised how the application of the z-score to highly clustered domain subjects can help mitigate individual differences of the signals in the feature space.
In the presence of data coming from several subject domains affected by domain shifts and processed through DA techniques, we wonder whether the data normalisation stage might be a critical step when one applies a DA method. Our idea is intuitively represented in the simple example shown in Figure 1, where two different domains, $D_0$ and $D_1$, have the same feature and label space but are affected by a domain shift. Assuming that the two domains have the same conditional probabilities, $P(Y_{D_0}|X_{D_0}) = P(Y_{D_1}|X_{D_1})$, as in Figure 1, then after a proper per-domain z-score normalisation stage a scenario in which the conditional distributions mostly overlap can arise, thus mitigating the domain shift problem without the use of any DA method.
In the remainder of this paper, we investigate the impact of the normalization stage on DA methods through experiments on different EEG datasets. In particular, for each dataset, we compare the impact of different normalization strategies applied with and without several DA methods and the performance obtained by the DA strategies as usually described in the literature.
Experimental assessment
In this section, we investigate the impact of the normalization stage on DA methods considering the z-score as normalization strategy.
In classical ML problems, two main assumptions are that i) we have no access to test data during the training stage, and ii) training and test data belong to the same domain. In this context, to compute the z-score normalisation, µ and σ are usually estimated only on the training data, under the assumption that both the training and the test data are samples drawn from the same distribution and therefore share the same estimated parameters. On the other hand, in an Unsupervised Domain Adaptation scenario, the training and test sets are usually not drawn from the same distribution, and a set of unlabelled test data is supposed to be available during training. In this case, µ and σ can be estimated in two ways:
1. µ and σ are estimated separately on the training data and on the unlabelled test data;
2. µ and σ are estimated only on the training data, as in the classical ML scenario.
In the context of EEG data, acquisitions are made across several subjects/sessions. Since each subject/session can be considered as a different domain due to the non-stationarity of EEG signals, two different hypotheses about the domains can be made:
a. all the subjects/sessions are considered as belonging to the same domain;
b. each subject/session is considered as a different domain.
Considering these different conditions, several modalities emerge to perform z-score normalisation in the context of EEG data and DA methods. The following z-score normalisation strategies were examined in this paper (a code sketch of the four variants is given after the list):
- Z0: the training set was transformed computing µ and σ on the training data only; the test subject/session was transformed using the parameters µ and σ computed over the training set (i.e., the classical z-score normalisation applied on the training data);
- Z1: each subject/session s belonging to the training set was transformed using its own parameters µ_s and σ_s; the test subject/session was transformed using µ and σ computed on the whole training data;
- Z2: each subject/session s, regardless of the training/test partitioning, was transformed using its own parameters µ_s and σ_s;
- Z3: the training set was transformed using the parameters µ and σ computed on the whole training data; the test subject/session was transformed using its own parameters µ_s and σ_s.
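To make the four strategies concrete, the following NumPy sketch implements them; the interface is our own illustration, with `groups` assigning each training sample to its subject/session:

```python
import numpy as np

def zscore(X, mu, sigma):
    return (X - mu) / sigma

def per_domain(X, groups):
    # z-score each subject/session (domain) with its own statistics.
    Xn = np.empty_like(X, dtype=float)
    for g in np.unique(groups):
        m = groups == g
        Xn[m] = zscore(X[m], X[m].mean(axis=0), X[m].std(axis=0))
    return Xn

def normalize(X_train, groups, X_test, strategy):
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    own = zscore(X_test, X_test.mean(axis=0), X_test.std(axis=0))
    if strategy == "Z0":  # train statistics on both train and test
        return zscore(X_train, mu, sigma), zscore(X_test, mu, sigma)
    if strategy == "Z1":  # per-domain train, train statistics on test
        return per_domain(X_train, groups), zscore(X_test, mu, sigma)
    if strategy == "Z2":  # per-domain everywhere (the test set is one domain)
        return per_domain(X_train, groups), own
    if strategy == "Z3":  # train statistics on train, own statistics on test
        return zscore(X_train, mu, sigma), own
    raise ValueError(f"unknown strategy: {strategy}")
```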
Our hypothesis was explored in a series of experiments on three EEG datasets: SEED [43], BCI Competition IV 2a [44] and DEAP [45]. Further details regarding the mentioned datasets are provided in this section. We point out that our interest in these experiments is in investigating the normalisation strategies' impact on DA methods in terms of performance degradation/enhancement of classifiers and not in providing new state-of-the-art results on the involved datasets.
For each dataset, we conducted our experiments on the four normalisation strategies described above using two frameworks typically used in DA: i) a deep DA-based framework, where we analysed the performances of the two well-known deep DA methods DANN [30] and ADDA [31], applied on ANNs, comparing their performance with the one achieved using the same ANN architectures without the DA components, and ii) a shallow DA-based framework, where we compared the performances obtained using typical projection-based methods such as TCA [26] and KPCA [46], followed by a Support Vector Machine (SVM) [47] classifier, with those achieved using the SVM classifier alone. Figure 2 shows the general processing pipeline adopted in this work. Model performances were obtained adopting i) the Leave-One-Subject-Out Cross-Validation (LOSO-CV) strategy for the subject-independent case, where, at each iteration, the training set was composed of multiple training subjects while the test set was composed of just one held-out subject, and ii) the Hold-Last-Session-Out (HLSO) strategy for the subject-dependent case, where the chronologically last session was used as test set while the others were used as training set. The HLSO experiments were not performed on the DEAP dataset since it provides just one session.
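The LOSO-CV partitioning can be expressed directly with scikit-learn's LeaveOneGroupOut; the sketch below (classifier and arrays are placeholders of our own) returns the mean and standard deviation over the folds, as reported in the tables:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def loso_accuracies(X, y, subjects):
    # One fold per subject: train on all other subjects, test on the
    # held-out one (the subject-independent setting described above).
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return np.mean(scores), np.std(scores)
```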
For the shallow DA-based experiments, we followed the setup proposed in [16], searching for the best kernel among {Linear, RBF, Gaussian}, while for the ANN-based ones, according to the original architectures of the DANN and ADDA methods, fully-connected multi-layered neural networks were chosen as models for each architectural component (feature extractor, label predictor, domain classifier). Hyperparameters were tuned using a Bayesian optimisation method [48]. In particular, for each architectural component, the number of layers was constrained to a maximum of 3, the number of nodes per layer was searched in the set {1, 2, ..., 1000}, and the activation function was searched among ReLU, Sigmoid and LeakyReLU. Moreover, for the ANN-based experiments, early stopping was used as convergence criterion with a patience of 20 epochs; 10% of the training set was extracted and used as validation set, using stratified sampling [49] on the class labels; optimisation was performed using the Adam optimiser [50] with a learning rate searched in the set {0.1, 0.01, ..., 0.0001}. To ensure fairness in the experimental conditions, the best architecture found for the DANN method was also used in ADDA and in the pure ANN without DA components (i.e., without the domain classifier). The accuracy score was used to evaluate the performance of each method.
SEED
The SEED dataset consists of EEG data from 15 subjects watching 15 video clips of about 4 minutes each. Each video clip was chosen to induce positive, neutral or negative emotions. For each subject, three data sessions were collected with an interval of about one week. EEG signals were recorded on 62 channels using the ESI Neuroscan System 3, placed according to the international 10-20 system, at a sampling rate of 1000 Hz and downsampled to 200 Hz. Following [51], we considered the pre-computed Differential Entropy (DE) features smoothed by Linear Dynamic Systems (LDS). DE features are pre-computed, for each second and each channel, over the following five bands: Delta (1-3 Hz); Theta (4-7 Hz); Alpha (8-13 Hz); Beta (14-30 Hz); Gamma (31-50 Hz). As in [51], following a sampling stratified on class labels, 1000 samples per subject were randomly selected as training set due to the limited available memory and computation time.
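For reference, the DE of a band-filtered EEG segment is commonly computed under a Gaussian assumption as DE = ½ log(2πeσ²). A sketch of this computation (our own illustration using SciPy; the filter order is a placeholder, and the SEED features used here are distributed pre-computed) is:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}

def de_features(signal, fs=200):
    # Differential entropy per band under a Gaussian assumption:
    # DE = 0.5 * log(2 * pi * e * var).
    feats = {}
    for band, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, signal)
        feats[band] = 0.5 * np.log(2 * np.pi * np.e * np.var(x))
    return feats
```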
BCI Competition IV 2a
The BCI Competition IV 2a dataset consists of EEG data acquired from 9 subjects during motor imagery tasks. The dataset involves 4 motor imagery classes: left hand, right hand, feet, and tongue. EEG signals were recorded by 22 Ag/AgCl electrodes at a sampling rate of 250 Hz and filtered using an IIR Butterworth filter of order 5 with a bandpass of 8 to 30 Hz. The four-class classification problem was reduced to a binary one by considering only the left-hand and right-hand labels, as in, for example, [52,53,54]. Finally, the Common Spatial Pattern (CSP) [55] was applied, since it is a widely recognised feature extraction technique for classification tasks in motor imagery studies [56].
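A preprocessing sketch consistent with this description (using SciPy for the stated order-5, 8-30 Hz Butterworth filter and the MNE CSP implementation mentioned in the footnote; the number of CSP components is a placeholder, as it is not specified above) could be:

```python
from scipy.signal import butter, sosfiltfilt
from mne.decoding import CSP

def preprocess(epochs, labels, fs=250):
    # epochs: array of shape (n_trials, n_channels, n_times).
    sos = butter(5, [8, 30], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    # CSP spatial filtering; n_components is a placeholder choice.
    csp = CSP(n_components=4, log=True)
    return csp.fit_transform(filtered, labels)
```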
DEAP
The DEAP dataset consists of EEG data acquired from 32 subjects while they were exposed to 40 music videos of about 1 minute each. EEG signals were recorded on 32 channels using the Biosemi ActiveTwo device 4 at a sampling rate of 512 Hz and downsampled to 128 Hz. After watching each video, each subject was required to rate it in terms of valence (pleasantness level), arousal (excitation level), dominance (control power), liking (preference) and familiarity (knowledge of the stimulus), where each rating ranged from one (weakest) to nine (strongest), except the familiarity level, which ranged from one to five. Following [40], we labelled each trial discretising and partitioning the dimensional emotion space as follows (a code sketch of this rule is given after the list):
- positive, if the valence rating is greater than 7;
- neutral, if the valence rating is smaller than 7 and greater than 3;
- negative, if the valence rating is smaller than 3.
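Expressed as a small helper function (our own sketch; note that the rules above leave ratings exactly equal to 3 or 7 unspecified, here mapped to neutral by convention):

```python
def valence_to_label(valence):
    # Discretisation of the DEAP valence rating used in this work.
    if valence > 7:
        return "positive"
    if valence < 3:
        return "negative"
    return "neutral"  # boundary values 3 and 7 fall here by convention
```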
Moreover, as in [40], since trials 18, 16 and 38 had the most participants reporting successfully induced positive, neutral and negative emotions respectively, a subset of subjects that reported a successful emotion induction on these trials was selected. In particular, data related to subjects 2, 5, 10, 11, 12, 13, 14, 15, 19, 22, 24, 26, 28, and 31 were involved in the experimental assessments. Finally, DE was applied to the EEG data in the Delta, Theta, Alpha, Beta and Gamma bands, as for the SEED dataset.
Results
In this Section, we present the results collected in our series of experiments. For each experiment, the results obtained without normalisation are reported under the heading "noNorm". For the ANN- and SVM-based experiments, "noDA-ANN" and "noDA-SVM" refer to the pure architectures without the DA components (thus, only with their feature extractor and label predictor). For the subject-independent experiments, we report the mean and the standard deviation over the folds for each type of normalisation; for the subject-dependent experiments, we report the mean and the standard deviation over the subjects.
SEED
In Table 1 the results of the subject-independent experiments on SEED are reported. Regarding the deep-DA experiments, for the Z0 and Z2 normalisations, the plain ANN leads to better results than those obtained through the DANN and ADDA methods. For the Z3 normalisation, the ANN leads to results comparable with those reached using DANN, but higher than those reached using ADDA; for the Z1 normalisation, NoDA-ANN performances are lower than those of DANN but higher than those of ADDA (the same holds for the noNorm case). The best performance is achieved by the NoDA-ANN case on the Z2 normalisation, with a mean accuracy of 81.52 ± 7.26. Thus, the Z2 normalisation alone outperforms the other tested methods. For the shallow-DA based experiments, instead, we can observe that for the Z0 normalisation the SVM leads to lower results than TCA-SVM but higher than KPCA-SVM; for the Z1 and Z2 normalisations, NoDA-SVM leads to lower results than both DA techniques; for the Z3 normalisation, NoDA-SVM reaches lower results than TCA-SVM but comparable with KPCA-SVM; for the noNorm case, NoDA-SVM leads to results higher than TCA-SVM but lower than KPCA-SVM. The best performance is achieved by the KPCA-SVM case on the Z2 normalisation, with a mean accuracy of 80.74 ± 6.11. However, the most significant improvement seems to be obtained by the Z2 normalisation in the NoDA-SVM case, improving the accuracy to 74.71 ± 8.47 % from an initial 52.96 ± 9.82 % without any normalisation, while the use of DA methods gives a further improvement of about 6 %. In Table 2 the results of the subject-dependent experiments on SEED are reported. For the deep-DA experiments, in the noNorm, Z0, Z1 and Z2 cases NoDA-ANN leads to higher results than the DA methods; for the Z3 normalisation, the ANN leads to lower results than the DANN method but higher than the ADDA method. The best performance is achieved by the NoDA-ANN case on the Z2 normalisation, with a mean accuracy of 83.93 ± 9.60. Regarding the shallow-DA based experiments, in the noNorm, Z2 and Z3 cases noDA-SVM achieves better results than the SVM applied with DA methods; for the Z0 normalisation, NoDA-SVM performances are lower than those of TCA-SVM but higher than those of KPCA-SVM; for the Z1 normalisation, NoDA-SVM leads to results lower than those of TCA-SVM but comparable with those obtained through KPCA-SVM. The best performance is achieved by the NoDA-SVM case on the Z3 normalisation, with a mean accuracy of 86.56 ± 8.15. Therefore, also in this case the use of a simple normalisation method seems to be more effective than the selected DA methods.
BCI Competition IV 2a
Differently from the experiments on SEED, only results related to the Z1, Z2 and Z3 normalisations are reported, since the CSP implementation 5 we used already performs a z-score normalisation, making the Z0 normalisation unnecessary in our experiments. In Table 3 the results of the subject-independent experiments on BCI Competition IV 2a are reported, and in Table 4 those of the subject-dependent experiments. Regarding the subject-dependent deep-DA experiments, in the noNorm, Z2 and Z3 cases the ANN leads to higher results than the DA methods, while for the Z1 normalisation NoDA-ANN achieves lower performances than both DA methods. The best performance is achieved by the NoDA-ANN case on the Z2 normalisation, with a mean accuracy of 64.97 ± 14.28.
For the shallow-DA based subject-dependent experiments, NoDA-SVM always reaches accuracies lower than the DA methods, except in the noNorm and Z3 cases, where it leads to higher accuracies than TCA-SVM. The best performance is achieved by the KPCA-SVM case on the Z2 normalisation, with a mean accuracy of 68.36 ± 11.47.
DEAP
Differently from SEED and BCI Competition IV 2a, the experiments on DEAP were performed only for the subject-independent case, since the dataset provides a single session. The results are reported in Table 5.
Discussion
The experimental results suggest that the chosen normalisation method plays a crucial role in improving the classification performances of DA approaches on EEG data. We focus our discussion on the results related to the SEED dataset, but similar observations can also be made for BCI Competition IV 2a and DEAP.
Subject-independent experiments
In Table 6, data related to one fold of a subject-independent experiment are represented using t-SNE [58], before the application of any DA method, for each normalisation type.

Table 6: Graphical representation of SEED data in the subject-independent experiments after each type of normalisation (columns Z0, Z1, Z2, Z3). On the first row, all domains involved in the series of experiments are marked with different colours; on the second row, training and test data are marked in blue and red, respectively; on the third row, data are distinguished by their label. In each column, data transformed by the corresponding normalisation type are shown.

It is interesting to notice how, for the Z1 and Z2 normalisations, a scenario similar to Figure 1 occurs on these data: after the normalisation stage, clusters of data having the same labels are observable, corroborating the hypothesis that the conditional distributions of the subjects could be equal or similar, thus allowing the normalisation to reduce the domain shifts without DA methods.
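Such a visualisation can be produced, for instance, with scikit-learn's TSNE; the sketch below uses placeholder parameters of our own, not necessarily those used for Table 6:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(X, colors, title):
    # 2-D embedding of the (normalised) feature vectors; perplexity
    # is a placeholder choice.
    emb = TSNE(n_components=2, perplexity=30).fit_transform(X)
    plt.scatter(emb[:, 0], emb[:, 1], c=colors, s=4)
    plt.title(title)
    plt.show()
```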
In the ANN-based experiments, we can notice that the ANN model achieves the best performances on the Z2 normalisation without using any DA method. Moreover, this can also be observed from how the performances of the ANN are distributed as the type of normalisation changes: accuracy means range from a minimum of ∼ 46 % to a maximum of ∼ 82 %.
On the other hand, for the Projection Matrix-based experiments, the highest performances are reached by the KPCA-SVM method on the Z2 normalisation. According to how the accuracy means vary as the normalisation type changes (from a minimum of ∼ 59 % to a maximum of ∼ 81 %), we can hypothesise that the right balance between DA method and normalisation type also has an impact on performances.
Comparing the ANN-based experiments with the Projection Matrix-based ones, we can conclude that the impact of DA methods can be affected by the choice of the model: in the first case, using ANNs, the DA methods do not give any contribution; in the second case, the DA method contributes to improving the SVM performances.
Subject-dependent experiments
In Table 7, data related to one subject sampled during the subject-dependent experiments are represented, before applying any DA method, for each normalisation type.
Table 7: Graphical representation of SEED data in the subject-dependent experiments after each type of normalisation (columns Z0, Z1, Z2, Z3). On the first row, all domains involved in the series of experiments are marked with different colours; on the second row, training and test data are marked in blue and red, respectively; on the third row, data are distinguished by their label. In each column, data transformed by the corresponding normalisation type are shown.

Also in this case, a scenario similar to Figure 1 is verified, particularly for the Z2 normalisation, where data having the same labels are clustered, leading us to suppose that the domains could have equal or similar conditional distributions. In this case, in the deep DA-based experiments, the best results are achieved by noDA-ANN on the Z2 normalisation, without using DA methods; in contrast, in the shallow DA-based experiments, the best results are achieved by noDA-SVM on the Z3 normalisation. Also for the subject-dependent experiments, the performances of the best methods change as the normalisation type changes: accuracy means vary from a minimum of ∼ 45 % to a maximum of ∼ 84 % (ANN-based) and from a minimum of ∼ 64 % to a maximum of ∼ 87 % (shallow DA-based). Thus, as in the subject-independent case, we can observe that the normalisation type significantly impacts the classifier performances in DA problems. Consequently, a careful choice of normalisation type, DA method and classification model should be made. To sum up, when one develops and tests a DA method to classify EEG data, the effect of the normalisation step on the classification performances should be carefully weighed, and a suitable choice of the normalisation method can drastically improve the effectiveness of the DA method, or even avoid the need for DA methods altogether.
Conclusions
In this work, we examined the effect of data normalisation on several DA approaches. Starting from the hypothesis that a prior data normalisation could strongly condition the performances reported by several DA methods, and considering the z-score as the base normalisation procedure, we first defined four z-score variations. We then conducted several experiments on different EEG datasets to analyse the effect of each normalisation strategy applied with and without DA methods. In particular, we dealt with two scenarios typically encountered in EEG classification problems, the subject-independent and subject-dependent cases, where each subject and session can be considered as a different domain due to the non-stationarity of EEG signals.
The results show that the normalisation stage highly impacts classifier performances in several DA scenarios. In several cases, the best results are achieved by pure ANNs (deep DA) and SVMs (shallow DA) combined with an appropriate normalisation schema, without the need for the investigated DA techniques. In other cases, however, the best results are achieved by DA methods combined with a particular type of normalisation, suggesting that searching for the right balance between DA method and normalisation type can improve classifier performances.
Understanding the impact that normalisation strategies have on DA approaches can help improve the performances obtained through DA methods or, in some cases, avoid DA methods altogether, which often turn out to be highly time- and hardware-consuming, leading moreover to simpler models.
Data availability
The datasets used during the current study are available at:
- SEED: https://bcmi.sjtu.edu.cn/home/seed/
- BCI Competition IV 2a: https://www.bbci.de/competition/iv/
- DEAP: https://www.eecs.qmul.ac.uk/mmv/datasets/deap/
Fig. 1: A graphical representation of the hypothesis explored in this work. Given two domains D_0 and D_1 sharing the same feature and label spaces and affected by a domain shift (left), the application of a z-score normalisation could reduce the domain shift between the domains (right) regardless of any DA technique.
Fig. 2: Graphical representation of the processing pipeline adopted in this work. After the data partitioning and normalisation stages, the impact of the data normalisation is inspected on a given ML technique M with and without DA methods (M_DA and M_noDA, respectively). Then, performances are evaluated on the test set through the models m_DA and m_noDA fitted during the training stages, and compared.
Table 1: SEED - Leave-One-Subject-Out Cross-Validation Accuracy, Mean % (Std %). Deep DA: noDA-ANN, DANN, ADDA; shallow DA: noDA-SVM, TCA-SVM, KPCA-SVM.

|        | noDA-ANN      | DANN          | ADDA          | noDA-SVM      | TCA-SVM       | KPCA-SVM      |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| noNorm | 45.50 (13.18) | 50.65 (12.19) | 33.13 (0.22)  | 52.96 (9.82)  | 46.68 (13.34) | 58.61 (7.50)  |
| Z0     | 48.31 (14.09) | 43.60 (11.86) | 43.97 (12.86) | 52.47 (11.33) | 71.58 (7.16)  | 48.16 (13.22) |
| Z1     | 50.54 (15.13) | 60.35 (21.45) | 46.53 (12.92) | 52.32 (15.06) | 79.70 (8.98)  | 53.59 (16.91) |
| Z2     | 81.52 (7.26)  | 79.03 (7.71)  | 70.43 (14.17) | 74.71 (8.47)  | 80.09 (6.51)  | 80.74 (6.11)  |
| Z3     | 75.22 (7.85)  | 75.79 (4.78)  | 60.57 (13.92) | 73.24 (8.37)  | 76.37 (7.44)  | 73.91 (7.31)  |
Table 2: SEED - Cross-Session Accuracy, Mean % (Std %). Deep DA: noDA-ANN, DANN, ADDA; shallow DA: noDA-SVM, TCA-SVM, KPCA-SVM.

|        | noDA-ANN      | DANN          | ADDA          | noDA-SVM      | TCA-SVM       | KPCA-SVM      |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| noNorm | 45.47 (20.43) | 43.15 (17.43) | 37.10 (10.83) | 63.63 (18.92) | 47.52 (12.45) | 61.31 (14.33) |
| Z0     | 64.85 (17.38) | 51.76 (17.71) | 57.33 (19.52) | 62.32 (17.33) | 79.56 (10.21) | 61.66 (16.88) |
| Z1     | 66.37 (16.16) | 55.25 (19.95) | 65.07 (17.50) | 63.39 (20.02) | 76.96 (12.99) | 63.25 (15.92) |
| Z2     | 83.93 (9.60)  | 83.84 (10.55) | 76.80 (12.50) | 85.59 (9.89)  | 81.67 (11.83) | 83.62 (9.45)  |
| Z3     | 83.09 (9.98)  | 84.43 (9.67)  | 77.80 (12.82) | 86.56 (8.15)  | 83.15 (10.92) | 84.09 (10.45) |
Table 3: BCI Comp. IV 2a - Leave-One-Subject-Out Cross-Validation Accuracy, Mean % (Std %). Deep DA: noDA-ANN, DANN, ADDA; shallow DA: noDA-SVM, TCA-SVM, KPCA-SVM.

|        | noDA-ANN      | DANN          | ADDA          | noDA-SVM      | TCA-SVM       | KPCA-SVM      |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| noNorm | 61.42 (13.36) | 62.19 (13.43) | 63.22 (12.90) | 61.03 (12.70) | 56.10 (10.62) | 61.11 (12.62) |
| Z1     | 62.11 (13.54) | 61.73 (13.02) | 62.84 (13.77) | 61.72 (10.30) | 61.03 (11.76) | 61.73 (10.93) |
| Z2     | 67.98 (11.81) | 67.52 (11.40) | 68.58 (10.96) | 68.52 (11.35) | 67.90 (11.89) | 68.44 (12.02) |
| Z3     | 63.36 (11.48) | 68.13 (13.08) | 59.77 (12.80) | 67.90 (12.35) | 67.44 (13.55) | 67.82 (12.48) |
For the deep-DA experiments, NoDA-ANN always leads to results lower than or comparable with the DA methods, except for the Z3 normalisation, where its performances are lower than DANN's but higher than ADDA's. The best performance is achieved by the DANN case on the Z3 normalisation, with a mean accuracy of 68.13 ± 13.08. However, the use of the DANN method alone, without any normalisation, gives an improvement of less than 1 %, while the use of the normalisation alone can lead to an improvement of about 6 %, showing that the normalisation can have a significant effect on the final performance. For the shallow-DA based experiments instead, for each normalisation type including noNorm, NoDA-SVM always reaches results higher than or comparable with the DA methods. The best performance is achieved by the NoDA-SVM case on the Z2 normalisation, with a mean accuracy of 68.52 ± 11.35.
In Table 4 the results of the subject-dependent experiments on BCI Competition IV 2a are reported.
Table 4: BCI Comp. IV 2a - Cross-Session Accuracy, Mean % (Std %). Deep DA: noDA-ANN, DANN, ADDA; shallow DA: noDA-SVM, TCA-SVM, KPCA-SVM.

|        | noDA-ANN      | DANN          | ADDA          | noDA-SVM      | TCA-SVM       | KPCA-SVM      |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| noNorm | 57.56 (10.47) | 55.94 (8.87)  | 54.79 (15.46) | 55.79 (9.38)  | 50.54 (1.52)  | 60.88 (13.31) |
| Z1     | 54.63 (10.32) | 56.94 (8.63)  | 59.39 (8.87)  | 55.48 (9.62)  | 55.79 (9.63)  | 61.50 (10.04) |
| Z2     | 64.97 (14.28) | 64.12 (15.39) | 58.24 (15.71) | 63.43 (15.10) | 63.66 (15.17) | 68.36 (11.47) |
| Z3     | 63.27 (13.77) | 62.27 (13.96) | 61.30 (16.64) | 67.82 (12.48) | 67.43 (13.55) | 68.13 (12.22) |
Table 5: DEAP - Leave-One-Subject-Out Cross-Validation Accuracy, Mean % (Std %). Deep DA: noDA-ANN, DANN, ADDA; shallow DA: noDA-SVM, TCA-SVM, KPCA-SVM.

|        | noDA-ANN      | DANN          | ADDA          | noDA-SVM      | TCA-SVM       | KPCA-SVM      |
|--------|---------------|---------------|---------------|---------------|---------------|---------------|
| noNorm | 34.21 (4.11)  | 34.21 (3.15)  | 33.93 (2.15)  | 31.31 (9.76)  | 36.90 (12.34) | 41.23 (10.02) |
| Z0     | 30.44 (10.13) | 31.11 (13.38) | 34.92 (13.64) | 32.56 (1.94)  | 41.23 (13.26) | 38.33 (9.86)  |
| Z1     | 35.60 (8.72)  | 36.51 (7.90)  | 34.92 (13.64) | 32.55 (10.83) | 42.66 (11.70) | 34.52 (7.42)  |
| Z2     | 36.67 (12.45) | 35.52 (13.84) | 32.94 (11.59) | 32.13 (14.77) | 42.46 (15.99) | 40.67 (15.13) |
| Z3     | 39.33 (14.08) | 41.27 (14.91) | 38.89 (14.16) | 33.73 (14.75) | 42.62 (17.06) | 43.77 (12.68) |

For the deep-DA experiments, on the Z0 normalisation, NoDA-ANN shows lower results than the DA methods; for the Z1 and Z3 normalisations, the ANN performances are lower than those of DANN but higher than those obtained through ADDA; for the Z2 normalisation, NoDA-ANN results are higher than those of the DA methods; for the noNorm case, NoDA-ANN results are equal to DANN's and higher than ADDA's. The best performance is achieved by the DANN case on the Z3 normalisation, with a mean accuracy of 41.27 ± 14.91. Regarding the shallow-DA based experiments, noDA-SVM always leads to lower results than the DA methods. The best performance is achieved by the KPCA-SVM case on the Z3 normalisation, with a mean accuracy of 43.77 ± 12.68. In this case the DA methods give an important improvement in performance, further increased by the normalisation methods, showing the importance of using both of them.
https://compumedicsneuroscan.com/
https://www.biosemi.com
In this work we performed CSP on the data using the implementation provided by the Python package MNE [57].
[1] Christian Mühl, Brendan Allison, Anton Nijholt, and Guillaume Chanel. A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges. Brain-Computer Interfaces, 1(2):66-84, 2014.
[2] Muhammad Ahmed Khan, Rig Das, Helle K Iversen, and Sadasivan Puthusserypady. Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application. Computers in Biology and Medicine, 123:103843, 2020.
[3] Pasquale Arpaia, Egidio De Benedetto, and Luigi Duraccio. Design, implementation, and metrological characterization of a wearable, integrated AR-BCI hands-free system for health 4.0 monitoring. Measurement, 177:109280, 2021.
[4] Jin Huang, Chun Yu, Yuntao Wang, Yuhang Zhao, Siqi Liu, Chou Mo, Jie Liu, Lie Zhang, and Yuanchun Shi. Focus: enhancing children's engagement in reading by using contextual BCI training sessions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1905-1908, 2014.
[5] Pasquale Arpaia, Antonio Esposito, Angela Natalizio, and Marco Parvis. How to successfully classify EEG in motor imagery BCI: a metrological analysis of the state of the art. Journal of Neural Engineering, 2022.
[6] A. Apicella, P. Arpaia, G. Mastrati, and N. Moccaldi. EEG-based detection of emotional valence towards a reproducible measurement of emotions. Scientific Reports, 11(1), 2021.
[7] Elif Gümüslü, Duygun Erol Barkana, and Hatice Köse. Emotion recognition using EEG and physiological data for robot-assisted rehabilitation systems. In Companion Publication of the 2020 International Conference on Multimodal Interaction, pages 379-387, 2020.
[8] Jinpeng Li, Zhaoxiang Zhang, and Huiguang He. Hierarchical convolutional neural networks for EEG-based emotion recognition. Cognitive Computation, 10(2):368-380, 2018.
[9] Dongmin Huang, Sijin Zhou, and Dazhi Jiang. Generator-based domain adaptation method with knowledge free for cross-subject EEG emotion recognition. Cognitive Computation, pages 1-12, 2022.
[10] Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset Shift in Machine Learning. MIT Press, 2008.
[11] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[12] Wouter M Kouw and Marco Loog. A review of domain adaptation without target labels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3):766-785, 2019.
[13] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2009.
[14] Elizamary de Souza Nascimento, Iftekhar Ahmed, Edson Oliveira, Márcio Piedade Palheta, Igor Steinmacher, and Tayana Conte. Understanding development process of machine learning systems: Challenges and solutions. In 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), pages 1-6. IEEE, 2019.
[15] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180-1189. PMLR, 2015.
[16] Wei-Long Zheng, Yong-Qi Zhang, Jia-Yi Zhu, and Bao-Liang Lu. Transfer components between subjects for EEG-based emotion recognition. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pages 917-922. IEEE, 2015.
[17] Xin Chai, Qisong Wang, Yongping Zhao, Yongqiang Li, Dan Liu, Xin Liu, and Ou Bai. A fast, efficient domain adaptation technique for cross-domain electroencephalography (EEG)-based emotion recognition. Sensors, 17(5):1014, 2017.
[18] Dalwinder Singh and Birmohan Singh. Investigating the impact of data normalization on classification performance. Applied Soft Computing, 97:105524, 2020.
[19] Feng Duan, Hao Jia, Zhe Sun, Kai Zhang, Yangyang Dai, and Yu Zhang. Decoding premovement patterns with task-related component analysis. Cognitive Computation, 13(5):1389-1405, 2021.
[20] Huayu Chen, Shuting Sun, Jianxiu Li, Ruilan Yu, Nan Li, Xiaowei Li, and Bin Hu. Personal-Zscore: Eliminating individual difference for EEG-based cross-subject emotion recognition. IEEE Transactions on Affective Computing, 2021.
[21] Andrea Apicella, Pasquale Arpaia, Mirco Frosolone, Giovanni Improta, Nicola Moccaldi, and Andrea Pollastro. EEG-based measurement system for monitoring student engagement in learning 4.0. Scientific Reports, 12(1):1-13, 2022.
[22] Javier Fernandez, Nicholas Guttenberg, Olaf Witkowski, and Antoine Pasquali. Cross-subject EEG-based emotion recognition through neural networks with stratified normalization. Frontiers in Neuroscience, 15:11, 2021.
[23] Jing Fan, Joshua W Wade, Alexandra P Key, Zachary E Warren, and Nilanjan Sarkar. EEG-based affect and workload recognition in a virtual driving environment for ASD intervention. IEEE Transactions on Biomedical Engineering, 65(1):43-51, 2017.
[24] Miguel Arevalillo-Herráez, Maximo Cobos, Sandra Roger, and Miguel García-Pineda. Combining inter-subject modeling with a subject-based data transformation to improve affect recognition from EEG signals. Sensors, 19(13):2999, 2019.
[25] Juan Lorenzo Hagad, Tsukasa Kimura, Ken-ichi Fukui, and Masayuki Numao. Learning subject-generalized topographical EEG embeddings using deep variational autoencoders and domain-adversarial regularization. Sensors, 21(5):1792, 2021.
[26] Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2010.
[27] Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex Smola. A kernel method for the two-sample-problem. Advances in Neural Information Processing Systems, 19, 2006.
[28] Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Kernel principal component analysis. In International Conference on Artificial Neural Networks, pages 583-588. Springer, 1997.
[29] Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.
[30] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016.
[31] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7167-7176, 2017.
[32] Guangcheng Bao, Ning Zhuang, Li Tong, Bin Yan, Jun Shu, Linyuan Wang, Ying Zeng, and Zhichong Shen. Two-level domain adaptation neural network for EEG-based emotion recognition. Frontiers in Human Neuroscience, 14, 2020.
[33] Yang Li, Wenming Zheng, Lei Wang, Yuan Zong, and Zhen Cui. From regional to global brain: A novel hierarchical spatial-temporal neural network model for EEG emotion recognition. IEEE Transactions on Affective Computing, 2019.
[34] Hao Chen, Ming Jin, Zhunan Li, Cunhang Fan, Jinpeng Li, and Huiguang He. MS-MDA: Multisource marginal distribution adaptation for cross-subject and cross-session EEG emotion recognition. Frontiers in Neuroscience, 15, 2021.
[35] S Blanco, H Garcia, R Quian Quiroga, L Romanelli, and OA Rosso. Stationarity of the EEG series. IEEE Engineering in Medicine and Biology Magazine, 14(4):395-399, 1995.
[36] Ahmed M Azab, Jake Toth, Lyudmila S Mihaylova, and Mahnaz Arvaneh. A review on transfer learning approaches in brain-computer interface. Signal Processing and Machine Learning for Brain-Machine Interfaces, pages 81-98, 2018.
[37] Florian Yger, Maxime Berar, and Fabien Lotte. Riemannian approaches in brain-computer interfaces: a review. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(10):1753-1762, 2016.
[38] Sidath Ravindra Liyanage, Cuntai Guan, Haihong Zhang, Kai Keng Ang, JianXin Xu, and Tong Heng Lee. Dynamically weighted ensemble classification for nonstationary EEG processing. Journal of Neural Engineering, 10(3):036007, 2013.
[39] Jed A Meltzer, Michiro Negishi, Linda C Mayes, and R Todd Constable. Individual differences in EEG theta and alpha dynamics during working memory correlate with fMRI responses across subjects. Clinical Neurophysiology, 118(11):2419-2436, 2007.
[40] Zirui Lan, Olga Sourina, Lipo Wang, Reinhold Scherer, and Gernot R Müller-Putz. Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets. IEEE Transactions on Cognitive and Developmental Systems, 11(1):85-94, 2018.
[41] Jinpeng Li, Shuang Qiu, Changde Du, Yixin Wang, and Huiguang He. Domain adaptation for EEG emotion recognition based on latent representation similarity. IEEE Transactions on Cognitive and Developmental Systems, 12(2):344-353, 2019.
[42] He Zhao, Qingqing Zheng, Kai Ma, Huiqi Li, and Yefeng Zheng. Deep representation-based domain adaptation for nonstationary EEG classification. IEEE Transactions on Neural Networks and Learning Systems, 32(2):535-545, 2020.
[43] Wei-Long Zheng and Bao-Liang Lu. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, 7(3):162-175, 2015.
[45] Sander Koelstra, Christian Muhl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3(1):18-31, 2011.
[46] Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299-1319, 1998.
[47] William S Noble. What is a support vector machine? Nature Biotechnology, 24(12):1565-1567, 2006.
[48] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems, 25, 2012.
[49] Van L Parsons. Stratified sampling. Wiley StatsRef: Statistics Reference Online, pages 1-11, 2014.
[50] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Reducing the subject variability of eeg signals with adversarial domain generalization. Bo-Qun Ma, He Li, Wei-Long Zheng, Bao-Liang Lu, International Conference on Neural Information Processing. SpringerBo-Qun Ma, He Li, Wei-Long Zheng, and Bao-Liang Lu. Reducing the subject variability of eeg signals with adversarial domain generalization. In International Conference on Neural Information Processing, pages 30-42. Springer, 2019.
Reducing execution time for real-time motor imagery based bci systems. Sahar Selim, Manal Tantawi, Howida Shedeed, Amr Badr, International Conference on Advanced Intelligent Systems and Informatics. SpringerSahar Selim, Manal Tantawi, Howida Shedeed, and Amr Badr. Reducing execution time for real-time motor imagery based bci systems. In International Conference on Advanced Intelligent Systems and Informatics, pages 555-565. Springer, 2016.
Time window and frequency band optimization using regularized neighbourhood component analysis for multi-view motor imagery eeg classification. Nitesh Singh Malan, Shiru Sharma, Biomedical Signal Processing and Control. 67102550Nitesh Singh Malan and Shiru Sharma. Time window and frequency band opti- mization using regularized neighbourhood component analysis for multi-view mo- tor imagery eeg classification. Biomedical Signal Processing and Control, 67:102550, 2021.
Complex common spatial patterns on time-frequency decomposed eeg for brain-computer interface. Vasilisa Mishuhina, Xudong Jiang, Pattern Recognition. 115107918Vasilisa Mishuhina and Xudong Jiang. Complex common spatial patterns on time-frequency decomposed eeg for brain-computer interface. Pattern Recognition, 115:107918, 2021.
Optimizing spatial filters for robust eeg single-trial analysis. Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe, Klaus-Robert Muller, IEEE Signal processing magazine. 251Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe, and Klaus-Robert Muller. Optimizing spatial filters for robust eeg single-trial anal- ysis. IEEE Signal processing magazine, 25(1):41-56, 2007.
Eeg-based brain-computer interfaces using motor-imagery: Techniques and challenges. Natasha Padfield, Jaime Zabalza, Huimin Zhao, Valentin Masero, Jinchang Ren, Sensors. 1961423Natasha Padfield, Jaime Zabalza, Huimin Zhao, Valentin Masero, and Jinchang Ren. Eeg-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors, 19(6):1423, 2019.
MEG and EEG data analysis with MNE-Python. Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, Matti S Hämäläinen, Frontiers in Neuroscience. 7267Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A. Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, and Matti S. Hämäläinen. MEG and EEG data analysis with MNE- Python. Frontiers in Neuroscience, 7(267):1-13, 2013.
Journal of machine learning research. Laurens Van Der Maaten, Geoffrey Hinton, 9Visualizing data using t-sneLaurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Jour- nal of machine learning research, 9(11), 2008.
| [] |
[
"A goodness-of-fit test based on a recursive product of spacings",
"A goodness-of-fit test based on a recursive product of spacings"
] | [
"Philipp Eller [email protected]@mpp.mpg.de ",
"Lolian Shtembari ",
"\nTechnical University Munich\nGarchingGermany\n",
"\nMax Planck Institute for Physics\nExzellenzcluster ORIGINS\nGarching, MunichGermany, Germany\n"
] | [
"Technical University Munich\nGarchingGermany",
"Max Planck Institute for Physics\nExzellenzcluster ORIGINS\nGarching, MunichGermany, Germany"
] | [] | A: We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to detect non-uniformity in random samples, or short-lived features in event time series. Under some conditions, this new test can outperform existing ones, such as the well known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided including a detailed discussion of the parameterization of its distribution via asymptotic bootstrapping as well as a novel per-quantile error estimation of the empirical distribution. Two example applications are provided, using the test to boost the sensitivity in generic "bump hunting", and employing the test to detect supernovae. The article is rounded off with an extended performance comparison to other, established goodness-of-fit tests. | 10.1088/1748-0221/18/03/p03048 | [
"https://export.arxiv.org/pdf/2111.02252v2.pdf"
] | 241,033,267 | 2111.02252 | 166dd4b80e33e8b848da0b50fee80a19345d2a72 |
A goodness-of-fit test based on a recursive product of spacings
26 Oct 2022
Philipp Eller [email protected]@mpp.mpg.de
Lolian Shtembari
Technical University Munich
GarchingGermany
Max Planck Institute for Physics
Exzellenzcluster ORIGINS
Garching, MunichGermany, Germany
A goodness-of-fit test based on a recursive product of spacings
26 Oct 2022P JINSTK : test statistichypothesis testgoodness of fitspacinginterarrival timewaiting timegap A X P : 211102252
Abstract: We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to detect non-uniformity in random samples, or short-lived features in event time series. Under some conditions, this new test can outperform existing ones, such as the well known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided including a detailed discussion of the parameterization of its distribution via asymptotic bootstrapping as well as a novel per-quantile error estimation of the empirical distribution. Two example applications are provided, using the test to boost the sensitivity in generic "bump hunting", and employing the test to detect supernovae. The article is rounded off with an extended performance comparison to other, established goodness-of-fit tests.
Introduction
Assessing the goodness-of-fit of a distribution given a number of random samples is an often-encountered problem in data analysis. Such statistical hypothesis tests find applications in many fields, ranging from the natural and social sciences over engineering to quality control. Several nonparametric tests exist, some of which have become standard tools, including the Kolmogorov-Smirnov (KS) test [1, 2] or the Anderson-Darling (AD) test [3]. Ref. [4] provides a comprehensive overview of existing tests, and a comparison of their performance for the case of detecting non-uniformity for a set of alternative distributions.
In this work, we are in contrast interested in the case where the bulk of samples are actually distributed according to the null hypothesis, and only few additional samples are introduced that follow a different distribution, representing a narrow excess over a known background. We present the new test statistic "recursive product of spacings", or short RPS, which is based on the spacings between ordered samples and introduced in Sec. 2. In Sec. 3 we provide a parametrization of its distribution based on simulations, introducing techniques to estimate the asymptotic result of infinite bootstrapping steps in order to improve the quality of our fits. Subsequently we discuss the quality of the approximation, deriving a per-quantile error estimate up to a desired confidence level.
The rest of the article focuses on some illustrations and example applications, as well as a detailed performance comparison to several other test statistics.
Goodness-of-fit Tests
Suppose that we have obtained n samples x_i, and want to quantitatively test the hypothesis of those samples being random variates of a known distribution f(x), i.e. independent and identically distributed (i.i.d.) according to f(x). Here, we consider only continuous distributions f(x) with cumulative F(x), and hence can transform samples onto the unit interval [0, 1] via u_i = F(x_i). This reduces the task at hand to test transformed samples u_i being distributed according to the standard uniform distribution U(0, 1).
First, let us briefly introduce other, existing test statistics to which we will compare the RPS statistic. We consider in particular two groups of statistics, those based on the empirical distribution (EDF Statistics), and those based on the spacings between ordered samples (Spacings Statistics). A comprehensive overview of existing test statistics can be found in [4].
EDF Statistics
This class of test statistics compares the empirical distribution function (EDF) F_n(x) to the cumulative distribution function (CDF) F(x) (here F(x) = x). In particular the following are widely used and considered here:

• Kolmogorov-Smirnov (KS) [1, 2]: D = sup_x |F_n(x) − F(x)|

• Cramer-von Mises (CvM) [5, 6]: W^2 = n ∫ (F_n(x) − F(x))^2 dF(x)

• Anderson-Darling (AD) [3]: A^2 = n ∫ (F_n(x) − F(x))^2 / [F(x)(1 − F(x))] dF(x)

Similar are a type of statistics defined on the ordered set. Given the samples {x_1, x_2, . . . , x_n}, we define the ordered set of samples as {x_(1), x_(2), . . . , x_(n)}, where x_(i) < x_(i+1) ∀ i. The expected value of ordered sample x_(i) is i/(n + 1), and we define the deviation from the expected values as d_i = x_(i) − i/(n + 1) for each sample i. Based on this we can write out the following two statistics:
• Pyke's Modified KS (C) [7, 8]: C = max(max_i(d_i), −min_i(d_i))

• Brunk's Modified KS (K) [9]: K = max_i(d_i) − min_i(d_i)
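To make these definitions concrete, the following minimal sketch (Python with NumPy; the function and variable names are ours and not part of any published package) evaluates D, C and K for samples assumed to be already transformed to the unit interval:

import numpy as np

def edf_statistics(x):
    # samples are assumed i.i.d. U(0,1) under the null hypothesis
    x = np.sort(np.asarray(x))
    n = len(x)
    # KS: largest deviation of the EDF from F(x) = x, checked just
    # after and just before each jump of the EDF
    d_plus = np.max(np.arange(1, n + 1) / n - x)
    d_minus = np.max(x - np.arange(0, n) / n)
    ks = max(d_plus, d_minus)
    # deviations of the order statistics from their expectations i/(n+1)
    d = x - np.arange(1, n + 1) / (n + 1)
    c = max(np.max(d), -np.min(d))  # Pyke's modified KS
    k = np.max(d) - np.min(d)       # Brunk's modified KS
    return ks, c, k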
Spacings Statistics
Based on the ordered set, we can further define the n + 1 spacings as s_i = x_(i) − x_(i−1), with x_(0) = 0 and x_(n+1) = 1. Several test statistics built from these spacings are considered in literature, including:

• Moran (M) [10]: M = −Σ_{i=1}^{n+1} log s_i

• Greenwood (G) [11]: G = Σ_{i=1}^{n+1} s_i^2
In the context of a fixed rate Poisson process, these spacings can also be interpreted as interarrival times or waiting times. In some other areas, spacings are also referred to as gaps.
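As a minimal sketch (Python with NumPy; the helper names are ours), the Moran and Greenwood statistics follow directly from these spacings:

import numpy as np

def spacings(x):
    # the n+1 spacings of the ordered samples, including the boundaries 0 and 1
    x = np.sort(np.asarray(x))
    return np.diff(np.concatenate(([0.0], x, [1.0])))

def moran(x):
    return -np.sum(np.log(spacings(x)))

def greenwood(x):
    return np.sum(spacings(x) ** 2)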
So-called higher order spacings can be defined by summing up neighbouring spacings. Here we consider the overlapping m-th order spacings S_i^(m) = x_(i+m) − x_(i). With those, we can define generalisations of Moran and Greenwood, respectively, as discussed by Cressie:

• Logarithms of higher order spacings (Lm) [12]: L^(m) = −Σ_{i=0}^{n−m+1} log S_i^(m)

• Squares of higher order spacings (Sm) [13]: S^(m) = Σ_{i=0}^{n−m+1} (S_i^(m))^2
For our comparisons presented later, we choose m = 2 and m = 3, respectively, to limit ourselves to a finite list of tests.
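The m-th order variants can be sketched analogously (again for samples on [0, 1]; the function name is ours):

import numpy as np

def cressie_stats(x, m=2):
    # overlapping m-th order spacings S_i^(m) = x_(i+m) - x_(i),
    # with x_(0) = 0 and x_(n+1) = 1
    y = np.concatenate(([0.0], np.sort(np.asarray(x)), [1.0]))
    s_m = y[m:] - y[:-m]
    l_m = -np.sum(np.log(s_m))  # logarithms of higher order spacings (Lm)
    s2_m = np.sum(s_m ** 2)     # squares of higher order spacings (Sm)
    return l_m, s2_m

For m = 1 both quantities reduce to the Moran and Greenwood statistics above.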
Other statistics based on spacings exist and are being actively developed and used, such as, for example, tests based on the smallest or largest spacings [14].
Recursive Product of Spacings (RPS)
In this work, our goal is to construct a new test statistic that is sensitive to narrow features or clusters in an otherwise uniform distribution of samples. The tell-tale sign we are looking for is a localized group of uncommonly small spacings of the ordered data. For this purpose, we propose a new class of test statistics that includes higher order spacings in a recursive way.
The recursive product of spacings (RPS) can be thought of as an extension of the Moran statistic, and is defined as
R(n) = M_{n+1} + M_n + · · · + M_2 , (2.1)
where the term M_{n+1} is the simple sum of negative log spacings, equivalent to the Moran statistic,

M_{n+1} = −Σ_{i=1}^{n+1} log s_i^(n+1) . (2.2)
All following terms M_k are computed in the same way,

M_k = −Σ_{i=1}^{k} log s_i^(k) , (2.3)

but with modified spacings s_i^(k), defined as:

s_i^(k) = ( s_i^(k+1) + s_{i+1}^(k+1) ) / Σ_{j=1}^{k} ( s_j^(k+1) + s_{j+1}^(k+1) ) , (2.4)
of which there are k, and which depend on the spacings s^(k+1) used to compute the previous term M_{k+1} (hence the recursiveness). In order to better understand Eq. 2.4 we can turn to Fig. 1, where we show how to transition from s^(k+1) (top) to s^(k) (bottom): in the top plot we show a list of events (blue), where we also highlight the boundaries 0 and 1 since they contribute to defining spacings; in the middle plot the middle points of the top row spacings are shown, forming a reduced set of "events", which is then transformed in order to ensure that the spacings of the new set sum up to 1, as shown in the bottom plot; the number of spacings going from the top plot to the bottom one is reduced by one, showing how we have a finite number of reduction steps in the definition of the M_k. We can see that the term M_n is identical to L^(2) up to a normalization factor. It is important to include such a normalization for the spacings of each level, as this ensures that the case of equidistant spacings (the most regular and uniform case) yields the smallest possible RPS value. This minimum value of R(n), given by the configuration of equidistant samples, can be expressed easily, as at level k each spacing is equal to 1/k, and thus:
min(R(n)) = −Σ_{k=2}^{n+1} Σ_{i=1}^{k} log(1/k) = Σ_{k=2}^{n+1} k · log(k) . (2.5)
At the other extreme, very small spacings will yield a large contribution to the sum of Eq. 2.3, thus max(R(n)) = ∞ for any given number of samples n. These extrema show that RPS measures the irregularity in sample positions. The RPS statistic increases the more samples aggregate into local clusters.
The RPS quantity calculated so far has an infinite support. Since approximating the tails of distributions is often easier when dealing with a bounded quantity instead, we transform R(n) in order to bound its support to the range [0, 1], via

R*(n) = min(R(n)) / R(n) (2.6)

into a new quantity R* that we will consider when using our test statistic.
The following pseudo code illustrates how the computation of the RPS value can be implemented:
Input: sorted samples x_(1), x_(2), . . . , x_(n)
x = [0, x_(1), . . . , x_(n), 1]
rps = 0
s = x[first + 1 : last] - x[first : last - 1] ⊲ initial spacings
while len(s) > 1 do
    rps = rps - sum(log(s))
    s = s[first : last - 1] + s[first + 1 : last] ⊲ spacings for next iteration
    s = s / sum(s) ⊲ normalize
rps_star = min(R(n)) / rps ⊲ Eq. 2.6

This algorithm has a computational complexity of O(n^2), and can become inefficient for very large sample sizes n. In this work we limit ourselves to n ≤ 1000.

In an analogue way, we can also define an extension to Greenwood (G), that instead of logarithms of spacings sums over the squares of spacings. This means that we substitute Eq. 2.3 with M_k = Σ_{i=1}^{k} (s_i^(k))^2, while keeping the definition of s_i^(k) from Eq. 2.4. We call this recursive form the "RSS" test statistic in the following comparison.
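A direct Python transcription of the RPS pseudo code might look as follows (a sketch with NumPy; rps_statistic is our own helper name, distinct from the packaged implementation referenced in the Implementation section):

import numpy as np

def rps_statistic(x):
    # bounded recursive product of spacings R*(n), Eqs. (2.1)-(2.6)
    x = np.sort(np.asarray(x))
    n = len(x)
    s = np.diff(np.concatenate(([0.0], x, [1.0])))  # n+1 initial spacings
    rps = 0.0
    while len(s) > 1:
        rps -= np.sum(np.log(s))  # add the current level M_k
        s = s[:-1] + s[1:]        # sums of neighbouring spacings
        s /= np.sum(s)            # renormalise so the new spacings sum to 1
    k = np.arange(2, n + 2)
    rps_min = np.sum(k * np.log(k))  # minimum value of R(n), Eq. (2.5)
    return rps_min / rps             # R*(n) in (0, 1]

For equidistant samples the function returns exactly 1, the most uniform configuration, while clustered samples push the result towards 0.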
Illustration
To illustrate better how our test statistic works, and to highlight differences to other tests, we use the example set of samples drawn from a uniform (null hypothesis H_0) and a non-uniform distribution, respectively, shown in Fig. 2. The Moran test is based on the spacings between samples, and the smallest and largest spacings in the specific example are present in the uniform case. This leads to a more extreme test statistic value and hence a p-value p = P(M ≥ m_obs | H_0) of 0.117 for the uniform case, while it evaluates to p = 0.335 in the non-uniform case. The feature of samples clustering locally together, as in the non-uniform case, is, by construction, completely missed by Moran's test.
The KS test can detect such clustering via the CDF, however in our chosen example it is challenged by the fact that samples trend towards the left in the uniform case, while they are more balanced in the non-uniform case. This leads to p-values of 0.048 for uniform, and 0.356 for non-uniform, respectively.
The RPS test, however, taking into account also the spacings between spacings, finds a p-value of 0.532 for the uniform case, and a much lower p-value of 0.057 for the non-uniform samples. The behaviour of RPS is further illustrated in Fig. 3, which shows the individual contribution of spacings of all recursion levels that build up the test statistic value. The Moran statistic corresponds to the sum over the first row (M_16), while all subsequent levels are added for RPS. The uniform samples exhibit the most extreme values in the first level, but then even out rapidly. Contrary to that, the non-uniform samples with their clustering give rise to larger values at later levels, explaining the lower observed p-value.
Cumulative Distribution of RPS
In order to use RPS as a statistical test yielding p-values, we need its cumulative distribution F_{R*}. In the case of n = 1, which has only two spacings (the simplest non-trivial case we can encounter), we can easily derive the distribution of R*(1), which is:
F_{R*}(r; n = 1) = 1 − sqrt( 1 − 4^{(r−1)/r} ) (3.1)
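This expression can be checked numerically; a sketch using the rps_statistic helper defined earlier (the chosen r and sample count are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
r_star = np.array([rps_statistic(rng.uniform(size=1)) for _ in range(100_000)])

r = 0.5
empirical = np.mean(r_star <= r)
analytic = 1.0 - np.sqrt(1.0 - 4.0 ** ((r - 1.0) / r))
print(empirical, analytic)  # should agree within Monte Carlo uncertainty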
For n ≥ 2, however, it is not simple to derive this distribution. Therefore, we resort to numerically approximating the distribution of R*, as discussed in the following section.
Approximate Distribution
We have built an approximation for the cumulative distribution F_{R*}(r; n) precise enough to compute meaningful p-values down to relatively extreme values of 10^-7, and for large sample sizes of up to n = 1000. Figure 4 shows some examples of F_{R*} distributions for a few values of n. We base our approximation on simulation, drawing n events with uniform distribution in the range [0, 1] for a given n, and collecting N = 2 · 10^8 samples of R*(n). Such simulation could be directly used to calculate p-value estimates by counting the fraction of trials below or above an observed R* value for a fixed n. However, we want to provide a continuous and smooth function valid for any n ≤ 1000. For this, we use simulated data to infer the values of our test statistic corresponding to a discrete list of specific quantiles q ∈ [10^-7, 1 − 10^-7]. Taking the j-th element in the sorted simulation set gives an estimate for the value of t(q = j/N). In order to improve this estimate, we could use bootstrapping [15], collecting different realisations of t(q) by resampling the original dataset with replacement, resulting in a distribution of values of t for each q, from which we can then extract the mean and the standard deviation, indicative of the error (see Fig. 5). Instead of manually performing the bootstrapping, we can calculate the probability of each sample to represent a specific quantile if we were to sample randomly with replacement. For simplicity, let us consider rational quantiles that can be expressed in the form q = j/N; the probability that the i-th sample could end up representing the j-th quantile is:
p_{i,j} = B(j, N + 1 − j; i/N) − B(j, N + 1 − j; (i − 1)/N) (3.2)
where B(a, b; x) is the cumulative function of the Beta distribution with parameters [a, b] evaluated at x. The distribution Beta(j, N + 1 − j) represents the j-th order statistic of the uniform distribution [16], i.e. the j-th smallest element of a set of N uniformly distributed random variables. Eq. 3.2 corresponds to the limiting case of performing an infinite number of bootstrapping steps and can be used to quickly estimate the mean and standard deviation of all t(q), especially when dealing with large datasets:
E_j = Σ_{i=1}^{N} t_i · p_{i,j} (3.3)

Std_j = sqrt( Σ_{i=1}^{N} (t_i − E_j)^2 · p_{i,j} ) (3.4)
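A sketch of how Eqs. 3.2-3.4 can be evaluated with SciPy (the function name is ours; in the paper N = 2 · 10^8, here it is left generic):

import numpy as np
from scipy.stats import beta

def order_statistic_mean_std(t_sorted, j):
    # t_sorted: the N sorted simulated R* values; j indexes the quantile j/N
    N = len(t_sorted)
    grid = np.arange(0, N + 1) / N      # evaluation points i/N, i = 0..N
    cdf = beta.cdf(grid, j, N + 1 - j)  # B(j, N+1-j; i/N)
    p = np.diff(cdf)                    # weights p_{i,j} of Eq. (3.2)
    mean = np.sum(t_sorted * p)                        # Eq. (3.3)
    std = np.sqrt(np.sum((t_sorted - mean) ** 2 * p))  # Eq. (3.4)
    return mean, std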
It would be inefficient to produce such a simulation for every n, and hence we repeat the above procedure for only 180 different choices of n between 2 and 1000, following approximately a logarithmic spacing.
Fitting procedure
Using Eq. 3.3 and Eq. 3.4 we are able to define a grid of points with mean μ(q, n) and standard deviation σ(q, n). Our goal is to estimate a set of points t̂(q, n), which will be the basis to interpolate and infer the distribution of the test statistic for all values of n and q defined above. The points t̂(q, n) are allowed to deviate from the means μ(q, n) within the uncertainties σ(q, n), and can thereby provide a more accurate approximation by smoothing out stochastic noise. Additionally, points from the analytic solution for n = 1 (Eq. 3.1) are added to the list as anchor points at the boundary.
Given a trial set t̃(q, n), we interpolate a cubic spline polynomial across the values of n for each value of q, similarly to the fits shown in Fig. 5. Given one such cubic spline, we evaluate the third derivative on both sides of each node, calculating the square of their difference and summing up across all nodes. Since we are using cubic splines, the third derivative is not continuous, and the "size" of the discontinuity is indicative of the smoothness of the interpolation. Summing up the contributions from all nodes of all cubic splines constructs the smoothing cost function. The construction of this cost function is based on [17-19], where smoothness is treated very similarly. The estimation of the cubic spline coefficients and the evaluation of the smoothness cost function can be represented as a quadratic objective function, which we want to minimize:
C(t̃) ∝ (1/2) t̃ᵀ Q t̃ + hᵀ t̃ (3.5)
In addition to obtaining a smooth fit, there are also some additional constraints that need to be considered: monotonicity and sum of squared residuals.
Since the samples t̃(q | n) should represent a cumulative density function, it is important that they are properly ordered, ensuring that t̃(q_a | n) ≤ t̃(q_b | n) for q_a ≤ q_b. This is ensured by including a number of linear inequality constraints modelled as a linear constraint matrix:
A · t̃ ≤ b (3.6)
Lastly, we assume that the values t̃(q, n) are normally distributed with means μ(q, n) and standard deviations σ(q, n). Since we want to move away from the initial values μ(q, n) in order to obtain a smoother fit, it is important to limit this movement the further away we get, and we do so by considering the sum of squared residuals, which is a typical measure to account for the global deviation from the mean. Since we assume Gaussian deviations, the sum of all squared residuals can be modelled by a χ² distribution with k degrees of freedom, where k is the total number of parameters, i.e. the number of nodes. Given this distribution, we can limit the value of the cost function to the mean (k) plus one standard deviation (sqrt(2k)) of the χ² distribution, thus:
Σ_{i=1}^{k} (t̃_i − μ_i)^2 / σ_i^2 ≤ k + sqrt(2k) (3.7)

Figure 5 shows a fitted spline representation of t̂(q | n) for different values of q. Based on the resulting list of corresponding q and t̂ values, obtained for any n, we generate another spline interpolation as the approximation of the desired cumulative distribution F_{R*}(t̂; n) for a given n. As the cumulative distribution function is strictly monotonous in t̂, we use the monotonic spline interpolation of [20] on the points [t̂(q | n), q] to produce the final CDFs, shown in Figure 4 for a few values of n.
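SciPy's PCHIP interpolator provides a monotone piecewise-cubic interpolation in the spirit of [20]; a sketch with illustrative, made-up node values:

import numpy as np
from scipy.interpolate import PchipInterpolator

# hypothetical fitted nodes for one value of n: statistic values t_hat and
# the quantiles q they correspond to (both strictly increasing)
t_hat = np.array([0.35, 0.55, 0.70, 0.82, 0.91, 0.97])
q = np.array([1e-7, 1e-4, 1e-2, 0.25, 0.75, 1 - 1e-4])

cdf = PchipInterpolator(t_hat, q)  # monotone interpolation of [t_hat, q]
p_value = cdf(0.55)  # P(R* <= observed value); small for clustered samples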
Error estimation
Finally, we are also able to estimate the precision of our approximation. Given any set of N i.i.d. random variables, such as our simulated R* values, the corresponding list of estimated quantiles represents a random set of uniform variates. For any rational quantile q = j/N we can estimate the 98% credible interval (q_0.01, q_0.99) using the distribution of the j-th order statistic, Beta(j, N + 1 − j). Given the credible interval, we calculate the relative error of q against the extrema of the interval, considering the largest value representative of the relative error of a random EDF up to a specified credible level. The results of the estimated relative error for our choice of N = 2 · 10^8 and for quantiles as low as q = 10^-7 are shown in Figure 6a. As expected, the errors are increasing towards smaller p-values and exhibit an approximately linear behaviour in the log-log plot. We see that the estimated upper bound of the relative error for a p-value of 10^-3 is below 1%, while for a p-value of 10^-5 it increases to < 10% and ultimately to < 100% for p-values of 10^-7. Such a "large" relative error for small p-values may sound alarming at first, but estimating a p-value of 10^-7 and knowing it could actually be closer to 2 · 10^-7 would hardly change the statistical interpretation of a result. In order to show the validity of these results, we compute the relative error of our approximate distributions against a test dataset containing 10 times more samples using bootstrapping. We do so for a few choices of the number of events n, and in Figure 6b it can be seen that the behavior of the relative error is in complete agreement with our analytic estimates of Figure 6a.
So defined, the relative error ε(q | N) is a function of the quantile q and the number of samples N, but this relationship can also be inverted in order to determine the number of samples necessary to achieve a desired relative error for a specific quantile: N(ε | q). Our choice of N = 2 · 10^8 was in fact guided by the requirement of having a relative error lower than 100% for a p-value of 10^-7 in at least 99% of cases.
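The per-quantile bound can be sketched as follows (the helper name is ours; the 98% credible level matches the choice above):

import numpy as np
from scipy.stats import beta

def quantile_relative_error(q, N, cl=0.98):
    # the j-th order statistic of N uniform variates follows Beta(j, N+1-j)
    j = int(round(q * N))
    lo = beta.ppf((1.0 - cl) / 2.0, j, N + 1 - j)
    hi = beta.ppf(1.0 - (1.0 - cl) / 2.0, j, N + 1 - j)
    return max(q - lo, hi - q) / q

# e.g. quantile_relative_error(1e-3, 2 * 10**8) is at the sub-percent level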
It is worth stressing that these estimates of the relative error are accurate with respect to the EDF that was sampled for each independent n, but might be subject to small changes after the smoothing fit we performed in order to regularize and infer the distributions for all missing values of n.
Implementation
The RPS test is made available as open-source packages for Python (https://pypi.org/project/spacings/) and Julia (https://juliapackages.com/p/spacingstatistics), respectively, with the p-value parametrizations initially available up to 1000 samples.
Below we give a minimal example to evaluate the RPS test for an array x in both language implementations. The Python library can be used like the following:
>>> from spacings import rps
>>> rps(x, "uniform")
RPStestResult(statistic=0.9547378863245608, pvalue=0.8865399970192409)

and the Julia equivalent giving identical results:

>>> using SpacingsTests
>>> rps(x, Uniform())
(statistic = 0.9547378863245608, pvalue = 0.8865399970192409)
Example Application 1: Bump Hunting
In this section, we illustrate how the RPS test could be used in a physics scenario. We consider a detector that collects a number of events in an observable x, where x could for example be the energy of an event, the detection time, or a reconstructed quantity like an invariant mass. We expect some or all of the observed events to follow a known background distribution b(x), but there may be an additional contribution of events from an unknown signal distribution s(x), such as a rare, exotic particle decay with unknown mass. Hence we want to quantify the goodness-of-fit of the background only model to our data. A resulting low p-value could indicate the presence of events distributed according to an additional, unknown signal distribution.
In the example here, we use an exponential distribution b(x) = e^(−x) for the background model (null hypothesis). In order to illustrate how the presence of an actual signal (alternative hypothesis) would affect the outcome, we also inject additional events following a normal distribution centred at μ = 1 and of width σ = 0.05. The number of events is Poisson fluctuated for both background and signal, with expected values of λ_b = 100 and λ_s varied as specified. In Fig. 7, an example distribution of observed events is shown, together with the assumed background distribution, and the distribution with injected signal (here λ_s = 5). The example case chosen is similar to that, for instance, of a search for an exotic particle with unknown mass, a problem sometimes referred to as "bump hunting". In this case, x would represent an invariant mass.
N.B., we do not assume that we know the rate of the underlying processes, meaning that the number of observed counts is not included in our analysis other than for the calculation of the test statistic. This means that we test for the "shape" of the distribution, not its normalization. The conversion of events via the CDF of the distribution under test transforms the problem into a test of uniformity, as sketched below. The p-value distributions under the assumption of H_0 (i.e. only background is present) for repeated trials with λ_b = 100, and various injected λ_s = [0, 3, 6, 9, 12, 15] are shown in Fig. 8. All distributions with no signal (λ_s = 0) show a flat p-value distribution as expected, since in that case all events are drawn from the background distribution b(x). For trials with injected signal, the distributions are trending towards smaller p-values, indicating the worsened goodness-of-fit for the background only model. In the example, all tests exhibit this behaviour, while the RPS test offers the largest rejection probability of the null hypothesis.
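A sketch of a single such trial with the spacings package introduced above (seed and signal strength are arbitrary):

import numpy as np
from scipy.stats import expon
from spacings import rps

rng = np.random.default_rng(1)
lam_b, lam_s = 100, 10
bkg = rng.exponential(size=rng.poisson(lam_b))                  # b(x) = e^(-x)
sig = rng.normal(loc=1.0, scale=0.05, size=rng.poisson(lam_s))  # injected bump
events = np.concatenate([bkg, sig])

u = expon.cdf(events)  # transform through the background CDF; H0 -> U(0, 1)
print(rps(u, "uniform").pvalue)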
We quantify the sensitivity of the analysis to reject the background only model at different significance levels under the assumption of the presence of a signal. A significance level in terms of a number n of standard deviations can be translated to a p-value as one minus the integral over a unit normal distribution from −n to +n. Therefore we check the median p-value of repeated trials, and at what value of λ_s it crosses specific critical values (see left panel of Fig. 9). In our chosen example, for a signal of strength λ_s = 10 we expect to reject the background only model using RPS at the 2σ significance level, whereas for the other tests, a signal of at least λ_s = 20 is needed to achieve the same. Such a large signal of λ_s = 20 would allow to reject the background only model at > 4σ significance with the RPS test.
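For reference, a significance level of n standard deviations translates to a p-value via a two-sided normal integral; a one-line sketch:

from scipy.stats import norm

def sigma_to_pvalue(n_sigma):
    # p = 1 - integral of a unit normal from -n to +n = 2 * upper tail
    return 2.0 * norm.sf(n_sigma)

sigma_to_pvalue(2.0)  # approximately 0.0455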
Example Application 2: Trigger for Transient Neutrino Emission
This section summarizes one of the first applications of the RPS test in astrophysics, namely for triggering transient events in cryogenic neutrino detectors [21] such as the RES-NOVA experiment [22].
While the technical details about the experimental setup, the simulation and the application of the RPS test can be found in the aforementioned references, here we will summarize some highlights.
Cryogenic neutrino detectors can be described as counting experiments that output a temporal data stream of observed neutrino interactions. Without the presence of a transient neutrino source, we only expect some activity from background events. If a source of neutrinos is placed at an observable distance, such as a core-collapse supernova (CC-SN) at 10 kpc, we expect a short burst of neutrinos resulting in an excess in the observed counts over the background only expectation. The sources of such transient neutrinos can vary in their overall duration, temporal distribution and amplitude. Figure 10 shows as examples the expected counts of two different neutrino sources, a CC-SN and a failed CC-SN, respectively, together with a constant background expectation at a rate of 0.18 Hz.
To issue alerts in near real time about the presence of such sources, one needs a triggering system with a chosen false alarm rate (FAR), which is set to 1 per week for SNEWS [23]. The standard approach for building such triggers is the usage of Poisson statistics, analysing the data stream in windows of a fixed length and checking the level of observed counts compared to the expectation from background, see for example Ref. [24]. The Poisson approach works well if the window size is chosen optimally for a given signal. However, if the chosen window size does not match the signal, the performance is affected, as either the signal is not contained in the window (window too small), or the window is too large and the signal is washed out by background events. The performance of the Poisson test, as a function of the background rate and for the case of an optimal window choice for the CC-SN and the failed CC-SN signals, is shown in Fig. 11.
The RPS test can likewise be used to analyse data streams to look for transient phenomena. Here we do not make any explicit assumption on the background rate, but rather assume that the background rate is constant, which means the distribution of background events in the time dimension follows a uniform distribution. With RPS we can test for this uniformity, which can be used to detect short additional contributions of events in the data; a sliding-window version of such a trigger is sketched below. The performance of the RPS test as trigger is also shown in Fig. 11.
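A minimal sliding-window version of such a trigger could be sketched as follows (all names and the window/step/threshold logic are our own illustrative assumptions, not the exact implementation of Refs. [21, 22]):

import numpy as np
from spacings import rps

def rps_trigger(event_times, window, step, threshold):
    # scan the stream; flag windows whose event times are non-uniform
    alerts = []
    t0, t_end = event_times.min(), event_times.max()
    while t0 + window <= t_end:
        sel = event_times[(event_times >= t0) & (event_times < t0 + window)]
        if len(sel) > 1:
            u = (sel - t0) / window  # uniform under a constant background rate
            if rps(u, "uniform").pvalue < threshold:
                alerts.append(t0)    # threshold tuned to the desired FAR
        t0 += step
    return alerts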
In the case where the window size for the Poisson test is optimal, its performance cannot be matched with RPS (the panels in the upper left and lower right, respectively, in Fig. 11), resulting in up to 10% lower sensitivity. However, the more interesting case is using the window size optimized for one signal for the analysis of a different signal (the panels in the upper right and lower left, respectively, in Fig. 11). The RPS test is more robust to such changes, and in the example of searching for a failed CC-SN signal with a window optimized on a particular CC-SN scenario, we find up to a 20% increase in sensitivity.
In general, we find that the RPS test, being non-parametric and able to deal with much larger analysis windows, is more robust to changing conditions. Fewer assumptions about the background rate and the expected signals have to be made, at the trade-off of being non-optimal for one specific signal choice while retaining good performance in the more agnostic case of unknown signal distributions. This makes RPS an interesting choice for a general-purpose, agnostic trigger algorithm for the search of transient events.
Performance Comparison
This section presents an in-depth performance comparison of the RPS test to several other tests referenced in the introduction (KS, AD, CvM and Moran, i.e. all those that allow to compute p-values). We are interested in detecting small changes in an otherwise uniform distribution, and therefore construct the following generic benchmark scenario: for one simulation of a specific test case (n, f, w) we generate (1 − f) · n random variates from a standard uniform distribution U(0, 1), where f is a signal fraction. In addition, we include f · n samples distributed according to Δ + U(0, w) with the offset Δ = U(0, 1 − w), i.e. a more narrow uniform distribution of width w over a random interval within (0, 1). In our comparison, we vary all three parameters of (n, f, w), i.e. the number of samples n, as well as the fraction f and width w of the injected signal events. A sensitive test should be able to detect the presence of the added, narrower signal samples by reporting a low p-value.

Figure 12: Comparison of the performance (median p-value of repeated trials, individual panel's xy-axis) as a function of the number of total samples n (large y-axis), the width w of the signal samples (large x-axis), and the fraction f of signal samples (individual panel's x-axis). The numbers of signal samples are rounded to the closest integer, hence the "step"-like features visible mostly in the first few rows.

Figure 12 shows the performance of our choice of tests as a function of the above three parameters. As a metric, we show the median p-value obtained from repeated trials, and we interpret a lower reported median p-value as a more powerful test. This number can be interpreted as the median significance at which we expect to be able to reject the null hypothesis. What can be observed is that for all the tested scenarios the RPS test performs either on par with or significantly better than the Moran test. The EDF based tests (KS, AD or CvM) start to dominate in terms of performance only for relatively wide signals of around 25% total width or more. When analysing the goodness-of-fit given a large number of samples, i.e. of the order of several hundreds, the differences between RPS and the EDF-style tests become smaller. Overall, the outcome of this performance study suggests that when signals are expected with widths that span less than a 25% quantile of the null hypothesis distribution, and if the number of samples is n < 1000, the RPS test compares very favourably against all others considered.
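One trial of this benchmark can be sketched as follows (the helper name and the chosen (n, f, w) values are illustrative):

import numpy as np
from spacings import rps

def benchmark_pvalue(n, f, w, rng):
    n_sig = int(round(f * n))          # signal samples, rounded
    bkg = rng.uniform(size=n - n_sig)  # uniform background samples
    delta = rng.uniform(0.0, 1.0 - w)  # random offset of the signal window
    sig = delta + rng.uniform(0.0, w, size=n_sig)
    return rps(np.concatenate([bkg, sig]), "uniform").pvalue

rng = np.random.default_rng(2)
pvals = [benchmark_pvalue(100, 0.2, 0.05, rng) for _ in range(200)]
print(np.median(pvals))  # the performance metric shown in Figure 12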
We also investigated other metrics to judge the test's performance, such as the area under the receiver operating characteristics (ROC) curve between signal and null hypothesis trials. The overall picture does not change substantially.
Conclusions
The RPS test statistic is a sensitive measure to detect deviations of samples from a continuous distribution with known CDF. The analytic distribution of the RPS statistic is not available for n > 1, but a high accuracy parameterization valid up to sample sizes of n = 1000 is provided in order to use RPS as a goodness-of-fit test. In the presented test scenarios, the RPS test outperforms other tests significantly under certain circumstances, in particular when the observed sample is small (n < 1000) and the introduced deviations are narrow, i.e. concentrated over a small quantile. Two example physics analysis cases were presented: we show that the sensitivity of a "bump hunting" experiment could be boosted by up to a factor of two by choosing the RPS test over others, and we show how RPS can be used to build a robust and agnostic trigger algorithm for a SN experiment.
Figure 1: Example of the reduction step included in the calculation. Given an initial set of events (top; blue), the middle points are calculated (top and middle; green), which are then scaled in order to fill the [0, 1] interval, forming a new set of data (bottom; red). The evolution of sample positions on the [0, 1] interval is annotated via the arrows.
Figure 2: Example of 15 standard uniformly distributed samples (left) and 10 standard uniformly + 5 normally (μ = 0.5, σ = 0.1) distributed samples (right). The sample positions on the [0, 1] interval are annotated via the arrows and text.
Figure 3: Illustration of the test statistic contributions from all recursion levels for the uniformly distributed samples (left) and the non-uniform samples (right). The sum over the first level only (M_16) is equivalent to the Moran statistic.
Figure 4: Example of CDFs of the R* distribution for a few different values of n. N.B.: the x-axis is displayed in inverted logarithm.
Figure 5: Example of spline-fitted t-values across n for a few extreme p-values. The colored bands show the 1, 2 and 3 sigma bands estimated via bootstrapping; the black dashed lines show the approximations by the spline fits.
Figure 6: Per-quantile relative error estimation of the approximate RPS distribution. (a) Estimated relative error of the empirical p-value with respect to the 98% credible interval and 2 · 10^8 samples. The vertical axis reports the scale of the relative error in percent for two extremes, the 1% and the 99% quantile of the order statistic distribution. (b) Estimated relative error of the fitted p-value with respect to p-values obtained via bootstrapping. The vertical axis reports the scale of the relative error in percent for two extremes, the 1% and the 99% quantile of the bootstrapping distribution. Results for n = 75.
Figure 7: Example physics problem, with observed events distributed in x. We test the goodness-of-fit of the background only model (blue) to the samples. Here the samples have been generated according to a different distribution with an injected signal (orange).
Figure 8: p-value distributions for background only samples (λ_s = 0) and background plus randomised signal injections, comparing to the background model for several choices of test statistics.

Figure 9: The expected significance level at which the background model can be excluded under the assumption of a signal, as a function of λ_s, for the different tests.
Figure 10: Example of observed counts at a neutrino detector for signals from a core-collapse SN (at time t = 15 s) and a failed core-collapse SN (at t = 5 s) for progenitor stars with 27 and 40 solar masses, respectively, both at a distance of 10 kpc. (Modified version of a figure from Ref. [21])

Figure 11: Maximum distance probed at a 95% success rate as a function of time with respect to two sample signals, the core-collapse SN and the failed core-collapse SN, obtained using analysis windows optimised on each of the tested signals. (Figure from Ref. [21])
Acknowledgements

We would like to thank Allen Caldwell, Oliver Schulz and Johannes Buchner for helpful discussions and comments. This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy, EXC-2094, 390783311.
References

[1] A. Kolmogorov, Sulla determinazione empirica di una legge di distribuzione, G. Ist. Ital. 4 (1933).
[2] N. Smirnov, Table for estimating the goodness of fit of empirical distributions, Annals of Mathematical Statistics 19 (1948) 279.
[3] T.W. Anderson and D.A. Darling, A test of goodness of fit, Journal of the American Statistical Association 50 (1954) 765.
[4] Y. Marhuenda, D. Morales and M.C. Pardo, A comparison of uniformity tests, Statistics 39 (2005) 315.
[5] H. Cramer, On the composition of elementary errors, Scandinavian Actuarial Journal 1928 (1928) 13.
[6] R.V. Mises, Wahrscheinlichkeit, Statistik und Wahrheit, Springer-Verlag Berlin Heidelberg (1928), doi:10.1007/978-3-662-36230-3.
[7] R. Pyke, The Supremum and Infimum of the Poisson Process, The Annals of Mathematical Statistics 30 (1959) 568.
[8] J. Durbin, Tests for serial correlation in regression analysis based on the periodogram of least-squares residuals, Biometrika 56 (1969) 1.
[9] H.D. Brunk, On the Range of the Difference between Hypothetical Distribution Function and Pyke's Modified Empirical Distribution Function, The Annals of Mathematical Statistics 33 (1962) 525.
[10] R.C.H. Cheng and M.A. Stephens, A goodness-of-fit test using Moran's statistic with estimated parameters, Biometrika 76 (1989) 385.
[11] M. Greenwood, The statistical study of infectious diseases, Journal of the Royal Statistical Society 109 (1946) 85.
[12] N. Cressie, On the logarithms of high-order spacings, Biometrika 63 (1976) 343.
[13] N. Cressie, An optimal statistic based on higher order gaps, Biometrika 66 (1979) 619.
[14] L. Shtembari and A. Caldwell, On the sum of ordered spacings, 2020.
[15] B. Efron, Bootstrap Methods: Another Look at the Jackknife, Annals of Statistics 7 (1979) 1, doi:10.1214/aos/1176344552.
[16] H.A. David and H.N. Nagaraja, Order Statistics, Wiley (2003).
[17] P. Dierckx, An algorithm for smoothing, differentiation and integration of experimental data using spline functions, Journal of Computational and Applied Mathematics 1 (1975) 165, doi:10.1016/0771-050X(75)90034-0.
[18] P. Dierckx, A fast algorithm for smoothing data on a rectangular grid while using spline functions, SIAM Journal on Numerical Analysis 19 (1982) 1286, doi:10.1137/0719093.
[19] P. Dierckx, Curve and Surface Fitting with Splines, Monographs on Numerical Analysis (1996).
[20] F.N. Fritsch and J. Butland, A method for constructing local monotone piecewise cubic interpolants, SIAM Journal on Scientific and Statistical Computing 5 (1984) 300, doi:10.1137/0905021.
[21] P. Eller, N. Iachellini Ferreiro, L. Pattavina and L. Shtembari, Online triggers for supernova and pre-supernova neutrino detection with cryogenic detectors, arXiv:2205.00335.
[22] RES-NOVA collaboration, RES-NOVA sensitivity to core-collapse and failed core-collapse supernova neutrinos, JCAP 10 (2021) 064, doi:10.1088/1475-7516/2021/10/064, arXiv:2103.08672.
[23] SNEWS collaboration, SNEWS 2.0: a next-generation supernova early warning system for multi-messenger astronomy, New J. Phys. 23 (2021) 031201, doi:10.1088/1367-2630/abde33, arXiv:2011.00035.
[24] N.Y. Agafonova et al., On-line recognition of supernova neutrino bursts in the LVD detector, Astropart. Phys. 28 (2008) 516, doi:10.1016/j.astropartphys.2007.09.005, arXiv:0710.0259.
| [] |
[
"Biometrics in the Era of COVID-19: Challenges and Opportunities",
"Biometrics in the Era of COVID-19: Challenges and Opportunities"
] | [
"Marta Gomez-Barrero ",
"Pawel Drozdowski ",
"Christian Rathgeb ",
"Jose Patino ",
"Massimiliano Todisco ",
"Andreas Nautsch ",
"Naser Damer ",
"Jannier Priesnitz ",
"Nicholas Evans ",
"Christoph Busch "
] | [] | [] | Since early 2020, the COVID-19 pandemic has had a considerable impact on many aspects of daily life. A range of different measures have been implemented worldwide to reduce the rate of new infections and to manage the pressure on national health services. A primary strategy has been to reduce gatherings and the potential for transmission through the prioritisation of remote working and education. Enhanced hand hygiene and the use of facial masks have decreased the spread of pathogens when gatherings are unavoidable. These particular measures present challenges for reliable biometric recognition, e.g. for facial-, voiceand hand-based biometrics. At the same time, new challenges create new opportunities and research directions, e.g. renewed interest in non-constrained iris or periocular recognition, touchless fingerprint-and vein-based authentication and the use of biometric characteristics for disease detection. This article presents an overview of the research carried out to address those challenges and emerging opportunities. | 10.1109/tts.2022.3203571 | [
"https://arxiv.org/pdf/2102.09258v2.pdf"
] | 231,951,302 | 2102.09258 | c5fc3a08a849354d8d2764aaae7246bee6c48455 |
Biometrics in the Era of COVID-19: Challenges and Opportunities
Marta Gomez-Barrero
Pawel Drozdowski
Christian Rathgeb
Jose Patino
Massimiliano Todisco
Andreas Nautsch
Naser Damer
Jannier Priesnitz
Nicholas Evans
Christoph Busch
Biometrics in the Era of COVID-19: Challenges and Opportunities
Index Terms: COVID-19, Biometrics, Mask, Hygiene, Touchless biometrics, Remote authentication, Mobile biometrics
Since early 2020, the COVID-19 pandemic has had a considerable impact on many aspects of daily life. A range of different measures have been implemented worldwide to reduce the rate of new infections and to manage the pressure on national health services. A primary strategy has been to reduce gatherings and the potential for transmission through the prioritisation of remote working and education. Enhanced hand hygiene and the use of facial masks have decreased the spread of pathogens when gatherings are unavoidable. These particular measures present challenges for reliable biometric recognition, e.g. for facial-, voiceand hand-based biometrics. At the same time, new challenges create new opportunities and research directions, e.g. renewed interest in non-constrained iris or periocular recognition, touchless fingerprint-and vein-based authentication and the use of biometric characteristics for disease detection. This article presents an overview of the research carried out to address those challenges and emerging opportunities.
I. INTRODUCTION
Since early 2020, the world has been grappling with the COVID-19 pandemic caused by the new SARS-CoV-2 coronavirus. At the time of writing, there have been more than 250 million confirmed infections while more than five million have succumbed to the virus or related complications [1]. The main vector of disease transmission is exposure to respiratory particles resulting from direct or close physical contact with infected individuals. Transmission can also occur from the transfer of viral particles from contaminated surfaces or objects to the eyes, nose or mouth [1].
Various preventive measures have been adopted worldwide to help curb the spread of the virus by reducing the risk of new infections. These include local, national and international travel restrictions, the banning of large gatherings and the encouragement of physical distancing, remote working and education, and strict quarantine policies, see e.g. [2]. Two of the most broadly adopted measures are the (sometimes mandatory) use of protective facial coverings or masks [3] and enhanced hand hygiene (handwashing or disinfection using hydroalcoholic gel). Facial masks, such as those illustrated in Fig. 1, can reduce viral transmission through respiratory particles [4], while enhanced hand hygiene can reduce the rate of new infections through contact with contaminated surfaces or objects. Preventive measures, as well as the virus itself, have necessitated consequential shifts and disruption to daily life, with potentially long-lasting repercussions impacting individuals, social and professional practices and processes, businesses both small and large, as well as the global economy.

Fig. 1. Examples of typical protective face masks: (a) surgical mask [5], (b) cloth mask [5], (c) filter mask (source: www.ikatehouse.com), (d) printed mask (source: www.thenationalnews.com).

Such measures have had a considerable impact on our daily lives. For instance, the use of facial masks covering the mouth and nose in public spaces can decrease the usefulness of surveillance systems or prevent us from unlocking our smartphone using face recognition technologies. In this context, this article focuses on the impact of the COVID-19 pandemic on biometric recognition. Biometric technologies can be used for automated identity verification and to distinguish individuals based on their personal biological and behavioural characteristics (e.g. face and voice). Biometric solutions frequently supplement or replace traditional knowledge- and token-based security systems since, as opposed to passwords and access cards, biometric characteristics cannot be forgotten or lost. Furthermore, biometrics inherently and seamlessly enable diverse application scenarios which are either difficult or infeasible using more traditional methods, e.g. continuous authentication [6], [7], forensics [8], and surveillance [9]. Biometric technologies have come to play an integral role in society, e.g., for identity management, surveillance, access control, social and welfare management, and automatic border control, with these applications alone being used either directly or indirectly by billions of individuals [10]-[13]. While reliance upon biometric technologies has reached a profound scale, health-related measures introduced in response to the COVID-19 pandemic have been shown to impact either directly or indirectly upon their reliability [14]. It should however be noted that the new measures have a limited impact on other biometric characteristics such as the ear [15]. Even though this fact will also lead to renewed efforts directed to such biometric characteristics in order to achieve accurate and deployable systems in the near future, we limit the scope of this article to those biometric characteristics affected by health-related measures. Table I provides a brief overview of the operational prevalence and COVID-19-related impacts and technological challenges in the context of the most widely (in operational systems) used biometric characteristics.
They are reviewed and discussed in further detail in the remainder of this article, including a short introduction and description of each characteristic for non-expert readers. This work represents a narrative/integrated review. It is meant to selectively assess relevant works in the field of biometrics that (in)directly tackle challenges caused by the COVID-19 pandemic, aiming at offering guidance about future research directions and enabling new perspectives to emerge.
The rest of the article is organised as follows. The impact of facial masks on biometric technologies is discussed in Section II. Section III addresses impacts upon mobile and remote biometric authentication. Section IV describes new opportunities and applications that have emerged as a result of the COVID-19 pandemic. The societal impact of these changes is discussed in Section V and concluding remarks are presented in Section VI.
II. INFLUENCE OF FACIAL COVERINGS ON BIOMETRIC RECOGNITION
The use of facial coverings, such as masks, occludes a substantial part of the lower face. Such occlusions or obstructions dramatically change the operational conditions for numerous biometric recognition technologies and can make biometric recognition especially challenging. A review of the impacts of facial coverings is presented in this section, with a focus upon face, periocular, iris, and voice biometrics.
A. Face recognition
The natural variation among individuals yields a good interclass separation and thus makes the use of facial characteristics for biometric recognition especially appealing. Traditional solutions rely upon handcrafted features based on texture, keypoints, and other descriptors for face recognition [16]. More recently, the use of deep learning and massive training datasets has led to breakthrough advances. The best systems perform reliably even with highly unconstrained and low-quality data samples [17], [18]. Relevant to the study presented here is a large body of research on occluded face detection [19] and recognition [20], though occlusion-invariant face recognition remains challenging [21]. Most work prior to the COVID-19 pandemic addresses occlusions from, e.g., sunglasses, partial captures, or shadows, which typify unconstrained, 'in-the-wild' scenarios. The use of facial masks therefore presents a new and significant challenge to face recognition systems, especially considering the stringent operating requirements for application scenarios in which face recognition technology is often used, e.g. automated border control. Meeting the requirement for extremely low error rates typically depends on the acquisition of unoccluded images of reasonable quality.
The most significant evaluation of the impact of masks upon face recognition solutions was conducted by the National Institute of Standards and Technology (NIST) [22], [23]. The evaluation was performed using a large dataset of facial images with superimposed, digitally generated masks of varying size, shape, and colour. The evaluation tested the face recognition performance of algorithms submitted to the ongoing Face Recognition Vendor Test (FRVT) benchmark in terms of biometric verification performance (i.e., one-to-one comparisons). The false-negative error rates (i.e., false non-match rates) of algorithms submitted prior to the pandemic [22] were observed to increase by an order of magnitude, even for the most reliable algorithms. Even some of the best-performing algorithms (as judged from evaluation with unmasked faces) failed almost completely, with false-negative error rates of up to 50%.
Of course, these results may not be entirely surprising given that systems designed prior to the pandemic are unlikely to have been optimised for masked face data. The study itself also had some limitations, e.g. instead of using genuine images collected from mask-wearing individuals, it used synthetically generated images where masks were superimposed using automatically derived facial landmarks. Despite these shortcomings, the study nonetheless highlights the general challenges to biometric face recognition from face coverings and masks. The general observations are that: 1) the degradation in verification reliability increases when the mask covers a larger proportion of the face, including the nose; 2) reliability degrades more for mated biometric comparisons than for non-mated comparisons, i.e. masks increase the false non-match rate more than the false match rate; 3) different mask shapes and colours lead to differences in the impact upon verification reliability, a finding which emphasises the need for evaluation using genuine masked face data; 4) in many cases, masked faces are not even detected.
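To make the two error rates concrete, the following minimal sketch (in Python) computes the false non-match rate and false match rate from sets of mated and non-mated comparison scores at a fixed decision threshold; the score distributions and the threshold are illustrative placeholders, not NIST data.

```python
import numpy as np

def error_rates(mated_scores, non_mated_scores, threshold):
    """Return (FNMR, FMR) at a given decision threshold.

    FNMR: fraction of mated comparisons wrongly rejected.
    FMR:  fraction of non-mated comparisons wrongly accepted.
    """
    fnmr = np.mean(np.asarray(mated_scores) < threshold)
    fmr = np.mean(np.asarray(non_mated_scores) >= threshold)
    return fnmr, fmr

# Illustrative similarity scores in [0, 1]; masked probes would shift
# the mated distribution downwards and hence raise the FNMR.
rng = np.random.default_rng(0)
mated = rng.normal(0.75, 0.10, 1000)
non_mated = rng.normal(0.30, 0.10, 1000)
print(error_rates(mated, non_mated, threshold=0.55))
```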
A follow-up study [23], also conducted by NIST, evaluated systems that were updated with enhancements designed to improve reliability for masked faces. In addition to greater variability in mask designs, the study also considered both masked probe as well as masked reference face images. While reliability was observed to improve for masked faces, it remained substantially degraded compared to unmasked faces (approximately an order of magnitude lower). Performance with masked faces was equivalent to that of state-of-the-art systems from 2017 evaluated with unmasked faces. Increases in false match rates were also observed when both reference and probe faces are masked. Full details and results are available from the NIST FRVT Face Mask Effects website [24].
Results from the related DHS Biometric Rally show similar trends [25]. The DHS study was conducted in a setup simulating real operational conditions using systems submitted by commercial vendors. Significant difficulties in image acquisition as well as a general degradation in biometric performance were observed for masked faces. Like the NIST study, the DHS study found that, even with masked faces, today's systems perform as well as state-of-the-art systems from only a few years ago tested with unmasked face images [25].
These US-based studies are complemented by a number of academic studies. Two datasets [5], [26] of masked face images have been collected in Europe and China to support research efforts. While [26] provides data, it does not provide a formal evaluation of the effect of masks on face recognition performance. Moreover, this study did not address a specific use-case scenario, e.g. collaborative face verification. Damer et al. [5], [27], [28] released a database of real masked face images collected in three collaborative sessions. The images include realistic variation in the capture environment, masks, and illumination. Evaluation results show trends similar to those exposed by the NIST study [22]: difficulties in face detection and greater impacts upon mated comparisons than non-mated comparisons. While significantly smaller than the NIST dataset in the number of data subjects and images, the use of real instead of synthetically generated masked face images increases confidence in the results.
From a technical perspective, face masks can be considered a subset of general face occlusions, and thus previous works on this issue are relevant. A number of works have proposed to automatically detect, and synthetically in-paint, occluded face areas, aiming both to generate realistic, occlusion-free face images and to enable more accurate face recognition. Most of the better-performing face completion solutions are based on deep generative models [29], [30]. A recent study by Mathai et al. [31] has shown that face completion can be beneficial for occluded face recognition accuracy, provided that the occlusions are detected accurately. The authors also pointed out that completing occlusions on the face boundaries did not have a significant effect; face masks, by contrast, occlude central face regions. These results therefore indicate that face image completion solutions are possible candidates to enhance masked face recognition performance.
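As a minimal sketch of this detect-and-complete pipeline, the following Python snippet uses classical OpenCV inpainting as a simple stand-in for the deep generative models used in the cited works; the input file name and the rectangular occlusion map (a placeholder for the output of a real mask detector) are assumptions.

```python
import cv2
import numpy as np

face = cv2.imread("face.png")              # assumed aligned face image
h, w = face.shape[:2]

# Placeholder occlusion map: mark the lower half of the face, roughly
# where a mask would sit; a real system would detect this region.
occlusion = np.zeros((h, w), dtype=np.uint8)
occlusion[h // 2:, :] = 255

# In-paint the occluded area; the completed image can then be fed to a
# standard face recognition pipeline.
completed = cv2.inpaint(face, occlusion, inpaintRadius=5,
                        flags=cv2.INPAINT_TELEA)
cv2.imwrite("face_completed.png", completed)
```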
The use of transparent masks or shields may combat to some extent the impact of opaque masks upon face recognition systems. Transparent masks, such as those shown in Fig. 2, allow some portion of the masked face to remain visible, but even their impact is likely non-trivial. Transparent masks can cause light reflections, visual distortions and/or blurring. Both opaque and transparent masks, as well as strategies to counter their impact, may increase the threat of presentation attacks. For example, it is conceivable that masks with specific patterns could be used to launch concealment or impersonation attacks, e.g. using concepts similar to those in [32].
Regardless of the exact type of face mask, wearing one can have an effect on face image quality. Most biometric systems estimate the quality of a detected face image prior to feature extraction [33]. This quality estimate indicates the suitability of the image for recognition purposes [34], [35]. For existing systems, the quality threshold configurations might lead to disregarding samples with face masks and thus increase the failure-to-extract rate. This link between face occlusions and face image quality has been probed in previous works, although not exclusively for mask occlusions. One of these works, presented by Lin and Tang [36], built on the assumption that occlusions negatively affect face image quality in order to detect such occlusions. A recent study by Zhang et al. [37] demonstrated the effect of occlusion on estimated face image quality, along with presenting an efficient multi-branch face quality assessment algorithm. The authors pointed out that images with alignment distortion, occlusion, pose or blur tend to obtain lower quality scores.
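The following minimal sketch illustrates how such quality gating produces failures to extract; the variance-of-Laplacian sharpness measure is only a crude stand-in for a dedicated (typically learned) face quality estimator, and the threshold and placeholder template extractor are assumptions.

```python
import cv2
import numpy as np

def quality_score(gray_img):
    # Crude proxy for sample quality (sharpness); real systems use
    # dedicated face image quality assessment algorithms.
    return cv2.Laplacian(gray_img, cv2.CV_64F).var()

def extract_features(gray_img):
    return gray_img.astype(np.float32).ravel()[:128]   # placeholder template

def process(samples, q_threshold=50.0):
    templates, failures = [], 0
    for s in samples:               # s: grayscale face crop (numpy array)
        if quality_score(s) < q_threshold:
            failures += 1           # counted as a failure to extract
            continue
        templates.append(extract_features(s))
    return templates, failures / len(samples)
```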
The studies conducted thus far highlight the challenges to face recognition systems in the COVID-19 era and raise numerous open questions. These include, but are not limited to, large-scale tests using images with real and not digitally generated masks, identification (i.e. one-to-many search), demographic differentials, the presence of additional occlusions such as glasses, the effect on face image quality [38], unconstrained data acquisition in general, as well as effects on the accuracy of human examiners [23], [28]. In addition, new areas of research have been opened, such as the automatic detection of whether a subject is wearing a mask correctly (i.e., covering mouth and nose) [39].
To foster research on the aforementioned issues, the Masked Face Recognition Competition (MFR) [40] was organised in 2021. The main goals of this competition were not only the enhancement of recognition performance in the presence of masks, but also the analysis of the deployability of the proposed solutions. A private dataset representing a collaborative multi-session real masked capture scenario was used to evaluate the submitted solutions. In comparison to one of the top-performing academic face recognition solutions, 10 out of the 18 submitted solutions achieved a higher masked face verification accuracy, thereby showing the way for future face recognition approaches. This was followed by a series of works targeting the accuracy of masked face recognition, either by training task-specific models [41] or by processing face templates extracted by existing models [42].
B. Iris recognition
The human iris, an externally visible structure in the human eye, exhibits highly complex patterns which vary among individuals. The phenotypic distinctiveness of these patterns allows their use for biometric recognition [43]. The acquisition of iris images typically requires a camera with near-infrared (NIR) illumination so that sufficient detail can be extracted even for darkly pigmented irides. Recent advances support acquisition in semi-controlled environments at a distance even from only reasonably cooperative data subjects on the move (e.g. while walking) [44], [45].
Solutions to iris recognition which use mobile devices and which operate using only visible wavelength illumination have been proposed in recent years [46]-[48]. Attempts to use image super-resolution, a technique of generating high-resolution images from low-resolution counterparts, have also shown some success by increasing image quality [49]. However, iris recognition solutions seem more dependent than face recognition solutions upon the use of constrained scenarios that lead to the acquisition of high-quality images [17], [18]. Nevertheless, iris recognition systems have now been in operation worldwide for around two decades. Near-infrared iris recognition has been adopted in huge deployments of biometric technology, e.g. in the context of the Indian Aadhaar programme through which more than 1 billion citizens have been enrolled using iris images [50] in addition to other biometric data. Due to their high computational efficiency and reliability [51], iris recognition systems are used successfully within the Aadhaar programme for intensive identification (1-N search) and deduplication (N-N search) [11].
The success of automated border control systems used in the United Arab Emirates [10], where it is common for individuals to conceal a substantial part of their face on account of religious beliefs, serves to demonstrate the robustness of iris recognition systems to face coverings. In these scenarios, such as that illustrated in Fig. 3, whereas face recognition systems generally fail completely, iris recognition systems may still perform reliably so long as the iris remains visible. They are also among the least intrusive of all approaches to biometric recognition. This suggests that, at least compared to face recognition counterparts, the reliability of iris recognition systems should be relatively unaffected as a consequence of mask wearing in the COVID-19 era.
It is worth mentioning that the usefulness of the anatomy of the human eye with regard to biometrics is not limited to the irides. For example, the retinal blood vessels are suitable for the purposes of biometric recognition. However, retinal imaging requires close proximity of a highly cooperative data subject to the specialised acquisition device, which sends a beam of light inside the eye to fully illuminate the retina (see e.g. [52]). Although retinal structures exhibit a high degree of distinctiveness and hence good biometric performance, the need for a specialised sensor and the perceived intrusiveness of the acquisition process have been considered obstacles to the adoption of this biometric characteristic. The blood vessels present in the ocular surface have also been shown to exhibit some discriminative power and hence suitability for biometric recognition [53]. The acquisition process for those, albeit less arduous than for retinal images, still requires a high-resolution camera and subject cooperation in gazing in the required directions. Thus far, however, biometric recognition with ocular vasculature has received relatively little attention beyond academic studies.
C. Periocular recognition and soft-biometrics
Periocular recognition, namely recognition based on biometric characteristics from the area surrounding the eye [54], offers potential for a compromise between the respective strengths and weaknesses of face and iris recognition systems. Unlike face recognition, periocular recognition can be reliable even when substantial portions of the face are occluded (opaque masks) or distorted (transparent masks). Unlike iris recognition, periocular recognition can be reliable in relatively unconstrained acquisition scenarios. Compared to alternative ocular biometrics, periocular recognition systems are also less demanding in terms of subject cooperation.
Due to those and other properties, periocular recognition was explored extensively during the last decade. Similarly to work in iris recognition, much of it has direct relevance to biometrics in the COVID-19 era, in particular with regard to the wearing of face masks. In fact, one of the most popular use cases thus far for periocular recognition involves consumer mobile devices [55], [56] which can readily capture high-quality images of the periocular region with onboard cameras. This approach to biometric recognition, e.g. to unlock a personal device, is of obvious appeal in the COVID-19 era when masks must be worn in public spaces and where tactile interactions, e.g. to enter a password or code, must preferably be avoided.
In most works, reliable verification rates can be achieved by extracting features from the periocular region. However, the error rates are not yet as good as those yielded by face verification schemes under controlled scenarios. Nevertheless, periocular features can be used to improve the performance of unconstrained facial images, as shown in [56]. Similarly, Park et al. showed in [57] how the rank-1 accuracy was multiplied by a factor of two in a similar scenario using a synthetic dataset of face images treated artificially to occlude all but the face region above the nose. In other words, the chances of correctly identifying a person within a group are doubled when the periocular information is analysed in parallel to the global face image. Some newer works have also explored the fairness of these methods across gender [58], reporting equivalent performance for males and females in ocular-based mobile user authentication at lower false match rates.
In addition to the aforementioned works, some multimodal approaches combining face, iris, and the periocular region have been proposed for mobile devices [59], also incorporating template protection in order to comply with the newest data privacy regulations such as the European GDPR [60].
As pointed out in Sect. II-B, in uncontrolled conditions the iris cannot always be used due to the low quality or resolution of the samples; this lack of quality of the acquired biometric information can be addressed using super-resolution. Even though some approaches have already been proposed for the periocular region, based mostly on deep learning models [49], [61], there is still a long way ahead before they are deployed in practical applications.
In addition to providing identity information, facial images can also be used to extract other soft-biometric information, such as age range, gender, or ethnicity. Alonso-Fernandez et al. benchmarked the performance of six different CNNs for soft-biometrics. Also for this purpose, the results obtained indicate the possibility of performing soft-biometric classification using images containing only the ocular or mouth regions, without a significant drop in performance in comparison to using the entire face. Furthermore, it can be observed in their study how different CNN models perform better for different population groups in terms of age or ethnicity. Therefore, the authors indicated that the fusion of information stemming from different architectures may improve the performance of the periocular region, making it eventually similar to that of unoccluded facial images. Similarly, the periocular region can also be utilised to estimate emotions using handcrafted textural features [62] or deep learning [63].
D. Voice recognition
Progress in voice recognition has been rapid in recent years [64]-[68]. Being among the most convenient of all biometrics technologies, voice recognition is now also among the most ubiquitous, being used for verification across a broad range of different services and devices, e.g. telephone banking services and devices such as smart phones, speakers, and watches that either contain or provide access to personal or sensitive data.
The consequences of COVID-19 upon voice recognition systems depend largely on the effect of face masks on the production of speech. Face masks obstruct the lower parts of the face and present an obstacle to the usual transmission of speech sounds; they interfere with the air pressure variations emanating from the mouth and nose. The effect is similar to acoustic filters such as sound-absorbing fabrics used for soundproofing or automobile exhaust mufflers [69]. Since masks are designed to hinder the propagation of viral particles of sub-micron size, they typically consist of particularly dense fabric layers. The effect on speech is an often-substantial attenuation and damping. A study on the impact of fabrics on sound is reported in [70], [71], which shows how acoustic effects are influenced by the particular textile and its thickness, density and porosity. Denser structures tend to absorb sound at frequencies above 2 kHz, while thicker structures absorb sound at frequencies below 500 Hz. With these bands overlapping those of human speech, masks attenuate and distort speech signals and hence degrade the reliability of voice biometric systems trained with normal (unmasked) speech.
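A rough way to reason about (or simulate) this band-dependent attenuation is to apply a filter to clean speech; the following minimal Python sketch low-passes a signal at 2 kHz with SciPy, the corner frequency and filter order being illustrative choices rather than measured mask responses.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16000                                  # assumed sampling rate (Hz)
speech = np.random.randn(fs)                # placeholder for 1 s of speech

# 4th-order low-pass at 2 kHz: crudely mimics a dense fabric absorbing
# the higher frequency bands discussed above.
b, a = butter(4, 2000, btype="low", fs=fs)
masked_speech = filtfilt(b, a, speech)
```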
Masks can also have a negative impact on presentation attack detection (PAD) systems, which provide countermeasures to discriminate bona fide from spoofed speech. These systems are based on spectral features obtained from the two classes; clearly, any modification or deviation of the bona fide spectrum makes it more difficult to detect. Moreover, other countermeasure systems are based on the detection of pop noise [72]: a bona fide user emits pop noise which is naturally incurred while speaking close to the microphone. This noise is attenuated by the mask and, consequently, PAD performance decreases. Fig. 4 shows speech waveforms and corresponding spectrograms derived using the short-time Fourier transform (STFT) for four different recordings of read speech. The text content is identical for all four recordings: allow each child to have an ice pop. The first is a regular, mask-free recording while the other three are of the same speaker wearing a surgical mask, a thin or light cloth mask and a dense cloth mask. Note that the word pop, pronounced at the end of the sentence, becomes less and less noticeable as heavier masks are worn. Another notable effect concerns the attenuation of high frequencies for heavier masks, which affects not only recognition performance but also speech intelligibility [73].
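For reference, a spectrogram of the kind shown in Fig. 4 can be computed with a few lines of Python; the file name, mono format and window length below are assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

fs, x = wavfile.read("speech.wav")          # assumed mono recording
x = x.astype(np.float64)

# 512-sample Hann windows; Zxx holds the complex STFT coefficients.
f, t, Zxx = stft(x, fs=fs, nperseg=512)
log_spectrogram = 20 * np.log10(np.abs(Zxx) + 1e-10)   # magnitude in dB
```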
Related to the aforementioned issues, a study of the impact of face coverings upon voice biometrics is reported in [74]. It assessed and analysed the acoustic properties of four coverings (motorcycle helmet, rubber mask, surgical mask and scarf). The impact of all four coverings was found to be negligible for frequencies below 1 kHz, while substantial levels of attenuation were observed for frequencies above 4 kHz; 4 kHz is not a universal mark, however, since peaks at 1.8 kHz are reported for some masks. Face coverings were shown to degrade the accuracy of an i-vector/PLDA speaker recognition system. However, the treatment of speech data with inverted mask transfer functions was shown to improve accuracy to a level closer to the original. Similarly, face masks distort speech data above 4 kHz. The resulting degradation in performance, however, is modest, since the substantial effects are at higher frequencies, where speech energy (and discriminative biometric information) is typically lower than at lower frequencies, where the effects are much milder.
To reflect the current issues in the voice biometrics community, the 2020 edition of the Computational Paralinguistics Challenge (ComParE), its 12th, included a mask detection sub-challenge. System fusion results for the challenge baselines show that the task is far from being solved. Speech signals, in this context, are not only relevant to voice biometrics but can also be used to detect such signal distortions.
The existing work demonstrates that facial masks do affect voice-based technologies, and that there is potential to compensate for these effects. The relevance of speaker recognition thus increases at this time, since it is unintrusive and touchless, i.e. it can be performed at a distance, without any physical interaction (e.g. over the phone).
III. REMOTE AND MOBILE BIOMETRIC RECOGNITION
The COVID-19 pandemic has caused disruptions to many aspects of life. As a result of physical interactions being necessarily limited or even forbidden, many have had no alternative but to work remotely or to receive education online. With authentication being needed to access many services and resources, and without the possibility of physical means of identification, the deployment of biometric solutions for remote authentication has soared in recent times [75]. Remote biometric authentication has already attracted significant attention [9], [76] and is already being exploited for, e.g., eBanking, eLearning, and eBorders. With an increasing percentage of personal mobile devices now incorporating fingerprint, microphone and imaging sensors, remote biometric authentication is deployable even without the need for costly, specialist or shared equipment. The latter is of obvious appeal in a pandemic, where the use of touchless, personal biometric sensors and devices can help reduce the spread of the virus.
Some specific biometric characteristics lend themselves more naturally to remote authentication than others; suitability is dictated by the level of required user cooperation and the need for specialist sensors. Face, voice, and keystroke/mouse dynamics are among the most popular characteristics for remote biometric authentication [77], [78]. These characteristics can be captured with sensors which are likely to be embedded in the subjects' devices, e.g. camera, microphone, keyboard and mouse. As discussed in the following, remote biometric authentication entails a number of specific challenges related to mobile biometrics, remote education, as well as security and privacy.
A. Mobile biometrics
The ever-increasing number of smartphones in use today has fueled research in mobile biometric recognition solutions, e.g. mobile face recognition [79] and mobile voice recognition [80]-[82]. Numerous biometric algorithms specifically designed or adapted to the mobile environment have been proposed in the literature [83]. Additionally, commercial solutions for mobile biometric recognition based on inbuilt smartphone sensors or hardware/software co-design are already available.
Proposed solutions can be categorized depending on where the comparison of biometric data takes place:
• Biometric comparison is performed on the client side, as proposed by the Fast IDentity Online (FIDO) Alliance [84]. An advantage of this scheme is that biometric data are kept on the user device, leading to improved privacy protection. On the other hand, users may require specific sensors and installed software to enable authentication.
• Biometric comparison is performed on the server side. These comparisons depend upon the secure transmission of biometric data (see Section III-C), with relatively little specific software being required on the user device.
One limiting factor of mobile biometrics stems from processing complexity and memory footprints. Whereas server-side computation capacity and memory resources are typically abundant, the resources of mobile devices running on battery power are relatively limited. Many state-of-the-art biometric recognition algorithms are based on large (deep) neural networks which require a large amount of data storage and are computationally expensive, thereby prohibiting their deployment on mobile devices. This has spurred research into efficient, low-footprint approaches to biometric computation, e.g. using smaller, shallower neural networks [85]. A number of different approaches to compress neural networks have been proposed, e.g. based on student-teacher networks [86] or pruning [87]; a minimal sketch of the latter is given below. These approaches trade model size and inference time against system performance. However, this trade-off still has to be optimized for mobile systems, and the implications of limited resources extend to other biometric sub-processes too, e.g. PAD.
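The following Python sketch illustrates magnitude-based weight pruning, one of the compression strategies mentioned above; the layer shape and the 80% sparsity level are illustrative assumptions.

```python
import numpy as np

def prune(weights, sparsity=0.8):
    # Zero out the smallest-magnitude weights, keeping the largest 20%;
    # the zeroed entries can then be stored and multiplied sparsely.
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

layer = np.random.randn(256, 128)       # placeholder dense layer
pruned = prune(layer)
print(np.mean(pruned == 0.0))           # roughly 0.8 of the weights removed
```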
In summary, mobile biometric authentication clearly has a role to play in the COVID-19 era. Touchless, personal mobile biometrics solutions can help to deliver reliable authentication while also meeting strict hygiene requirements, even if the efficient integration of biometric recognition technologies into mobile device platforms remains challenging.
B. Biometrics in remote education
The use of learning management systems has increased dramatically in recent years, not least due to the promotion of home-schooling and eLearning during the COVID-19 pandemic. Learning management systems deliver remote education via electronic media. eLearning systems often require some form of identity management for the authentication of remote students. Biometrics solutions have proved extremely popular, with a number of strategies to integrate biometric recognition in eLearning environments having been proposed in recent years [88], [89].
Fig. 5. BioID® Identity Proofing for e-learning platforms [93].

In the eLearning arena, biometric technologies are used for user login, user monitoring, attention or emotion estimation, and authorship verification. Fig. 5 shows an example of user login to an eLearning platform. Both one-time authentication (biometric verification at a single point in time) and continuous authentication (periodic verification over time) have utility in eLearning scenarios. Whereas one-time authentication might be suitable to authenticate students submitting homework, continuous authentication may be preferred to prevent students from cheating while sitting remote examinations [90]. In order to minimise inconvenience, continuous biometric authentication calls for the use of biometric characteristics which require little to no user cooperation [88], e.g. text-independent keystroke dynamics [91], [92], for which a minimal feature-extraction sketch is given below.
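As a sketch of how keystroke dynamics can support such unobtrusive authentication, the Python snippet below derives dwell and flight times from (key, press time, release time) events and compares simple statistics against a stored user profile; the event timings, profile values and decision threshold are all illustrative assumptions.

```python
import numpy as np

# (key, press_time, release_time) events captured during normal typing.
events = [("h", 0.00, 0.09), ("i", 0.21, 0.28), (" ", 0.45, 0.52)]

dwell = np.array([release - press for _, press, release in events])
flight = np.array([events[i + 1][1] - events[i][2]   # release -> next press
                   for i in range(len(events) - 1)])

probe = np.array([dwell.mean(), dwell.std(), flight.mean(), flight.std()])
profile = np.array([0.08, 0.02, 0.12, 0.05])   # enrolled statistics (assumed)

distance = np.linalg.norm(probe - profile)
authenticated = distance < 0.10                # illustrative threshold
```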
Presentation attacks can present a substantial threat to biometric technologies deployed in such scenarios (see Section III-C). This might be why, despite significant research interest, only a few biometric recognition systems have been deployed in operational eLearning scenarios [88]. Even so, eLearning systems will likely become more popular while the pandemic continues and, once operational, their use will likely be maintained in the future.
C. Security and privacy in remote biometrics
The remote collection of biometric information gives rise to obvious security and privacy concerns; the trustworthiness of the collection environment cannot be guaranteed. One of the potentially gravest threats in this case, especially given the absence of any human supervision (e.g. in contrast to the automated border control use case), is that of presentation attacks or 'spoofing' [94]-[96]. Presentation attacks involve the presentation of false, manipulated or synthesized samples to a biometric system by an attacker seeking to masquerade as another individual. Diverse presentation attack instruments, ranging from face masks to gummy fingers, have all been proven to pose a threat. The detection of presentation attacks in a remote setting can be more challenging than in a local setting, depending on whether detection countermeasures are implemented on the client side or the server side. If PAD is performed on the client side, hardware-based detection approaches can be employed, though these require specific, additional equipment beyond that used purely for recognition. Even these approaches might still be vulnerable to presentation attacks, as demonstrated for Apple's Face ID system [97]. If PAD is implemented on the server side, then software-based attack detection mechanisms represent the only solution. Such software-based PAD for remote face and voice recognition was explored in the EU-H2020 TeSLA project [98]. It is expected that more research will be devoted to this topic in the future [99], [100].
In addition to the threat of direct attacks performed at the sensor level, there is also the possibility of indirect attacks performed at the system level. The storage of personal biometric information on mobile devices as well as the transmission of this information from the client to a cloud-based server calls for strong data protection mechanisms. While traditional encryption and cryptographic protocols can obviously be applied to the protection of biometric data, any processing applied to the data requires prior decryption, which still leaves biometric information vulnerable to interception. Encryption mechanisms designed specifically for biometric recognition in the form of template protection [101] overcome this vulnerability by enabling comparison of biometric data in the encrypted domain. Specific communication architectures that ensure privacy protection in remote biometric authentication scenarios where biometric data is transmitted between a client and a server have already been introduced, e.g. the Biometric Open Protocol Standard (BOPS) [102], which supports the homomorphic encryption [103] of biometric data.
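To give a flavour of comparison on protected templates, the following Python sketch implements a BioHashing-style scheme (user-specific random projection followed by binarisation), in which protected templates are compared directly by Hamming distance and can be revoked by reissuing the seed. This is only an illustration of the principle, not the BOPS protocol; deployed schemes, and homomorphic encryption in particular, are substantially more involved. Feature dimensions, seeds and thresholds are assumptions.

```python
import numpy as np

def protect(template, seed, bits=64):
    # A user-specific secret seed drives the random projection; revoking
    # the seed invalidates all previously issued protected templates.
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((bits, template.size))
    return (projection @ template > 0).astype(np.uint8)

def hamming(a, b):
    return np.count_nonzero(a != b) / a.size

feature = np.random.randn(512)                 # placeholder feature vector
enrolled = protect(feature, seed=1234)
probe = protect(feature + 0.1 * np.random.randn(512), seed=1234)

match = hamming(enrolled, probe) < 0.25        # illustrative threshold
```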
As described in this section, the use of remote biometric authentication in the times of COVID-19 provides many advantages. However, in order to achieve trustworthy identity management, it also requires appropriate mechanisms to protect privacy. Countermeasures to prevent or detect presentation attacks are also essential. The latter is usually more challenging in a remote authentication scenario, where the means of detecting attacks may be more limited compared to conventional (accessible) biometric systems.
IV. EMERGING TECHNOLOGIES
As discussed in the previous sections, the COVID-19 pandemic poses specific challenges to biometric technologies. However, it is also expected to foster research and development in emerging biometric characteristics which stand to meet new requirements relating to the pandemic, as well as in the use of biometric information directly for virus detection and the monitoring, e.g., of infected individuals. Such emerging biometric technologies are described in the following.
A. Touchless, hand-based biometrics
Hydro-alcoholic gel, strongly advocated as a convenient means of disinfection during the COVID era, can be used to protect the users of touch-based sensors such as those used for fingerprint recognition [104]. While such gels serve to reduce sensor contamination and pathogen transmission, they tend to dry the skin. The sensitivity of fingerprint sensors to variability in skin hydration is well known: dryness can degrade the quality of acquired fingerprints and hence also recognition reliability [105]. Severe dryness can even prevent successful acquisition, as illustrated in Fig. 6, thereby resulting in failures to acquire.
Hygiene concerns have increased societal resistance to the use of touch-based sensors. These concerns have in turn fueled research efforts into 2D and 3D touchless fingerprint recognition systems [106], [107], such as those illustrated in Fig. 7. Touchless fingerprint sensors are generally either prototype hardware designs [108], [109] or general-purpose devices adapted to touchless fingerprint recognition [110], [111].
Both the capture and processing of fingerprints must usually be adapted to touchless acquisition [106]. The majority of touchless finger image acquisition sensors deliver colour images for which general image processing techniques are employed to improve contrast and sharpness. Traditional minutiae extractors and comparators may then be employed.
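As a minimal sketch of this preprocessing chain, the following Python snippet converts a colour finger image to grayscale and enhances contrast and sharpness with OpenCV before minutiae extraction; the file name, CLAHE parameters and unsharp-masking weights are assumptions.

```python
import cv2

img = cv2.imread("finger.png")                 # assumed colour finger image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Local contrast enhancement (CLAHE) followed by simple unsharp masking;
# the result can then be passed to a standard minutiae extractor.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)
blurred = cv2.GaussianBlur(enhanced, (0, 0), 3)
sharpened = cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)
cv2.imwrite("finger_enhanced.png", sharpened)
```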
The interoperability of touch-based and touchless devices is naturally desirable, e.g. to avoid the need for enrolment in two different systems. Interoperability has, however, proven to be non-trivial [112], [113]. While some differences between the two types of system, e.g. mirroring, colour-to-grayscale conversion or inverted back- and foreground, can be readily compensated for without degrading accuracy, others, e.g. the aspect ratio or deformation estimation, prove more challenging [114], [115] and can degrade reliability. Note that fingerprint images acquired using touchless sensors do not exhibit the deformations caused by pressing the finger onto a surface that characterise images acquired from touch-based sensors. Moreover, DPI alignment and ridge frequency estimation are required to enable a meaningful comparison of fingerprints acquired from touch-based and touchless sensors.
As an alternative to fingerprint recognition, some ATMs already incorporate fingervein-based recognition sensors which are robust to variability in skin hydration as well as presentation attacks. Images of the finger or hand are captured with NIR illumination, since light at NIR frequencies is absorbed differently by hemoglobin and the skin, thereby allowing for the detection of vein patterns. Touchless fingervein and palmvein sensors have been developed [116]-[118], though the lack of any control in the collection process typically causes significant rotation and translation variation. The quality of the capturing device as well as strategies to compensate for nuisance variation are hence key to the collection of high-quality images and reliable performance. Touchless capturing device designs have been presented by various researchers, e.g. in [116]. This work showed that the degradation in recognition performance resulting from touchless acquisition can be addressed using finger misplacement corrections. On the other hand, the approach presented in [117] extracts a region of interest from captured samples and uses an oriented element feature extraction scheme to improve robustness.
The use of finger vein recognition for mobile devices is also emerging. Debiasi et al. developed an auxiliary NIR illumination device for smartphones which supports the capture of hand vascular patterns [119]. The device is connected and controlled via Bluetooth and can be adapted to different smartphones. The authors also presented a challenge-response protocol to prevent replay and presentation attacks, and showed that acceptable verification performance can be achieved using standard finger vein recognition algorithms. The VeinSeek Pro app (https://www.veinseek.com/) is able to capture vein images of the hand without the need for extra hardware. This approach is based on the fact that different colours of light penetrate to different depths within the skin. By removing the signal from superficial layers of the skin, the authors argue that they can more easily see deeper structures. However, to the best of our knowledge there is no analysis so far of the feasibility of using these images for vein-based biometric recognition.
In summary, in the era of the COVID-19 pandemic, touchless hand-based biometric recognition seems to be a viable alternative to conventional touch-based systems. These technologies achieve similar levels of performance as touch-based technologies [106], [107], [116]. Some commercial products based on prototypical hardware design and general purpose devices, e.g. smartphones, are already available on the market. Nonetheless, touchless recognition remains an active field of research where several challenges need to be tackled, in particular recognition in challenging environmental conditions, e.g. uncontrolled background or varying illumination [106], [120].
B. COVID detection with biometric-related technologies
COVID-19 attacks the human body at many levels, but the damage to the respiratory system is what often proves fatal. The production of human speech starts with air in the lungs being forced through the vocal tract. Diminished lung capacity or disease hence impacts upon speech production, and there have been attempts to characterise the effects of COVID-19 upon speech as a means to detect and diagnose infection [121]-[123].
Initial efforts involved the collection and annotation of databases of speech as well as non-speech sounds recorded from healthy speakers and those infected with the COVID-19 virus [124]. The data typically includes recordings of coughs [125]-[127], breathing sounds [128], [129] as well as speech excerpts [130].
The database described in [130] contains recordings of five spoken sentences and in-the-wild speech, all recorded using the WeChat app from 52 COVID-confirmed and hospitalised patients in Wuhan, China, who also rated their sleep quality, fatigue, and anxiety (low, mid, and high). After data preprocessing, 260 audio samples were obtained. While these early works highlight the potential of biometrics and related technology to help in the fight against the COVID-19 pandemic, they also highlight the need for homogenised and balanced databases which can then be used to identify more reliable and consistent biomarkers indicative of COVID-19 infection. The outcomes of these studies are very encouraging: the detection of COVID-19 through voice, but also through coughing or the sound of breathing, has an accuracy comparable to that of antigen or saliva tests [131]-[134].
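A typical pipeline in such studies extracts spectral features from the recordings and trains a standard classifier; the following minimal Python sketch (using librosa and scikit-learn) illustrates the idea with placeholder file names and labels rather than the curated datasets cited above.

```python
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(path):
    # Summary statistics of MFCCs, a common front-end for cough/speech
    # health-screening classifiers.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

files = ["cough_001.wav", "cough_002.wav"]   # placeholder recordings
labels = [0, 1]                              # 0: healthy, 1: COVID-positive

X = np.stack([features(f) for f in files])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```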
Thermal face imaging has also come to play a major role during the pandemic, especially for the rapid surveillance of potential infections among groups of travellers on the move, e.g. in airports [135] and shopping centres [136]. Thermal face images can be used to detect individuals with fever [137], a possible symptom of COVID-19 infection. Similar face captures can also be used as an alternative capture spectrum for face recognition [138]-[140], albeit with verification performance inferior to that in the visible spectrum [141], [142]. Despite the ease with which thermal monitoring can be deployed, it is argued in [143] that body temperature monitoring will be insufficient on its own to prevent the spread of COVID-19 into previously uninfected countries or regions and the seeding of local transmission. The European Union Aviation Safety Agency (EASA) concludes that thermal screening equipment, including thermal scanners, will miss between 1% and 20% of passengers carrying a fever [144].
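The screening step itself typically reduces to thresholding skin temperature within a detected face region; the minimal Python sketch below illustrates this on a radiometric frame (per-pixel temperatures in degrees Celsius), with the frame contents, face region and 38 °C threshold all being assumptions.

```python
import numpy as np

thermal = np.random.normal(34.0, 1.0, (240, 320))   # placeholder radiometric frame
face_region = thermal[60:180, 100:220]              # would come from a face detector

# A high percentile is more robust to sensor noise than the raw maximum.
skin_temp = np.percentile(face_region, 95)
flagged = skin_temp >= 38.0                         # fever candidate for manual check
```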
V. SOCIETAL IMPACT
Like any other technology used by a large population, biometric recognition systems affect society. So far, the positive aspects of such systems (e.g., faster authentication for border crossing or convenience for smartphone unlocking) have outweighed their disadvantages, mostly related to privacy and security issues [145], [146]. Such issues have been thoroughly analysed and (partially) dealt with, thereby increasing user acceptance and boosting the deployment of biometric systems. Nevertheless, in recent years, new concerns have arisen related to the fairness of biometric recognition algorithms [147] and their trustworthiness [148], [149]. In addition, societal and ethical aspects of presentation attack detection methods have also been analysed [150].
In the context of the COVID-19 pandemic, the use of contact-based biometric systems has similarly led to health-related concerns. Systems where contact with the capture device is necessary could still be employed in a private scenario (e.g., for unlocking one's own smartphone or for remote authentication from one's own laptop), but contactless approaches will be preferred for global applications (e.g., building access control) in order to prevent the spread of viruses. In fact, it can be argued that the use of contactless biometrics can even reduce the transmission of pathogens in some scenarios such as airports [151]. This trend will probably remain even after the COVID-19 pandemic can be considered to be over.
On the other hand, the need for further digitalisation at almost all societal levels, including sensitive applications such as online exams or eHealth systems, where subject identification is of the utmost importance, has increased the acceptance of biometric technologies as a convenient and reliable means of authentication. Thus, more research is being done in this area [152], [153], together with socioeconomic analyses of the success and failure of large-scale implementations of such systems [154].
However, further digitalisation also brings some disadvantages. In general, and not only regarding biometric recognition, the tracking activities and health checks implemented worldwide in order to prevent the spread of COVID-19 have had deep implications for the privacy and freedom of individuals. For instance, free travel within Schengen was suspended for months, with travellers needing to fulfil certain criteria in terms of negative COVID-19 tests, vaccination status, or registration forms to enter a country (see https://reopen.europa.eu/en/). In addition, facial recognition systems have been used in countries such as Poland, China, or Russia to ensure that individuals in quarantine remain at home. In spite of the benefits for collective health, "the use of biometrics (including facial recognition) in response to COVID-19 raises a number of privacy and security concerns, particularly when these technologies are being used in the absence of specific guidance or fully informed and explicit consent. Individuals may also have problems exercising a wide range of fundamental rights, including the right of access to their personal data, the right to erasure, and the right to be informed as to the purposes of processing and who that data is shared with", as the Organisation for Economic Co-operation and Development (OECD) states in its policy response to Coronavirus (COVID-19) [155]. Thus, the OECD gives a number of recommendations, including the use of privacy-by-design approaches, such as the ones described in Sect. III-C, and the limitation of the time sensitive data can be stored.
The added societal concerns due to the exploitation of sensitive biometric data have also been addressed by The British Academy [156]. As the Academy points out, "Sharing data is crucial for furthering research and maximising its potential to help overcome the current pandemic and better prepare for future health crises". However, bias or errors derived from the use of biometric technologies for authentication can result in negative impacts such as discrimination, and diminish trust in COVID-19-related technologies. Therefore, the Academy recommends maintaining a human element in the loop. In addition, existing digital inequalities might also limit the potential benefits of health technologies and increase the social disadvantages of some groups. The report also includes some numbers: "6 million people in the UK cannot turn on a device and up to 50% of those are aged under 65". Furthermore, in order to minimise the potential discrimination caused by biometric technologies, the characteristics of several user groups should be considered: for example, apps which rely on voice recognition software may not work effectively for those with a speech impairment, but can be beneficial for those with reduced sight.
In March 2022, the European Data Protection Supervisor (EDPS) published a report on the COVID-19-related processing of the Union institutions, bodies, offices and agencies (EUIs) [157]. In this survey, the EDPS reviews body temperature checks, contact tracing, COVID testing and the handling of results, monitoring presence within the premises, vaccination campaigns, access control, and the use of IT tools in telework. Regarding access control, where biometric recognition systems can be in place, the EUIs correctly informed the individuals about the processing activities carried out and specified a time limit for data retention, as recommended by the OECD. However, as the report points out, the lawful grounds for this identification requirement may not be given, since "staff members [...] cannot provide freely given, specific, informed and unambiguous as well as explicit consent". Similarly, "consent would also not be appropriate for visitors, who are in most cases obliged to come to the EUI premises for work purposes". Also, some EUIs had not indicated that they process health data even though they were doing so. In view of this negative impact on the privacy rights of individuals, the EDPS recommends that the EUIs check the lawfulness and regularly reassess the necessity and proportionality of the existing COVID-related processing activities.
From these reports we can conclude that biometrics and other technologies have not only provided subjects with additional means to access digital services, but have also had a negative impact on their right to privacy. Thus, we would like to urge the community to assess the necessity of identity checks before implementing them, and to use all available tools to minimise the negative impact of such controls: biometric template protection schemes to prevent sensitive data leakage, or presentation attack detection modules to minimise the success chances of identity theft.
VI. CONCLUSIONS
This article has summarised the main challenges posed by the pandemic to biometric recognition, as well as the new opportunities for existing biometrics to be harnessed or adapted to the COVID-19 era, and for biometric technology itself to help in the fight against the virus. The use of hygienic masks covering the nose and mouth, as well as the secondary impacts of strict hygiene measures implemented to control the spread of pathogens, all have the potential to impact upon biometric technology, thereby calling for new research to maintain reliable recognition performance.
Facial biometrics are among the most impacted characteristics; masks occlude a considerable part of the face, leading to degraded recognition performance. This is the case not only for opaque masks but also for transparent face shields, since reflections cause variation that is non-trivial to model. Opportunities to overcome these difficulties are found by focusing on parts of the face that remain uncovered, namely the iris and the wider periocular region.
Whereas solutions to iris recognition that use the NIR spectrum are well studied, numerous efforts in recent years have focused on less constrained approaches to iris recognition that use mobile devices and the visible spectrum. Given the lower quality of such images, image super-resolution techniques have been proposed to improve image quality. Such techniques can also be applied to the full periocular region. To date, the adoption of such systems is low, but likely to increase in the future.
Hand-based biometric systems are also affected by the new hygiene practices, which typically result in drier skin, lower-quality fingerprint images and degraded recognition performance. Both touch-based and touchless systems are affected. Vein-based recognition systems are more robust to variations in skin condition. In contrast to traditional touch-based vein sensors, the touchless capture devices introduced in the last two years can reduce the risk of infection from contact with a contaminated surface. Further research is nonetheless needed to bridge the gap between the performance of less constrained, touchless systems and their better constrained touch-based counterparts.
Like facial biometrics, voice biometric systems are also impacted by the wearing of facial masks, which can interfere with speech production. Like many other forms of illness, COVID-19 infections can interfere with the human speech production system and thereby degrade recognition performance. These same effects upon the speech production mechanism, however, offer potential for the detection of pulmonary complications such as those associated with serious COVID-19 infections.
Still, the challenges in ensuring reliable biometric recognition performance have grown considerably during the COVID-19 era and call for renewed research efforts. With many now working or receiving education at home, some of the greatest challenges relate to the use of biometric technology in remote, unsupervised verification scenarios. This in turn gives greater importance to continuous authentication, presentation attack detection, and biometric template protection to ensure security and privacy in such settings, which have come to so define the COVID-19 era.
Fig. 2. Examples of alternative protective masks (sources: https://www.theclearmask.com/product and https://3dk.berlin/en/covid-19/474-kit-for-face-shield-mask-with-two-transparent-sheets.html).
Fig. 3. IrisGuard Inc. UAE enrolment station (source: https://en.wikipedia.org/wiki/File:IrisGuard-UAE.JPG).
Fig. 4. Examples of four spectrograms of the utterance "allow each child to have an ice pop", pronounced by the same speaker wearing different types of masks: (a) mask-free, (b) surgical, (c) cloth and (d) dense cloth mask.
Fig. 6. Example of a dry fingerprint and the same fingerprint with normal moisture (taken from [105]).
Fig. 7. Touchless capturing of fingerprints: (a) stationary touchless; (b) mobile touchless.
TABLE I
OVERVIEW OF COMMONLY USED BIOMETRIC CHARACTERISTICS IN THE CONTEXT OF COVID-19.

Biometric characteristic  | Acquisition hardware | Operational prevalence | Impact of COVID-19
Face                      | commodity hardware   | wide                   | high
NIR Iris                  | special sensor       | wide                   | low
VIS Iris                  | commodity hardware   | low                    | low
Touch-based Fingerprint   | special sensor       | wide                   | high
Touchless Fingerprint     | commodity hardware   | low                    | low
Touch-based Hand Vein     | special sensor       | low                    | low
Touchless Hand Vein       | special sensor       | low                    | low
Voice                     | commodity hardware   | wide                   | medium
[1] World Health Organization, "Coronavirus disease (COVID-19) pandemic," https://www.who.int/emergencies/diseases/novel-coronavirus-2019, last accessed: July 13, 2022.
[2] European Union, "Re-open EU," https://reopen.europa.eu/, last accessed: July 13, 2022.
[3] #Masks4All, "What countries require masks in public or recommend masks?" https://masks4all.co/what-countries-require-masks-in-public/, last accessed: July 13, 2022.
[4] L. Peeples, "Face masks: what the data say," Nature, vol. 586, no. 7828, pp. 186-189, October 2020.
[5] N. Damer, J. H. Grebe, C. Chen, F. Boutros, F. Kirchbuchner, and A. Kuijper, "The effect of wearing a mask on face recognition performance: an exploratory study," in International Conference of the Biometrics Special Interest Group (BIOSIG). Gesellschaft für Informatik e.V., September 2020, pp. 1-10.
[6] V. M. Patel, R. Chellappa, D. Chandra, and B. Barbello, "Continuous user authentication on mobile devices: Recent progress and remaining challenges," IEEE Signal Processing Magazine, vol. 33, no. 4, pp. 49-61, July 2016.
[7] S. Mondal and P. Bours, "A study on continuous authentication using a combination of keystroke and mouse biometrics," Neurocomputing, vol. 230, pp. 1-22, March 2017.
[8] M. Tistarelli and C. Champod, Handbook of Biometrics for Forensic Science. Springer, 2017.
[9] M. Tistarelli, S. Z. Li, and R. Chellappa, Handbook of Remote Biometrics. Springer, 2009.
[10] A. N. Al-Raisi and A. M. Al-Khouri, "Iris recognition and the challenge of homeland and border control security in UAE," Telematics and Informatics, vol. 25, no. 2, pp. 117-132, May 2008.
[11] A. Dalwai, "Aadhaar technology and architecture: principles, design, best practices and key lessons," Unique Identification Authority of India (UIDAI), Tech. Rep., March 2014.
[12] European Commission, "Smart borders," https://ec.europa.eu/home-affairs/what-we-do/policies/borders-and-visas/smart-borders_en, 2018, last accessed: July 13, 2022.
[13] Thales, "Automated Fingerprint Identification System (AFIS) overview - a short history," https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/biometrics/afis-history, April 2019, last accessed: July 13, 2022.
[14] S. Carlaw, "Impact on biometrics of COVID-19," Biometric Technology Today, vol. 2020, no. 4, pp. 8-9, 2020.
[15] Ž. Emeršič, A. Kumar, B. Harish, W. Gutfeter, J. Khiarak et al., "The unconstrained ear recognition challenge 2019," in Proc. Int. Conf. on Biometrics (ICB), 2019, pp. 1-15.
[16] S. Z. Li and A. K. Jain, Handbook of Face Recognition. Springer, 2011.
[17] I. Masi, Y. Wu, T. Hassner, and P. Natarajan, "Deep face recognition: A survey," in Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, October 2018, pp. 471-478.
[18] G. Guo and N. Zhang, "A survey on deep learning based face recognition," Computer Vision and Image Understanding, vol. 189, p. 102805, December 2019.
[19] M. Opitz, G. Waltner, G. Poier, H. Possegger, and H. Bischof, "Grid loss: Detecting occluded faces," in European Conference on Computer Vision (ECCV). Springer, October 2016, pp. 386-402.
[20] D. Zeng, R. N. J. Veldhuis, and L. Spreeuwers, "A survey of face recognition techniques under occlusion," arXiv preprint arXiv:2006.11366, 2020.
[21] L. Song, D. Gong, Z. Li, C. Liu, and W. Liu, "Occlusion robust face recognition based on mask learning with pairwise differential siamese network," in International Conference on Computer Vision (ICCV). IEEE/CVF, October 2019, pp. 773-782.
[22] M. Ngan, P. Grother, and K. Hanaoka, "Ongoing face recognition vendor test (FRVT) part 6A: Face recognition accuracy with masks using pre-COVID-19 algorithms," National Institute of Standards and Technology, Tech. Rep. NISTIR 8311, July 2020.
[23] M. Ngan, P. Grother, and K. Hanaoka, "Ongoing face recognition vendor test (FRVT) part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms," National Institute of Standards and Technology, Tech. Rep. NISTIR 8331, November 2020.
[24] National Institute of Standards and Technology, "FRVT face mask effects," https://pages.nist.gov/frvt/html/frvt_facemask.html, November 2020, last accessed: July 13, 2022.
[25] Department of Homeland Security, "Biometric Technology Rally at MdTF," https://mdtf.org/Rally2020, 2020, last accessed: July 13, 2022.
[26] Z. Wang, G. Wang, B. Huang, Z. Xiong, Q. Hong, H. Wu, P. Yi, K. Jiang, N. Wang, Y. Pei, H. Chen, Y. Miao, Z. Huang, and J. Liang, "Masked face recognition dataset and application," arXiv preprint arXiv:2003.09093, 2020.
[27] N. Damer, F. Boutros, M. Süßmilch, F. Kirchbuchner, and A. Kuijper, "Extended evaluation of the effect of real and simulated masks on face recognition performance," IET Biometrics, vol. 10, no. 5, pp. 548-561, 2021. [Online]. Available: https://doi.org/10.1049/bme2.12044
Masked face recognition: Human versus machine. N Damer, F Boutros, M Süßmilch, M Fang, F Kirchbuchner, A Kuijper, https:/ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/bme2.12077IET Biometrics. N. Damer, F. Boutros, M. Süßmilch, M. Fang, F. Kirchbuchner, and A. Kuijper, "Masked face recognition: Human versus machine," IET Biometrics, 2022. [Online]. Available: https://ietresearch.onlinelibrary. wiley.com/doi/abs/10.1049/bme2.12077
Generative face completion. Y Li, S Liu, J Yang, M.-H Yang, Conference on Computer Vision and Pattern Recognition (CVPR). Y. Li, S. Liu, J. Yang, and M.-H. Yang, "Generative face completion," in Conference on Computer Vision and Pattern Recognition (CVPR).
. IEEE. IEEE, July 2017, pp. 5892-5900.
Symmetry-aware face completion with generative adversarial networks. J Zhang, R Zhan, D Sun, G Pan, Asian Conference on Computer Vision (ACCV). Springer11364J. Zhang, R. Zhan, D. Sun, and G. Pan, "Symmetry-aware face completion with generative adversarial networks," in Asian Conference on Computer Vision (ACCV), vol. 11364. Springer, December 2018, pp. 289-304.
Does generative face completion help face recognition. J Mathai, I Masi, W Abdalmageed, International Conference on Biometrics (ICB). IEEEJ. Mathai, I. Masi, and W. AbdAlmageed, "Does generative face completion help face recognition?" in International Conference on Biometrics (ICB). IEEE, June 2019, pp. 1-8.
Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. M Sharif, S Bhagavatula, L Bauer, M K Reiter, Conference on Computer and Communications Security. ACMM. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition," in Conference on Computer and Communications Security. ACM, October 2016, pp. 1528-1540.
Face image quality assessment: A literature survey. T Schlett, C Rathgeb, O Henniger, J Galbally, J Fiérrez, C Busch, CoRR. T. Schlett, C. Rathgeb, O. Henniger, J. Galbally, J. Fiérrez, and C. Busch, "Face image quality assessment: A literature survey," CoRR, vol. abs/2009.01103, 2020.
SER-FIQ: unsupervised estimation of face image quality based on stochastic embedding robustness. P Terhörst, J N Kolf, N Damer, F Kirchbuchner, A Kuijper, CVPR. IEEEP. Terhörst, J. N. Kolf, N. Damer, F. Kirchbuchner, and A. Kuijper, "SER-FIQ: unsupervised estimation of face image quality based on stochastic embedding robustness," in CVPR. IEEE, 2020, pp. 5650- 5659.
CR-FIQA: face image quality assessment by learning sample relative classifiability. F Boutros, M Fang, M Klemt, B Fu, N Damer, abs/2112.06592CoRR. F. Boutros, M. Fang, M. Klemt, B. Fu, and N. Damer, "CR- FIQA: face image quality assessment by learning sample relative classifiability," CoRR, vol. abs/2112.06592, 2021. [Online]. Available: https://arxiv.org/abs/2112.06592
Quality-driven face occlusion detection and recovery. D Lin, X Tang, 10.1109/CVPR.2007.3830522007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007). Minneapolis, Minnesota, USAIEEE Computer SocietyD. Lin and X. Tang, "Quality-driven face occlusion detection and recovery," in 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA. IEEE Computer Society, 2007. [Online]. Available: https://doi.org/10.1109/CVPR.2007.383052
Multibranch face quality assessment for face recognition. L Zhang, X Shao, F Yang, P Deng, X Zhou, Y Shi, 10.1109/ICCT46805.2019.894725519th IEEE International Conference on Communication Technology, ICCT 2019. Xi'an, ChinaIEEEL. Zhang, X. Shao, F. Yang, P. Deng, X. Zhou, and Y. Shi, "Multi- branch face quality assessment for face recognition," in 19th IEEE International Conference on Communication Technology, ICCT 2019, Xi'an, China, October 16-19, 2019. IEEE, 2019, pp. 1659-1664. [Online]. Available: https://doi.org/10.1109/ICCT46805.2019.8947255
The effect of face morphing on face image quality. B Fu, N Spiller, C Chen, N Damer, Proceedings of the 20th International Conference of the Biometrics Special Interest Group, BIOSIG 2021, Digital Conference. ser. LNI, A. Brömme, C. Busch, N. Damer, A. Dantcheva, M. Gomez-Barrero, K. B. Raja, C. Rathgeb, A. F. Sequeira, and A. Uhlthe 20th International Conference of the Biometrics Special Interest Group, BIOSIG 2021, Digital ConferenceGesellschaft für Informatik e.V., 2021B. Fu, N. Spiller, C. Chen, and N. Damer, "The effect of face morphing on face image quality," in Proceedings of the 20th International Conference of the Biometrics Special Interest Group, BIOSIG 2021, Digital Conference, September 15-17, 2021, ser. LNI, A. Brömme, C. Busch, N. Damer, A. Dantcheva, M. Gomez-Barrero, K. B. Raja, C. Rathgeb, A. F. Sequeira, and A. Uhl, Eds., vol. P-315. Gesellschaft für Informatik e.V., 2021, pp. 173-180. [Online].
How to correctly detect face-masks for covid-19 from visual information?. B Batagelj, P Peer, V Štruc, S Dobrišek, Applied Sciences. 1152070B. Batagelj, P. Peer, V.Štruc, and S. Dobrišek, "How to correctly detect face-masks for covid-19 from visual information?" Applied Sciences, vol. 11, no. 5, p. 2070, 2021.
Mfr 2021: Masked face recognition competition. F Boutros, N Damer, J N Kolf, K Raja, Proc. Int. Joint Conf. on Biometrics (IJCB). Int. Joint Conf. on Biometrics (IJCB)F. Boutros, N. Damer, J. N. Kolf, K. Raja et al., "Mfr 2021: Masked face recognition competition," in Proc. Int. Joint Conf. on Biometrics (IJCB), 2021, pp. 1-10.
Mask-invariant face recognition through template-level knowledge distillation. M Huber, F Boutros, F Kirchbuchner, N Damer, 10.1109/FG52635.2021.966708116th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021. Jodhpur, IndiaM. Huber, F. Boutros, F. Kirchbuchner, and N. Damer, "Mask-invariant face recognition through template-level knowledge distillation," in 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Jodhpur, India, December 15- 18, 2021. IEEE, 2021, pp. 1-8. [Online]. Available: https: //doi.org/10.1109/FG52635.2021.9667081
Selfrestrained triplet loss for accurate masked face recognition. F Boutros, N Damer, F Kirchbuchner, A Kuijper, 10.1016/j.patcog.2021.108473Pattern Recognit. 124108473F. Boutros, N. Damer, F. Kirchbuchner, and A. Kuijper, "Self- restrained triplet loss for accurate masked face recognition," Pattern Recognit., vol. 124, p. 108473, 2022. [Online]. Available: https://doi.org/10.1016/j.patcog.2021.108473
How iris recognition works. J Daugman, Transactions on Circuits and Systems for Video Technology (TCSVT). 14J. Daugman, "How iris recognition works," Transactions on Circuits and Systems for Video Technology (TCSVT), vol. 14, no. 1, pp. 21-30, January 2004.
Iris on the move™. J R Matey, Encyclopedia of Biometrics, S. Z. Li and A. K. JainSpringer USJ. R. Matey, "Iris on the move™," in Encyclopedia of Biometrics, S. Z. Li and A. K. Jain, Eds. Springer US, 2009, pp. 805-810.
Long range iris recognition: A survey. K Nguyen, C Fookes, R Jillela, S Sridharan, A Ross, Pattern Recognition. 72K. Nguyen, C. Fookes, R. Jillela, S. Sridharan, and A. Ross, "Long range iris recognition: A survey," Pattern Recognition, vol. 72, pp. 123-143, December 2017.
Iris recognition: On the segmentation of degraded images acquired in the visible wavelength. H Proença, Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 32H. Proença, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 8, pp. 1502-1516, July 2009.
Smartphone based visible iris recognition using deep sparse filtering. K B Raja, R Raghavendra, V K Vemuri, C Busch, Pattern Recognition Letters. 57K. B. Raja, R. Raghavendra, V. K. Vemuri, and C. Busch, "Smartphone based visible iris recognition using deep sparse filtering," Pattern Recognition Letters, vol. 57, pp. 33-42, May 2015.
Ocular biometrics in the visible spectrum: A survey. A Rattani, R Derakhshani, Image and Vision Computing. 59A. Rattani and R. Derakhshani, "Ocular biometrics in the visible spectrum: A survey," Image and Vision Computing, vol. 59, pp. 1-16, March 2017.
An efficient superresolution single image network using sharpness loss metrics for iris. J Tapia, M Gomez-Barrero, C Busch, Proc. Int. Workshop on Information Forensics and Security (WIFS). Int. Workshop on Information Forensics and Security (WIFS)J. Tapia, M. Gomez-Barrero, and C. Busch, "An efficient super- resolution single image network using sharpness loss metrics for iris," in Proc. Int. Workshop on Information Forensics and Security (WIFS).
. IEEE. IEEE, December 2020.
Unique Identification Authority of India. Aadhaar dashboardUnique Identification Authority of India, "Aadhaar dashboard," https: //www.uidai.gov.in/aadhaar dashboard/, last accessed: July 13, 2022.
Searching for doppelgängers: assessing the universality of the IrisCode impostors distribution. J Daugman, C Downing, IET Biometrics. 52J. Daugman and C. Downing, "Searching for doppelgängers: assessing the universality of the IrisCode impostors distribution," IET Biometrics, vol. 5, no. 2, pp. 65-75, June 2016.
Retina verification system based on biometric graph matching. S M Lajevardi, A Arakala, S A Davis, K J Horadam, Transactions on Image Processing. 229S. M. Lajevardi, A. Arakala, S. A. Davis, and K. J. Horadam, "Retina verification system based on biometric graph matching," Transactions on Image Processing, vol. 22, no. 9, pp. 3625-3635, June 2013.
P Rot, M Vitek, K Grm, Ž Emeršič, P Peer, V Štruc, Deep Sclera Segmentation and Recognition. SpringerP. Rot, M. Vitek, K. Grm,Ž. Emeršič, P. Peer, and V.Štruc, Deep Sclera Segmentation and Recognition. Springer, 2020, ch. 13, pp. 395-432.
A survey on periocular biometrics research. F Alonso-Fernandez, J Bigun, Pattern Recognition Letters. 82F. Alonso-Fernandez and J. Bigun, "A survey on periocular biometrics research," Pattern Recognition Letters, vol. 82, pp. 92-105, October 2016.
Smartphone authentication system using periocular biometrics. K B Raja, R Raghavendra, M Stokkenes, C Busch, Proc. Int. Conf. of the Biometrics Special Interest Group. Int. Conf. of the Biometrics Special Interest GroupK. B. Raja, R. Raghavendra, M. Stokkenes, and C. Busch, "Smartphone authentication system using periocular biometrics," in Proc. Int. Conf. of the Biometrics Special Interest Group (BIOSIG), 2014, pp. 1-8.
Periocular biometrics in mobile environment. T De Freitas Pereira, S Marcel, International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEET. de Freitas Pereira and S. Marcel, "Periocular biometrics in mobile environment," in International Conference on Biometrics Theory, Ap- plications and Systems (BTAS). IEEE, September 2015, pp. 1-7.
Periocular biometrics in the visible spectrum. U Park, R R Jillela, A Ross, A K Jain, Transactions on Information Forensics and Security (TIFS). 61U. Park, R. R. Jillela, A. Ross, and A. K. Jain, "Periocular biometrics in the visible spectrum," Transactions on Information Forensics and Security (TIFS), vol. 6, no. 1, pp. 96-106, March 2011.
Probing fairness of mobile ocular biometrics methods across gender on visob 2.0 dataset. A Krishnan, A Almadan, A Rattani, International Conference on Pattern Recognition. A. Krishnan, A. Almadan, and A. Rattani, "Probing fairness of mobile ocular biometrics methods across gender on visob 2.0 dataset," in International Conference on Pattern Recognition, 2021, pp. 229-243.
Multimodal authentication system for smartphones using face, iris and periocular. K B Raja, R Raghavendra, M Stokkenes, C Busch, International Conference on Biometrics (ICB). IEEEK. B. Raja, R. Raghavendra, M. Stokkenes, and C. Busch, "Multi- modal authentication system for smartphones using face, iris and periocular," in International Conference on Biometrics (ICB). IEEE, May 2015, pp. 143-150.
Multi-biometric template protection -a security analysis of binarized statistical features for bloom filters on smartphones. M Stokkenes, R Raghavendra, M K Sigaard, K Raja, M Gomez-Barrero, C Busch, International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEEM. Stokkenes, R. Raghavendra, M. K. Sigaard, K. Raja, M. Gomez- Barrero, and C. Busch, "Multi-biometric template protection -a security analysis of binarized statistical features for bloom filters on smartphones," in International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, December 2016, pp. 1-6.
Periocular recognition under unconstrained conditions using CNN-based super-resolution. V M Ipe, T Thomas, International Conference on Advanced Communication and Networking (ICACN). SpringerV. M. Ipe and T. Thomas, "Periocular recognition under uncon- strained conditions using CNN-based super-resolution," in Interna- tional Conference on Advanced Communication and Networking (ICACN). Springer, December 2019, pp. 235-246.
Expression recognition using the periocular region: A feasibility study. F Alonso-Fernandez, J Bigun, C Englund, International Conference on Signal-Image Technology Internet-Based Systems (SITIS). IEEEF. Alonso-Fernandez, J. Bigun, and C. Englund, "Expression recog- nition using the periocular region: A feasibility study," in Interna- tional Conference on Signal-Image Technology Internet-Based Systems (SITIS). IEEE, November 2018, pp. 536-541.
Emotion detection using periocular region: A cross-dataset study. N Reddy, R Derakhshani, International Joint Conference on Neural Networks (IJCNN). IEEEN. Reddy and R. Derakhshani, "Emotion detection using periocular region: A cross-dataset study," in International Joint Conference on Neural Networks (IJCNN). IEEE, July 2020, pp. 1-6.
An overview of text-independent speaker recognition: From features to supervectors. T Kinnunen, H Li, Speech communication. 521T. Kinnunen and H. Li, "An overview of text-independent speaker recognition: From features to supervectors," Speech communication, vol. 52, no. 1, pp. 12-40, 2010.
Speaker recognition by machines and humans: A tutorial review. J H L Hansen, T Hasan, IEEE Signal Processing Magazine. 326J. H. L. Hansen and T. Hasan, "Speaker recognition by machines and humans: A tutorial review," IEEE Signal Processing Magazine, vol. 32, no. 6, pp. 74-99, 11 2015.
Articulation rate filtering of CQCC features for automatic speaker verification. M Todisco, H Delgado, N Evans, Proc. Interspeech. InterspeechM. Todisco, H. Delgado, and N. Evans, "Articulation rate filtering of CQCC features for automatic speaker verification," in Proc. Inter- speech, 2016.
Voxceleb: a large-scale speaker identification dataset. A Nagrani, J S Chung, A Zisserman, arXiv:1706.08612arXiv preprintA. Nagrani, J. S. Chung, and A. Zisserman, "Voxceleb: a large-scale speaker identification dataset," arXiv preprint arXiv:1706.08612, 2017.
X-vectors: Robust dnn embeddings for speaker recognition. D Snyder, D Garcia-Romero, G Sell, D Povey, S Khudanpur, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEED. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur, "X-vectors: Robust dnn embeddings for speaker recognition," in 2018 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP). IEEE, 2018, pp. 5329-5333.
Automotive sound absorbing material survey results. A Zent, J T Long, SAE 2007 Noise and Vibration Conference and Exhibition. SAE InternationalA. Zent and J. T. Long, "Automotive sound absorbing material survey results," in SAE 2007 Noise and Vibration Conference and Exhibition. SAE International, May 2007, pp. 1-7.
Investigation on sound absorption properties for recycled fibrous materials. H S Seddeq, N M Aly, A A Marwa, M H Elshakankery, Journal of Industrial Textiles. 431H. S. Seddeq, N. M. Aly, A. A. Marwa, and M. H. Elshakankery, "Investigation on sound absorption properties for recycled fibrous materials," Journal of Industrial Textiles, vol. 43, no. 1, pp. 56-73, July 2013.
Acoustic energy absorption properties of fibrous materials: A review. X Tang, X Yan, Composites Part A: Applied Science and Manufacturing. 101X. Tang and X. Yan, "Acoustic energy absorption properties of fibrous materials: A review," Composites Part A: Applied Science and Manu- facturing, vol. 101, pp. 360-380, October 2017.
VoicePop: A pop noise based anti-spoofing system for voice authentication on smartphones. Q Wang, X Lin, M Zhou, Y Chen, C Wang, Q Li, X Luo, Conference on Computer Communications. IEEEQ. Wang, X. Lin, M. Zhou, Y. Chen, C. Wang, Q. Li, and X. Luo, "VoicePop: A pop noise based anti-spoofing system for voice authenti- cation on smartphones," in Conference on Computer Communications. IEEE, April 2019, pp. 2062-2070.
Large-Scale Mixed-Bandwidth Deep Neural Network Acoustic Modeling for Automatic Speech Recognition. K.-N C Mac, X Cui, W Zhang, M Picheny, Proc. Interspeech. InterspeechK.-N. C. Mac, X. Cui, W. Zhang, and M. Picheny, "Large-Scale Mixed- Bandwidth Deep Neural Network Acoustic Modeling for Automatic Speech Recognition," in Proc. Interspeech 2019, 2019, pp. 251-255.
Analysis of face mask effect on speaker recognition. R Saeidi, I Huhtakallio, P Alku, Interspeech. ISCAR. Saeidi, I. Huhtakallio, and P. Alku, "Analysis of face mask effect on speaker recognition," in Interspeech. ISCA, September 2016, pp. 1800-1804.
Remote authentication keeps the world working: A biometric update interview series. C Burt, last accessedC. Burt, "Remote authentication keeps the world working: A biometric update interview series," https://www.biometricupdate.com/202005/ remote-authentication-keeps-the-world-working-a-biometric-update- interview-series, last accessed: July 13, 2022.
. G Guo, H Wechsler, Mobile Biometrics, ser. Security. Institution of Engineering and Technology. G. Guo and H. Wechsler, Mobile Biometrics, ser. Security. Institution of Engineering and Technology, 2017.
An enhanced model of biometric authentication in e-learning: Using a combination of biometric features to access e-learning environments. N Kaur, P W C Prasad, A Alsadoon, L Pham, A Elchouemi, International Conference on Advances in Electrical, Electronic and Systems Engineering (ICAEES). IEEEN. Kaur, P. W. C. Prasad, A. Alsadoon, L. Pham, and A. Elchouemi, "An enhanced model of biometric authentication in e-learning: Using a combination of biometric features to access e-learning environments," in International Conference on Advances in Electrical, Electronic and Systems Engineering (ICAEES). IEEE, November 2016, pp. 138-143.
A multi-biometric system for continuous student authentication in e-learning platforms. G Fenu, M Marras, L Boratto, Pattern Recognition Letters. 113G. Fenu, M. Marras, and L. Boratto, "A multi-biometric system for continuous student authentication in e-learning platforms," Pattern Recognition Letters, vol. 113, pp. 83-92, October 2018.
A survey of mobile face biometrics. A Rattani, R Derakhshani, Computers & Electrical Engineering. 72A. Rattani and R. Derakhshani, "A survey of mobile face biometrics," Computers & Electrical Engineering, vol. 72, pp. 39-52, November 2018.
The 2013 speaker recognition evaluation in mobile environment. E Khoury, B Vesnicer, J Franco-Pedroso, R Violato, Z Boulkcnafet, L M Mazaira Fernández, M Diez, J Kosmala, H Khemiri, T Cipr, R Saeidi, M Günther, J Žganec-Gros, R Z Candil, F Simões, M Bengherabi, A Marquina, M Penagarikano, A Abad, M Boulayemen, P Schwarz, D Van Leeuwen, J González-Domínguez, M U Neto, E Boutellaa, P G Vilda, A Varona, D Petrovska-Delacrétaz, P Matějka, J González-Rodríguez, T Pereira, F Harizi, L J Rodriguez-Fuentes, L E Shafey, M Angeloni, G Bordel, G Chollet, S Marcel, International Conference on Biometrics (ICB). IEEEE. Khoury, B. Vesnicer, J. Franco-Pedroso, R. Violato, Z. Boulkc- nafet, L. M. Mazaira Fernández, M. Diez, J. Kosmala, H. Khemiri, T. Cipr, R. Saeidi, M. Günther, J.Žganec-Gros, R. Z. Candil, F. Simões, M. Bengherabi, A.Álvarez Marquina, M. Penagarikano, A. Abad, M. Boulayemen, P. Schwarz, D. Van Leeuwen, J. González- Domínguez, M. U. Neto, E. Boutellaa, P. G. Vilda, A. Varona, D. Petrovska-Delacrétaz, P. Matějka, J. González-Rodríguez, T. Pereira, F. Harizi, L. J. Rodriguez-Fuentes, L. E. Shafey, M. Angeloni, G. Bordel, G. Chollet, and S. Marcel, "The 2013 speaker recognition evaluation in mobile environment," in International Conference on Biometrics (ICB). IEEE, June 2013, pp. 1-8.
System and method for speaker recognition on mobile devices. M G Gomar, 867M. G. Gomar, "System and method for speaker recognition on mobile devices," May 2015, uS Patent 9,042,867.
Smart and robust speaker recognition for context-aware in-vehicle applications. I Bisio, C Garibotto, A Grattarola, F Lavagetto, A Sciarrone, Transactions on Vehicular Technology (TVT). 679I. Bisio, C. Garibotto, A. Grattarola, F. Lavagetto, and A. Sciarrone, "Smart and robust speaker recognition for context-aware in-vehicle applications," Transactions on Vehicular Technology (TVT), vol. 67, no. 9, pp. 8808-8821, June 2018.
Selfie Biometrics, ser. Advances and Challenges. A Rattani, R Derakhshani, A Ross, SpringerA. Rattani, R. Derakhshani, and A. Ross, Selfie Biometrics, ser. Advances and Challenges. Springer, 2019.
Fast identity online. Fido Alliance, FIDO Alliance, "Fast identity online," https://fidoalliance.org/, 2020, last accessed: July 13, 2022.
Do deep nets really need to be deep. L J Ba, R Caruana, International Conference on Neural Information Processing Systems. MIT Press2L. J. Ba and R. Caruana, "Do deep nets really need to be deep?" in International Conference on Neural Information Processing Systems - Volume 2. MIT Press, December 2014, pp. 2654-2662.
Face model compression by distilling knowledge from neurons. P Luo, Z Zhu, Z Liu, X Wang, X Tang, Conference on Artificial Intelligence (AAAI). ACMP. Luo, Z. Zhu, Z. Liu, X. Wang, and X. Tang, "Face model compres- sion by distilling knowledge from neurons," in Conference on Artificial Intelligence (AAAI). ACM, February 2016, pp. 3560-3566.
Pruning convolutional neural networks for resource efficient inference. P Molchanov, S Tyree, T Karras, T Aila, J Kautz, International Conference on Learning Representations (ICLR). Open-Review.net. P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, "Pruning convolutional neural networks for resource efficient inference," in International Conference on Learning Representations (ICLR). Open- Review.net, April 2017, pp. 1-17.
Biometric technologies for elearning: State-of-the-art, issues and challenges. C Rathgeb, K Pöppelmann, E Gonzalez-Sosa, International Conference on Emerging eLearning Technologies and Applications (ICETA). C. Rathgeb, K. Pöppelmann, and E. Gonzalez-Sosa, "Biometric tech- nologies for elearning: State-of-the-art, issues and challenges," in International Conference on Emerging eLearning Technologies and Applications (ICETA), 2020, pp. 1-6.
Remote biometric verification for elearning applications: Where we are. P S Sanna, G L Marcialis, International Conference Image Analysis and Processing (ICIAP). SpringerP. S. Sanna and G. L. Marcialis, "Remote biometric verification for elearning applications: Where we are," in International Conference Image Analysis and Processing (ICIAP). Springer, September 2017, pp. 373-383.
Continuous biometric user authentication in online examinations. E Flior, K Kowalski, International Conference on Information Technology: New Generations (ITNG). IEEEE. Flior and K. Kowalski, "Continuous biometric user authentication in online examinations," in International Conference on Information Technology: New Generations (ITNG). IEEE, April 2010, pp. 488- 492.
Keystroke biometrics for student authentication: A case study. A Morales, J Fierrez, Conference on Innovation and Technology in Computer Science Education (ITiCSE). ACM337A. Morales and J. Fierrez, "Keystroke biometrics for student authenti- cation: A case study," in Conference on Innovation and Technology in Computer Science Education (ITiCSE). ACM, June 2015, p. 337.
A study on continuous authentication using a combination of keystroke and mouse biometrics. S Mondal, P Bours, Neurocomputing. 230S. Mondal and P. Bours, "A study on continuous authentication using a combination of keystroke and mouse biometrics," Neurocomputing, vol. 230, pp. 1 -22, 2017.
Identity assured online exams & personalized elearning. Gmbh Bioid, BioID, GmbH, "Identity assured online exams & personalized e- learning," https://www.bioid.com/online-exams-e-learning/, 2020, last accessed: July 13, 2022.
Enhancing security and privacy in biometrics-based authentication systems. N Ratha, J Connell, R Bolle, IBM Systems Journal. 403N. Ratha, J. Connell, and R. Bolle, "Enhancing security and privacy in biometrics-based authentication systems," IBM Systems Journal, vol. 40, no. 3, pp. 614-634, March 2001.
S Marcel, M S Nixon, J Fierrez, N Evans, Handbook of Biometric Anti-Spoofing: Presentation Attack Detection. SpringerS. Marcel, M. S. Nixon, J. Fierrez, and N. Evans, Handbook of Biometric Anti-Spoofing: Presentation Attack Detection. Springer, 2019.
Presentation attack detection methods for face recognition systems: A comprehensive survey. R Raghavendra, C Busch, ACM Comput. Surv. 501R. Raghavendra and C. Busch, "Presentation attack detection methods for face recognition systems: A comprehensive survey," ACM Comput. Surv., vol. 50, no. 1, pp. 1-37, 2017.
How Bkav tricked iPhone X's Face ID with a mask. Bkav Corp, Bkav Corp, "How Bkav tricked iPhone X's Face ID with a mask," https: //www.youtube.com/watch?v=i4YQRLQVixM, 2017, last accessed: July 13, 2022.
Enhancing trust in eAssessment -the TeSLA system solution. S Bhattacharjee, M Ivanova, A Rozeva, M Durcheva, S Marcel, International Technology Enhanced Assessment Conference (TEA). S. Bhattacharjee, M. Ivanova, A. Rozeva, M. Durcheva, and S. Marcel, "Enhancing trust in eAssessment -the TeSLA system solution," in International Technology Enhanced Assessment Conference (TEA), 2018, pp. 1-18.
Partial attack supervision and regional weighted inference for masked face presentation attack detection. M Fang, F Boutros, A Kuijper, N Damer, 10.1109/FG52635.2021.966705116th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021. Jodhpur, IndiaM. Fang, F. Boutros, A. Kuijper, and N. Damer, "Partial attack supervision and regional weighted inference for masked face presentation attack detection," in 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Jodhpur, India, December 15-18, 2021. IEEE, 2021, pp. 1-8. [Online]. Available: https://doi.org/10.1109/FG52635.2021.9667051
Real masks and spoof faces: On the masked face presentation attack detection. M Fang, N Damer, F Kirchbuchner, A Kuijper, 10.1016/j.patcog.2021.108398Pattern Recognit. 123108398M. Fang, N. Damer, F. Kirchbuchner, and A. Kuijper, "Real masks and spoof faces: On the masked face presentation attack detection," Pattern Recognit., vol. 123, p. 108398, 2022. [Online]. Available: https://doi.org/10.1016/j.patcog.2021.108398
A survey on biometric cryptosystems and cancelable biometrics. C Rathgeb, A Uhl, EURASIP Journal on Information Security. 3C. Rathgeb and A. Uhl, "A survey on biometric cryptosystems and cancelable biometrics," EURASIP Journal on Information Security, vol. 3, March 2011.
IEEE 2410-2019 -IEEE Standard for Biometric Open Protocol. Institute of Electrical and Electronics EngineersInstitute of Electrical and Electronics Engineers, IEEE 2410-2019 - IEEE Standard for Biometric Open Protocol, June 2019.
Practical homomorphic encryption: A survey. C Moore, M O'neill, E O'sullivan, Y Doroz, B Sunar, International. Symposium on Circuits and Systems (ISCAS). IEEEC. Moore, M. O'Neill, E. O'Sullivan, Y. Doroz, and B. Sunar, "Practical homomorphic encryption: A survey," in International. Symposium on Circuits and Systems (ISCAS). IEEE, June 2014, pp. 2792-2795.
Fingerprint biometric system hygiene and the risk of COVID-19 transmission. K Okereafor, I Ekong, I O Markson, K Enwere, Biomedical Engineering. 5119623K. Okereafor, I. Ekong, I. O. Markson, and K. Enwere, "Fingerprint biometric system hygiene and the risk of COVID-19 transmission," Biomedical Engineering, vol. 5, no. 1, p. e19623, April 2020.
Fingerprint skin moisture impact on biometric performance. M A Olsen, M Dusio, C Busch, International Workshop on Biometrics and Forensics (IWBF). IEEEM. A. Olsen, M. Dusio, and C. Busch, "Fingerprint skin moisture impact on biometric performance," in International Workshop on Biometrics and Forensics (IWBF). IEEE, March 2015, pp. 1-6.
An overview of touchless 2D fingerprint recognition. J Priesnitz, C Rathgeb, N Buchmann, C Busch, M Margraf, EURASIP Journal on Image and Video Processing. J. Priesnitz, C. Rathgeb, N. Buchmann, C. Busch, and M. Margraf, "An overview of touchless 2D fingerprint recognition," EURASIP Journal on Image and Video Processing, 2020.
Contactless 3D Fingerprint Identification, ser. Advances in Computer Vision and Pattern Recognition. A Kumar, SpringerA. Kumar, Contactless 3D Fingerprint Identification, ser. Advances in Computer Vision and Pattern Recognition. Springer, 2018.
A low-cost multimodal biometric sensor to capture finger vein and fingerprint. R Raghavendra, K B Raja, J Surbiryala, C Busch, International Joint Conference on Biometrics (IJCB). IEEER. Raghavendra, K. B. Raja, J. Surbiryala, and C. Busch, "A low-cost multimodal biometric sensor to capture finger vein and fingerprint," in International Joint Conference on Biometrics (IJCB). IEEE, September 2014, pp. 1-7.
Full 3D touchless fingerprint recognition: Sensor, database and baseline performance. J Galbally, G Bostrom, L Beslay, International Joint Conference on Biometrics (IJCB). IEEEJ. Galbally, G. Bostrom, and L. Beslay, "Full 3D touchless fingerprint recognition: Sensor, database and baseline performance," in Interna- tional Joint Conference on Biometrics (IJCB). IEEE, October 2017, pp. 225-233.
Contactless fingerprint identification using level zero features. A Kumar, Y Zhou, Conference on Computer Vision and Pattern Recognition Workshops. CVPRW). IEEEA. Kumar and Y. Zhou, "Contactless fingerprint identification using level zero features," in Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, June 2011, pp. 114-119.
Fingerphoto recognition with smartphone cameras. C Stein, C Nickel, C Busch, International Conference of Biometrics Special Interest Group (BIOSIG). IEEEC. Stein, C. Nickel, and C. Busch, "Fingerphoto recognition with smart- phone cameras," in International Conference of Biometrics Special Interest Group (BIOSIG). IEEE, September 2012, pp. 1-12.
Interoperability assessment 2019: Contactless-to-contact fingerprint capture. J M Libert, J D Grantham, B Bandini, K Ko, S Orandi, C I Watson, Tech. Rep. NISTIR. 8307National Institute of Standards and TechnologyJ. M. Libert, J. D. Grantham, B. Bandini, K. Ko, S. Orandi, and C. I. Watson, "Interoperability assessment 2019: Contactless-to-contact fingerprint capture," National Institute of Standards and Technology, Tech. Rep. NISTIR 8307, May 2020.
Evaluating the operational impact of contactless fingerprint imagery on matcher performance. S Orandi, J M Libert, B Bandini, K Ko, J D Grantham, C I Watson, Tech. Rep. NISTIR. 8315National Institute of Standards and TechnologyS. Orandi, J. M. Libert, B. Bandini, K. Ko, J. D. Grantham, and C. I. Watson, "Evaluating the operational impact of contactless fingerprint imagery on matcher performance," National Institute of Standards and Technology, Tech. Rep. NISTIR 8315, September 2020.
Touchless-to-touch fingerprint systems compatibility method. P Salum, D Sandoval, A Zaghetto, B Macchiavello, C Zaghetto, International Conference on Image Processing (ICIP). IEEEP. Salum, D. Sandoval, A. Zaghetto, B. Macchiavello, and C. Zaghetto, "Touchless-to-touch fingerprint systems compatibility method," in In- ternational Conference on Image Processing (ICIP). IEEE, September 2017, pp. 3550-3554.
Matching contactless and contact-based conventional fingerprint images for biometrics identification. C Lin, A Kumar, Transactions on Image Processing. 27C. Lin and A. Kumar, "Matching contactless and contact-based con- ventional fingerprint images for biometrics identification," Transactions on Image Processing, vol. 27, no. 4, pp. 2008-2021, April 2018.
Combined fully contactless finger and hand vein capturing device with a corresponding dataset. C Kauba, B Prommegger, A Uhl, Sensors. 1922C. Kauba, B. Prommegger, and A. Uhl, "Combined fully contactless finger and hand vein capturing device with a corresponding dataset," Sensors, vol. 19, no. 22, pp. 5014-5039, November 2019.
Contactless finger-vein verification based on oriented elements feature. H Ma, S Y Zhang, Infrared Physics & Technology. 97H. Ma and S. Y. Zhang, "Contactless finger-vein verification based on oriented elements feature," Infrared Physics & Technology, vol. 97, pp. 149-155, March 2019.
On palm vein as a contactless identification technology. F Marattukalam, W H Abdulla, Australian & New Zealand Control Conference (ANZCC). IEEEF. Marattukalam and W. H. Abdulla, "On palm vein as a contactless identification technology," in Australian & New Zealand Control Con- ference (ANZCC). IEEE, November 2019, pp. 270-275.
Near-infrared illumination add-on for mobile hand-vein acquisition. L Debiasi, C Kauba, B Prommegger, A Uhl, International Conference on Biometrics Theory, Applications and Systems (BTAS). L. Debiasi, C. Kauba, B. Prommegger, and A. Uhl, "Near-infrared illumination add-on for mobile hand-vein acquisition," in International Conference on Biometrics Theory, Applications and Systems (BTAS).
. IEEE. IEEE, October 2018, pp. 1-9.
Chapter 6 -fingerphoto authentication using smartphone camera captured under varying environmental conditions. A Malhotra, A Sankaran, A Mittal, M Vatsa, R Singh, Human Recognition in Unconstrained Environments. Academic PressA. Malhotra, A. Sankaran, A. Mittal, M. Vatsa, and R. Singh, "Chapter 6 -fingerphoto authentication using smartphone camera captured under varying environmental conditions," in Human Recognition in Unconstrained Environments. Academic Press, 2017, pp. 119-144.
COVID-19 and computer audition: An overview on what speech & sound analysis could contribute in the SARS-CoV-2 corona crisis. B W Schuller, D M Schuller, K Qian, J Liu, H Zheng, X Li, arXiv:2003.11117arXiv preprintB. W. Schuller, D. M. Schuller, K. Qian, J. Liu, H. Zheng, and X. Li, "COVID-19 and computer audition: An overview on what speech & sound analysis could contribute in the SARS-CoV-2 corona crisis," arXiv preprint arXiv:2003.11117, 2020.
Audio, speech, language, & signal processing for COVID-19: A comprehensive overview. G Deshpande, B W Schuller, arXiv:2011.14445arXiv preprintG. Deshpande and B. W. Schuller, "Audio, speech, language, & signal processing for COVID-19: A comprehensive overview," arXiv preprint arXiv:2011.14445, 2020.
The voice of COVID-19: Acoustic correlates of infection. K D Bartl-Pokorny, F B Pokorny, A Batliner, S Amiriparian, A Semertzidou, F Eyben, E Kramer, F Schmidt, R Schönweiler, M Wehler, B W Schuller, arXiv:2012.09478arXiv preprintK. D. Bartl-Pokorny, F. B. Pokorny, A. Batliner, S. Amiriparian, A. Semertzidou, F. Eyben, E. Kramer, F. Schmidt, R. Schönweiler, M. Wehler, and B. W. Schuller, "The voice of COVID-19: Acoustic correlates of infection," arXiv preprint arXiv:2012.09478, 2020.
COVID-19 open source data sets: A comprehensive survey. J Shuja, E Alanazi, W Alasmary, A Alashaikh, Applied Intelligence. J. Shuja, E. Alanazi, W. Alasmary, and A. Alashaikh, "COVID-19 open source data sets: A comprehensive survey," Applied Intelligence, pp. 1- 30, September 2020.
AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. A Imran, I Posokhova, H N Qureshi, U Masood, S Riaz, K Ali, C N John, M Nabeel, arXiv:2004.01275arXiv preprintA. Imran, I. Posokhova, H. N. Qureshi, U. Masood, S. Riaz, K. Ali, C. N. John, and M. Nabeel, "AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app," arXiv preprint arXiv:2004.01275, 2020.
Exploring automatic diagnosis of COVID-19 from crowdsourced respiratory sound data. C Brown, J Chauhan, A Grammenos, J Han, A Hasthanasombat, D Spathis, T Xia, P Cicuta, C Mascolo, arXiv:2006.05919arXiv preprintC. Brown, J. Chauhan, A. Grammenos, J. Han, A. Hasthanasombat, D. Spathis, T. Xia, P. Cicuta, and C. Mascolo, "Exploring automatic diagnosis of COVID-19 from crowdsourced respiratory sound data," arXiv preprint arXiv:2006.05919, 2020.
Coswara-a database of breathing, cough, and voice sounds for COVID-19 diagnosis. N Sharma, P Krishnan, R Kumar, S Ramoji, S R Chetupalli, P K Ghosh, S Ganapathy, arXiv:2005.10548arXiv preprintN. Sharma, P. Krishnan, R. Kumar, S. Ramoji, S. R. Chetupalli, P. K. Ghosh, and S. Ganapathy, "Coswara-a database of breathing, cough, and voice sounds for COVID-19 diagnosis," arXiv preprint arXiv:2005.10548, 2020.
Smartphone-based self-testing of COVID-19 using breathing sounds. M Faezipour, A Abuzneid, 26Telemedicine and e-HealthM. Faezipour and A. Abuzneid, "Smartphone-based self-testing of COVID-19 using breathing sounds," Telemedicine and e-Health, vol. 26, no. 10, pp. 1202-1205, October 2020.
Design and development of smartphone-enabled spirometer with a disease classification system using convolutional neural network. S Trivedy, M Goyal, P R Mohapatra, A Mukherjee, Transactions on Instrumentation and Measurement. S. Trivedy, M. Goyal, P. R. Mohapatra, and A. Mukherjee, "Design and development of smartphone-enabled spirometer with a disease classification system using convolutional neural network," Transactions on Instrumentation and Measurement, pp. 7125-7135, March 2020.
An early study on intelligent analysis of speech under COVID-19: Severity, sleep quality, fatigue, and anxiety. J Han, K Qian, M Song, Z Yang, Z Ren, S Liu, J Liu, H Zheng, W Ji, T Koike, X Li, Z Zhang, Y Yamamoto, B W Schuller, Interspeech. ISCAJ. Han, K. Qian, M. Song, Z. Yang, Z. Ren, S. Liu, J. Liu, H. Zheng, W. Ji, T. Koike, X. Li, Z. Zhang, Y. Yamamoto, and B. W. Schuller, "An early study on intelligent analysis of speech under COVID-19: Severity, sleep quality, fatigue, and anxiety," in Interspeech. ISCA, October 2020, pp. 4946-4950.
. M R Kamble, J A Gonzalez-Lopez, T Grau, J M Espin, L Cascioli, Y Huang, A Gomez-Alanis, J Patino, R Font, A M Peinado, A M Gomez, N Evans, M A Zuluaga, M Todisco, PANACEA Cough Sound-Based Diagnosis of COVID-19 for the DiCOVA 2021M. R. Kamble, J. A. Gonzalez-Lopez, T. Grau, J. M. Espin, L. Cascioli, Y. Huang, A. Gomez-Alanis, J. Patino, R. Font, A. M. Peinado, A. M. Gomez, N. Evans, M. A. Zuluaga, and M. Todisco, "PANACEA Cough Sound-Based Diagnosis of COVID-19 for the DiCOVA 2021
Proc. Interspeech. InterspeechChallenge," in Proc. Interspeech 2021, 2021, pp. 906-910.
Exploring auditory acoustic features for the diagnosis of covid-19. M R Kamble, J Patino, M A Zuluaga, M Todisco, ICASSP 2022 -2022 IEEE International Conference on Acoustics, Speech and Signal Processing. M. R. Kamble, J. Patino, M. A. Zuluaga, and M. Todisco, "Exploring auditory acoustic features for the diagnosis of covid-19," in ICASSP 2022 -2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 566-570.
Diagnosis of COVID-19 Using Auditory Acoustic Cues. R K Das, M Madhavi, H Li, Proc. Interspeech. InterspeechR. K. Das, M. Madhavi, and H. Li, "Diagnosis of COVID-19 Using Auditory Acoustic Cues," in Proc. Interspeech 2021, 2021, pp. 921- 925.
Investigating Feature Selection and Explainability for COVID-19 Diagnostics from Cough Sounds. F Avila, A H Poorjam, D Mittal, C Dognin, A Muguli, R Kumar, S R Chetupalli, S Ganapathy, M Singh, Proc. Interspeech. InterspeechF. Avila, A. H. Poorjam, D. Mittal, C. Dognin, A. Muguli, R. Kumar, S. R. Chetupalli, S. Ganapathy, and M. Singh, "Investigating Feature Selection and Explainability for COVID-19 Diagnostics from Cough Sounds," in Proc. Interspeech 2021, 2021, pp. 951-955.
Airports deploy thermal cameras to control covid-19, science suggests it's merely 'safety theatre. B Goodwin, L M Alvarez, B. Goodwin and L. M. Alvarez, "Airports deploy thermal cameras to control covid-19, science suggests it's merely 'safety theatre'," https://www.computerweekly.com/news/252485233/Airports-deploy- thermal-cameras-to-control-Covid-19-science-suggests-its-merely- safety-theatre, 2020, last accessed: July 13, 2022.
Thermal screening, masks, hand hygiene mandatory in malls under new guidelines. NDTVlast accessedNDTV, "Thermal screening, masks, hand hygiene mandatory in malls under new guidelines," https://www.ndtv.com/india-news/24- 30-degrees-ac-temperature-social-distancing-detailed-guidelines-for- malls-2240889, 2020, last accessed: July 13, 2022.
A thermal camera based continuous body temperature measurement system. J.-W Lin, M.-H Lu, Y.-H Lin, International Conference on Computer Vision Workshops (ICCVW). IEEE/CVF. J.-W. Lin, M.-H. Lu, and Y.-H. Lin, "A thermal camera based continuous body temperature measurement system," in International Conference on Computer Vision Workshops (ICCVW). IEEE/CVF, October 2019, pp. 1681-1687.
Crossspectrum thermal to visible face recognition based on cascaded image synthesis. K Mallat, N Damer, F Boutros, A Kuijper, J.-L Dugelay, International Conference on Biometrics (ICB). IEEEK. Mallat, N. Damer, F. Boutros, A. Kuijper, and J.-L. Dugelay, "Cross- spectrum thermal to visible face recognition based on cascaded image synthesis," in International Conference on Biometrics (ICB). IEEE, June 2019, pp. 1-8.
Attribute-guided deep polarimetric thermal-to-visible face recognition. S M Iranmanesh, N M Nasrabadi, International Conference on Biometrics (ICB). IEEES. M. Iranmanesh and N. M. Nasrabadi, "Attribute-guided deep polari- metric thermal-to-visible face recognition," in International Conference on Biometrics (ICB). IEEE, June 2019, pp. 1-8.
Cascaded generation of high-quality color visible face images from thermal captures. N Damer, F Boutros, K Mallat, F Kirchbuchner, J Dugelay, A Kuijper, abs/1910.09524CoRR. N. Damer, F. Boutros, K. Mallat, F. Kirchbuchner, J. Dugelay, and A. Kuijper, "Cascaded generation of high-quality color visible face images from thermal captures," CoRR, vol. abs/1910.09524, 2019. [Online]. Available: http://arxiv.org/abs/1910.09524
ArcFace: Additive angular margin loss for deep face recognition. J Deng, J Guo, N Xue, S Zafeiriou, Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation / IEEEJ. Deng, J. Guo, N. Xue, and S. Zafeiriou, "ArcFace: Additive angular margin loss for deep face recognition," in Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation / IEEE, June 2019, pp. 4690-4699.
Near infrared face recognition: A literature survey. S Farokhi, J Flusser, U U Sheikh, Computer Science Review. 21S. Farokhi, J. Flusser, and U. U. Sheikh, "Near infrared face recogni- tion: A literature survey," Computer Science Review, vol. 21, pp. 1-17, August 2016.
Effectiveness of airport screening at detecting travellers infected with novel coronavirus (2019-nCoV). B J Quilty, S Clifford, S Flasche, R M Eggo, Eurosurveillance. 2552000080B. J. Quilty, S. Clifford, S. Flasche, and R. M. Eggo, "Effectiveness of airport screening at detecting travellers infected with novel coronavirus (2019-nCoV)," Eurosurveillance, vol. 25, no. 5, p. 2000080, February 2020.
EASA ECDC COVID-19 aviation health safety protocol. European Union Aviation Safety Agency (EASAlast accessedEuropean Union Aviation Safety Agency (EASA), "EASA ECDC COVID-19 aviation health safety protocol," https: //www.easa.europa.eu/document-library/general-publications/covid- 19-aviation-health-safety-protocol, 2020, last accessed: July 13, 2022.
Privacy and data protection issues of biometric applications. E J Kindt, Springer1E. J. Kindt, Privacy and data protection issues of biometric applica- tions. Springer, 2016, vol. 1.
Ethical, legal, and social implications of biometric technologies. S Tanwar, S Tyagi, N Kumar, M S Obaidat, Biometric-based physical and cybersecurity systems. SpringerS. Tanwar, S. Tyagi, N. Kumar, and M. S. Obaidat, "Ethical, legal, and social implications of biometric technologies," in Biometric-based physical and cybersecurity systems. Springer, 2019, pp. 535-569.
Fairness in biometrics: a figure of merit to assess biometric verification systems. T De Freitas Pereira, S Marcel, arXiv:2011.02395arXiv preprintT. de Freitas Pereira and S. Marcel, "Fairness in biometrics: a figure of merit to assess biometric verification systems," arXiv preprint arXiv:2011.02395, 2021.
A K Jain, D Deb, J J Engelsma, arXiv:2105.06625Biometrics: Trust, but verify. arXiv preprintA. K. Jain, D. Deb, and J. J. Engelsma, "Biometrics: Trust, but verify," arXiv preprint arXiv:2105.06625, 2021.
Demographic fairness in biometric systems: What do the experts say. C Rathgeb, P Drozdowski, N Damer, D C Frings, C Busch, abs/2105.14844CoRR. C. Rathgeb, P. Drozdowski, N. Damer, D. C. Frings, and C. Busch, "Demographic fairness in biometric systems: What do the experts say?" CoRR, vol. abs/2105.14844, 2021. [Online]. Available: https://arxiv.org/abs/2105.14844
Societal and ethical implications of anti-spoofing technologies in biometrics. A P Rebera, M E Bonfanti, S Venier, Science and engineering ethics. 201A. P. Rebera, M. E. Bonfanti, and S. Venier, "Societal and ethical implications of anti-spoofing technologies in biometrics," Science and engineering ethics, vol. 20, no. 1, pp. 155-169, 2014.
COVID-19: Effective and responsible biometric solutions and concepts in a time of pandemic -building a resilient response. Tech. Rep"COVID-19: Effective and responsible biometric solutions and con- cepts in a time of pandemic -building a resilient response," ç, Tech. Rep., 2020.
Handwriting biometrics: Applications and future trends in e-security and e-health. M Faúndez-Zanuy, J Fiérrez, M A Ferrer, M D Cabrera, R Tolosana, R Plamondon, 10.1007/s12559-020-09755-zCogn. Comput. 125M. Faúndez-Zanuy, J. Fiérrez, M. A. Ferrer, M. D. Cabrera, R. Tolosana, and R. Plamondon, "Handwriting biometrics: Applications and future trends in e-security and e-health," Cogn. Comput., vol. 12, no. 5, pp. 940-953, 2020. [Online]. Available: https: //doi.org/10.1007/s12559-020-09755-z
Exhaustive description of the system architecture and prototype implementation of an IoT-based eHealth biometric monitoring system for elders in independent living. C Vizitiu, C Bîrȃ, A Dinculescu, A Nistorescu, M Marin, MDPI Sensors. 2151837C. Vizitiu, C. Bîrȃ, A. Dinculescu, A. Nistorescu, and M. Marin, "Exhaustive description of the system architecture and prototype im- plementation of an IoT-based eHealth biometric monitoring system for elders in independent living," MDPI Sensors, vol. 21, no. 5, p. 1837, 2021.
Biometric identification for socioeconomic development in ghana. J Effah, E Owusu-Oware, R Boateng, Information Systems Management. 372J. Effah, E. Owusu-Oware, and R. Boateng, "Biometric identification for socioeconomic development in ghana," Information Systems Man- agement, vol. 37, no. 2, pp. 136-149, 2020.
Tracking and tracing COVID: Protecting privacy and data while using apps and biometrics. Tech. Rep. OECD"Tracking and tracing COVID: Protecting privacy and data while using apps and biometrics," Organisation for Economic Co-operation and Development (OECD), Tech. Rep., 2020.
The COVID decade: Understanding the long-term societal impacts of covid-19. The British Academy, Tech. Rep. "The COVID decade: Understanding the long-term societal impacts of covid-19," The British Academy, Tech. Rep., 2021.
Survey on COVID-19 related processing activities by EUIs. 2022Tech. RepEuropean Data Protection Supervisor (EPDS)"Survey on COVID-19 related processing activities by EUIs," European Data Protection Supervisor (EPDS), Tech. Rep., 2022.
| [] |
[
"Detecting and Distinguishing Majorana Zero Modes with the Scanning Tunneling Microscope",
"Detecting and Distinguishing Majorana Zero Modes with the Scanning Tunneling Microscope"
] | [
"Berthold Jäck \nDepartment of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA\n\nDepartment of Physics\nThe Hong Kong Institute of Science and Technology\nClearwater BayKowloon, Hong Kong\n",
"Yonglong Xie \nDepartment of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA\n\nDepartment of Physics\nHarvard University\n02318CambridgeMassachusettsUSA\n",
"Ali Yazdani [email protected] \nDepartment of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA\n"
] | [
"Department of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA",
"Department of Physics\nThe Hong Kong Institute of Science and Technology\nClearwater BayKowloon, Hong Kong",
"Department of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA",
"Department of Physics\nHarvard University\n02318CambridgeMassachusettsUSA",
"Department of Physics\nJoseph Henry Laboratories\nPrinceton University\n08544PrincetonNew JerseyUSA"
] | [] | The goal of creating topologically protected qubits using non-Abelian anyons is currently one of the most exciting areas of research in quantum condensed matter physics. Majorana zero modes (MZM), which are non-Abelian anyons predicted to emerge as localized zero-energy states at the ends of one-dimensional topological superconductors, have been the focus of these efforts. In the search for experimental signatures of these novel quasi-particles in different material platforms, the scanning tunneling microscope (STM) has played a key role. The power of high-resolution STM techniques is perhaps best illustrated by their application in identifying MZM in one-dimensional chains of magnetic atoms on the surface of a superconductor. In this platform, STM spectroscopic mapping has demonstrated the localized nature of MZM zero-energy excitations at the ends of such chains, while experiments with superconducting and magnetic STM tips have been used to uniquely distinguish them from trivial edge modes. Beyond the atomic chains, STM has also uncovered signatures of MZM in two-dimensional materials and topological surface and boundary states, when they are subjected to the superconducting proximity effect. Looking ahead, future STM experiments can advance our understanding of MZM and their potential for creating topological qubits, by exploring avenues to demonstrate their non-Abelian statistics. | 10.1038/s42254-021-00328-z | [
"https://arxiv.org/pdf/2103.13210v1.pdf"
] | 232,335,790 | 2103.13210 | 23ad88352decbe97e8c2353d424c50b33ca78713 |
Detecting and Distinguishing Majorana Zero Modes with the Scanning Tunneling Microscope
Berthold Jäck
Department of Physics
Joseph Henry Laboratories
Princeton University
08544PrincetonNew JerseyUSA
Department of Physics
The Hong Kong Institute of Science and Technology
Clearwater BayKowloon, Hong Kong
Yonglong Xie
Department of Physics
Joseph Henry Laboratories
Princeton University
08544PrincetonNew JerseyUSA
Department of Physics
Harvard University
02318CambridgeMassachusettsUSA
Ali Yazdani [email protected]
Department of Physics
Joseph Henry Laboratories
Princeton University
08544PrincetonNew JerseyUSA
Detecting and Distinguishing Majorana Zero Modes with the Scanning Tunneling Microscope
*Correspondence should be addressed to: [email protected] and [email protected]
The goal of creating topologically protected qubits using non-Abelian anyons is currently one of the most exciting areas of research in quantum condensed matter physics. Majorana zero modes (MZM), which are non-Abelian anyons predicted to emerge as localized zero-energy states at the ends of one-dimensional topological superconductors, have been the focus of these efforts. In the search for experimental signatures of these novel quasi-particles in different material platforms, the scanning tunneling microscope (STM) has played a key role. The power of high-resolution STM techniques is perhaps best illustrated by their application in identifying MZM in one-dimensional chains of magnetic atoms on the surface of a superconductor. In this platform, STM spectroscopic mapping has demonstrated the localized nature of MZM zero-energy excitations at the ends of such chains, while experiments with superconducting and magnetic STM tips have been used to uniquely distinguish them from trivial edge modes. Beyond the atomic chains, STM has also uncovered signatures of MZM in two-dimensional materials and topological surface and boundary states, when they are subjected to the superconducting proximity effect. Looking ahead, future STM experiments can advance our understanding of MZM and their potential for creating topological qubits, by exploring avenues to demonstrate their non-Abelian statistics.
Introduction
Majorana zero modes (MZM) are non-Abelian anyons that emerge as localized zero-dimensional end states of one-dimensional (1D) topological superconductors 1 . Unlike fermions and bosons, anyons are quasi-particles whose particle interchange modifies the quantum-mechanical ground state of the host system. The ground state of a system with non-Abelian properties has multiple degenerate configurations, which are not specified by the spatial locations of the MZM. Adiabatic "braiding" of MZM provides the means to perform qubit operations within the subspace of the degenerate quantum state manifold, while their "fusion" can be used as a means of qubit readout. In a qubit based on MZM, the quantum information is stored non-locally and protected by a topological energy gap of the system, and it would therefore be more resilient to local perturbations that can cause quantum decoherence 2-5 . To date, strong evidence for the existence of MZM has come from material platforms in which the proximity effect from a conventional superconductor is used in concert with strong spin-orbit and Zeeman or ferromagnetic exchange interactions to engineer a system with a topological superconducting ground state 6-20 . In particular, experiments on semiconducting nanowires 21-23 and chains of magnetic atoms 24-26 provided evidence for the existence of MZM in a condensed matter setting over the past years.
STM experiments combine the ability to visualize the atomic structure of condensed matter systems with the capability of studying their electronic structure with high energy and spatial resolution. In the past two decades, these capabilities have made STM a tool of choice for the investigation of quantum phases of matter at microscopic length scales 27 . Especially in the context of topological superconductivity, STM has emerged as a valuable tool to visualize the presence of MZM, which appear as localized zero-bias peaks (ZBP) in spectroscopic measurements, across a variety of topological quantum materials 24,28-32 .
Despite the tremendous progress and achievements in research on topological superconductivity, skepticism persists in the research community toward the interpretation of localized ZBP as signatures of MZM. The debate on possible trivial origins of the observed ZBP 33-40 and the lack of consistency of experimental results in some studies 41-43 emphasize the key question of how trivial ZBP can be distinguished from topological ZBP in suitable experimental setups.
In this article, we will review the current state of research on topological superconductivity and MZM using STM experiments. We describe the pivotal role of STM in the exploration of topological superconductivity across various material platforms, and we discuss how STM experiments with functional tips can probe other MZM properties, such as their particle-hole symmetry and spin, which can be used as diagnostic tools to unequivocally distinguish topological from trivial ZBP 44-47 . We close by outlining existing theoretical proposals 48 and future experiments, which aim at demonstrating the non-Abelian exchange statistics and anyonic ground state degeneracy on the atomic scale using chains of magnetic atoms. Considering the recent developments in assembling such chains atom by atom with the STM tip 30,49-51 , the realization of these concepts is in sight and would constitute milestone achievements in the quest toward topologically protected quantum computation.
Majorana modes in 1D chains
Kitaev model for topological superconductivity in spinless 1D systems
The idea of a 1D topological superconductor hosting MZM was first introduced in a simple and elegant model proposed by Kitaev in 2001 1 . It describes spinless electrons hopping on a 1D lattice in the presence of a nearest-neighbor p-wave superconducting pairing interaction. Depending on the model parameters, the system exhibits two distinct ground states (Box 1), which can be best understood by decomposing the electronic states into pairs of MZM. In the trivial regime (Box 1, upper panel), the system simply consists of ordinary fermions, i.e. pairs of MZM, located on each lattice site. In the topological regime (Box 1, center panel), pairing between the MZM of neighboring sites results in an unpaired localized MZM (red sphere) at the opposite ends of the chain. The emergence of these edge excitations at zero energy cost represents a distinct localized signature of MZM that can be detected with the STM (Box 1, lower panel). In these experiments, this signature would appear as a peak at zero bias voltage, the so-called ZBP, in the measured differential tunnel conductance (dI/dV) spectrum. We note that the bias-voltage-dependent dI/dV-spectrum is proportional to the energy-dependent local density of states of the sample, where energy maps to bias voltage as E=eV.
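To make the two regimes of the Kitaev chain concrete, the following minimal numerical sketch (our own illustration, not part of the original article; function and parameter names are ours) diagonalizes the Bogoliubov-de Gennes (BdG) matrix of a finite Kitaev chain with numpy. In the topological parameter regime (|mu| < 2t, delta != 0) the two lowest eigenvalues sit exponentially close to zero energy, and their wavefunction weight is concentrated at the chain ends, the real-space counterpart of the ZBP discussed above.

    import numpy as np

    def kitaev_bdg(n, t=1.0, delta=1.0, mu=0.0):
        # BdG matrix of an open Kitaev chain in the Nambu basis
        # (c_1 ... c_n, c_1^dag ... c_n^dag).
        h = np.zeros((2 * n, 2 * n))
        for i in range(n):
            h[i, i] = -mu            # particle block: chemical potential
            h[n + i, n + i] = mu     # hole block
        for i in range(n - 1):
            h[i, i + 1] = h[i + 1, i] = -t                 # hopping
            h[n + i, n + i + 1] = h[n + i + 1, n + i] = t  # hole-block hopping
            h[i, n + i + 1] = h[n + i + 1, i] = delta      # p-wave pairing
            h[i + 1, n + i] = h[n + i, i + 1] = -delta
        return h

    n = 60
    evals, evecs = np.linalg.eigh(kitaev_bdg(n, mu=0.0))  # topological: |mu| < 2t
    order = np.argsort(np.abs(evals))
    print("two lowest |E|:", np.abs(evals[order[:2]]))    # numerically ~ 0
    w = evecs[:n, order[0]] ** 2 + evecs[n:, order[0]] ** 2  # site-resolved weight
    print("weight on three sites at each end:", w[:3].sum() + w[-3:].sum())  # ~ 1

Re-running with, e.g., mu=3.0 (trivial regime) pushes the lowest eigenvalues up to the gap edge and delocalizes the corresponding states along the chain.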
The main challenge for realizing the Kitaev model is to make electrons behave as spinless fermions and to induce superconducting pairing among them. A fully spin-polarized system that breaks time-reversal symmetry (TRS) can be considered as having spinless electrons; however, an intrinsic pairing instability between such spin-polarized electrons, so-called p-wave pairing, has been elusive in nature. Various concepts have been proposed to engineer a 1D superconductor with an effective p-wave pairing 6,8-12,14-16,18-20 via the proximity effect from a conventional s-wave superconductor. The central idea is built on the combination of Rashba spin-orbit coupling (SOC) with time-reversal symmetry breaking, such as induced by an external magnetic field or ferromagnetism, to lift the spin degeneracy at the Kramers point.
Box 1
The Kitaev model 1 describes the hopping of spinless electrons with strength, t, between sites of a one-dimensional (1D) chain in the presence of p-wave superconducting pairing, ∆p. Fractionalizing the fermionic modes into pairs of Majorana quasiparticles, the system can assume topologically distinct ground states: a topologically trivial state with on-site pairing and a topologically non-trivial state with nearest-neighbor pairing of Majorana quasiparticles, respectively. Intuitively, in the topologically non-trivial case two individual Majorana quasiparticles remain localized at the chain ends.
A material realization of the Kitaev model of 1D topological superconductivity can be achieved by considering the normal state band structure of a 1D ferromagnet under the influence of Rashba SOC with amplitude ESO 6,44 . Ferromagnetic exchange splitting of strength J separates minority and majority bands. When the Fermi level lies in the minority band, only one spin species is populated, and the system can be regarded as spinless. Rashba SOC imprints a momentum-dependent spin texture on these 1D states, which facilitates proximity-induced pairing by an s-wave superconductor of the electrons in the otherwise fully spin-polarized minority band. This combination of magnetism and superconductivity, therefore, realizes a 1D topological superconducting state with p-wave pairing symmetry that can host zero-dimensional MZM at its ends.
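A compact way to express this mechanism (our notation, following the standard single-channel wire models of refs. 14,15 rather than an equation reproduced from this article) is the normal-state Hamiltonian with Rashba coupling \alpha_R and exchange splitting J,

    h(k) = \frac{\hbar^2 k^2}{2m} - \mu + \alpha_R k \,\sigma_y + J \,\sigma_z ,

which, once an s-wave gap \Delta is induced by proximity, enters the topological phase for J > \sqrt{\Delta^2 + \mu^2}, with \mu measured from the band-crossing point. The precise form of this criterion depends on the model details, but it captures why a large exchange splitting and sizable SOC are the essential ingredients.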
STM experiments are ideally suited to study topological superconductivity and MZM across nanoscopic material platforms. In addition to the study of topographic sample properties 52 , STM experiments can detect localized MZM at the ends of one-dimensional topological superconductors, such as ferromagnetic chains of atoms placed on top of a superconducting surface 24 . MZM would appear as zero-energy states inside the superconducting gap in the measured dI/dV-spectrum of scanning tunneling spectroscopy measurements. The use of functional tips diversifies the spectroscopic toolbox of STM measurements, facilitating the measurement of the electron-hole symmetry 45 or the spin signature of MZM 28,44,46 , see Sec. 4.
One approach to realize this concept is shown in Box 1, which depicts the normal state band structure of a 1D ferromagnet under the influence of Rashba SOC 6,44 . Due to the ferromagnetic exchange splitting, the minority and majority bands are energetically separated; with the Fermi level placed in the minority band, the mechanism summarized in Box 1 then yields a 1D topological superconducting state with p-wave pairing symmetry that can host zero-dimensional MZM at its ends.
A related approach utilizing the interplay between magnetism and superconductivity that arrives at the Kitaev model considers the in-gap Shiba states 53,54 induced by magnetic atoms in a superconductor. In the limit that the overlap between the magnetic atoms is weak (unlike the band picture described above), considering only the overlap between the Shiba states, theoretical analysis shows that a topological superconductor with MZM can be created, provided there is helical order in the 1D magnetic chain 6,9-12 . The stability of such helical order in this limit has been the subject of considerable theoretical studies (e.g. Ref. 55 ); however, the combination of ferromagnetism with strong SOC is found to be equivalent to helical magnetic order 56 , and has proven to be a more feasible pathway for realizing topological p-wave superconductivity.
Ferromagnetic iron chains on lead
Chains of magnetically coupled atoms on the surface of a superconductor (Box 1 and Fig. 1) have been used to realize the 1D Kitaev model and have been established as a novel platform to study topological superconductivity and to directly visualize the presence of MZM with STM 24-26,45,46 . In these experiments, localized ZBP within the superconducting gap were observed at the chain ends and were interpreted as the spectroscopic signature of MZM 24 . Fig. 1a shows an STM topography of an iron (Fe) chain on a lead (Pb) (110) substrate. The regularly shaped Fe chain was realized through self-assembly of evaporated Fe atoms that arrange in a linear zig-zag structure, which is illustrated in the inset of Fig. 1b. Spin-polarized STM experiments were used to demonstrate the ferromagnetic nature of the Fe chains, as shown in Fig. 1c, as well as the presence of Rashba SOC on the surface of Pb. These measurements, in combination with spectroscopic measurements of the normal states of these Fe chains, and their comparison with theoretical calculations, provide evidence that these chains are in a regime for topological superconductivity to emerge per the mechanism depicted in Box 1.
The existence of topological superconductivity along the chain and localized MZM at the Fe chain ends was demonstrated through high-resolution scanning tunneling spectroscopy (STS) measurements of the superconducting gap region, as shown in Fig. 1d. While the superconducting dI/dV-spectrum in the chain's bulk is dominated by energy-symmetric Shiba states 53,54 , the spectrum at the chain's end features a localized peak at zero applied bias. This is also highlighted by individual point spectra in Fig. 1e and the zero-bias conductance profile in Fig. 1b. The ZBP at the chain end has no detectable splitting at milli-Kelvin temperatures, and it can be interpreted as the probability density of a MZM localized to the chain end 6,9 . Spectroscopic imaging with the STM revealed the spatial distribution of the ZBP, which shows the importance of chain-substrate hybridization in understanding such measurements. The dI/dV-map in Fig. 1a (lower panel), which was recorded at zero applied bias voltage, shows a zero-energy state at the chain end. It features an intriguing spatial pattern, whose maximum is situated near the chain sides. The comparison with model calculations, capturing this "eye" shape feature, demonstrates that substantial spectral weight of the ZBP resides in the host superconductor, that is, the surface of the Pb(110) substrate. This finding highlights the significant role of hybridization between electronic states of chain and substrate, which is also reflected in the ZBP localization length. The observed localization length of the zero-energy state in Fig. 1f (~10 Å) is much shorter than the superconducting coherence length, which can be understood in terms of a strong Fermi velocity renormalization induced by chain-substrate hybridization 57 . These conclusions were further supported by experiments on Fe chains that were covered with a monolayer of Pb; STM measurements revealed the clear presence of the MZM signature in this overlayer 45 .
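For orientation (a textbook estimate we add here, not a formula from this article), the spatial extent of a MZM bound state is set by

    \xi_M \simeq \frac{\hbar \tilde v_F}{\Delta_{\mathrm{ind}}} ,

so a strong downward renormalization of the effective Fermi velocity \tilde v_F by chain-substrate hybridization directly compresses the MZM to the few-angstrom scale observed here, even though the coherence length of the host superconductor is far larger.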
Other atomic chains
A crucial figure of merit for systems hosting MZM is the size of their p-wave gap, which is ~ ΔESO/J, with ΔESO the spin-orbit splitting and J the ferromagnetic exchange interaction in atomic chains 6 . The topological gap protects the MZM from poisoning through quasiparticles in its vicinity, and it is key to the application of MZM for topologically protected quantum computation. Experimentally, one practical way to estimate a lower bound of the p-wave gap size is to consider the energy of the peak in a tunneling spectrum which appears at the smallest energy above zero bias. In the case of the Fe chain platform, this energy is found to be at a modest value of around 150 μeV for a peak in the chain center (corresponding to T<2 K) 24 . Comparably small gap values are also found for the semiconducting nanowire platform 21,23 , putting the MZM at risk of being poisoned through thermally excited quasiparticles 59 . Hence, a strong motivation exists for discovering other material platforms beyond the Fe chain platform with the potential to stabilize larger p-wave gaps.
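The quoted temperature scale follows directly from dividing the gap energy by the Boltzmann constant (k_B ≈ 86.2 μeV/K):

    T^* = \frac{150\,\mu\mathrm{eV}}{k_B} \approx \frac{150\,\mu\mathrm{eV}}{86.2\,\mu\mathrm{eV/K}} \approx 1.7\ \mathrm{K} < 2\ \mathrm{K} .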
Following the approach of synthesizing Fe chains on Pb(110) 24 , Ruby et al. prepared cobalt (Co) chains on Pb(110) 60 . However, in stark contrast to the Fe chains, the zero-energy mode appears delocalized along the Co chain and no ZBP was found at the chain ends, possibly owing to an even number of bands crossing the Fermi energy. The contrasting experimental outcomes on the Fe and Co chain platforms 24,60 , respectively, raise the question of the best strategy to engineer 1D topological superconductors. Guidance may come from previous model calculations 6,24 , which outline that an increased number of magnetic atoms in the unit cell results in a larger number of energy bands and can reduce the size of the topological phase space. It is, therefore, desirable to tailor atomic chains with preferably simple lattice structures that admit the largest topological phase space.
Kim et al. explored this direction and used atomic manipulation with the STM tip to manually assemble linear chains of Fe atoms on a rhenium substrate (Fig. 1f) 30 . Spin-polarized measurements with different tip magnetization directions reveal the spin-spiral arrangement of these Fe atoms (Fig. 1g), which is suitable to induce topological superconductivity as outlined in an earlier proposal 9 . Spatially resolved STS measurements on these chains display an enhanced zero-bias LDOS at the chain end (Fig. 1h) when the number of atoms in the chain exceeds twelve, consistent with a topological superconducting phase and MZM. However, the small superconducting gap of Re, ∆Re=0.28 meV, renders the clear detection of a distinct ZBP difficult, therefore making the presence of MZM in this system debatable.
More recent efforts have focused on extending such atomic manipulation experiments to materials which combine a larger superconducting gap with a longer Fermi wavelength and strong spin-orbit coupling. Such a system can be obtained by the epitaxial growth of bismuth(110) thin films on the surface of a niobium crystal 51 . In this platform, depending on the separation of spins, the interplay between Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, spin-orbit coupling, and surface magnetic anisotropy stabilizes different types of spin alignments. These alignments influence the hybridization of the in-gap Shiba states (Fig. 1i-k) and show promise for engineering the band structure of such states for creating topological phases. They also show that spin-spin interactions can be tuned on length scales longer than interatomic distances in the presence of a large proximity gap from the Nb substrate, therefore providing the possibility to create helical spin chains atom by atom in the presence of robust superconductivity.
Majorana modes in 2D systems
From 1D chains to 2D islands.
The idea of combining magnetism and superconductivity with strong Rashba SOC to engineer topological superconductivity can naturally be extended to two dimensions 29,58,61 . A 2D topological superconductor is predicted to harbor propagating Majorana edge modes along its one-dimensional boundary 62,63 . Such a chiral 1D Majorana mode is characterized by a linear, i.e. massless, quasiparticle energy-momentum dispersion, which connects the edges of the topological superconducting gap. It is expected to give rise to a flat dI/dV-signal inside the superconducting gap, by which it can be detected in STS measurements.
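The flat in-gap signal follows from elementary state counting (a standard argument we add here, not reproduced from the article): a single chiral branch with dispersion E = \hbar v_M k contributes a density of states per unit length

    N(E) = \frac{1}{2\pi\hbar v_M}, \qquad |E| < \Delta ,

which is independent of energy, so dI/dV ∝ N(E) appears as a constant plateau rather than a peak inside the gap. This is also why a pronounced ZBP is, by itself, in tension with a chiral-mode interpretation (see the CrBr3/NbSe2 discussion below).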
The first report of realizing such a system is that of a monolayer of superconducting Pb covering Co islands on a silicon (111) substrate 29 . Fig. 2a shows an STM image of a Co island covered with a Pb film. While the presence of the Pb film hinders the structural characterization of the underlying Co islands, their locations can still be identified by inspecting their influence on the superconducting electronic states of Pb. A conductance map taken at zero bias inside the superconducting gap of the monolayer Pb film on the same area reveals a concentric ring-like pattern of high dI/dV amplitude, see Fig. 2b. The dispersive character of this 1D edge mode is illustrated in Fig. 2c, where the authors have examined its spatial evolution across the center of the Co island (Fig. 2b). The observation of dispersive and gapless edge modes on this and a similar sample platform 61 suggests the realization of a two-dimensional topological superconductor using arrays of magnetic atoms, potentially hosting chiral Majorana modes.
2D islands of the magnetic insulator EuS, which are deposited on top of gold (Au) nanoribbons, were also explored as an alternative way to engineer topological superconductivity in the Au(111) surface state (Fig. 2d,e) 18,32 . This concept is motivated by the large SOC of the Au(111) surface state, ESO=110 meV 64 , which could favor the realization of very large topological gaps. In the presence of superconductivity induced by the vanadium substrate underneath the gold nanoribbons, the authors report the sighting of localized ZBP at the island edges in STS measurements when an in-plane magnetic field is applied, Fig. 2f 32 . The appearance of ZBP at specific island edge positions (1 and 8 in Fig. 2e) is, furthermore, linked to the magnetic field direction. Unlike other 2D platforms hosting 1D chiral edge modes, here theoretical modeling suggests that the directional in-plane magnetic field in concert with a magnetic exchange field underneath the EuS island could stabilize zero-dimensional MZM instead.
In addition to these material systems, van-der-Waals (vdW) heterostructures fabricated from transition-metal dichalcogenides are other promising candidates for realizing 2D p+ip topological superconductivity. Platforms based on vdW-heterostructures are insofar desirable, as they permit the controlled assembly of designer quantum materials 65,66 , by stacking layers of vdW-coupled 2D materials with various properties. A recent study followed this approach and deposited monolayer islands of the magnetic insulator CrBr3 67 on the surface of superconducting bulk NbSe2, see Fig. 2g. Low-temperature STM experiments found a ZBP localized to the CrBr3 island edge 58 , see Fig. 2h, and spectroscopic imaging with the STM revealed the 1D character of this ZBP, see Fig. 2i. Based on model calculations, the authors interpret this edge state as a signature of a chiral Majorana mode, and they argue that the ferromagnetic CrBr3 layer induces a 2D topological superconducting phase with Chern number three in the top-most layer of the bulk NbSe2 substrate. However, 1D chiral Majorana modes with approximately linear dispersion around zero energy are expected to produce a flat but not a peaked LDOS inside the superconducting gap 68 . Further experiments will, therefore, be necessary to address the deviating spectral characteristics of the edge state LDOS in this study and to elucidate its inhomogeneous spatial pattern, see Fig. 2h, which contrasts with the results from STM experiments on other candidate material platforms 29,61,68 .
Majorana vortex core states in two-dimensional topological superconductors.
One- and two-dimensional topological boundary states of TRS topological insulators 69 provide alternative pathways to realize MZM in condensed matter systems 70 . Owing to the nontrivial bulk topology, their fermion doubling is lifted. These states, therefore, provide a natural platform for realizing TRS topological superconductivity via the proximity effect from adjacent bulk s-wave superconductors 13,71-73 . The helicity of the topological boundary modes favors the realization of large topological s-wave gaps with sizes comparable to the gap size of the host superconductors, on the order of milli-electron volts. This presents a clear advantage over the discussed p-wave superconductors, where the gap size is reliant on the strength of the spin-orbit coupling 74 and may be far smaller than that of the s-wave superconductor used for the proximity effect 21,24 .
However, the challenges to realize topological superconductivity in material platforms 75-78 , which host TRS topological boundary modes, lie in isolating the effect of superconductivity on the bulk of such materials as compared to their boundary modes, where we expect superconductivity to have a topological nature.

An early proposal to realize TRS topological superconductivity is based on the 2D topological surface state of a 3D TI 82,83 , where proximity-induced pairing of the helical surface Dirac electrons stabilizes a p+ip 2D topological superconducting phase 7 . A vortex core resulting from the application of an external magnetic field locally breaks TRS and represents a topological defect 84,85 , which can host a MZM at its center (see Fig. 3a). First experimental efforts to realize this proposal focused on heterostructures of epitaxially grown Bi2Se3 83,86,87 thin films on the surface of superconducting bulk NbSe2 88-90 . Spectroscopic measurements with the STM and complementary spin-resolved STM experiments on this material platform reported a zero-bias peak inside the vortex cores at moderate magnetic fields (Fig. 3b), whose properties were consistent with the presence of a localized MZM 79,91-93 . However, spectroscopic imaging experiments revealed a continuous ZBP splitting into a pair of finite-energy states over a range of 40 nm away from the vortex core (Fig. 3c). While such characteristics could result from other trivial low-lying in-gap states 79,91,94 , exhibiting a spatial distribution different from that of the ZBP, they contest the interpretation of the ZBP as a charge signature of a MZM. The interpretation of these sub-gap states is, additionally, complicated by a soft superconducting gap, resulting from a weak superconducting proximity effect in the quintuple-layered structure of Bi2Se3 79,88 .

In this regard, a major leap forward was achieved by the possibility of intrinsic superconductivity with a hard superconducting gap in the 2D topological surface states of FeTe0.55Se0.45 and Li0.84Fe0.16OHFeSe 95,96 . Ensuing low-temperature STM experiments on the surface of these materials (Fig. 3d for FeTe0.55Se0.45) reported the observation of a sharp and spatially homogeneous ZBP inside the vortex core states (Fig. 3e, f) 31,96 . While the ZBP's properties, such as its spatial extent, spectral width, and temperature dependence, have been reported to be consistent with those of a MZM, trivial vortex core states inside the superconducting gap, so-called Caroli-de Gennes-Matricon (CdGM) states 97,98 , would exhibit similar characteristics; also see Sec. 4, 'MZM spin signature'. Only a fraction of vortices were actually found to host a ZBP, while the others host CdGM states at finite energy. Hence, open questions about the origin of the ZBP and the role of the significant surface disorder remain. In the case of FeTe0.55Se0.45, disorder was also found to induce trivial surface states in some regions of the sample surface 99 , and other experiments even reported the entire absence of any vortex core ZBP 100 .

The debate on the origin of the ZBP could partially be resolved by a high-resolution study of the vortex core sub-gap states on the surface of FeTe0.55Se0.45, which was performed at a temperature of 80 mK 80 . This experiment, first, reproduced the experimental observation of vortex core ZBP. In addition, the trivial CdGM states appeared as distinct peaks at finite energy, well separated from the vortex core ZBP, see Fig. 3g. Interestingly, it was also reported that the relative occurrence of ZBP inside vortex cores decreases when the strength of an externally applied magnetic field increases. This intriguing observation, shown in Fig. 3h, hints at hybridization of MZM residing in neighboring vortices, by which their energy is shifted to finite values 101 .
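The field dependence in Fig. 3h is consistent with the generic picture of inter-vortex MZM hybridization: as the vortex spacing d shrinks with increasing field, the splitting of the paired MZM grows. A commonly quoted scaling (our addition, following standard treatments of coupled vortex Majorana modes rather than this article) is

    \delta E \;\propto\; \cos(k_F d)\, e^{-d/\xi} ,

where k_F is the Fermi momentum and \xi the superconducting coherence length, so that zero-bias spectral weight is progressively transferred to finite energies at high fields.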
Other studies addressed the origin of vortex core ZBP on the Li0.84Fe0.16OHFeSe and FeTe0.55Se0.45 surfaces 81,102 , by investigating their spectral properties as a function of tip-sample distance, see Fig. 3i. Theory predicts that tunneling into a localized MZM via resonant Andreev reflection results in a quantized conductance of the ZBP, which assumes the universal value of the conductance quantum, G0 = 2e²/h (e, elementary charge; h, Planck's constant) 103 . This quantized conductance value would be independent of the tunnel junction conductance and yield a so-called conductance plateau when the tip-sample distance is changed.
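Numerically (a unit conversion we add for orientation), the expected plateau value is

    G_0 = \frac{2e^2}{h} \approx 77.5\ \mu\mathrm{S} \;\;\Leftrightarrow\;\; R_0 = G_0^{-1} \approx 12.9\ \mathrm{k}\Omega ,

i.e., well within the conductance range accessible by lowering an STM tip toward point contact, which is what makes this test experimentally practical.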
In the context of charge tunneling into a superconductor, it is instructive to consider that varying the tip-sample distance changes the tunnel contact conductance 52 and can realize various tunnel regimes, where charge transport is dominated by different mechanisms. At comparably large tip-sample distances, such as those commonly used for topographic imaging and STS measurements, tunneling of individual quasiparticles dominates and gives rise to a superconducting gap, ∆, and coherence peaks at energies |E|=∆ in the tunneling spectrum, e.g. see Fig. 3f. At the same time, Cooper pair-breaking effects, which can result from finite temperature, magnetic adatoms, or vortex cores, for example, result in a finite quasiparticle density of states at E<∆. Such 'soft' gaps provide relaxation pathways for quasiparticles 104 and facilitate tunneling into sub-gap states, such as MZM, Shiba states, and CdGM states, even at E<∆.
In the presence of a 'hard' gap, such tunneling processes of individual quasiparticles would, by contrast, be suppressed. In the case of comparably small tip-sample distances, charge tunneling at E<∆ is dominated by Andreev reflections that give rise to rich spectral features inside the superconducting gap 105 . By injecting and extracting an electron and a hole, respectively, to form a Cooper pair, this tunnel process between tip and sample is mediated by Andreev bound states in the tunnel barrier and influences the spectral characteristics of charge tunneling into sub-gap states such as MZM, Shiba states, and CdGM states 103,106 .
Concerning tunneling into the vortex core ZBP on the FeTe0.55Se0.45 surface, the authors report that the ZBP amplitude is independent of the tunnel contact conductance when the tip-sample distance is decreased 81 , see Fig. 3j. Similar observations of plateau-like behavior were reported for tunneling into one vortex core on the surface of Li0.84Fe0.16OHFeSe 102 . While the appearance of a conductance plateau at zero bias voltage is qualitatively consistent with resonant Andreev reflection into MZM 103 , the reported plateau amplitude assumes non-universal values below the theoretically expected quantized value G0 81 , or even exceeds G0 when the tip is continuously lowered toward the sample surface 102 . These tunneling characteristics question their interpretation as signatures of Andreev tunneling into a MZM. In fact, comparable experiments on quasiparticle tunneling into trivial Shiba states at E<∆ report similar results 104 . In that case, plateau behavior can be unambiguously assigned to a tunneling blockade due to a suppressed quasiparticle relaxation rate at low temperatures. Hence, the conductance plateau can be considered as a rather generic property of quasiparticle tunneling into localized sub-gap states, challenging its suitability as a tool to distinguish trivial from non-trivial zero energy states in STM experiments.

Beyond the search for MZM in the vortex cores of an externally applied magnetic field, the surface of FeTe0.55Se0.45 has also inspired other concepts in which a MZM could localize at structural defects. STM experiments on 1D line defects in monolayer FeSe0.5Te0.5 reported the observation of ZBP pairs, which are localized at the ends of 1D line defects 107 , and the authors conjecture that these states could be the signatures of a Majorana Kramers pair. Further STM experiments on structural domain walls, appearing on the surface of FeSe0.45Te0.55, observed a filling of the superconducting gap with an energetically flat quasiparticle density of states along the 1D domain wall trajectory 68 . Since such a flat dI/dV amplitude inside the domain wall superconducting gap can arise from a linearly dispersing 1D quasiparticle state, this phenomenon is interpreted as a signature of a 1D chiral Majorana mode.
Majorana zero modes in one-dimensional topological edge states
The topological edge state of bismuth (Bi) 78,116 is a promising alternative, because it appears on the edges of bilayers on the surface of Bi(111), where its properties can be explored with STM experiments 117-119 . Moreover, high-quality Bi(111) thin films can be grown epitaxially 120 , which has facilitated the observation of superconducting pairing inside the edge states of Bi(111) thin films grown on Nb(110) 28 . The challenge of inducing the topological mass domain wall for localizing a MZM was overcome by decorating the bilayer edges with self-assembled ferromagnetic Fe clusters (see Fig. 4a), which were found to induce a trivial magnetization gap inside the edge state band structure 28,119 .

High-resolution spectroscopic measurements with the STM demonstrated the emergence of a localized ZBP at the Fe cluster-topological edge state interface (Fig. 4b-d), confirming the early theory proposals for the emergence of MZM on such a platform 13,110,111 . Additionally, the presence of the ZBP showed a characteristic dependence on the Fe cluster magnetization, see Fig. 4e, justifying its interpretation as a charge signature of a MZM 28 . We note that this observation could provide avenues to manipulate MZM with nanoscale magnetic switches, where the cluster magnetization could be tuned with external magnetic fields or spin-polarized currents from the STM tip. While the reported ZBP properties were consistent with results from model calculations, only one cluster-edge state interface at the so-called A edge (cf. Fig. 4a) was reported to show a prominent ZBP, whereas the other interface of the cluster with the B edge revealed an enhanced zero bias conductance. This observation can be accounted for by the hybridization of the topological edge state with the bulk states along the B edge. However, further experiments are desirable, in which the ferromagnetic cluster decorates the center of an A edge, and a pair of distinct ZBPs emerges on both sides of the cluster.

Also in the context of 2D topological insulators, vdW-heterostructures have been established as an attractive new platform to study proximity-induced s-wave superconductivity with large topological gaps in the 1D helical edge state of monolayer transition-metal dichalcogenides 77 . Superconducting pairing in the topological edge state of monolayer 1T'-WTe2 121,122 has been realized by depositing a micrometer-sized flake of this compound on top of the surface of bulk NbSe2, see Fig. 4f 115 . Spectroscopic measurements with the STM on the edge of this flake revealed the presence of a superconducting gap in the LDOS of the topological edge state, see Fig. 4g,h. This observation opens an attractive avenue to explore topological superconductivity and MZM in experiments on other heterostructures, which include flakes of 2D ferromagnets 67,123,124 or local gates with the potential to engender topological superconductivity through the intrinsic proximity effect 125 .
How to distinguish trivial from topological ZBP
Our review illustrates that ZBP have been sighted in STM experiments across a variety of material platforms, which have been proposed to realize various theoretical concepts for topological superconductivity. While several control experiments, such as the suppression of the ZBP in the absence of superconductivity 24,58 or studies of the ZBP's temperature and magnetic field dependence 31,79,96,107 , are commonly performed, the interpretation of the observed ZBP as a charge signature of a MZM rests on the assumption that a ZBP is detected when the parameters of the system make it most likely to be in a topological superconducting phase. An accidental trivial zero-energy state, such as a Shiba state fine-tuned to zero bias, will, however, result in experimental characteristics similar to those of a MZM. Sighting of a ZBP on a suitable material platform by itself does not, therefore, constitute sufficient evidence to justify its interpretation as a charge signature of a MZM. STM experiments with functional tips have the ability to probe other predicted MZM properties, such as its spin and electron-hole symmetry, and they were established as a way out of this predicament to distinguish trivial from topological ZBP, as we will discuss in the following.
MZM electron-hole symmetry
The intrinsic electron-hole symmetry of a MZM, which is imposed by its Bogoliubov-quasiparticle character 126 , can be probed in tunnel spectroscopy experiments with superconducting STM tips 47 . The presence of a superconducting tip gap, ∆T, facilitates a separate measurement of the electron and hole sectors of a ZBP at different energies, by shifting their spectral weight to finite voltages. An electron-hole symmetric ZBP will be mapped to a pair of peaks with equal amplitude appearing at V=±∆T/e. While thermally excited resonances can render the detection of this symmetry challenging 25 , high-resolution STS performed at dilution-refrigerator temperatures observed such electron-hole symmetry of the ZBP at the end of the ferromagnetic Fe chain on Pb(110) 45 . In these experiments, trivial Shiba states appeared as pairs of asymmetric peaks at e|V|>∆T, reflecting their intrinsic electron-hole asymmetry 54 , and a pair of peaks with equal amplitude was measured at e|V|=∆T. This observation further consolidated the MZM interpretation of the ZBP on the Fe chain platform 24-26,45 , and underscored the potential of functional STM tips for the study of MZM.
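The bookkeeping behind this test (standard superconductor-superconductor tunneling, summarized here in our notation) is that a sample state at energy E0 appears in the spectrum at

    eV = \pm(\Delta_T + E_0) ,

so a MZM with E0 = 0 produces a symmetric pair of peaks at eV = ±∆T, whereas a trivial Shiba state at finite ±E0 produces peaks at ±(∆T + E0) with generically unequal electron and hole amplitudes.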
MZM spin signature
Theoretical investigations have proposed that a MZM can leave unique fingerprints in spin-sensitive measurements, therefore providing the means to distinguish a topological MZM from a trivial zero energy state 92,127-132 . STM experiments with magnetic STM tips, also called spin-polarized spectroscopy (SPS), can measure a sample's spin-polarization with atomic resolution 133 , and they are particularly well-suited for this kind of study. When tunneling occurs between a ferromagnetic tip and sample, respectively, the tunnel conductance at a given bias voltage will depend on their relative magnetization direction. This effect is also called tunnel magnetoresistance, and it allows one to calculate the spin-polarization of the tunnel current, which gives direct access to a sample's magnetic properties 133 . In case the distance between tip and sample is adjusted so as to maintain a constant current set-point, the spin-dependent contribution to the tunnel current, which arises from unequal normal state spin densities, ρ↑/↓, in a sample, is compensated (see Fig. 5b); this is also referred to as the set-point effect. The same measurement in the superconducting state opens a possibility to detect any spin polarization of the in-gap states beyond that caused by the ferromagnetism of the atomic chains in the normal state.

Model calculations of spin-dependent tunneling into localized sub-gap states of a magnetic impurity on a superconducting surface reveal that the spin-polarization of a pair of Shiba states at ±E is asymmetric about zero energy and that the Shiba state spin-density of states is bound by ρ↑/↓ outside the superconducting gap 54 . Accordingly, a trivial Shiba state tuned to zero energy is expected to show no spin-polarization in SPS experiments, because the spin densities are offset by the set-point effect of the tunnel current 44 . By contrast, the MZM spin-densities are, in first order, only dependent on the magnetic exchange interaction, and a MZM is expected to show a finite spin-polarization despite the set-point effect. Hence, a topological ZBP can be distinguished from a trivial ZBP by measuring its spin-polarization in SPS experiments 44 .
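Operationally (a standard SP-STM definition, added here for clarity), the quantity compared between the two tip magnetizations is the polarization of the differential conductance,

    P(V) = \frac{(dI/dV)_{\uparrow} - (dI/dV)_{\downarrow}}{(dI/dV)_{\uparrow} + (dI/dV)_{\downarrow}} ,

which, under the set-point compensation described above, is expected to vanish for a trivial zero-energy Shiba state but to remain finite for a MZM.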
Jeon et al. confirmed this hypothesis by performing spin-polarized STM experiments on the sub-gap states appearing in Fe chains on Pb using Fe-coated Cr tips 46 . Consistent with the model calculations, the Shiba states within the chain exhibit an energy-asymmetric spin polarization. Comparable results were also reported in similar experiments on Co chains on Pb(110) 60 and single magnetic impurities 134 . More importantly, additional measurements at the Fe chain end revealed a distinct spin-polarized signal of the ZBP. By virtue of the above considerations, this observation directly demonstrates its non-trivial origin and firmly establishes the MZM interpretation of the ZBP in the Fe chain on Pb(110) platform. Similar SPS measurements have also been employed to confirm the MZM nature of the ZBP observed in the proximitized topological edge state of Bi 28 . The observed strong spin-polarization of the ZBP was found to be of opposite sign to that of the Shiba state on top of the Fe cluster. This finding is consistent with analytical and numerical model calculations, which predict that MZM and Shiba state spins point along different spatial axes.

Additional results from recent spin-polarized STM experiments on Fe adatoms adsorbed to the surface of FeSe0.45Te0.55 underscore the relevance of diagnostic tools that probe other MZM properties than just the presence of a ZBP by itself. Initial spectroscopic measurements on top of the adatoms revealed the presence of a sharp ZBP 33 , which was interpreted as a signature of MZM occurring inside a quantum anomalous vortex core 135 . We note that similar results and conclusions were also reported for Fe adatoms deposited on the surfaces of LiFeAs and PbTaSe2 136 . However, a recent study using spin-polarized measurements by Wang et al. experimentally demonstrated the absence of finite spin-polarization for the ZBP observed on top of Fe adatoms on FeSe0.45Te0.55 34 . The conflicting outcome of these two studies clearly precludes the interpretation of the ZBP on this platform as a signature of a MZM. More broadly, this result emphasizes that tracking the temperature and field dependence of a ZBP alone does not provide sufficient information to draw conclusions on its topological origin.

Other auspicious proposals to probe the MZM origin of ZBP with STM experiments exist. They involve the use of microwave radiation coupled to the tunnel junction for accurately testing spectral properties of ZBP 137,138 , and a shot noise analysis of the tunnel current to discriminate tunneling into a MZM from tunneling into trivial states 139 . Finally, we anticipate that Josephson STM measurements 140 , which use a superconducting STM tip, could present a valuable tool to scrutinize the presence of topological p-wave superconducting pairing in the bulk of 1D and 2D systems 29,32,58,61 . Josephson STM focuses on the measurement of a Cooper pair current near zero bias voltage, through which it can quantify the amplitude of the superconducting order parameter with atomic resolution 141,142 . Since tunneling of Cooper pairs between spin-singlet (s-wave) superconductivity in the STM tip and spin-triplet (p-wave) superconductivity in the sample cannot occur, the Cooper pair current would be locally suppressed. Such local variations can be detected with Josephson STM and would yield a clear experimental signature for p-wave superconductivity in 1D and 2D systems.

5. Outlook and potential future experiments

The diversity of results from STM experiments presented in this review article illustrates the significant role STM has played in exploring topological superconductivity and visualizing the presence of MZM across a variety of material platforms 24,28-32,58,61,68,79,115,143 . More recently, a variety of STM experiments also succeeded in establishing novel material platforms that lend themselves to the integration into electron transport experiments; noteworthy recent examples of such material platforms include the hinge state of bismuth, whose properties can be studied in quantum transport experiments based on bismuth nano-ribbons 28,114 , and the topological edge state of monolayer 1T'-WTe2 in vdW-heterostructure devices 115,125,144 . These developments close the gap between microscopic studies of MZM properties with the STM and measurements of global material properties in device transport, and they provide avenues to explore more complex MZM braiding experiments in the future 13,111,145 . At the same time, theoretical studies of the atomic chain platform proposed STM measurements to test the two essential properties of MZM that are key to their application as topologically protected qubits: their non-Abelian exchange statistics and anyonic character. This outlines the future direction of research on MZM with STM experiments.

The possibly simplest demonstration of controlling MZM occurring at the ends of magnetic atomic chains on a superconducting surface is the study of their properties as a function of the relative angle of an in-plane magnetic field to that of the chain direction, see Fig. 6a. Theory 48 predicts that a spin chain with helical magnetic order would be driven out of the topological phase as a function of this angle, see Fig. 6b. Hence, in a situation where the chain displays a semicircle-like geometry, trivial and topological segments can coexist, and the MZM would be localized at the boundary between these phases along the chain, depending on the local angle of the field with respect to the chain. Spatially resolved STM spectroscopy, as a function of position along the chain and of the applied magnetic field at different angles, would enable us to examine this approach for manipulating the chain topology.

An extension of this proposal considers the properties of a T-junction chain as a function of a rotating in-plane magnetic field 48 . Examining the influence of the magnetic field on a tri-junction geometry with a 120º angle, theoretical analyses predict that the magnetic field rotation can drive each arm of such tri-junctions sequentially in and out of the topological phase. As shown in Fig. 6c, at each orientation of the applied field, precisely one pair of MZM appears at the ends of two out of three chains in the tri-junction and can be detected by their ZBP at the respective locations. The process by which MZM in these tri-junctions are fused and created sequentially as a function of field rotation results in their exchange at a rotation angle of π and a single braid at 2π. A real-space manipulation of a MZM pair, following this pattern, corresponds to a braiding process, which is at the heart of the complex physics of MZM and could be visualized in STS measurements with the STM.
We here propose that chains of magnetic atoms may also serve as a suitable testbed to demonstrate the ground state degeneracy of a system containing more than two MZM. Theoretical analyses of MZM quantum dot devices propose that coupling three (out of two pairs of) degenerate MZM to metallic leads constitutes an effective spin-1 particle, which gives rise to the topological Kondo effect 146 . In a similar fashion, we envision an atomic scale platform with two atomic chains placed on a superconducting film, each of which hosts a pair of MZM, see Fig. 6d. If a thin electrically insulating layer isolates the superconductor from a normal metallic substrate, forming a Coulomb island, we conjecture that the coupling, ti, of the MZM with the conduction electrons in the substrate underneath could mediate a Kondo-type interaction governed by ti and the Coulomb energy, displaying non-Fermi liquid behavior. The resulting Kondo screening cloud, which arises in the metal substrate surrounding the island, can be detected in STS measurements 147 , see Fig. 6d. The emergence of the topological Kondo effect requires the existence of more than two degenerate MZM. The experimental detection of the Kondo cloud would, therefore, provide direct evidence for the anyonic character of MZM. We envision that such a sample platform could be realized on the basis of vdW-heterostructures, by assembling mono- and few-layer flakes of suitable materials, e.g. superconducting monolayer NbSe2, insulating h-BN, and metallic graphene 66 .
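The underlying algebra (our illustrative addition; see ref. 146 for the full topological Kondo analysis) is that bilinears of three Majorana operators, with {γa, γb} = 2δab, close an SU(2) algebra,

    S_a = -\frac{i}{4}\,\epsilon_{abc}\,\gamma_b \gamma_c , \qquad [S_a, S_b] = i\,\epsilon_{abc}\,S_c ,

so the set of MZM coupled to leads behaves as a local quantum 'spin' that can be exchange-screened by the substrate conduction electrons, which is precisely what a Kondo screening cloud in STS would probe.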
The recent demonstration of atom-by-atom assembly of magnetic nanostructures on the surface of different superconducting substrates with an STM tip 30,49,50 makes the possibility of these studies more likely. These experiments are rather challenging, as they require obtaining atomically clean superconducting surfaces 30,148 that permit the atomic assembly of elaborate atomic structures, and the right material combination to realize 1D topological superconductivity with MZM end states. These developments will, ultimately, be driven by the realization of novel material platforms and concepts, such as those based on higher-order topological superconductors 149-151 or quantum-spin liquids 152-154 . Nevertheless, STM experiments, combining the unique ability to visualize, investigate, and potentially manipulate MZM with atomic scale precision, will continue to be at the forefront of research on topological superconductivity and MZM in condensed matter systems.
Figure 1 | Kitaev model and atomic chain platform. a, STM topography (top) and zero-bias conductance map (bottom) of a Fe chain on Pb(110) substrate. b, Zero-bias conductance profile taken along the Fe chain in panel a (see also red dashed line in panel d). The inset illustrates the conjectured zig-zag structure of the Fe chain. c, STM topography of a Fe chain, which is overlaid with the dI/dV-signal recorded at VB=30 mV with tips of opposite spin polarization, reproduced from 24. d, Spectroscopic line-cut along the Fe chain axis as a function of energy and tip position. e, dI/dV spectra recorded at the chain end (red) and averaged in the middle of the chain (blue). f, Stacked STM images of artificially constructed Fe chains of various lengths. g, Top: 3D-rendered STM image of a 40-atom-long Fe chain measured with a nonmagnetic PtIr tip. Spin-polarized STM images recorded with Fe-coated PtIr tips sensitive to the out-of-plane (middle panel) and the in-plane (lower panel) components of the spins in the chain. The magnetization directions of the tips are schematically depicted in the inset. h, dI/dV spectra obtained at positions from the rhenium surface to the bulk of the chain. The red curve, labelled 1, is taken at the chain end. i, Atomic assembly with the STM tip was used to build small chains of gadolinium (Gd) atoms on the surface of proximitized Bi(110). j, High-resolution STS measurements at milli-Kelvin temperatures of the superconducting gap on top of the Gd atoms reveal a rich dI/dV-spectrum. Six Shiba states with various spectral weights on top of different atoms are observed. k, These characteristics can be understood by considering the interplay of RKKY interaction, spin-orbit coupling, and surface magnetic anisotropy. Panels a, b, d, e reproduced from 45, panel c from 24, panels f-h from 30, and panels i-k from 51.
Figure 2 | Majorana quasiparticles in 2D systems with ferromagnetism. a, STM topography of the Pb/Co/Si(111) system. b, dI/dV map of the same area measured at a bias voltage set point of 1.30 meV. c, Linecut of the deconvoluted LDOS along the white arrow in panel b, showing the spatial dispersion of the in-gap state. The edge states display an X-shape at the interface of the cluster. d, Large-scale STM topography of a Au(111) nanowire array. e, STM topography of the black rectangle region in a, showing an EuS island deposited on top of a gold nanowire. f, ZBP emerge in the dI/dV-spectra, which were measured at specific edge positions, 1 and 8, of the EuS island as indicated in panel e. g, STM topography of a CrBr3 monolayer deposited on the surface of NbSe2. h, dI/dV spectra recorded with the STM tip located at different spatial positions, which are indicated by the correspondingly colored dots in panel g. i, Top row, left panel: STM topography of the sample; all other panels show dI/dV maps recorded at the indicated energies. Panels a-c reproduced from 29, panels d-f reproduced from 32, and panels g-i reproduced from 58.
Figure 3 | MZM vortex core states in 2D topological surface states. a, Schematics of a 3D TI bulk superconductor hybrid structure, in which a MZM is expected to localize as a ZBP in vortex cores on the surface. b, Zero-bias conductance map recorded with an STM reveals a ZBP inside the vortex core of the Bi2Se3/NbSe2 hybrid structure. c, Spectroscopic line cut along the black line in panel b, illustrating the spatial evolution of the ZBP. d, STM topography of the FeTe0.55Se0.45 surface showing its atomically resolved square-lattice structure as well as the presence of surface impurities. e, A zero-bias differential conductance map in the same area as d reveals vortices in the presence of an applied magnetic field of B=0.5 T. f, dI/dV point spectrum at the center of a vortex core (red rectangle in panel e) demonstrates the existence of a sharp ZBP. g, Finite energy CdGM states and a distinct ZBP appearing in the dI/dV-spectra recorded at the center of vortex cores on the FeTe0.55Se0.45 surface at a temperature of 80 mK. The red curve displays the experimental data and the blue curves correspond to fits for the peak analysis. h, Probability for the occurrence of low energy states as a function of energy and for different applied magnetic fields (B={1, 2, 3, 4, 6} T, left to right). i, Schematics of an STM experiment to probe the conductance of electronic states occurring inside vortex cores on the surface of FeTe0.55Se0.45, e.g. the ZBP shown in the inset, as a function of tunnel junction transparency, by adjusting the tip-sample separation d. j, 3D-plot of the measured dI/dV-amplitude as a function of energy and tunnel contact transparency, GN. Panels b, c reproduced from 79, panels d-f reproduced from 31, panels g, h reproduced from 80, panels i, j reproduced from 81.
Figure 4 | Topological superconductivity and MZM in 1D topological edge states. a, Schematic of a topological hybrid structure to realize MZM based on the helical hinge states of a Bi bilayer. Proximity-induced superconductivity ∆SC is realized by growing a Bi(111) thin film on top of the bulk superconductor Nb(110). A ferromagnetic Fe cluster adsorbed to the bilayer edge can induce a trivial magnetization gap ∆H. The topological mass domain realized at the interface between these regions localizes a MZM, which can be probed via local spectroscopy with an STM tip. b, Close-up STM topography of a Fe cluster decorating the bilayer edge. c & d, A spectroscopic line cut along the black dashed line in b and individual point spectra demonstrate the emergence of a localized ZBP at the interface of the edge state with the Fe cluster. e, Magnetization components of several Fe clusters as reconstructed from spin-polarized STM experiments (Mx points along the bilayer edge, My perpendicular in-plane to the edge, and Mz out of plane). f, Schematic of an STM experiment on a vdW-heterostructure assembled from NbSe2 and 1T'-WTe2 flakes deposited on a gold electrode, which serves as drain for the tunnel current. g, STM topography (top) and topographic line cut (bottom) of the edge of a WTe2 monolayer flake on the surface of NbSe2. h, The dI/dV-spectrum (yellow line) recorded on the WTe2 monolayer edge displays a soft superconducting gap, which can be separated into dI/dV-contributions arising from the LDOS of the WTe2 edge state and the NbSe2 substrate. Panels a-e reproduced from 28 and panels f-h reproduced from 115.
Figure 5 | Detection of MZM using superconducting and spin-polarized STM spectroscopy. a, Individual dI/dV spectra, which were recorded with a superconducting STM tip on the bare Pb substrate (black line), at the Fe chain center (Mid., purple line), and at the Fe chain end (red line). The end spectrum shows electron-hole symmetric peak amplitudes at e|V| = ∆T. b, Spectra measured on a Fe chain at 100 mT with UP (red) and DOWN (blue) polarized tips, showing the compensation of the unequal spin densities of the ferromagnetic Fe chain, cf. Fig. 1e, in the dI/dV-spectrum by the STM set-point effect. c, Experimentally obtained spectra at the end of the chain (left column) and at the center of the chain (right column) and their corresponding polarization. Yellow and blue curves are taken with UP and DOWN polarized tips, respectively. Red arrows mark the zero-energy end state and black arrows mark the van Hove singularity of the Shiba band. Panel a reproduced from 45 and panels b, c reproduced from 46.
Figure 6 | MZM manipulation detected with the STM. a, Schematic of an STM experiment on a Fe chain with helical spin order to probe the emergence of a MZM end state as a function of the in-plane angle, θ, between the chain and magnetic field axes. b, Topological phase diagram of the experimental setup shown in panel a. The amplitude, B, with respect to the superconducting gap, ∆, and the in-plane angle, θ, of the magnetic field can tune between topological and trivial electronic phases. c, Schematic of an experiment to study MZM braiding on helical Fe chains by applying a rotating in-plane magnetic field to a Y-junction geometry of a magnetic chain with helical spin order. d, Coupling two pairs of MZM on a superconducting island deposited on a metallic substrate can induce a topological Kondo effect, the screening cloud of which could be detected in STS experiments. Panels a-c reproduced from 48.
1. Kitaev, A. Y. Unpaired Majorana fermions in quantum wires. Physics-Uspekhi 44, 131 (2001).
2. Kitaev, A. Y. Fault-tolerant quantum computation by anyons. Ann. Phys. (N. Y.) 303, 2-30 (2003).
3. Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Das Sarma, S. Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083 (2008).
4. Stern, A. & Lindner, N. H. Topological quantum computation-from basic concepts to first experiments. Science 339, 1179-1184 (2013).
5. Lahtinen, V. & Pachos, J. K. A short introduction to topological quantum computation. SciPost Phys. 3 (2017).
6. Li, J. et al. Topological superconductivity induced by ferromagnetic metal chains. Phys. Rev. B 90, 235433 (2014).
7. Fu, L. & Kane, C. L. Superconducting proximity effect and Majorana fermions at the surface of a topological insulator. Phys. Rev. Lett. 100, 96407 (2008).
8. Nakosai, S., Tanaka, Y. & Nagaosa, N. Two-dimensional p-wave superconducting states with magnetic moments on a conventional s-wave superconductor. Phys. Rev. B 88, 180503 (2013).
9. Nadj-Perge, S., Drozdov, I. K., Bernevig, B. A. & Yazdani, A. Proposal for realizing Majorana fermions in chains of magnetic atoms on a superconductor. Phys. Rev. B 88, 20407 (2013).
10. Klinovaja, J., Stano, P., Yazdani, A. & Loss, D. Topological superconductivity and Majorana fermions in RKKY systems. Phys. Rev. Lett. 111, 186805 (2013).
11. Pientka, F., Glazman, L. I. & von Oppen, F. Topological superconducting phase in helical Shiba chains. Phys. Rev. B 88, 155420 (2013).
12. Pientka, F., Glazman, L. I. & von Oppen, F. Unconventional topological phase transitions in helical Shiba chains. Phys. Rev. B 89, 180505 (2014).
13. Fu, L. & Kane, C. L. Josephson current and noise at a superconductor/quantum-spin-Hall-insulator/superconductor junction. Phys. Rev. B 79, 161408 (2009).
14. Lutchyn, R. M., Sau, J. D. & Das Sarma, S. Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures. Phys. Rev. Lett. 105, 77001 (2010).
15. Oreg, Y., Refael, G. & von Oppen, F. Helical liquids and Majorana bound states in quantum wires. Phys. Rev. Lett. 105, 177002 (2010).
16. Duckheim, M. & Brouwer, P. W. Andreev reflection from noncentrosymmetric superconductors and Majorana bound-state generation in half-metallic ferromagnets. Phys. Rev. B 83, 54513 (2011).
17. Chung, S. B., Zhang, H.-J., Qi, X.-L. & Zhang, S.-C. Topological superconducting phase and Majorana fermions in half-metal/superconductor heterostructures. Phys. Rev. B 84, 60510 (2011).
18. Potter, A. C. & Lee, P. A. Topological superconductivity and Majorana fermions in metallic surface states. Phys. Rev. B 85, 94516 (2012).
19. Braunecker, B. & Simon, P. Interplay between classical magnetic moments and superconductivity in quantum one-dimensional conductors: toward a self-sustained topological Majorana phase. Phys. Rev. Lett. 111, 147202 (2013).
20. Vazifeh, M. M. & Franz, M. Self-organized topological state with Majorana fermions. Phys. Rev. Lett. 111, 206802 (2013).
21. Mourik, V. et al. Signatures of Majorana fermions in hybrid superconductor-semiconductor nanowire devices. Science 336, 1003-1007 (2012).
22. Das, A. et al. Zero-bias peaks and splitting in an Al-InAs nanowire topological superconductor as a signature of Majorana fermions. Nat. Phys. 8, 887-895 (2012).
23. Deng, M. T. et al. Majorana bound state in a coupled quantum-dot hybrid-nanowire system. Science 354, 1557-1562 (2016).
24. Nadj-Perge, S. et al. Observation of Majorana fermions in ferromagnetic atomic chains on a superconductor. Science 346, 602-607 (2014).
25. Ruby, M. et al. End States and Subgap Structure in Proximity-Coupled Chains of Magnetic Adatoms. Phys. Rev. Lett. 115, 197204 (2015).
26. Pawlak, R. et al. Probing atomic structure and Majorana wavefunctions in mono-atomic Fe chains on superconducting Pb surface. npj Quantum Inf. 2, 16035 (2016).
27. Yazdani, A., da Silva Neto, E. H. & Aynajian, P. Spectroscopic Imaging of Strongly Correlated Electronic States. Annu. Rev. Condens. Matter Phys. 7, 11-33 (2016).
28. Jäck, B. et al. Observation of a Majorana zero mode in a topologically protected edge channel. Science 364, 1255-1259 (2019).
29. Ménard, G. C. et al. Two-dimensional topological superconductivity in Pb/Co/Si(111). Nat. Commun. 8, 2040 (2017).
30. Kim, H. et al. Toward tailoring Majorana bound states in artificially constructed magnetic atom chains on elemental superconductors. Sci. Adv. 4, eaar5251 (2018).
31. Wang, D. et al. Evidence for Majorana bound states in an iron-based superconductor. Science 362, 333-335 (2018).
32. Manna, S. et al. Signature of a pair of Majorana zero modes in superconducting gold surface states. Proc. Natl. Acad. Sci. 117, 8775-8782 (2020).
33. Yin, J.-X. et al. Observation of a robust zero-energy bound state in iron-based superconductor Fe(Te,Se). Nat. Phys. 11, 543-546 (2015).
34. Wang, D., Wiebe, J., Zhong, R., Gu, G. & Wiesendanger, R. Spin-polarized Yu-Shiba-Rusinov states in an iron-based superconductor. (2020).
35. Pan, H. & Das Sarma, S. Physical mechanisms for zero-bias conductance peaks in Majorana nanowires. Phys. Rev. Res. 2, 13377 (2020).
36. Pan, H., Cole, W. S., Sau, J. D. & Das Sarma, S. Generic quantized zero-bias conductance peaks in superconductor-semiconductor hybrid structures. Phys. Rev. B 101, 24506 (2020).
37. Vaitiekenas, S. et al. Flux-induced topological superconductivity in full-shell nanowires. Science 367 (2020).
38. Valentini, M. et al. Flux-tunable Andreev bound states in hybrid full-shell nanowires. arXiv preprint arXiv:2008.02348 (2020).
39. Frolov, S. M., Manfra, M. J. & Sau, J. D. Topological superconductivity in hybrid devices. Nat. Phys. 16, 718-724 (2020).
40. Yu, P. et al. Non-Majorana states yield nearly quantized conductance in proximatized nanowires. Nat. Phys. (2021). doi:10.1038/s41567-020-01107-w
41. Zhang, H. et al. Quantized Majorana conductance. Nature 556, 74-79 (2018).
42. Zhang, H. et al. Editorial Expression of Concern: Quantized Majorana conductance. Nature 581, E4 (2020).
43. Zhang, H. et al. Retraction Note: Quantized Majorana conductance. Nature (2021). doi:10.1038/s41586-021-03373-x
44. Li, J., Jeon, S., Xie, Y., Yazdani, A. & Bernevig, B. A. Majorana spin in magnetic atomic chain systems. Phys. Rev. B 97, 125119 (2018).
45. Feldman, B. E. et al. High-resolution studies of the Majorana atomic chain platform. Nat. Phys. 13, 286-291 (2017).
46. Jeon, S. et al. Distinguishing a Majorana zero mode using spin-resolved measurements. Science 358, 772-776 (2017).
47. Peng, Y., Pientka, F., Vinkler-Aviv, Y., Glazman, L. I. & von Oppen, F. Robust Majorana Conductance Peaks for a Superconducting Lead. Phys. Rev. Lett. 115, 266804 (2015).
48. Li, J., Neupert, T., Bernevig, B. A. & Yazdani, A. Manipulating Majorana zero modes on atomic rings with an external magnetic field. Nat. Commun. 7, 10395 (2016).
49. Odobesko, A. et al. Observation of tunable single-atom Yu-Shiba-Rusinov states. Phys. Rev. B 102, 174504 (2020).
50. Schneider, L., Beck, P., Wiebe, J. & Wiesendanger, R. Atomic-scale spin-polarization maps using functionalized superconducting probes. arXiv preprint (2020).
51. Ding, H. et al. Tuning interactions between spins in a superconductor. (2021, to appear).
52. Chen, C. J. Introduction to Scanning Tunneling Microscopy: Second Edition. (Oxford University Press, 2007). doi:10.1093/acprof:oso/9780199211500.001.0001
53. Yazdani, A., Jones, B. A., Lutz, C. P., Crommie, M. F. & Eigler, D. M. Probing the Local Effects of Magnetic Impurities on Superconductivity. Science 275, 1767-1770 (1997).
54. Balatsky, A. V., Vekhter, I. & Zhu, J.-X. Impurity-induced states in conventional and unconventional superconductors. Rev. Mod. Phys. 78, 373-433 (2006).
55. Christensen, M. H., Schecter, M., Flensberg, K., Andersen, B. M. & Paaske, J. Spiral magnetic order and topological superconductivity in a chain of magnetic adatoms on a two-dimensional superconductor. Phys. Rev. B 94, 144509 (2016).
56. Braunecker, B., Japaridze, G. I., Klinovaja, J. & Loss, D. Spin-selective Peierls transition in interacting one-dimensional conductors with spin-orbit interaction. Phys. Rev. B 82, 45127 (2010).
57. Peng, Y., Pientka, F., Glazman, L. I. & von Oppen, F. Strong Localization of Majorana End States in Chains of Magnetic Adatoms. Phys. Rev. Lett. 114, 106801 (2015).
58. Kezilebieke, S. et al. Topological superconductivity in a van der Waals heterostructure. Nature 588, 424-428 (2020).
59. Rainis, D. & Loss, D. Majorana qubit decoherence by quasiparticle poisoning. Phys. Rev. B 85, 174533 (2012).
60. Ruby, M., Heinrich, B. W., Peng, Y., von Oppen, F. & Franke, K. J. Exploring a Proximity-Coupled Co Chain on Pb(110) as a Possible Majorana Platform. Nano Lett. 17, 4473-4477 (2017).
61. Palacio-Morales, A. et al. Atomic-scale interface engineering of Majorana edge modes in a 2D magnet-superconductor hybrid system. Sci. Adv. 5, eaav6600 (2019).
62. Röntynen, J. & Ojanen, T. Topological Superconductivity and High Chern Numbers in 2D Ferromagnetic Shiba Lattices. Phys. Rev. Lett. 114, 236803 (2015).
63. Li, J. et al. Two-dimensional chiral topological superconductivity in Shiba lattices. Nat. Commun. 7, 12297 (2016).
64. LaShell, S., McDougall, B. A. & Jensen, E. Spin Splitting of an Au(111) Surface State Band Observed with Angle Resolved Photoelectron Spectroscopy. Phys. Rev. Lett. 77, 3419-3422 (1996).
65. Dean, C. R. et al. Boron nitride substrates for high-quality graphene electronics. Nat. Nanotechnol. 5, 722-726 (2010).
66. Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419-425 (2013).
67. Chen, W. et al. Direct observation of van der Waals stacking-dependent interlayer magnetism. Science 366, 983-987 (2019).
68. Wang, Z. et al. Evidence for dispersing 1D Majorana channels in an iron-based superconductor. Science 367, 104-108 (2020).
69. Hasan, M. Z. & Kane, C. L. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045-3067 (2010).
70. Beenakker, C. W. J. Search for Majorana Fermions in Superconductors. Annu. Rev. Condens. Matter Phys. 4, 113-136 (2013).
71. Akhmerov, A. R., Nilsson, J. & Beenakker, C. W. J. Electrically Detected Interferometry of Majorana Fermions in a Topological Insulator. Phys. Rev. Lett. 102, 216404 (2009).
72. Tanaka, Y., Yokoyama, T. & Nagaosa, N. Manipulation of the Majorana Fermion, Andreev Reflection, and Josephson Current on Topological Insulators. Phys. Rev. Lett. 103, 107002 (2009).
73. Linder, J., Tanaka, Y., Yokoyama, T., Sudbø, A. & Nagaosa, N. Unconventional Superconductivity on a Topological Insulator. Phys. Rev. Lett. 104, 67001 (2010).
74. Aguado, R. Majorana quasiparticles in condensed matter. Riv. Del Nuovo Cim. 40, 523-593 (2017).
75. Bernevig, B. A., Hughes, T. L. & Zhang, S.-C. Quantum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells. Science 314, 1757-1761 (2006).
76. Liu, C., Hughes, T. L., Qi, X.-L., Wang, K. & Zhang, S.-C. Quantum Spin Hall Effect in Inverted Type-II Semiconductors. Phys. Rev. Lett. 100, 236601 (2008).
77. Qian, X., Liu, J., Fu, L. & Li, J. Quantum spin Hall effect in two-dimensional transition metal dichalcogenides. Science 346, 1344-1347 (2014).
78. Schindler, F. et al. Higher-order topology in bismuth. Nat. Phys. 14, 918-924 (2018).
79. Xu, J.-P. et al. Experimental Detection of a Majorana Mode in the core of a Magnetic Vortex inside a Topological Insulator-Superconductor Bi2Te3/NbSe2 Heterostructure. Phys. Rev. Lett. 114, 17001 (2015).
80. Machida, T. et al. Zero-energy vortex bound state in the superconducting topological surface state of Fe(Se,Te). Nat. Mater. 18, 811-815 (2019).
81. Zhu, S. et al. Nearly quantized conductance plateau of vortex zero mode in an iron-based superconductor. Science 367, 189-192 (2020).
82. Fu, L., Kane, C. L. & Mele, E. J. Topological Insulators in Three Dimensions. Phys. Rev. Lett. 98, 106803 (2007).
83. Xia, Y. et al. Observation of a large-gap topological-insulator class with a single Dirac cone on the surface. Nat. Phys. 5, 398-402 (2009).
84. Read, N. & Green, D. Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect. Phys. Rev. B 61, 10267-10297 (2000).
85. Ivanov, D. A. Non-Abelian Statistics of Half-Quantum Vortices in p-Wave Superconductors. Phys. Rev. Lett. 86, 268-271 (2001).
86. Zhang, H. et al. Topological insulators in Bi2Se3, Bi2Te3 and Sb2Te3 with a single Dirac cone on the surface. Nat. Phys. 5, 438-442 (2009).
87. Hsieh, D. et al. A tunable topological insulator in the spin helical Dirac transport regime. Nature 460, 1101-1105 (2009).
88. Wang, M.-X. et al. The Coexistence of Superconductivity and Topological Order in the Bi2Se3 Thin Films. Science 336, 52-55 (2012).
89. Xu, S.-Y. et al. Momentum-space imaging of Cooper pairing in a half-Dirac-gas topological superconductor. Nat. Phys. 10, 943-950 (2014).
90. Xu, J.-P. et al. Artificial Topological Superconductor by the Proximity Effect. Phys. Rev. Lett. 112, 217001 (2014).
91. Kawakami, T. & Hu, X. Evolution of Density of States and a Spin-Resolved Checkerboard-Type Pattern Associated with the Majorana Bound State. Phys. Rev. Lett. 115, 177001 (2015).
92. Hu, L.-H., Li, C., Xu, D.-H., Zhou, Y. & Zhang, F.-C. Theory of spin-selective Andreev reflection in the vortex core of a topological superconductor. Phys. Rev. B 94, 224501 (2016).
93. Sun, H.-H. et al. Majorana Zero Mode Detected with Spin Selective Andreev Reflection in the Vortex of a Topological Superconductor. Phys. Rev. Lett. 116, 257003 (2016).
94. Chiu, C.-K., Gilbert, M. J. & Hughes, T. L. Vortex lines in topological insulator-superconductor heterostructures. Phys. Rev. B 84, 144507 (2011).
95. Zhang, P. et al. Observation of topological superconductivity on the surface of an iron-based superconductor. Science 360, 182-186 (2018).
96. Liu, Q. et al. Robust and Clean Majorana Zero Mode in the Vortex Core of High-Temperature Superconductor (Li0.84Fe0.16)OHFeSe. Phys. Rev. X 8, 41056 (2018).
97. Caroli, C., De Gennes, P. G. & Matricon, J. Bound Fermion states on a vortex line in a type II superconductor. Phys. Lett. 9, 307-309 (1964).
98. Hess, H. F., Robinson, R. B., Dynes, R. C., Valles, J. M. & Waszczak, J. V. Scanning-Tunneling-Microscope Observation of the Abrikosov Flux Lattice and the Density of States near and inside a Fluxoid. Phys. Rev. Lett. 62, 214-216 (1989).
99. Kong, L. et al. Half-integer level shift of vortex bound states in an iron-based superconductor. Nat. Phys. 15, 1181-1187 (2019).
100. Chen, M. et al. Discrete energy levels of Caroli-de Gennes-Matricon states in quantum limit in FeTe0.55Se0.45. Nat. Commun. 9, 970 (2018).
101. Chiu, C.-K., Machida, T., Huang, Y., Hanaguri, T. & Zhang, F.-C. Scalable Majorana vortex modes in iron-based superconductors. Sci. Adv. 6, eaay0443 (2020).
102. Chen, C. et al. Quantized conductance of Majorana zero mode in the vortex of the topological superconductor (Li0.84Fe0.16)OHFeSe. Chinese Phys. Lett. 36, 57403 (2019).
103. Law, K. T., Lee, P. A. & Ng, T. K. Majorana Fermion Induced Resonant Andreev Reflection. Phys. Rev. Lett. 103, 237001 (2009).
104. Ruby, M. et al. Tunneling Processes into Localized Subgap States in Superconductors. Phys. Rev. Lett. 115, 87001 (2015).
105. Scheer, E., Joyez, P., Esteve, D., Urbina, C. & Devoret, M. H. Conduction Channel Transmissions of Atomic-Size Aluminum Contacts. Phys. Rev. Lett. 78, 3535-3538 (1997).
106. Villas, A. et al. Interplay between Yu-Shiba-Rusinov states and multiple Andreev reflections. Phys. Rev. B 101, 235445 (2020).
107. Chen, C. et al. Atomic line defects and zero-energy end states in monolayer Fe(Te,Se) high-temperature superconductors. Nat. Phys. 16, 536-540 (2020).
108. Kane, C. L. & Mele, E. J. Quantum Spin Hall Effect in Graphene. Phys. Rev. Lett. 95, 226801 (2005).
109. König, M. et al. Quantum Spin Hall Insulator State in HgTe Quantum Wells. Science 318, 766-770 (2007).
110. Nilsson, J., Akhmerov, A. R. & Beenakker, C. W. J. Splitting of a Cooper Pair by a Pair of Majorana Bound States. Phys. Rev. Lett. 101, 120403 (2008).
111. Mi, S., Pikulin, D. I., Wimmer, M. & Beenakker, C. W. J. Proposal for the detection and braiding of Majorana fermions in a quantum spin Hall insulator. Phys. Rev. B 87, 241405 (2013).
112. Hart, S. et al. Induced superconductivity in the quantum spin Hall edge. Nat. Phys. 10, 638-643 (2014).
113. Pribiag, V. S. et al. Edge-mode superconductivity in a two-dimensional topological insulator. Nat. Nanotechnol. 10, 593-597 (2015).
114. Murani, A. et al. Ballistic edge states in Bismuth nanowires revealed by SQUID interferometry. Nat. Commun. 8, 15941 (2017).
115. Lüpke, F. et al. Proximity-induced superconducting gap in the quantum spin Hall edge state of monolayer WTe2. Nat. Phys. 16, 526-530 (2020).
116. Murakami, S. Quantum Spin Hall Effect and Enhanced Magnetic Response by Spin-Orbit Coupling. Phys. Rev. Lett. 97, 236805 (2006).
117. Drozdov, I. K. et al. One-dimensional topological edge states of bismuth bilayers. Nat. Phys. 10, 664-669 (2014).
118. Nayak, A. K. et al. Resolving the topological classification of bismuth with topological defects. Sci. Adv. 5, eaax6996 (2019).
119. Jäck, B., Xie, Y., Andrei Bernevig, B. & Yazdani, A. Observation of backscattering induced by magnetism in a topological edge state. Proc. Natl. Acad. Sci. 117, 16214-16218 (2020).
120. Sun, H.-H. et al. Coexistence of Topological Edge State and Superconductivity in Bismuth Ultrathin Film. Nano Lett. 17, 3035-3039 (2017).
121. Tang, S. et al. Quantum spin Hall state in monolayer 1T'-WTe2. Nat. Phys. 13, 683-687 (2017).
122. Shi, Y. et al. Imaging quantum spin Hall edges in monolayer WTe2. Sci. Adv. 5, eaat8799 (2019).
123. Huang, B. et al. Electrical control of 2D magnetism in bilayer CrI3. Nat. Nanotechnol. 13, 544-548 (2018).
124. Wang, Q. et al. Large intrinsic anomalous Hall effect in half-metallic ferromagnet Co3Sn2S2 with magnetic Weyl fermions. Nat. Commun. 9, 1-8 (2018).
125. Fatemi, V. et al. Electrically tunable low-density superconductivity in a monolayer topological insulator. Science 362, 926-929 (2018).
126. Chamon, C., Jackiw, R., Nishida, Y., Pi, S.-Y. & Santos, L. Quantizing Majorana fermions in a superconductor. Phys. Rev. B 81, 224515 (2010).
127. Sticlet, D., Bena, C. & Simon, P. Spin and Majorana Polarization in Topological Superconducting Wires. Phys. Rev. Lett. 108, 96802 (2012).
128. He, J. J., Ng, T. K., Lee, P. A. & Law, K. T. Selective Equal-Spin Andreev Reflections Induced by Majorana Fermions. Phys. Rev. Lett. 112, 37001 (2014).
129. Haim, A., Berg, E., von Oppen, F. & Oreg, Y. Signatures of Majorana Zero Modes in Spin-Resolved Current Correlations. Phys. Rev. Lett. 114, 166406 (2015).
130. Björnson, K., Pershoguba, S. S., Balatsky, A. V. & Black-Schaffer, A. M. Spin-polarized edge currents and Majorana fermions in one- and two-dimensional topological superconductors. Phys. Rev. B 92, 214501 (2015).
131. Kotetes, P., Mendler, D., Heimes, A. & Schön, G. Majorana fermion fingerprints in spin-polarised scanning tunnelling microscopy. Phys. E Low-dimensional Syst. Nanostructures 74, 614-624 (2015).
132. Szumniak, P., Chevallier, D., Loss, D. & Klinovaja, J. Spin and charge signatures of topological superconductivity in Rashba nanowires. Phys. Rev. B 96, 41401 (2017).
133. Wiesendanger, R. Spin mapping at the nanoscale and atomic scale. Rev. Mod. Phys. 81, 1495-1550 (2009).
134. Cornils, L. et al. Spin-Resolved Spectroscopy of the Yu-Shiba-Rusinov States of Individual Atoms. Phys. Rev. Lett. 119, 197002 (2017).
135. Jiang, K., Dai, X. & Wang, Z. Quantum Anomalous Vortex and Majorana Zero Mode in Iron-Based Superconductor Fe(Te,Se). Phys. Rev. X 9, 11033 (2019).
136. Zhang, S. S. et al. Field-free platform for Majorana-like zero mode in superconductors with a topological surface state. Phys. Rev. B 101, 100507 (2020).
137. Kot, P. et al. Microwave-assisted tunneling and interference effects in superconducting junctions under fast driving signals. Phys. Rev. B 101, 134507 (2020).
138. González, S. A. et al. Photon-assisted resonant Andreev reflections: Yu-Shiba-Rusinov and Majorana states. Phys. Rev. B 102, 45413 (2020).
139. Perrin, V., Civelli, M. & Simon, P. Discriminating Majorana from Shiba bound-states by tunneling shot-noise tomography. (2020).
140. Naaman, O., Teizer, W. & Dynes, R. C. Fluctuation dominated Josephson tunneling with a scanning tunneling microscope. Phys. Rev. Lett. 87, 97004 (2001).
141. Jäck, B. et al. Critical Josephson current in the dynamical Coulomb blockade regime. Phys. Rev. B 93, 20504 (2016).
142. Randeria, M. T., Feldman, B. E., Drozdov, I. K. & Yazdani, A. Scanning Josephson spectroscopy on the atomic scale. Phys. Rev. B 93, 161115 (2016).
143. Jiao, L. et al. Chiral superconductivity in heavy-fermion metal UTe2. Nature 579, 523-527 (2020).
144. Wu, S. et al. Observation of the quantum spin Hall effect up to 100 kelvin in a monolayer crystal. Science 359, 76-79 (2018).
145. Pikulin, D. Proposal for a scalable charging-energy-protected topological qubit in a quantum spin Hall system. (2020).
146. Béri, B. & Cooper, N. R. Topological Kondo Effect with Majorana Fermions. Phys. Rev. Lett. 109, 156803 (2012).
147. Madhavan, V., Chen, W., Jamneala, T., Crommie, M. F. & Wingreen, N. S. Tunneling into a Single Magnetic Atom: Spectroscopic Evidence of the Kondo Resonance. Science 280, 567-569 (1998).
148. Odobesko, A. B. et al. Preparation and electronic properties of clean superconducting Nb(110) surfaces. Phys. Rev. B 99, 115437 (2019).
149. Yan, Z., Song, F. & Wang, Z. Majorana Corner Modes in a High-Temperature Platform. Phys. Rev. Lett. 121, 96803 (2018).
150. Liu, T., He, J. J. & Nori, F. Majorana corner states in a two-dimensional magnetic topological insulator on a high-temperature superconductor. Phys. Rev. B 98, 245413 (2018).
151. Hsu, Y.-T., Cole, W. S., Zhang, R.-X. & Sau, J. D. Inversion-Protected Higher-Order Topological Superconductivity in Monolayer WTe2. Phys. Rev. Lett. 125, 97001 (2020).
152. Feldmeier, J., Natori, W., Knap, M. & Knolle, J. Local probes for charge-neutral edge states in two-dimensional quantum magnets. Phys. Rev. B 102, 134423 (2020).
153. König, E. J., Randeria, M. T. & Jäck, B. Tunneling spectroscopy of quantum spin liquids. Phys. Rev. Lett. 125, 267206 (2020).
154. Udagawa, M., Takayoshi, S. & Oka, T. STM as a single Majorana detector of Kitaev's chiral spin liquid. arXiv preprint arXiv:2008.07399 (2020).
| [] |
[
"Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit",
"Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit"
] | [
"Kaihua Qin \nImperial College London\nImperial College London\nImperial College London\nImperial College London\n\n",
"Liyi Zhou \nImperial College London\nImperial College London\nImperial College London\nImperial College London\n\n",
"Benjamin Livshits \nImperial College London\nImperial College London\nImperial College London\nImperial College London\n\n",
"Arthur Gervais \nImperial College London\nImperial College London\nImperial College London\nImperial College London\n\n"
] | [
"Imperial College London\nImperial College London\nImperial College London\nImperial College London\n",
"Imperial College London\nImperial College London\nImperial College London\nImperial College London\n",
"Imperial College London\nImperial College London\nImperial College London\nImperial College London\n",
"Imperial College London\nImperial College London\nImperial College London\nImperial College London\n"
] | [] | Credit allows a lender to loan out surplus capital to a borrower. In the traditional economy, credit bears the risk that the borrower may default on its debt; the lender hence requires upfront collateral from the borrower, plus interest fee payments. Due to the atomicity of blockchain transactions, lenders can offer flash loans, i.e. loans that are only valid within one transaction and must be repaid by the end of that transaction. This concept has led to a number of interesting attack possibilities, some of which have been exploited recently (February 2020). This paper is the first to explore the implications of flash loans for the nascent decentralized finance (DeFi) ecosystem. We analyze two existing attack vectors with significant ROIs (beyond 500k%), and then go on to formulate finding flash loan-based attack parameters as an optimization problem over the state of the underlying Ethereum blockchain as well as the state of the DeFi ecosystem. Specifically, we show how two previously executed attacks can be "boosted" to result in a profit of 829.5k USD and 1.1M USD, respectively, which is a boost of 2.37× and 1.73×, respectively. | 10.1007/978-3-662-64322-8_1 | [
"https://arxiv.org/pdf/2003.03810v2.pdf"
] | 212,633,562 | 2003.03810 | 47072c24806046a9c4827467d7047af8c6a07b62 |
Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit
Kaihua Qin
Imperial College London
Liyi Zhou
Imperial College London
Benjamin Livshits
Imperial College London
Arthur Gervais
Imperial College London
Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit
Credit allows a lender to loan out surplus capital to a borrower. In the traditional economy, credit bears the risk that the borrower may default on its debt; the lender hence requires upfront collateral from the borrower, plus interest fee payments. Due to the atomicity of blockchain transactions, lenders can offer flash loans, i.e. loans that are only valid within one transaction and must be repaid by the end of that transaction. This concept has led to a number of interesting attack possibilities, some of which have been exploited recently (February 2020). This paper is the first to explore the implications of flash loans for the nascent decentralized finance (DeFi) ecosystem. We analyze two existing attack vectors with significant ROIs (beyond 500k%), and then go on to formulate finding flash loan-based attack parameters as an optimization problem over the state of the underlying Ethereum blockchain as well as the state of the DeFi ecosystem. Specifically, we show how two previously executed attacks can be "boosted" to result in a profit of 829.5k USD and 1.1M USD, respectively, which is a boost of 2.37× and 1.73×, respectively.
I. Introduction
A central component of our economy is credit: to foster economic growth, market participants can borrow and lend assets to each other. If credit creates new and sustainable value, it may be perceived as a positive force. An abuse of credit, however, i.e. when borrowers take on more debt than they are able to repay, would necessarily entail negative future consequences. Excessive debt may lead to a debt default, i.e. a borrower is no longer capable of repaying the loan plus interest payments. This leads us to the following intriguing question: what if it were possible to offer credit without bearing the risk that the borrower does not pay back the debt? Such a concept appears impractical in the traditional financial world. No matter how small the borrowed amount and how short the loan term, the risk of the borrower defaulting remains. If one were absolutely certain that a debt would be repaid, one could offer loans of nearly infinite volume, or lend to individuals independently of demographics and geographic location, effectively giving access to capital to rich and poor alike.
Given the peculiarities of blockchain-based smart contracts, so-called flash loans emerged. A flash loan is a loan that is only valid within one blockchain transaction. Flash loans fail if the borrower does not repay its debt before the end of the transaction borrowing the loan. This is possible because a blockchain transaction can be reverted during its execution if the condition of repayment is not satisfied. Such an instant loan yields three novel properties, absent in centralized financial economies:
• No debt default risk: A lender offering a flash loan bears no risk that the borrower defaults on its debt (besides the risk of smart contract vulnerabilities). Because a transaction and its instructions must be executed atomically, a flash loan is not granted if the transaction fails due to a debt default.
• No need for collateral: Because the lender is guaranteed to be paid back, the lender can issue credit without upfront collateral from the borrower: a flash loan is non-collateralized.
• Loan size: Flash loans are taken from a public, smart contract-governed liquidity pool. Any borrower can borrow the entire pool at any point in time. As of March 2020, the two largest flash loan pools [5], [15] each offer in excess of 20M USD.
To the best of our knowledge, this is the first paper to investigate flash loans. We categorize their use cases and explore their dangers. We meticulously dissect two events where talented traders realized a profit of about 350k USD and 600k USD, respectively, with two independent flash loans. We show how these traders, however, forwent the opportunity to realize a profit exceeding 829.5k USD and 1.1M USD, respectively. We establish this by finding the optimal adversarial parameters the traders should have employed, using a parametrized optimizer (cf. Figure 1).
This paper makes the following contributions:
• Flash loan usage analysis. We provide a comprehensive overview of how and where the technique of flash loans can be and is utilized.
• Post mortem of existing attacks. We provide a detailed analysis of two existing attacks that used flash loans and generated an ROI beyond 500k%: a pump and arbitrage from the 15th of February 2020 and an oracle manipulation from the 18th of February 2020.
• Attack parameter optimization framework. Given several DeFi systems covering exchanges, credit/lending and margin trading systems, we provide a framework to determine the parameters that yield the maximum revenue a trader can achieve when utilizing a particular flash loan strategy.
• Opportunity loss. We analyze previously proposed and executed attacks to quantify the opportunity loss for the attacker given their optimal behavior, as determined by the framework above. We experimentally validate the opportunity loss of both aforementioned attacks on their respective blockchain state.
Paper organization: The remainder of the paper is organized as follows. Section II elaborates on the DeFi background. Section III outlines flash loan use cases. Section IV dissects two known flash loan attacks, and Section V shows how to optimize their revenue. Section VI provides a discussion. We outline related work in Section VII. We conclude the paper in Section VIII.
II. Background
Decentralized ledgers, such as Bitcoin [36], enable the performance of transactions among peers without trusting third parties. At its core, a blockchain is a chain of blocks [8], [36], extended by miners who craft new blocks that contain transactions. Smart contracts [39] allow the execution of complicated transaction types, enabling DeFi.
A. Decentralized Finance (DeFi)
Decentralized Finance is a conglomerate of financial, cryptocurrency-related protocols defined by open-source smart contracts. These protocols, for instance, allow users to lend and borrow assets [31], [18], exchange [15], [38], margin trade [15], [4], short and long [4], and create derivative assets [18]. At the time of writing, the DeFi space accounts for over 1bn USD in smart contract-locked capital among different providers. The majority of DeFi platforms operate on the Ethereum blockchain, governed by the Ethereum Virtual Machine (EVM).
B. Reverting EVM State Transitions
The Ethereum blockchain is in essence a replicated state machine. To achieve a state transition, one applies transactions as input, which modify the EVM state following rules encoded within deployed smart contracts. The EVM state is only altered if the transaction executes successfully. Otherwise, the EVM state is reverted to the previous, unmodified state. Transactions can fail for three reasons: (i) insufficient transaction fees (i.e. an out-of-gas exception), (ii) a conflicting transaction (e.g. using the same nonce), or (iii) a particular condition within the executed transaction that cannot be met. State reversion hence appears to be a necessary feature.
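For illustration, the following minimal Solidity sketch (the contract and function names are hypothetical) shows failure reason (iii): a failing require statement reverts the transaction and thereby rolls back all of its earlier state changes.

pragma solidity ^0.5.0;

contract RevertDemo {
    uint256 public counter;

    function incrementIfEven(uint256 x) external {
        counter += 1;                 // tentative state change
        require(x % 2 == 0, "odd");   // reason (iii): reverts on odd x,
                                      // undoing the counter increment above
    }
}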
C. Flash Loans
Reversing EVM state changes allows for an intriguing new financial concept: flash loans. A flash loan is only valid within a single transaction and follows three steps (cf. Figure 2):
1. Take a flash loan
2. Use the lent assets
3. Pay back the loan plus interest
Flash loans rely on the atomicity of blockchain (and, specifically, Ethereum) transactions within a single block. Atomicity has two important implications for flash loans. First, non-collateralized lending: a borrower does not need to provide upfront collateral to request a loan of any size, up to the flash loan liquidity pool amount. Anyone willing to pay the required transaction fees (which typically amount to a few USD) is an eligible borrower. Second, risk-free lending: if the borrower is not able to pay back the loan, the flash loan transaction fails. Besides smart contract, and more generally blockchain, vulnerabilities, the lender is hence not exposed to the risk of a debt default.
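Concretely, a flash loan borrower can implement this three-step pattern as a smart contract. The following minimal Solidity sketch assumes a simplified, hypothetical pool interface; ILendingPool and the callback signature are illustrative stand-ins, not the exact API of a specific platform such as Aave or dYdX.

pragma solidity ^0.5.0;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

interface ILendingPool {
    function flashLoan(address receiver, address token, uint256 amount) external;
}

contract FlashBorrower {
    // Step 1: request the loan. The pool transfers `amount` of `token`
    // to this contract and then calls back into executeOperation.
    function borrow(ILendingPool pool, address token, uint256 amount) external {
        pool.flashLoan(address(this), token, amount);
    }

    // Steps 2 and 3: use the lent assets, then repay principal plus fee.
    // If the repayment does not happen, the pool reverts the entire
    // transaction and the loan never took place.
    function executeOperation(address token, uint256 amount, uint256 fee) external {
        // ... use `amount` of `token` here, e.g. for arbitrage ...
        IERC20(token).transfer(msg.sender, amount + fee);
    }
}

Step 2 may contain arbitrary logic; the loan only materializes if step 3 succeeds, otherwise the whole transaction, including the initial transfer, is reverted.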
D. DeFi Actors
In the following, we define the on-chain actors that we consider within this work; we focus on a single blockchain.
Trader: a trader possesses a private/public key pair and is eligible to sign and send transactions towards other accounts and smart contracts.
Liquidity Provider: a trader with surplus capital may choose to offer this capital to other traders, e.g. as collateral within a DEX or lending platform.
Liquidity Taker: a trader who services liquidity providers with fees in exchange for accessing the available capital.
E. DeFi Platforms
We briefly summarize relevant DeFi platforms: exchanges [38], [26], margin trading [15], [4] and credit/lending [31], [18] platforms. Within this paper, we do not cover alternative DeFi platforms such as stablecoins [31], prediction markets and insurance systems.
Exchanges:
We observe the following DeFi exchanges.
Limit order book (LOB) DEX:
An order book is a collection of bid and ask orders. Traders post buy/bid or sell/ask orders for an asset of the market to a LOB. A bid order positions the trader as a buyer, while an ask positions the trader as a seller. Buyers aim to purchase an asset at the lowest price possible, while sellers aim for the highest possible selling price. When a trader specifies an order with a fixed or better price, the trader issues a so-called limit order [2]. Once buyers and sellers post orders with compatible prices, their orders can be matched. A liquidity provider contributes bids and asks to facilitate matches (i.e. market making). Several blockchain exchanges operate a LOB within a smart contract [32], [27], [26].
Automated market maker (AMM) DEX: An alternative exchange design is to collect funds within a liquidity pool, e.g. two pools for an AMM asset pair X/Y. The state (or depth) of an AMM market X/Y is defined as (x, y), where x represents the amount of asset X and y the amount of asset Y in the liquidity pool. Liquidity providers can deposit/withdraw in both pools X and Y to in-/decrease liquidity. AMM DEX support endpoints such as SwapXforY to trade an asset X for Y. The simplest AMM mechanism is a constant product market maker, which, for an arbitrary asset pair X/Y, keeps the product x × y constant during trades. A number of DEX operate under the AMM model [38], [26].
When trading on an exchange, price slippage may occur, i.e. the price of an asset may change during the trade. The greater the quantity to be traded, the greater the slippage.
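To make this concrete, consider a hypothetical constant product market X/Y in state (x, y) with invariant k = x × y (trading fees ignored). A trader selling δx units of X receives

δy = y − k/(x + δx) = (y × δx)/(x + δx)

units of Y. For a pool with x = 1,000 and y = 100, a trade of δx = 100 hence returns δy ≈ 9.09 rather than the 10 units implied by the spot price y/x = 0.1; the shortfall is exactly the price slippage, and it grows with the trade size.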
Margin trading: Trading on margin offers the opportunity for traders to borrow assets from the trading platform (or broker) and trade with these borrowed assets. The trader typically must provide collateral and the trading platform then enables the trader to borrow several multipliers of the collateral for margin trading. Multiple DeFi platforms offer margin trading [4], [15].
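For instance, with 1 ETH of collateral and a hypothetical 5× multiplier, a trader controls a position of 5 ETH; a 20% adverse price move then suffices to consume the entire collateral (an illustration ignoring fees and liquidation thresholds).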
Credit and lending: With over 900M USD of locked capital, credit represents one of the most significant recent use-cases for blockchain-based DeFi systems. Because borrowers are only represented by weak identities (e.g. public keys), they must provide between 125% [15] and 150% [31] collateral of an asset x to borrow 100% of another asset y. Different DeFi lending platforms exist, ranging from user-to-user lending to pooled lending [18] and lending that enables decentralized stable coins.
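As a concrete illustration, at a 150% collateralization ratio, borrowing assets worth 100 USD requires locking up collateral worth at least 150 USD; if the collateral's price declines, the position risks liquidation.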
III. Use Cases for Flash Loans
In this section, we analyze the possible use cases for flash loans. It is in general difficult to qualify these activities as fully benign or malicious; it depends on the intent of the people orchestrating these transactions.
A. Arbitrage
The value of an asset is typically determined by the demand and supply of the market, across different exchanges. Due to a lack of instantaneous synchronization among exchanges, the same asset can be traded at slightly different prices on each exchange. Related work compared Bitcoin, Ethereum and Ripple price variations across 14 exchanges in Europe, Korea, Japan and the US (excluding China) from 1st January 2017 to 28th February 2018 [30]. The study twice found price deviations beyond 50% that persisted for several hours. Arbitrage is the process of exploiting price differences among exchanges for a financial gain [3]. In fact, arbitrage helps synchronize exchanges by incentivizing traders to equate the price of the same asset across exchanges. To perform arbitrage, a trader needs a reserve of an asset at different exchanges, i.e. arbitrage requires extensive portfolio and volatility risk management.
How flash loans change arbitrage risks: Given flash loans, a trader can perform arbitrage on different DEX without the need to hold a monetary position or be exposed to volatility risks. The trader can simply open a loan, perform an arbitrage trade and pay back the loan plus interest. One may argue that flash loans render arbitrage risk-free; the risks of smart contract vulnerabilities, however, remain.
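As a hypothetical example, suppose an asset trades at 100 USD on DEX A and at 102 USD on DEX B. A trader can flash-borrow 100k USD, buy 1,000 units on A, sell them on B for 102k USD, and repay the 100k USD loan plus a 0.09% fee (90 USD), retaining roughly 1,910 USD before trading and transaction fees (slippage ignored).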
B. Wash Trading
The trading volume of an asset is a metric indicating the trading popularity of an asset. The most popular assets are therefore supposed to be traded the most; e.g. Bitcoin to date enjoys the highest trading volume (reported up to 50T USD per day) of all cryptocurrencies.
Malicious exchanges or traders can mislead other traders by artificially inflating the trading volume of an asset to attract interest. In September 2019, 73 out of the top 100 exchanges on Coinmarketcap [9] were wash trading over 90% of their volumes [1]. In centralized exchanges, operators can easily and freely create fake trades in the backend, while decentralized exchanges settle trades on-chain. Wash trading on DEX thus requires wash traders to hold and use real assets. Flash loans can remove this "obstacle", such that wash trading comes at the cost of the loan interest, trading fees, and (blockchain) transaction fees, e.g. gas. A wash trading endeavour to increase the 24-hour volume by 50% on the ETH/DAI market of Uniswap would for instance cost about 1,298 USD (cf. Figure 3). We visualize the associated wash trading costs in Figure 3.
C. Collateral Swapping
We classify DeFi platforms that rely on users providing cryptocurrencies [15], [5], [31] as follows: (i) a DeFi system where a new asset is minted and backed up with user-provided collateral (e.g. MakerDAO's DAI or SAI [31]) and (ii) a DeFi system where long-term loans are offered and assets are aggregated within liquidity pools (e.g. margin trading [4] or long-term loans [5]). Once a collateral position is opened, DeFi platforms store the collateral assets in a vault until the new/borrowed assets are destroyed/returned. Because cryptocurrency prices fluctuate, this asset lock-in bears a currency risk. With flash loans, it is possible to replace the collateral asset with another asset, even if a user does not possess sufficient funds to destroy/return the new/borrowed asset. A user can close an existing collateral position with borrowed funds, and then immediately open a new collateral position using a different asset. Collateral swapping example: On February 20th, 2020, a flash loan borrowed 20.00 DAI (from Aave) to perform a collateral swap (on MakerDAO) 5. Before this transaction, the transaction sender used 0.18 WETH as collateral for instantiating 20.00 DAI (on MakerDAO). The transaction sender first withdraws all WETH using the 20.00 DAI flash loan, then converts 0.18 WETH to 178.08 BAT (using Uniswap). Finally, the user creates 20.03 DAI using BAT as collateral and pays back 20.02 DAI (with fee) to Aave. This transaction converts the collateral from WETH to BAT, and the user gained 0.01 DAI, with an estimated gas fee of 0.86 USD.
D. Flash Minting
Cryptocurrency assets are commonly known as either inflationary (further units of an asset can be mined) or deflationary (the total number of units of an asset is finite). Flash minting is an idea to allow the instantaneous minting of an arbitrary amount of an asset; the newly-minted units exist only during one transaction. It is yet unclear where this idea might be applicable; the minted assets could momentarily increase liquidity. Flash minting example: A flash mint function (cf. Figure 4) can be integrated into an ERC20 token, to mint an arbitrary number of coins within a transaction only. Before the transaction terminates, the minted coins are burned. If the available amount of coins to be burned by the end of the transaction is less than the amount minted, the transaction is reverted (i.e. not executed). An example ERC20 flash minting code could take the following form 6:
IV. Flash Loan Post-Mortem
In this section we investigate how flash loans are used and outline in depth two malicious flash loan transactions which yielded an ROI beyond 500k%. To our knowledge, flash loans only appeared in the beginning of 2020.
A. Flash Loan Uses in the Wild
We first consider flash loans offered by Aave [5] on the Ethereum blockchain, which started operating on the 8th of January 2020. To our knowledge, this is one of the first DeFi platforms to widely advertise flash loan capabilities (although others, such as dYdX, also allow the non-documented possibility to borrow flash loans). At the time of writing, Aave charges a constant 0.09% interest fee for flash loans and has amassed a total liquidity of 22M USD.
We collect flash loan data between the 8th of January 2020 and the 26th of February 2020 with a full-archive Ethereum node, gathering all event logs of the Aave smart contract 7. We then map the transaction data to a known list of projects (cf. Appendix A). In Figure 5 we show our analysis of Aave flash loans, and manually label the platforms the flash loans interact with. We observe that most flash loans interact with lending/exchange DeFi systems and that the flash loans' transaction costs (i.e. gas) appear significant (at times beyond 4M gas, compared to 21k gas for a regular Ether transfer).
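The collection step can be reproduced along the following lines, assuming web3.py (v6 naming) and access to an archive-node RPC endpoint; the endpoint URL and the block numbers are our own approximations of the study window, not values from the measurement.

```python
# Hedged sketch: fetch all event logs emitted by the Aave LendingPool contract
# over the study period from a full-archive Ethereum node (web3.py v6 assumed).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # archive node (assumption)
AAVE_LENDING_POOL = "0x398eC7346DcD622eDc5ae82352F02bE94C62d119"

logs = w3.eth.get_logs({
    "address": Web3.to_checksum_address(AAVE_LENDING_POOL),
    "fromBlock": 9_240_000,  # roughly 8 January 2020 (approximate)
    "toBlock": 9_560_000,    # roughly 26 February 2020 (approximate)
})
# Each log's transaction can then be matched against the known project
# addresses of Appendix A to label the platforms a flash loan interacts with.
print(len(logs), "Aave event logs collected")
```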
B. Pump and Arbitrage
A flash loan transaction 8, followed by 74 transactions, yielded a profit of 1,193.69 ETH (350k USD) given a transaction fee of 132.36 USD (cumulative 50,237,867 gas, 0.5 ETH). We show in Section V-E that the parameters chosen by the adversary were not optimal; the adversary could have earned a profit exceeding 829.5k USD.
Attack intuition:
The core of this trade utilises a margin trade on a DEX (bZx) to increase the price of WBTC/ETH on another DEX (Uniswap) and thus creates an arbitrage opportunity. The trader then borrows WBTC using ETH as collateral (on Compound), and then purchases ETH at a "cheaper" price on the distorted (Uniswap) DEX market. To maximise the profit, the adversary then converts the "cheap" ETH to purchase WBTC at a non-manipulated market price over a period of two days after the flash loan. The adversary then returns WBTC (to Compound) to redeem the ETH collateral. As demonstrated in Figure 6, this trade mainly consists of two parts. For simplicity, we omit the conversion between WETH (the ERC20-tradable version of ETH) and ETH.
Flash Loan (one block): The first part of the attack (cf. Figure 6) consists of 7 steps within a single transaction 8. In step 1, the adversarial trader borrows a flash loan of 10,000.00 ETH from a flash loan provider (dYdX). In steps 2 and 3, the adversarial trader uses 5,500.00 of the 10,000.00 ETH as collateral to borrow 112.00 WBTC on a lending platform (Compound). More specifically, the adversary first deposits 5,500 ETH to Compound, in exchange for 274,843.68 cETH (cTokens) as a proof of owning this liquidity. The adversary then borrows WBTC (on Compound) using the cETH tokens as collateral. Note that the adversarial trader does not return the 112.00 WBTC within the flash loan. This means the adversarial trader takes the risk of a forced liquidation of the 274,843.68 cETH collateral if the price fluctuates. In step 4, the trader opens a short position for ETH against WBTC (on bZx), with a 5× leverage. Upon receiving this request, bZx transacts 5,637.62 ETH on an exchange (Uniswap) for only 51.35 WBTC (at 109.79 ETH/WBTC). Note that at the start of block 9484688, Uniswap has a total supply of 2,817.77 ETH and 77.09 WBTC (at 36.55 ETH/WBTC). The slippage of this transaction is significant at 239.84% (cf. Equation 1).

(124.41 − 36.55) / 36.55 = 239.84%   (1)

Both DEXes, Uniswap and bZx, allowed for such high slippage to occur. In step 5, the trader converts the 112.00 WBTC borrowed from the lending platform (Compound) to 6,871.41 ETH on the DEX (Uniswap) (at 61.35 ETH/WBTC). Similarly, the slippage can be calculated per Equation 2.

(1/61.35 − 1/36.55) / (1/36.55) = −40.42%   (2)

In step 6, the trader pays back the loan, paying a 1 × 10^11 Wei fee. Note that dYdX only requires a fee of 1 Wei. After the flash loan transaction (i.e. the first part of this pump and arbitrage trade), the trader has gained 71.41 ETH and holds an over-collateralized loan of 5,500 ETH for 112 WBTC (49.10 ETH/WBTC). If the ETH/WBTC market price is above this loan exchange rate, the adversary can redeem the loan's collateral as follows.

Loan redemption: The second part of the trade consists of three recurring steps (steps a - c), between Ethereum blocks 9484917 and 9496602. Those transactions aim to redeem ETH by paying back the WBTC borrowed earlier (on Compound). To avoid slippage when purchasing WBTC, the trader executes the second part in small amounts over a period of two days on the DEXes (Kyber, Uniswap). In total, the adversarial trader exchanged 4,377.72 ETH for 112 WBTC (at 39.08 ETH/WBTC) to redeem 5,500.00 ETH.

Finding the victim: We investigate which of the participating entities is losing money. Note that in step 4 of Figure 6, the short position (on bZx) borrows 5,637.62 − 1,300 = 4,337.62 ETH from the lending provider (bZx), with 1,300 ETH of collateral. Step 4 requires purchasing WBTC at a price of 328.49 ETH/WBTC, with both the adversary's collateral and the pool funds of the liquidity providers. 328.49 ETH/WBTC does not correspond to the market price of 36.55 ETH/WBTC prior to the attack; hence the liquidity providers overpay by nearly an order of magnitude of the WBTC price.

How much are the victims losing: We now quantify the losses of the liquidity providers. The loan providers lose 4,337.62 (ETH from loan providers) − 51.35 (WBTC left in short position) × 39.08 (market exchange rate ETH/WBTC) = 2,330.86 ETH. The adversary gains 5,500.00 (ETH loan collateral in Compound) − 4,377.72 (ETH spent to purchase WBTC) + 71.41 (part 1) = 1,193.69 ETH in total.

Arbitrage: is more money left on the table? Due to the attack, Uniswap's price reduced from 36.55 to 11.50 ETH/WBTC. This creates an arbitrage opportunity, where a trader can sell ETH against WBTC on Uniswap to synchronize the price: 1,233.79 ETH would yield 60.65 WBTC, instead of 33.76 WBTC, realizing an arbitrage profit of 26.89 WBTC (286,035.04 USD).
C. Oracle Manipulation
In the following, we discuss the details of a second flash loan trade, which yielded a profit of 2,381.41 ETH (c. 650k USD) within a single transaction 9, given a transaction fee of 118.79 USD. Before diving into the details, we cover additional required background knowledge. We again show that the chosen attack parameters were suboptimal and present in Section V-E attack parameters that would have yielded a profit of 1.1M USD instead. For this attack, the adversary involves three different exchanges for the same sUSD/ETH market pair (the Kyber-Uniswap reserve, Kyber, and Synthetix). Two of these exchanges (Kyber, Kyber-Uniswap) act as price oracles for the lending platform (bZx) from which the adversary borrows assets.
Price oracle: One of the goals of the DeFi ecosystem is to not rely on trusted third parties. This premise holds both for asset custody and for additional information, such as asset pricing. One common method to determine an asset price is hence to rely on the pricing information of an on-chain DEX (e.g. Uniswap). One drawback of this approach is the danger of DEX price manipulation.
Attack intuition:
The core of this trade is an oracle manipulation using a flash loan on the asset pair sUSD/ETH. The manipulation lowers the price of sUS-D/ETH (from 268.30 sUSD/ETH to 106.05 sUSD/ETH on Uniswap and 108.44 sUSD/ETH on Kyber Reserve). In a second step, the adversary benefits from this sUSD/ETH price decrease by borrowing ETH with sUSD as collateral.
Adversarial oracle manipulation:
We identify a total of 6 steps within this transaction (cf. Figure 7). In step 1, the trader borrows a flash loan of 7,500.00 ETH (on bZx). In the next three steps (2, 3, 4), the adversary converts a total of 4,417.86 ETH to 943,837.59 sUSD (at an average of 213.64 sUSD/ETH).
Step 2 purchases sUSD with ETH at 171.15 sUSD/ETH (on the Kyber-Uniswap reserve) and step 3 purchases sUSD with ETH at 111.23 sUSD/ETH (on Kyber). The third involved party is the lending platform bZx, which uses the DEX Kyber as a price oracle. Steps 2 and 3 allow the adversary to borrow more ETH with sUSD, because the price of sUSD/ETH perceived by the lending platform decreased by over 58% since the beginning of the attack.
Step 4 converts ETH to sUSD on a third exchange market (Synthetix), which is yet unaffected by the previous trades. This exchange is not serving as price oracle for the lending platform (bZx).
The adversarial trader then uses the sum of the purchased sUSD (1,099,841.39) as collateral to borrow 6,799.27 ETH (at exchange rate/collateral factor = max(106.05, 108.44) × 1.5 = 162.66 sUSD/ETH, on bZx). Now the adversary possesses 6,799.27 + 3,082.14 ETH and in the last step pays back the flash loan amounting to 7,500.00 ETH. The adversary therefore generates a revenue of 2,381.41 ETH while only paying 0.42 ETH (118.79 USD) in transaction fees.

Finding the victim: The adversary distorted the price oracle (i.e. Uniswap and Kyber) from 268.30 sUSD/ETH to 107.83 sUSD/ETH, while other DeFi platforms remained unaffected at 268.30 sUSD/ETH. Similar to the pump and arbitrage attack, the lenders on bZx are the victims losing cryptocurrency as a result of the distorted price oracle. The lenders lost 6,799.27 ETH − 1,099,841 sUSD, which is estimated to be 2,699.97 ETH (at 268.30 sUSD/ETH). The adversary gains 6,799.27 (ETH from borrowing) − 3,517.86 (ETH to purchase sUSD) − 360 (ETH to purchase sUSD) − 540 (ETH to purchase sUSD) = 2,381.41 ETH.
V. Optimal DeFi Attack Parameter Generation
In light of the complexity of the aforementioned DeFi attacks (cf. Section IV), in this section we propose a constrained optimization framework that allows us to efficiently discover the optimal trade parameters that maximize the resulting expected revenue.
A. System Model and Assumptions
The system considered is limited to one decentralized ledger which supports pseudo-Turing-complete smart contracts (e.g. similar to the Ethereum Virtual Machine; state transitions can be reversed given certain conditions, such as out-of-gas or insufficient funds returned). Our system comprises regular users, or traders, who hold at least one private/public key pair denoting their blockchain address. The private key enables users to transfer cryptocurrency assets and to interact with or invoke smart contracts.
We assume that the underlying blockchain is not compromised by a malicious adversary. We therefore assume that the share of consensus participants corrupted by the adversary is bounded by the threshold required to maintain safety and liveness of the underlying blockchain. In the Nakamoto consensus-based blockchains, for example, we assume that the fraction of the computational power of the adversary does not exceed 1/3 [21], [20]. Similarly, in Byzantine fault-tolerant systems, (e.g. Proof-of-Stake based), we assume that the number of faulty processes does not exceed 1/3 of the number of consensus participants. The previous assumptions guarantee the chain quality and common-prefix properties [20]. We consider a transaction to be securely included within the blockchain after k confirmations, where k depends on the transaction value [21] and the chain-growth property [20].
Importantly, flash loans only apply to a single transaction and hence we limit our analysis to what may happen within a single blockchain block.
B. Threat and Network Model
Foremost, we assume that the cryptographic primitives of the considered blockchain are secure. We also assume the presence of at least one computationally-bounded and economically rational adversary A. A attempts to exploit the availability of flash loans for financial gain. A may perform any action that maximizes its economic revenue, such as censor or delay transactions, observe unconfirmed transactions on the network layer or the memory pool, and mount Sybil attacks [14]. For the network layer we follow related work [17], [13] in assuming that honest nodes are well-connected, and that communication channels are semi-synchronous. Importantly, we assume that transactions broadcast by users are received by honest users within an upper bound time. The adversary may collude with other adversaries. While A is not required to provide its own collateral to perform the presented attacks, the adversary must be financially capable to pay transaction fees. The adversary may amass more capital which possibly could increase its impact and ROI.
C. Modelling the State of DeFi
We start by modeling the different components that may engage in a DeFi attack. To facilitate optimal parameter solving, we quantitatively formalize every endpoint provided by DeFi platforms as a state transition function S′ = T(S; p) with constraints C(S; p), where S is the given state, p are the parameters chosen by the adversary, and S′ is the output state. The state can represent, for example, the adversarial balance or any internal status of the DeFi platform, while the constraints are set by the execution requirements of the Ethereum Virtual Machine (e.g. the Ether balance of an entity should never be a negative number) or the rules defined by the respective DeFi platform (e.g. a flash loan must be repaid, plus loan fees, before the transaction terminates). Note that when quantifying profits, we ignore the loan interest/fee payments and Ethereum transaction fees, which are negligible in the present DeFi attacks. The constraints are enforced on the input parameters and output states to ensure that the optimizer yields valid parameters.
We define the balance state function B(E; X; S) to denote the balance of currency X held by entity E at a given state S. The constraint of Equation 3 must always be satisfied.
∀(E, X, S): B(E; X; S) ≥ 0   (3)
In the following, we detail the quantitative DeFi models applied in this work. Note that we do not include all the states involved in the DeFi attacks but only those relevant to the constrained optimization.
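One possible encoding of this formalism is sketched below: balances live in a mapping, the balance function corresponds to Equation 3, and each transition returns the successor state together with the constraint values it generates. The names and the toy transfer transition are our own illustration, not the paper's implementation.

```python
# Balances as a mapping (entity, asset) -> amount; transitions return the new
# state plus the generated constraints (each must be >= 0 for feasibility).
from copy import deepcopy

def B(state, entity, asset):
    """Balance function B(E; X; S); Equation 3 demands it is never negative."""
    return state.get((entity, asset), 0.0)

def transfer(state, frm, to, asset, amount):
    """Toy transition: move `amount` of `asset` from `frm` to `to`."""
    s = deepcopy(state)
    s[(frm, asset)] = B(s, frm, asset) - amount
    s[(to, asset)] = B(s, to, asset) + amount
    constraints = [B(state, frm, asset) - amount]  # sender must own the amount
    return s, constraints

s0 = {("A", "ETH"): 10.0}
s1, cons = transfer(s0, "A", "M", "ETH", 3.0)
print(s1, cons)  # feasible: the single constraint value is 7.0 >= 0
```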
Flash loan:
We assume a flash loan platform F with z X amount of asset X, which the adversary A can borrow. The required interest to borrow b of X is represented by interest(b).
State: In a flash loan, the state is represented by the balance of A, i.e. B(A; X; S).
Transitions: We define the transition functions of Loan in Equation 4 and Repay in Equation 5, where the parameter b_X denotes the loaned amount.

B(A; X; S′) = B(A; X; S) + b_X   s.t. z_X − b_X ≥ 0   (4)

B(A; X; S′) = B(A; X; S) − b_X − interest(b_X)   s.t. B(A; X; S) − b_X − interest(b_X) ≥ 0   (5)
Fixed price trading: We define the endpoint SellXforY that allows the adversary A to trade q X amount of X for Y at a fixed price p m . maxY is the maximum amount of Y available for trading.
State: We consider the following state variables:
• Balance of asset X held by A: B(A; X; S)
• Balance of asset Y held by A: B(A; Y; S)
Transitions: The transition functions of SellXforY are defined in Equation 6.

B(A; X; S′) = B(A; X; S) − q_X
B(A; Y; S′) = B(A; Y; S) + q_X/p_m
s.t. B(A; X; S) − q_X ≥ 0,  maxY − q_X/p_m ≥ 0   (6)
Constant product automated market maker:
With a market share of 77% among AMM DEXes, the constant product AMM is the most common AMM model in the current DeFi ecosystem [38]. We denote by M an AMM instance with trading pair X/Y and exchange fee rate f.
State: We consider the following state variables that can be modified in an AMM state transition:
• Amount of X in the AMM liquidity pool: u_X(S), which equals B(M; X; S)
• Amount of Y in the AMM liquidity pool: u_Y(S), which equals B(M; Y; S)
• Balance of X held by A: B(A; X; S)
• Balance of Y held by A: B(A; Y; S)
Transitions: Among the endpoints of M, we focus on SwapXforY and SwapYforX, which are the relevant endpoints for the DeFi attacks discussed within this work. p_X is a parameter that represents the amount of X the adversary intends to trade. A inputs p_X amount of X into the AMM liquidity pool and receives o_Y amount of Y as output. The constant product rule [38] requires that Equation 7 holds.
u_X(S) × u_Y(S) = (u_X(S) + (1 − f) × p_X) × (u_Y(S) − o_Y)   (7)
We define the transition functions and constraints of SwapXforY in Equation 8 (analogously for SwapYforX ).
B(A; X; S′) = B(A; X; S) − p_X
B(A; Y; S′) = B(A; Y; S) + o_Y
u_X(S′) = u_X(S) + p_X
u_Y(S′) = u_Y(S) − o_Y
where o_Y = (p_X × (1 − f) × u_Y(S)) / (u_X(S) + p_X × (1 − f))
s.t. B(A; X; S) − p_X ≥ 0   (8)
Because an AMM DEX M transparently exposes all price transitions on-chain, it can be used as a price oracle by the other DeFi platforms. The price of Y with respect to X given by M at state S is defined in Equation 9.
p_Y(M; S) = u_X(S) / u_Y(S)   (9)
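The constant product transition and the induced oracle price follow directly from Equations 7-9. The sketch below replays the pre-attack Uniswap WBTC/ETH pool quoted in Section IV-B (2,817.77 ETH, 77.09 WBTC) with the fee set to zero, as in the evaluation of Section V.

```python
# Constant product AMM (Equations 7-9): swap p_x of X for o_y of Y, then read
# the implied price oracle. Pool numbers are the pre-attack Uniswap reserves.

def swap_x_for_y(u_x, u_y, p_x, f=0.003):
    """Return (o_y, new_u_x, new_u_y) per the constant product rule (Eq. 8)."""
    o_y = p_x * (1 - f) * u_y / (u_x + p_x * (1 - f))
    return o_y, u_x + p_x, u_y - o_y

def price_y(u_x, u_y):
    """Price of Y in units of X (Equation 9), usable as an on-chain oracle."""
    return u_x / u_y

u_x, u_y = 2817.77, 77.09
print(price_y(u_x, u_y))                                # ~36.55 ETH/WBTC
o_y, u_x, u_y = swap_x_for_y(u_x, u_y, 5637.62, f=0.0)  # the attack's step 4 swap
print(o_y, price_y(u_x, u_y))  # ~51.4 WBTC out; price pumped to ~329 ETH/WBTC
```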
Automated price reserve: The automated price reserve is another type of AMM that automatically calculates the exchange price depending on the assets held in inventory. We denote a reserve holding the asset pair X/Y with R. A minimum price minP and a maximum price maxP are set when initiating R. R relies on a liquidity ratio parameter lr to calculate the asset price. We assume that R holds k_X(S) amount of X at state S. We define the price of Y in Equation 10.
P_Y(R; S) = minP × e^(lr × k_X(S))   (10)
The endpoint ConvertXtoY provided by R allows the adversary A to exchange X for Y.
State: We consider the following state variables:
• The inventory of X in the reserve: k_X(S), which equals B(R; X; S)
• Balance of X held by A: B(A; X; S)
• Balance of Y held by A: B(A; Y; S)
Transitions: We denote by h_X the amount of X that A inputs into the exchange to trade against Y. The output amount of Y is calculated by the following formula.
j_Y = (e^(−lr × h_X) − 1) / (lr × P_Y(R; S))
We define the transition functions within Equation 11.
k_X(S′) = k_X(S) + h_X
B(A; X; S′) = B(A; X; S) − h_X
B(A; Y; S′) = B(A; Y; S) + j_Y
where j_Y = (e^(−lr × h_X) − 1) / (lr × P_Y(R; S))
s.t. B(A; X; S) − h_X ≥ 0,  P_Y(R; S′) − minP ≥ 0,  maxP − P_Y(R; S′) ≥ 0   (11)
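A direct transcription of Equations 10-11 is sketched below. The parameters minP, lr and the inventory are illustrative stand-ins (the concrete Kyber reserve parameters are not given here); the output formula is taken as printed, so we report its magnitude since the printed numerator is negative for h_X > 0.

```python
# Automated price reserve (Equations 10-11): price grows exponentially in the
# X inventory; converting h_x of X yields j_y of Y. Parameter values assumed.
import math

def reserve_price(min_p, lr, k_x):
    return min_p * math.exp(lr * k_x)           # Equation 10

def convert_x_to_y(min_p, lr, k_x, h_x):
    p = reserve_price(min_p, lr, k_x)
    j_y = (math.exp(-lr * h_x) - 1) / (lr * p)  # output formula as printed
    return abs(j_y), k_x + h_x                  # |Y out|, new X inventory

j_y, k_x = convert_x_to_y(min_p=0.001, lr=0.0005, k_x=1_000.0, h_x=100.0)
print(j_y, reserve_price(0.001, 0.0005, k_x))   # price rose with the inventory
```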
Collateralized lending & borrowing:
We consider a collateralized lending platform L, which provides the CollateralizedBorrow endpoint. It requires the user to collateralize an asset X with a collateral factor cf (s.t. 0 < cf < 1) and borrow another asset Y at an exchange rate er. The collateral factor determines the upper limit that a user can borrow: for example, if the collateral factor is 0.75, a user is allowed to borrow up to 75% of the value of the collateral. The exchange rate is, for example, determined by an outsourced price oracle. z_Y denotes the maximum amount of Y available for borrowing.
State: We hence consider the following state variables and ignore the balance changes of L for simplicity:
• Balance of asset X held by A: B(A; X; S)
• Balance of asset Y held by A: B(A; Y; S)
Transitions: The parameter c_X represents the amount of asset X that A aims to collateralize. Although A is allowed to borrow less than its collateral would allow, we assume that A makes use of the entirety of its collateral. Equation 12 shows the transition functions of CollateralizedBorrow.
B(A; X; S′) = B(A; X; S) − c_X
B(A; Y; S′) = B(A; Y; S) + b_Y
where b_Y = (c_X × cf) / er
s.t. B(A; X; S) − c_X ≥ 0;  z_Y − b_Y ≥ 0   (12)
A can retrieve its collateral by repaying the borrowed asset through the endpoint CollateralizedRepay. We show the transition functions in Equation 13 and for simplicity ignore the loan interest fee.
B(A; X; S′) = B(A; X; S) + c_X
B(A; Y; S′) = B(A; Y; S) − b_Y
s.t. B(A; Y; S) − b_Y ≥ 0   (13)
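Equation 12 reduces to a one-line borrowing rule. The sketch below backs out an effective cf/er ratio from the pump attack's step 2 (5,500 ETH of collateral for 112 WBTC); the cf value itself is an illustrative assumption used only to derive a matching er.

```python
# CollateralizedBorrow (Equation 12): Y borrowable against c_x of X collateral.

def collateralized_borrow(c_x, cf, er):
    return c_x * cf / er

# Step 2 of the pump attack: 5,500 ETH bought 112 WBTC of borrowing power,
# i.e. an effective cf/er of 112/5500 ~ 0.0204 WBTC per ETH (cf assumed 0.75).
print(collateralized_borrow(5_500, cf=0.75, er=0.75 * 5_500 / 112))  # -> 112.0
```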
Margin trading: A margin trading platform T allows the adversary A to short/long an asset Y by collateralizing an asset X at a leverage ℓ, where ℓ ≥ 1. We focus on the MarginShort endpoint, which is relevant to the DeFi attack discussed in this work. We assume A shorts Y with respect to X on T. The parameter d_X denotes the amount of X that A collateralizes upfront to open the margin. w_X represents the amount of X held by T that is available for the short margin. A is required to overcollateralize at a rate of ocr in a margin trade. In our model, when a short margin (short Y with respect to X) is opened, T performs a trade on external X/Y markets (e.g. Uniswap) to convert the leveraged X to Y. The traded Y is locked until the margin is closed or liquidated.
State: In short margin trading, we consider the following state variables:
• Balance of X held by A: B(A; X; S)
• The locked amount of Y: L(A; Y; S)
Transitions: We assume T transacts on an external market at a price of emp. The transition functions and constraints are specified in Equation 14.

B(A; X; S′) = B(A; X; S) − d_X
L(A; Y; S′) = L(A; Y; S) + l_Y
where l_Y = d_X × ℓ × ocr × emp
s.t. B(A; X; S) − d_X ≥ 0
w_X + d_X − d_X × ℓ × ocr ≥ 0   (14)
D. Parametrized Optimization
Our parametrized optimizer (cf. Figure 1) is designed to solve for the optimal parameters that maximize the revenue, given an on-chain state, the DeFi models (cf. Section V-C) and an attack vector. An attack vector specifies the execution order of different endpoints across various DeFi platforms, from which we formalize a unidirectional chain of transition functions (cf. Equation 15).
S_i = T_i(S_{i−1}; p_i)   (15)

By nesting transition functions, it is trivial to obtain the cumulative state transition functions ACC_i(S_0; p_1:i) that satisfy Equation 16, where p_1:i = (p_1, ..., p_i).

S_i = T_i(S_{i−1}; p_i) = T_i(T_{i−1}(S_{i−2}; p_{i−1}); p_i) = T_i(T_{i−1}(...T_1(S_0; p_1)...; p_{i−1}); p_i) = ACC_i(S_0; p_1:i)   (16)
Therefore the constraints generated in each step can be expressed as Equation 17.
C_i(S_i; p_i) ⇐⇒ C_i(ACC_i(S_0; p_1:i); p_i)   (17)
We assume an attack vector composed of N transition functions. The objective function can be calculated from the initial state S 0 and the final state S N (e.g. the increase of the adversarial balance).
O(S_0; S_N) ⇐⇒ O(S_0; ACC(S_0; p_1:N))   (18)
Given the initial state S 0 , we formulate an attack vector into a constrained optimization problem with respect to all the parameters p 1:N (cf. Equation 19).
maximize O(S_0; ACC(S_0; p_1:N))
s.t. C_i(ACC_i(S_0; p_1:i); p_i)  ∀i ∈ [1, N]   (19)
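The chaining of Equations 15-19 amounts to folding the transition functions over the state while accumulating every generated constraint; the optimizer then searches the parameter vector subject to all of them. The toy flash-loan vector below is our own illustration of these mechanics.

```python
# Fold an attack vector (T_1..T_N) over the initial state, collecting the
# constraints C_1..C_N; the result is ACC(S_0; p_1:N) as in Equation 16.

def run_vector(s0, transitions, params):
    state, all_constraints = s0, []
    for T, p in zip(transitions, params):
        state, constraints = T(state, p)   # S_i = T_i(S_{i-1}; p_i)
        all_constraints.extend(constraints)
    return state, all_constraints

# Toy vector: borrow then repay a flash loan of size p against liquidity z_x.
z_x = 1_000.0
loan = lambda s, p: (s + p, [z_x - p])                   # Equation 4 constraint
repay = lambda s, p: (s - p * 1.0009, [s - p * 1.0009])  # 0.09% interest

final, cons = run_vector(0.0, [loan, repay], [500.0, 500.0])
print(final, cons)  # infeasible: the interest is unfunded, so one value is < 0
```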
E. Optimizing the Pump and Arbitrage
In the following, we evaluate our parametrized optimization framework on the existing attacks described in Section IV. Figure 8 summarizes the on-chain state when the attack was executed (i.e. S 0 ). We use these blockchain records as the initial state in our evaluation. X and Y denote ETH and WBTC respectively. For simplicity, we ignore the trading fees in the constant product AMM (i.e. f = 0 for M). The endpoints executed in the pump and arbitrage attack are listed in the execution order as follows.
1) Loan (dYdX)
2) CollateralizedBorrow (Compound)
3) MarginShort (bZx) & SwapXforY (Uniswap)
4) SwapYforX (Uniswap)
5) Repay (dYdX)
6) SellXforY & CollateralizedRepay (Compound)

In the pump and arbitrage attack vector, we intend to tune the following two parameters: (i) p_1: the amount of X collateralized to borrow Y in endpoint 2), and (ii) p_2: the amount of X collateralized to short Y in endpoint 3). Following the procedure of Section V-D, we proceed with detailing the construction of the constraint system.
0) Initial state: We assume the initial balance of X owned by A is B_0 (cf. Equation 20), and we refer the reader to Figure 8 for the remaining initial state values.

B(A; X; S_0) = B_0   (20)
1) Loan: A takes a flash loan of X amounting to p_1 + p_2 in total:
B(A; X; S_1) = B_0 + p_1 + p_2
with the constraints p_1 ≥ 0, p_2 ≥ 0, v_X − p_1 − p_2 ≥ 0.

2) CollateralizedBorrow: A collateralizes p_1 amount of X to borrow Y from the lending platform L:
B(A; X; S_2) = B(A; X; S_1) − p_1 = B_0 + p_2
B(A; Y; S_2) = (p_1 × cf) / er
with the constraint z_Y − (p_1 × cf) / er ≥ 0.

3) MarginShort & SwapXforY: A opens a short margin with p_2 amount of X at a leverage of ℓ on the margin trading platform T; T swaps the leveraged X for Y at the constant product AMM M:
B(A; X; S_3) = B(A; X; S_2) − p_2 = B_0
u_X(S_3) = u_X(S_0) + p_2 × ℓ × ocr
u_Y(S_3) = (u_X(S_0) × u_Y(S_0)) / u_X(S_3)
L(A; Y; S_3) = u_Y(S_0) − u_Y(S_3)
with the constraint w_X + p_2 − p_2 × ℓ × ocr ≥ 0.

4) SwapYforX: A dumps all the borrowed Y at M:
B(A; Y; S_4) = 0
u_Y(S_4) = u_Y(S_3) + B(A; Y; S_2)
u_X(S_4) = (u_X(S_3) × u_Y(S_3)) / u_Y(S_4)
B(A; X; S_4) = B_0 + u_X(S_3) − u_X(S_4)

5) Repay: A repays the flash loan:
B(A; X; S_5) = B(A; X; S_4) − p_1 − p_2
with the constraint B(A; X; S_4) − p_1 − p_2 ≥ 0.

6) SellXforY & CollateralizedRepay: A buys Y from the market at the market price p_m and retrieves the collateral from L:
B(A; X; S_6) = B(A; X; S_5) + p_1 − B(A; Y; S_2) × p_m

Fig. 9. Constraints generated for the pump and arbitrage attack: p_1 ≥ 0, p_2 ≥ 0; v_X − p_1 − p_2 ≥ 0; z_Y − (p_1 × cf)/er ≥ 0; w_X + p_2 − p_2 × ℓ × ocr ≥ 0; B_0 + u_X(S_0) + p_2 × ℓ × ocr − u_X(S_4) − p_1 − p_2 ≥ 0. We remark that u_X(S_4) is a nonlinear component with respect to p_1 and p_2.
The objective function is the adversarial ETH revenue (cf. Equation 21).
O(S_0; p_1; p_2) = B(A; X; S_6) − B_0
= u_X(S_3) − u_X(S_4) − p_2 − p_m × B(A; Y; S_2)
= u_X(S_0) + p_2 × ℓ × ocr − u_X(S_4) − p_2 − (p_1 × cf × p_m) / er   (21)
Constraints: We summarize the constraints in Figure 9: five linear constraints and one nonlinear constraint, which implies that the optimization can be solved efficiently.
F. Finding Optimal Pump and Arbitrage Parameters
We apply the Sequential Least Squares Programming (SLSQP) algorithm from SciPy 10 to solve the optimization problem. Our program is evaluated on an Ubuntu 18.04.2 machine with 16 CPU cores and 32 GB RAM. We repeated our experiment 1,000 times; the optimizer spent on average 6.1 ms converging to the optimum.
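A hedged reconstruction of this setup is sketched below: SLSQP maximizes a simplified version of the Equation 21 revenue over (p_1, p_2). The model keeps only the constant product pool and fixed rates; the borrow rate, margin multiplier and loan cap are backed out from the numbers quoted in Section IV-B and Figure 6, not taken from the full Figure 8 state, so the result should only land near the reported optimum.

```python
# Simplified pump-and-arbitrage optimization with SciPy's SLSQP (cf. Eq. 21).
# Rates below are illustrative values derived from Section IV-B / Figure 6.
from scipy.optimize import minimize

u_x0, u_y0 = 2817.77, 77.09    # pre-attack Uniswap ETH/WBTC reserves
borrow_rate = 112.0 / 5500.0   # WBTC borrowed per ETH of collateral (step 2)
buyback = 39.08                # ETH/WBTC later paid to repay the WBTC loan
mult = 6314.97 / 1456.23       # effective margin multiplier (from Figure 6)
v_x = 10_000.0                 # flash loan liquidity (the attack borrowed 10k ETH)

def revenue(p):
    """Adversarial ETH revenue, following the structure of Equation 21."""
    p1, p2 = p
    u_x3 = u_x0 + p2 * mult        # pool X after the leveraged short swap
    u_y3 = u_x0 * u_y0 / u_x3      # constant product invariant
    b_y = p1 * borrow_rate         # WBTC borrowed against p1 ETH
    u_y4 = u_y3 + b_y              # dump the borrowed WBTC into the pool
    u_x4 = u_x3 * u_y3 / u_y4
    return (u_x3 - u_x4) - p2 - buyback * b_y

cons = [{"type": "ineq", "fun": lambda p: v_x - p[0] - p[1]}]  # v_X - p1 - p2 >= 0
res = minimize(lambda p: -revenue(p), x0=[1_000.0, 1_000.0], method="SLSQP",
               bounds=[(0.0, None), (0.0, None)], constraints=cons)
print(res.x, -res.fun)  # should converge near (2,470; 1,456) and ~2.8k ETH
```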
Optimal pump and arbitrage parameters:
The optimizer provides a maximum revenue of 2,778.94 ETH when setting the parameters (p_1; p_2) to (2,470.08; 1,456.23), while in the original attack the parameters (5,500; 1,300) only yield 1,171.70 ETH. Note that because we ignore trading fees, and due to precision differences, there is a minor discrepancy between the original attack revenue calculated with our model and the real revenue of 1,193.69 ETH (cf. Section IV). This is an 829.5k USD gain over the attack that took place, using the price of ETH at that time.
Optimal parameter validation:
We experimentally validate the optimal pump and arbitrage attack by forking the Ethereum blockchain with Ganache 11 at block 9484687 (one block prior to the original attack transaction). We then implement the pump and arbitrage attack in Solidity v0.6.3. In the pump and arbitrage attack, revenues are divided into two parts: part one from the flash loan transaction, and part two from the follow-up operations in later blocks (cf. Section IV) that repay the loan. For simplicity, we chose to only validate the first part.
G. Optimizing the Oracle Manipulation Attack
In the oracle manipulation attack, X denotes ETH and Y denotes sUSD. Again, we ignore the trading fees in the constant product AMM (i.e. f = 0 for M). The initial state variables are presented in Figure 10. We assume that A owns zero balance of X or Y. We list the endpoints involved in the oracle manipulation attack vector as follows.
1) Loan (bZx)
2) SwapXforY (Uniswap)
3) ConvertXtoY (Kyber reserve)
4) SellXforY (Synthetix)
5) CollateralizedBorrow (bZx)
6) Repay (bZx)
There are three parameters to optimize in this attack: (i) p_1: the amount of X used to swap for Y in step 2); (ii) p_2: the amount of X used to convert to Y in step 3); (iii) p_3: the amount of X used to sell for Y in step 4). We construct the constrained optimization problem as follows.
1) Loan: A takes a flash loan of X amounting to p_1 + p_2 + p_3:
B(A; X; S_1) = p_1 + p_2 + p_3
with the constraints p_1 ≥ 0, p_2 ≥ 0, p_3 ≥ 0, v_X − p_1 − p_2 − p_3 ≥ 0.

2) SwapXforY: A swaps p_1 amount of X for Y at the constant product AMM M:
B(A; X; S_2) = B(A; X; S_1) − p_1 = p_2 + p_3
u_X(S_2) = u_X(S_0) + p_1
u_Y(S_2) = (u_X(S_0) × u_Y(S_0)) / u_X(S_2)
B(A; Y; S_2) = u_Y(S_0) − u_Y(S_2)

3) ConvertXtoY: A converts p_2 amount of X to Y at the automated price reserve R:
B(A; X; S_3) = B(A; X; S_2) − p_2 = p_3
k_X(S_3) = k_X(S_0) + p_2
P_Y(R; S_3) = minP × e^(lr × k_X(S_3))
B(A; Y; S_3) = B(A; Y; S_2) + (e^(−lr × p_2) − 1) / (lr × P_Y(R; S_0))
s.t. maxP − P_Y(R; S_3) ≥ 0

4) SellXforY: A sells p_3 amount of X for Y at the price p_m:
B(A; X; S_4) = B(A; X; S_3) − p_3 = 0
B(A; Y; S_4) = B(A; Y; S_3) + p_3/p_m
with the constraint maxY − p_3/p_m ≥ 0.

5) CollateralizedBorrow: A collateralizes all owned Y to borrow X according to the price given by the constant product AMM M (i.e. the exchange rate er = 1/p_Y(M; S_2)):
B(A; Y; S_5) = 0
B(A; X; S_5) = B(A; Y; S_4) × cf × p_Y(M; S_2)
with the constraint z_Y − B(A; Y; S_4) × cf × p_Y(M; S_2) ≥ 0.

6) Repay: A repays the flash loan:
B(A; X; S_6) = B(A; X; S_5) − p_1 − p_2 − p_3
with the constraint B(A; X; S_5) − p_1 − p_2 − p_3 ≥ 0.

The objective function is the remaining balance of X after repaying the flash loan (cf. Equation 22).

O(S_0; p_1; p_2; p_3) = B(A; X; S_6) = B(A; X; S_5) − p_1 − p_2 − p_3 = B(A; Y; S_4) × cf × p_Y(M; S_2) − p_1 − p_2 − p_3   (22)
Constraints: We summarize the produced constraints of the oracle manipulation attack vector in Figure 11. Five constraints are linear and the other two are nonlinear.
H. Finding Optimal Oracle Manipulation Parameters
We execute our optimizer 1,000 times on the same Ubuntu 18.04.2 machine with 16 CPU cores and 32 GB RAM. The average convergence time is 12.9 ms.

Fig. 11. Constraints generated for the oracle manipulation attack. Objective function: B(A; Y; S_4) × cf × p_Y(M; S_2) − p_1 − p_2 − p_3. Constraints: p_1 ≥ 0, p_2 ≥ 0, p_3 ≥ 0; v_X − p_1 − p_2 − p_3 ≥ 0; maxP − minP × e^(lr × (k_X(S_0) + p_2)) ≥ 0; maxY − p_3/p_m ≥ 0; z_Y − B(A; Y; S_4) × cf × p_Y(M; S_2) ≥ 0. We remark that B(A; Y; S_4) and p_Y(M; S_2) are nonlinear components with respect to p_1, p_2 and p_3.
Optimal oracle manipulation parameters: The optimizer discovers that setting (p_1; p_2; p_3) to (898.58; 546.80; 3,517.86) results in about 6,323.93 ETH of profit for the adversary. This results in a gain of 1.1M USD instead of about 600k USD.
Optimal parameter validation:
We fork the Ethereum blockchain with Ganache at block 9504626 (one block prior to the original adversarial transaction). We then implement the oracle manipulation attack in Solidity v0.6.3. We validate that executing the adversarial smart contract with parameters (p_1; p_2; p_3) = (898.58; 546.8; 3,517.86) renders a profit of 6,262.28 ETH, while the original attack parameters yield 2,381.41 ETH. The attack consumes 11.3M gas (which exceeds the block gas limit of 9.7M on the Ethereum main network). By analyzing the adversarial validation contract, we find that 460 is the maximum value of p_2 that keeps the gas consumption under the block limit. Following a similar methodology to Section V-F, we add the new constraint to the optimizer, which then gives the optimal parameters (714.3; 460; 3,517.86). The augmented validation contract makes a profit of 4,167.01 ETH and consumes 9.6M gas.
VI. Discussion
The current generation of DeFi has developed organically, without much scrutiny when it comes to financial security; it therefore presents an interesting security challenge to confront. DeFi, on the one hand, welcomes innovation and the advent of new protocols, such as MakerDAO, Compound, and Uniswap. On the other hand, despite a great deal of effort spent on trying to secure smart contracts [28], [23], [11], [40], [37], and to avoid various forms of market manipulation [33], [34], [7], there has been little-to-no effort to secure entire protocols.
As such, new DeFi protocols join the ecosystem, which leads to both exploits against the protocols themselves as well as multi-step attacks that utilize several protocols, such as the two attacks in Section IV. In a certain poignant way, this highlights the fact that DeFi, lacking a central authority that would enforce a strong security posture, is ultimately vulnerable to a multitude of attacks effectively by design. Flash loans are merely a mechanism that accelerates these attacks. They do so by requiring no collateral (except for the minor gas costs), which, in a certain way, democratizes the attack, opening this strategy to the masses. However, it is quite likely that other mechanisms will be invented that enable further, potentially even more devastating, attacks in the near future.
Responsible disclosure: It is somewhat unclear how to perform responsible disclosure within DeFi, given that the underlying vulnerability and victim are not always perfectly clear and that there is a lack of security standards to apply. We plan to reach out to Aave, Kyber, and Uniswap to disclose the contents of this paper.
Determining what is malicious: An interesting question remains whether we can qualify the uses of flash loans (cf. Section III) as clearly malicious (or clearly benign). We believe this is a difficult question to answer and prefer to withhold the value judgement. The two attacks in Section IV are clearly malicious: the pump and arbitrage involves manipulating the WBTC/ETH price on Uniswap; the oracle manipulation attack involves distorting a price oracle by manipulatively lowering the price of ETH against sUSD on Kyber. However, the arbitrage mechanism in general is not malicious; it is merely a consequence of the decentralized nature of the DeFi ecosystem, where many exchanges and DEXes are allowed to exist without much coordination with each other. As such, arbitrage will continue to exist as a phenomenon, with good and bad consequences.
Does extra capital help:
The main attraction of flash loans stems from them not requiring a collateral that needs to be raised. One can, however, wonder whether extra capital would make the attacks we focus on more potent and the ROI greater. Based on our results, extra collateral for the two attacks of Section IV would not increase the ROI, as the liquidity constraints of the intermediate protocols do not allow for a higher impact.
Potential defenses:
Here we discuss several potential defenses. However, we would be the first to admit that these are not foolproof and come with potential downsides that would significantly hamper normal interactions.
• Should DEXes accept trades coming from flash loans?
• Should DEXes accept coins from an address if the previous block did not show those funds in the address?
• Would introducing a delay make sense, e.g. in governance voting, or price oracles?
• When designing a DeFi protocol, a single transaction should be limited in its abilities: a DEX should not allow a single transaction to trigger slippage beyond 100%.
Looking into the future: In the future, we anticipate DeFi protocols eventually starting to comply with a higher standard of security testing, both within the protocol itself and as part of integration testing into the DeFi ecosystem. We believe that this may eventually lead to some form of DeFi standards for financial security, similar to what is imposed on banks and other financial institutions in traditional centralized (government-controlled) finance. We anticipate that whole-system penetration testing and an analytical approach of modeling the space of possibilities, as in this paper, are two ways to improve future DeFi protocols.
VII. Related Work
There is a growing body of work focusing on various forms of manipulation and financially-driven attacks in cryptocurrency markets. Because some of the phenomena presented in this paper are so new, there is a paucity of directly related work. However, existing work can be divided into the following categories.
Crypto manipulation: A thorough crypto manipulation study by Daian et al. [12] analyses the behaviour of competitive arbitrage bots. Gandal et al. [19] demonstrate that the unprecedented spike in the USD-BTC exchange rate in late 2013 was possibly caused by price manipulation. Makarov et al. [29] probe arbitrage opportunities in crypto markets. Many scholars use GARCH models to fit the time series of the Bitcoin price. Dyhrberg et al. [16] explore the financial asset capabilities of Bitcoin and suggest categorizing Bitcoin as something between gold and the US Dollar; Katsiampa [25] emphasizes modelling accuracy and recommends the AR-CGARCH model for price retro-fitting. Bariviera et al. [6] compute the Hurst exponent by means of the Detrended Fluctuation Analysis method and conclude that market liquidity does not affect the level of long-range dependence. Corbet et al. [10] demonstrate that Bitcoin shows the characteristics of a speculative asset rather than a currency, also in the presence of futures trading in Bitcoin. Some recent papers focus on the phenomenon of pump-and-dump schemes for manipulating crypto coin prices [41], [24].
Governance attacks: DeFi protocols such as MakerDAO [31] operate with a decentralized governance mechanism. Holders of a voting token are eligible to propose and vote on changes to the protocol. By design, an entity that is capable of amassing a sufficient number of voting tokens is eligible to unilaterally perform significant changes (e.g. to liquidate and receive all collateral). Governance attacks [35] could be aggravated with flash loans [22].
VIII. Conclusion
This paper is the first one to present a detailed exploration of the flash loan mechanism on the Ethereum network. While proposed as a clever mechanism within DeFi, flash loans are starting to be used as financial attack vectors to effectively pull money in the form of cryptocurrency out of DeFi. In this paper we analyze existing flash loan-based attacks in detail and then proceed to propose optimizations that significantly improve the ROI of these attacks. Specifically, we are able to show how two previously executed attacks can be "boosted" to result in a revenue of 829.5k USD and 1.1M USD, respectively, which is a boost of 2.37× and 1.73×, respectively.
Fig. 2. Flash loan approach, summarized.
Fig. 4. Flash mint example.
Fig. 5. Classifying the usage of flash loans in the wild, based on an analysis of transactions between the 8th of January, 2020 and the 26th of February, 2020 on Aave [5]. Unknown indicates a private contract we could not attach an owner to.
Fig. 8. Initial on-chain states of the pump and arbitrage attack.
Fig. 3. Wash trading cost on two Uniswap markets with flash loans costing 0.09% (Aave) and a constant of 1 Wei (dYdX), respectively. The 24-hour volumes of the ETH/DAI and ETH/WBTC markets were 963,786 USD and 67,690 USD respectively (1st of March, 2020). (x-axis: growth rate of volumes, 0-100%; y-axis: cost in USD, 0-3,000; series: ETH/DAI and ETH/WBTC at 0.09% and 1 Wei interest.)
Fig. 6. Procedure diagram of the pump and arbitrage attack. Solid boxes represent single state change operations; dashed boxes represent aggregated state change operations. The attack consists of two parts: a single flash loan transaction (block 9484688) with steps 1) borrow 10,000.00 ETH (dYdX); 2) lend 5,500.00 [optimal: 2,470.08] ETH to mint cETH (Compound); 3) borrow 112.00 [50.77] WBTC with the cETH collateral (Compound); 4) 5× short 1,300.00 [1,456.23] ETH against WBTC (bZx), converting 5,637.62 [6,314.97] ETH to 51.35 [53.30] WBTC (Uniswap); 5) convert 112.00 [50.77] WBTC to 6,871.41 [6,219.28] ETH (Uniswap); 6) pay back 10,000.00 WETH (dYdX); and loan redemption transactions executed multiple times in blocks 9484917-9496602 with steps a) convert 4,377.72 [1,984.11] ETH to 112.00 [50.77] WBTC (Kyber); b) return 112.00 [50.77] WBTC for cETH (Compound); c) return cETH for 5,500.00 [2,470.08] ETH (Compound). The numbers in brackets are the optimal parameters found by our parametrized optimizer (cf. Section V).
Fig. 7. Procedure diagram of the oracle manipulation attack. Solid boxes represent single state change operations; dashed boxes represent aggregated state change operations. The numbers within rectangles represent the optimal parameters found by our parametrized optimizer (cf. Section V).
Footnotes:
2 Transaction id: 0x4555a69b40fa465b60406c4d23e2eb98d8aee51def21faa28bb7d2b4a73ab1a9.
3 Address: 0xc73e0383F3Aff3215E6f04B0331D58CeCf0Ab849.
6 cf. https://etherscan.io/address/0x09b4c8200f0cb51e6d44a1974a1bc07336b9f47f#code
7 Address: 0x398eC7346DcD622eDc5ae82352F02bE94C62d119.
8 Executed on the 15th of February, 2020, transaction id: 0xb5c8bd9430b6cc87a0e2fe110ece6bf527fa4f170a4bc8cd032f768fc5219838, 264.71 USD/ETH.
9 Executed on the 18th of February, 2020, transaction id: 0x762881b07feb63c436dee38edd4ff1f7a74c33091e534af56c9f7d49b5ecac15, 282.91 USD/ETH.
10 https://www.scipy.org/. We use the minimize function in the optimize package.
11 https://www.trufflesuite.com/ganache
Appendix A. Known project address mapping:
{ "…": "OneLeverage",
"0x1F573D6Fb3F13d689FF844B4cE37794d79a7FF1C": "Bancor",
"0xFa8C4B17ac43A025977F5feD843B6c8c4EA52F1c": "DSProxy",
"0x2E642b8D59B45a1D8c5aEf716A84FF44ea665914": "Uniswap",
"0xE03374cAcf4600F56BDDbDC82c07b375f318fc5C": "Bancor",
"0x309627af60F0926daa6041B8279484312f2bf060": "Bancor",
"0x09cabEC1eAd1c0Ba254B09efb3EE13841712bE14": "Uniswap",
"0x0D8775F648430679A709E98d2b0Cb6250d2887EF": "BAT",
"0x207737F726c13C1298B318D233AAa6164EE6b712": "DSProxy",
"0x818E6FECD516Ecc3849DAf6845e3EC868087B755": "Kyber",
"0x35D1b3F3D7966A1DFe207aa4514C12a259A0492B": "MakerDAO",
"0x39755357759cE0d7f32dC8dC45414CCa409AE24e": "Oasis",
"0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48": "USDC",
"0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2": "WETH9",
"0x7778d1011e19C0091C930d4BEfA2B0e47441562A": "OneLeverage",
"0x89d24A6b4CcB1B6fAA2625fE562bDD9a23260359": "SAI",
"0xd3ec78814966Ca1Eb4c923aF4Da86BF7e6c743bA": "Bancor",
"0x19c0976f590D67707E62397C87829d896Dc0f1F1": "MakerDAO",
"0x35A679A2A63F774BBEc5E80E32aE436BC3b5d98e": "DSProxy" }
References
BTI market surveillance report, September 2019. https://www.bti.live/bti-september-2019-wash-trade-report/. (Accessed on 02/24/2020).
SEC Glossary, 2019.
Arbitrage, 2020.
bZx network, 2020.
Aave. Aave Protocol. https://github.com/aave/aave-protocol, 2020.
Aurelio F. Bariviera, María José Basgall, Waldo Hasperué, and Marcelo Naiouf. Some stylized facts of the Bitcoin market. Physica A: Statistical Mechanics and its Applications, 484:82-90, 2017.
Iddo Bentov, Yan Ji, Fan Zhang, Yunqi Li, Xueyuan Zhao, Lorenz Breidenbach, Philip Daian, and Ari Juels. Tesseract: Real-Time Cryptocurrency Exchange using Trusted Hardware. Conference on Computer and Communications Security, 2019.
Joseph Bonneau, Andrew Miller, Jeremy Clark, Arvind Narayanan, Joshua A. Kroll, and Edward W. Felten. SoK: Research perspectives and challenges for Bitcoin and cryptocurrencies. In Security and Privacy (SP), 2015 IEEE Symposium on, pages 104-121. IEEE, 2015.
CoinMarketCap. Bitcoin market capitalization, 2019.
Shaen Corbet, Brian Lucey, Maurice Peat, and Samuel Vigne. Bitcoin Futures - What use are they? Economics Letters, 172:23-27, 2018.
Crytic. Echidna: Ethereum fuzz testing framework.
Philip Daian, Steven Goldfeder, Tyler Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, and Ari Juels. Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges. IEEE Security and Privacy, 2020.
Bernardo David, Peter Gaži, Aggelos Kiayias, and Alexander Russell. Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake blockchain. In Conference on the Theory and Applications of Cryptographic Techniques, pages 66-98. Springer, 2018.
John R. Douceur. The Sybil attack. In International Workshop on Peer-to-Peer Systems, pages 251-260. Springer, 2002.
Anne Haubo Dyhrberg. Bitcoin, gold and the dollar - A GARCH volatility analysis. Finance Research Letters, 16:85-92, 2015.
Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, and Robbert Van Renesse. Bitcoin-NG: A scalable blockchain protocol. In USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), 2016.
Compound Finance. Compound finance, 2019.
Neil Gandal, JT Hamrick, Tyler Moore, and Tali Oberman. Price manipulation in the Bitcoin ecosystem. Journal of Monetary Economics, 95(4):86-96, 2018.
Juan Garay, Aggelos Kiayias, and Nikos Leonardos. The Bitcoin backbone protocol: Analysis and applications. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 281-310. Springer, 2015.
Arthur Gervais, Ghassan O. Karame, Karl Wüst, Vasileios Glykantzis, Hubert Ritzdorf, and Srdjan Capkun. On the security and performance of proof of work blockchains. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 3-16. ACM, 2016.
Lewis Gudgeon, Daniel Perez, Dominik Harz, Arthur Gervais, and Benjamin Livshits. The decentralized financial crisis: Attacking DeFi, 2020.
Bo Jiang, Ye Liu, and W. K. Chan. ContractFuzzer: Fuzzing smart contracts for vulnerability detection. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 259-269. ACM, 2018.
Josh Kamps and Bennett Kleinberg. To the moon: defining and detecting cryptocurrency pump-and-dumps. Crime Science, 7, 2018.
Paraskevi Katsiampa. Volatility estimation for Bitcoin: A comparison of GARCH models. Economics Letters, 158:3-6, 2017.
Kyber. Kyber. https://kyber.network/, 2020.
Aurora Labs. IDEX: A real-time and high-throughput Ethereum smart contract exchange. Technical report, January 2019.
Loi Luu, Duc-Hiep Chu, Hrishi Olickel, Prateek Saxena, and Aquinas Hobor. Making smart contracts smarter. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 254-269, 2016.
Igor Makarov and Antoinette Schoar. Trading and Arbitrage in Cryptocurrency Markets. 2018.
Igor Makarov and Antoinette Schoar. Trading and arbitrage in cryptocurrency markets. Journal of Financial Economics, 135(2):293-319, 2020.
Maker. MakerDAO. https://makerdao.com/en/, 2019.
MakerDAO. Intro to the OasisDEX protocol, September 2019. Accessed 12 November 2019. https://github.com/makerdao/developerguides/blob/master/Oasis/intro-to-oasis/intro-to-oasis-maker-otc.md
Vasilios Mavroudis. Market Manipulation as a Security Problem. arXiv preprint arXiv:1903.12458, 2019.
Vasilios Mavroudis and Hayden Melton. Libra: Fair Order-Matching for Electronic Financial Exchanges. arXiv preprint arXiv:1910.00321, 2019.
Micah Zoltu. How to turn $20M into $340M in 15 seconds. https://medium.com/coinmonks/how-to-turn-20m-into-340m-in-15-seconds-48d161a42311, 2019. (Online; accessed 9 February 2020).
Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008.
Petar Tsankov, Andrei Dan, Dana Drachsler-Cohen, Arthur Gervais, Florian Buenzli, and Martin Vechev. Securify: Practical security analysis of smart contracts. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 67-82. ACM, 2018.
Uniswap. https://uniswap.io. Accessed November 2019.
Gavin Wood. Ethereum: A secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper, 2014.
Valentin Wüstholz and Maria Christakis. Harvey: A greybox fuzzer for smart contracts. arXiv:1905.06944, 2019.
Jiahua Xu and Benjamin Livshits. The anatomy of a cryptocurrency pump-and-dump scheme. In Proceedings of the USENIX Security Symposium, August 2019.
"Computer Science and Engineering\nBiomedical and Chemical Engineering\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nChemistry The Pennsylvania State University University Park\nThe Pennsylvania State University University Park\nSyracuse University Syracuse\n16802, 16802, 13244, 16802, 16802, 16802PA, PA, NY, PA, PA, PA",
"Computer Science and Engineering\nBiomedical and Chemical Engineering\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nChemistry The Pennsylvania State University University Park\nThe Pennsylvania State University University Park\nSyracuse University Syracuse\n16802, 16802, 13244, 16802, 16802, 16802PA, PA, NY, PA, PA, PA",
"Computer Science and Engineering\nBiomedical and Chemical Engineering\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nInformation Sciences and Technology The Pennsylvania State University University Park\nChemistry The Pennsylvania State University University Park\nThe Pennsylvania State University University Park\nSyracuse University Syracuse\n16802, 16802, 13244, 16802, 16802, 16802PA, PA, NY, PA, PA, PA"
] | [] | We report on the Gaussian file search system designed as part of the ChemXSeer digital library. Gaussian files are produced by the Gaussian software [4], a software package used for calculating molecular electronic structure and properties. The output files are semi-structured, allowing relatively easy access to the Gaussian attributes and metadata. Our system is currently capable of searching Gaussian documents using a boolean combination of atoms (chemical elements) and attributes. We have also implemented a faceted browsing feature on three important Gaussian attribute types -Basis Set, Job Type and Method Used. The faceted browsing feature enables a user to view and process a smaller, filtered subset of documents. | null | [
"https://arxiv.org/pdf/1104.4601v2.pdf"
] | 15,516,965 | 1104.4601 | 6c6886416831dc7db32244e1d60872fb3a87c2de |
ChemXSeer Digital Library Gaussian Search

Shibamouli Lahiri ([email protected]), Juan Pablo Fernández Ramírez ([email protected]), Shikha Nangia, Prasenjit Mitra ([email protected]), C. Lee Giles ([email protected]), and Karl T. Mueller

Computer Science and Engineering, Information Sciences and Technology, and Chemistry, The Pennsylvania State University, University Park, PA 16802; Biomedical and Chemical Engineering, Syracuse University, Syracuse, NY 13244

Categories and Subject Descriptors: H.3.7 [Information Storage and Retrieval]: Digital Libraries; H.5.2 [Information Interfaces and Presentation]: User Interfaces: graphical user interfaces (GUI), interaction styles, screen design, user-centered design

General Terms: Design, Documentation

Keywords: ChemXSeer, Gaussian software, Chemoinformatics, Faceted search

ABSTRACT

We report on the Gaussian file search system designed as part of the ChemXSeer digital library. Gaussian files are produced by the Gaussian software [4], a software package used for calculating molecular electronic structure and properties. The output files are semi-structured, allowing relatively easy access to the Gaussian attributes and metadata. Our system is currently capable of searching Gaussian documents using a boolean combination of atoms (chemical elements) and attributes. We have also implemented a faceted browsing feature on three important Gaussian attribute types: Basis Set, Job Type, and Method Used. The faceted browsing feature enables a user to view and process a smaller, filtered subset of documents.
INTRODUCTION
ChemXSeer is a digital library and data repository for the Chemoinformatics and Computational Chemistry domains [8]. It currently offers search functionalities on papers and formulae, CHARMM calculation data, and Gaussian computation data, and also features a comprehensive search facility on chemical databases. A table search functionality [7], similar in spirit to the one featured in CiteSeerX¹, is currently under development. Gaussian document search has been a key component of ChemXSeer from its inception. The alpha version of Gaussian search featured a simple query box and an SQL back-end. Here we describe the next generation of Gaussian search², which includes a customized user interface for Computational Chemistry researchers, boolean query functionality on a pre-specified set of attributes, and a faceted browsing option over three key attribute types. The current version of Gaussian search is powered by Apache Solr³, a state-of-the-art open-source enterprise search engine indexer.
The organization of this paper is as follows. In Section 2, we give a brief overview of the Gaussian software and Gaussian files, emphasizing the need for a customized search interface rather than a simple one. Description of the search interface appears in Section 3, followed by a brief sketch of related work in Section 4. We conclude in Section 5, outlining our contributions and providing directions for future improvement.
GAUSSIAN FILES
Computational chemists perform Gaussian calculations to determine properties of a chemical system using a wide array of computational methods. The methods include molecular mechanics, ground state semi-empirical, self-consistent field, and density functional calculations. Computational methods such as these are key to the upsurge of interest in chemical calculations, partly because they allow fast, reliable, and reasonably easy analysis, modeling, and prediction of known and proposed systems (e.g., atoms, molecules, solids, proposed drugs, etc.) under a wide range of physical constraints, and partly because of the availability of well-tested, comprehensive software packages like Gaussian that implement many of these methods with good tradeoff between accuracy and processing time.
The Gaussian software is actually a suite of several different chemical computation models, including packages for molecular mechanics, Hartree-Fock methods, and semi-empirical calculations. While the exact details of the functionalities of this software are beyond the scope of this paper⁴, we would like the reader to note that each run of the Gaussian software is equivalent to conducting a chemical experiment with certain inputs and under certain physicochemical conditions. The output of the software consists of a large amount of information returned to the user via the computer console and usually redirected to a suitably-named output file. We are interested in these output files, henceforth referred to as "Gaussian files" or "Gaussian documents".
The Gaussian files contain detailed information about the calculations being performed on the system of interest. Although the details of the calculations are essential for the analysis of the system being studied, the output file can be cumbersome to a new user. Each Gaussian file begins with the issued command that initiated a particular calculation, followed by copyright information, memory and hard disk specification, basis set, job type, method used, and several different matrices (e.g., Z-matrix, distance matrix, orientation matrix, etc.). It may also contain other information like rotational constants, trust radius, maximum number of steps, and steps in a particular run. Gaussian files are semi-structured (Figure 1) in the sense that these parameters tend to appear in a particular order or with explicit markups.
Since Gaussian files are important to the design, testing and prediction of new chemical systems, ChemXSeer had integrated a search functionality on these files. The alpha version of Gaussian search interface only consisted of a simple query box (Figure 2), and the back-end of the search engine was an SQL database that stored data extracted from the Gaussian files. Although simple, the interface allowed users to type in fielded queries and view results in an easy-to-understand format. In the current version, we have retained many aspects of the alpha version, including parts of the search results page and visual representation of individual Gaussian files.
However, our domain experts argued that a more complex interface including faceted search was justified, partly because it eases the task of a researcher by limiting the number of search results to examine, and partly because such interfaces have already been successfully implemented [9]. A computational chemist usually knows what kinds of parameters he/she is looking for in a Gaussian files database, and therefore it makes sense to refine search results using this information. We identified three important parameters towards this end -Job Type, Method Used and Basis Set. There are other parameters and metadata that we can extract from the Gaussian files, but they are not as important from a domain expert's point of view. These are Charge, Degree of Freedom, Distance Matrix, Energy, Input Orientation, Mulliken Atomic Charge, Multiplicity, Optimized Parameters, Frequencies, Thermo-chemistry, Thermal Energy, Shielding Tensors, Reaction Path, PCM, and Variational Results. Metadata like ID, Title and File Path are used in organizing the search results.
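As a rough illustration of how such attributes might be pulled from the semi-structured output, the following Python sketch parses a Gaussian route line (the line beginning with "#"). The regular expression and the field names are our own assumptions for illustration, not the actual ChemXSeer extractor, and real output files would need more robust handling.

```python
import re

# Hypothetical route-line parser; real Gaussian outputs need more robust handling.
ROUTE_RE = re.compile(
    r"^\s*#\s*(?:[NPT]\s+)?(?P<method>[\w()+-]+)/(?P<basis>[\w()*+,-]+)(?P<jobs>.*)$"
)

def parse_route(line):
    """Extract Method Used, Basis Set, and Job Type keywords from a route line."""
    m = ROUTE_RE.match(line)
    if m is None:
        return None
    return {
        "methodused": m.group("method").lower(),
        "basisset": m.group("basis").lower(),
        "jobtype": [w.lower() for w in m.group("jobs").split()],
    }

print(parse_route("# B3LYP/6-31G(d) Opt Freq"))
# {'methodused': 'b3lyp', 'basisset': '6-31g(d)', 'jobtype': ['opt', 'freq']}
```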
SYSTEM DESCRIPTION
The basic query to the Gaussian search system is an atom (i.e., element) or a collection of atoms. The system returns all Gaussian files containing those atoms. However, as experienced by researchers, such basic queries often return a large number of search results, many of which are not relevant. While we can think of improving the ranking of search results in tune with traditional information retrieval research, domain experts have informed us that since Gaussian files are semi-structured, a faceted browsing option would be more appropriate. It remains open, however, whether ranking within each facet could be improved. Currently we rank the search results by their external IDs, because our domain experts were not overly concerned with the ranking.
The system architecture is given in Figure 3. It has three principal components: the query interface, the search results page, and the Gaussian file description page. The user supplies a query using the query interface, consisting of atoms (mandatory field), method used, job type, and basis set. The last three fields are optional, and can be combined in boolean AND/OR fashion. The boolean query goes to the Gaussian document index, which in turn returns on the search results page all Gaussian files satisfying the boolean query. The search results page contains links to individual Gaussian file descriptions, which in turn link to the actual Gaussian documents. Figure 3 also indicates that the index was generated from the Gaussian documents using Apache Solr.
The lower section of Figure 3 explains the faceted browsing part. Facets are created based on three attributes -job type, method used, and basis set. Each facet link consists of an attribute, its value, and the number of search results under the current set that satisfy this value. The search results page contains links to different values of the attributes. When the user clicks on such a link, a refined query is sent to the Gaussian document index and the resulting smaller set of search results is returned.
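To make this flow concrete, here is a minimal Python sketch of such a faceted request against a Solr endpoint. The core and field names (gaussian, atoms, jobtype, methodused, basisset) are hypothetical placeholders since the actual schema is not published here; q, fq, facet, and facet.field are standard Solr query parameters.

```python
import requests

def faceted_search(atoms, jobtype=None, rows=10):
    """Boolean atom query with facet counts; optionally refine by one Job Type."""
    params = {
        "q": " AND ".join("atoms:%s" % a for a in atoms),
        "facet": "true",
        "facet.field": ["jobtype", "methodused", "basisset"],
        "rows": rows,
        "wt": "json",
    }
    if jobtype is not None:
        params["fq"] = "jobtype:%s" % jobtype  # clicking a facet link adds this filter
    r = requests.get("http://localhost:8983/solr/gaussian/select", params=params)
    return r.json()

# e.g. all documents containing C and H, refined to geometry optimizations:
# results = faceted_search(["C", "H"], jobtype="opt")
```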
The implementation of our query interface (Figure 4) was inspired partly by the EMSL Basis Set Exchange interface⁵, and partly by the requirements mentioned by our domain experts. Our interface features a periodic table of elements, where users can click to select and de-select each element (atom) individually. The selected elements appear together in the textbox at the bottom of the table. Users can specify whether they want search results that contain only the selected elements (no more and no less), or whether they want search results that contain the selected elements as well as other elements. After selecting elements, users can optionally select Job Type, Method Used, and Basis Set from the drop-down menus provided. They can also directly type in the desired values for these attributes in the textboxes. Finally, they can specify AND/OR from another drop-down menu. The default option is AND. Fourteen Job Type categories (values), sixteen Method Used categories, and two Basis Set categories are provided in the drop-down menus. These categories are given in Table 1. Each category has several sub-categories that are dealt with by the search system. For example, if a user specifies "Hartree-Fock" as the Method Used category, the system will search for four sub-categories of Hartree-Fock: hf, rhf, rohf, and uhf. These sub-categories were specified by our domain experts. A sample of Method Used sub-categories is given in Table 2, which shows the sub-categories for three Method Used categories: Molecular Mechanics, CI Methods, and CBS Methods. For the Basis Set attribute there are many categories, but only two options are provided in the drop-down menu to keep it short and simple. Users can type in the category (e.g., 3-21G*) in the textbox provided.

⁵ https://bse.pnl.gov/bse/portal
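The category-to-subcategory expansion can be sketched as a simple lookup. The sub-categories below are the ones listed in the text and in Table 2, while the field name methodused is an assumption on our part.

```python
METHOD_SUBCATEGORIES = {
    "Hartree-Fock": ["hf", "rhf", "rohf", "uhf"],
    "Molecular Mechanics": ["amber", "dreiding", "uff"],
    "CI Methods": ["cis", "cis(d)", "cid", "cisd", "qcisd", "qcisd(t)", "sac-ci"],
    "CBS Methods": ["cbs-4m", "cbs-lq", "cbs-q", "cbs-qb3", "cbs-apno"],
}

def expand_method(category):
    """Turn a drop-down category into an OR clause over its sub-categories."""
    subs = METHOD_SUBCATEGORIES.get(category, [category.lower()])
    return "methodused:(%s)" % " OR ".join('"%s"' % s for s in subs)

print(expand_method("Hartree-Fock"))
# methodused:("hf" OR "rhf" OR "rohf" OR "uhf")
```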
Ten search results are shown on one search results page (Figure 5), with the total number of results shown at the top. Note that the left part of the search results page (Figure 5) contains links for faceted browsing, and the right part contains the actual results. Each search result consists of a link to the corresponding Gaussian file description and a one-line summary of the file containing attribute information. The Gaussian file description (Figure 6) consists of a Jmol [5] rendering of the system being studied, followed by a summary of the Gaussian job and information about attributes extracted from the file. The summary contains a link to the Gaussian document (Figure 6). Currently we have indexed 2148 documents.
The faceted browsing section (left half of Figure 5) follows the architectural specification of Figure 3. Users can refine search results at any time simply by clicking on a particular attribute category. An "All Results" link has been provided to help users quickly find the original set of results. Anecdotal evidence from our domain experts suggests that the faceted browsing feature has been able to significantly cut down on the number of search results to examine, thereby saving a considerable amount of time on the part of a Computational Chemistry researcher. Moreover, since each facet link gives the number of search results to examine for a particular attribute category, a user can readily obtain a visual appreciation of the distribution of search results across different attribute categories for a single query.
The core search and indexing functionality of Gaussian search is currently provided by Apache Solr, an open-source state-of-the-art enterprise search server designed to handle, among other things, faceted search, boolean queries, and multivalued attributes. In our case, the atoms (chemical elements) in a Gaussian document comprise a multivalued attribute. Each Gaussian document was converted by our home-grown metadata extractor into an XML-style file suitable for ingestion into Solr. The selection of Solr as the back-end platform for this system was partly motivated by the need to integrate the ChemXSeer architecture with SeerSuite⁶, a package of open-source software tools that powers the CiteSeerX digital library.

⁶ http://sourceforge.net/projects/citeseerx/
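A minimal sketch of that export step is shown below; the <add><doc><field .../></doc></add> structure is the standard Solr XML update format, while the record layout and field names are assumptions on our part.

```python
from xml.sax.saxutils import escape

def to_solr_xml(record):
    """Serialize one extracted Gaussian record as a Solr update document."""
    fields = []
    for name, value in record.items():
        values = value if isinstance(value, list) else [value]
        for v in values:  # multivalued fields (e.g. atoms) simply repeat the tag
            fields.append('    <field name="%s">%s</field>' % (name, escape(str(v))))
    return "<add>\n  <doc>\n%s\n  </doc>\n</add>" % "\n".join(fields)

print(to_solr_xml({
    "id": "g_000001",
    "atoms": ["C", "H", "O"],
    "jobtype": "opt",
    "methodused": "b3lyp",
    "basisset": "6-31g*",
}))
```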
RELATED WORK
In this section we give a brief sketch of the related work. The importance of using large databases to support chemistry calculations has been illustrated by Feller in [3]. Schuchardt, et al., describe such a database, the Basis Set Exchange [9]. Basis Set Exchange helps users find particular basis sets that work on certain collections of atoms, while ChemXSeer lets users search Gaussian files with basis sets as boolean query components.
Among other purely chemistry-domain digital libraries, OREChem ChemXSeer by Li, et al. [6] integrates semantic web technology with the basic ChemXSeer framework. The Chemical Education Digital Library [2] and the JCE (Journal of Chemical Education) Digital Library [1] focus on organizing instructional and educational materials in Chemistry. Both these projects are supported by NSF under the National Science Digital Library (NSDL). In contrast with these studies, our focus here is to design a search functionality on Gaussian files that helps domain experts locate attribute information more easily.
CONCLUSION
In this paper our contributions are two-fold:
• design of a new search engine for Computational Chemistry research on documents produced by the widely used Gaussian software, and
• design of a metadata extractor that sieves out several important attributes from the Gaussian documents and exports them into Solr-ingestible XML format.

Future work consists of integration of documents from the ChemXSeer Digital Library with Gaussian files so that users can have an integrated view of calculations, results, and analysis. The metadata extractor could also be improved. There are a few cases where our metadata extractor could not locate certain attribute values, mainly due to the anomalous placement of those attributes in the Gaussian output files. The structure of these documents appeared inconsistent in certain places. Information extraction techniques may be useful for handling these cases. Another area of potential research is improving the ranking of search results. Although our domain experts were not concerned with ranking, it remains to be seen if combining attribute information can help pull up more relevant files earlier in the ranking. Finally, Section 2 indicates the presence of several other attributes in the Gaussian documents. It would be interesting to explore whether these attributes are useful and can be leveraged to produce additional relevant information.
ACKNOWLEDGMENTS
This work was partially supported by the National Science Foundation award CHE-0535656.
Figure 1: Screenshot of a Gaussian document.

Figure 2: First-generation Gaussian query interface.

Figure 3: Gaussian search system architecture.

Figure 4: Gaussian query interface.

Figure 5: A search results page.

Figure 6: A Gaussian file description.
Table 1: Gaussian Attribute Categories

Job Type: Any, Single Point, Opt, Freq, IRC, IRCMax, Force, ONIOM, ADMP, BOMD, Scan, PBC, SCRF, NMR
Method Used: Any, Semi-empirical, Molecular Mechanics, Hartree-Fock, MP Methods, DFT Methods, Multilevel Methods, CI Methods, Coupled Cluster Methods, CASSCF, BD, OVGF, Huckel, Extended Huckel, GVB, CBS Methods
Basis Set: Any, gen
Table 2: A sample of Method Used sub-categories

Molecular Mechanics: amber, dreiding, uff
CI Methods: cis, cis(d), cid, cisd, qcisd, qcisd(t), sac-ci
CBS Methods: cbs-4m, cbs-lq, cbs-q, cbs-qb3, cbs-apno
¹ http://citeseerx.ist.psu.edu/
² http://cxs05.ist.psu.edu:8080/ChemXSeerGaussianSearch
³ http://lucene.apache.org/solr/
⁴ For details, please see http://www.gaussian.com/g_tech/g_ur/g09help.htm
REFERENCES

[1] JCE Digital Library. http://jchemed.chem.wisc.edu/JCEDLib/index.html.
[2] The Chemical Education Digital Library. http://www.chemeddl.org/.
[3] D. Feller. The role of databases in support of computational chemistry calculations. Journal of Computational Chemistry, 17(13):1571-1586, 1996.
[4] M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, et al. Gaussian03. Gaussian, Inc., Wallingford, CT, 2003.
[5] R. M. Hanson. Jmol: a paradigm shift in crystallographic visualization. Journal of Applied Crystallography, 43(5 Part 2):1250-1260, Oct 2010.
[6] N. Li, L. Zhu, P. Mitra, K. Mueller, E. Poweleit, and C. L. Giles. oreChem ChemXSeer: a semantic digital library for chemistry. In Proceedings of the 10th Annual Joint Conference on Digital Libraries (JCDL '10), pages 245-254, New York, NY, USA, 2010. ACM.
[7] Y. Liu, K. Bai, P. Mitra, and C. L. Giles. TableSeer: automatic table metadata extraction and searching in digital libraries. In Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL '07), pages 91-100, New York, NY, USA, 2007. ACM.
[8] P. Mitra, C. L. Giles, B. Sun, and Y. Liu. ChemXSeer: a digital library and data repository for chemical kinetics. In Proceedings of the ACM First Workshop on CyberInfrastructure: Information Management in eScience (CIMS '07), pages 7-10, New York, NY, USA, 2007. ACM.
[9] K. L. Schuchardt, B. T. Didier, T. Elsethagen, L. Sun, V. Gurumoorthi, J. Chase, J. Li, and T. L. Windus. Basis Set Exchange: a community database for computational sciences. Journal of Chemical Information and Modeling, 47(3):1045-1052, 2007.
Symmetry Breaking Vacua in Lovelock Gravity

David Kastor ([email protected]) and Çetin Şentürk

Amherst Center for Fundamental Interactions, Department of Physics, University of Massachusetts, Amherst, MA 01003

Abstract: Higher curvature Lovelock gravity theories can have a number of maximally symmetric vacua with different values of the curvature. Critical surfaces in the space of Lovelock couplings separate regions with different numbers of such vacua, and there exist symmetry breaking regions with no maximally symmetric vacua. Especially in such regimes, it is interesting to ask what reduced symmetry vacua may exist. We study this question, focusing on vacua that are products of maximally symmetric spaces. For low order Lovelock theories, we assemble a map of such vacua over the Lovelock coupling space, displaying different possibilities for vacuum symmetry breaking. We see indications of interesting structure, with e.g. product vacua in Gauss-Bonnet gravity covering the entirety of the symmetry breaking regime in 5-dimensions, but only a limited portion of it in 6-dimensions.
Introduction
Higher curvature gravitational interactions have been investigated in a great many physical contexts. Among such models, the special class of Lovelock gravity theories [1] is distinguished via having field equations that depend only on the Riemann tensor, and not on its derivatives, and hence include at most second derivatives of the metric tensor. This has a number of important consequences at both the classical and quantum levels. For example, it leads to a Hamiltonian formulation in terms of the standard canonical gravitational degrees of freedom [2] and to the absence of the ghost degrees of freedom that are typical of higher curvature theories [3,4].
Lovelock theories include a single interaction term at each higher curvature order, so that the Lagrangian in n spacetime dimensions is given by $\mathcal{L} = \sum_{k=0}^{p} c_k \mathcal{L}_k$ with³

$$\mathcal{L}_k = \frac{1}{2^k}\,\delta^{\alpha_1\beta_1\ldots\alpha_k\beta_k}_{\mu_1\nu_1\ldots\mu_k\nu_k}\, R^{\mu_1\nu_1}{}_{\alpha_1\beta_1}\cdots R^{\mu_k\nu_k}{}_{\alpha_k\beta_k}, \qquad (1)$$
with the upper limit of the sum given by p ≡ [(n−1)/2]. The term L 0 gives the cosmological constant term in the action, while L 1 gives the Einstein-Hilbert term and the L k with k ≥ 2 are higher curvature terms. The Lagrangian truncates because the interactions L k vanish identically for k > n/2, while for n even the variation of L n/2 gives a total divergence and hence does not contribute to the equations of motion 4 . This truncation distinguishes between even and odd dimensions. Moving up from an even dimension to the next higher odd dimension, a new Lovelock interaction is introduced. However, no new term is introduced in even dimensions. The coefficients c k are the couplings of the theory, and we will be interested in how the space of possible vacua of Lovelock gravity varies as a function of these couplings.
The simplest vacua of Lovelock theories are maximally symmetric ones, and depending on the values of the couplings c k , Lovelock gravity in n-dimensions may have up to p such vacua with distinct curvatures [5,6,7]. Assuming that n > 1, the curvature of a maximally symmetric spacetime has the form
$$R^{\mu\nu}{}_{\alpha\beta} = K\,\delta^{\mu\nu}_{\alpha\beta}, \qquad (2)$$
where the constant K is related to the scalar curvature according to $K = \frac{1}{n(n-1)}R$. The maximally symmetric spacetime is then either Minkowski (M), de Sitter (dS), or anti-de Sitter (AdS) spacetime⁵, depending on whether K = 0, K > 0, or K < 0. When the Riemann tensor has the constant curvature form (2), the Lovelock equations of motion reduce to a pth-order polynomial equation for K. Only real roots of this equation correspond to physical vacua. Therefore, while there exists a region of coupling space with p distinct maximally symmetric vacua, there are also regions with fewer such vacua. For p odd there will be at least one maximally symmetric vacuum. However, for p even there is a range of couplings such that no maximally symmetric vacuum exists. We can think of the surfaces in coupling space that divide such regions, with different numbers of maximally symmetric vacua, as critical surfaces of the theory.

³ The antisymmetrized Kronecker symbol used here has overall strength k! and is defined by $\delta^{\alpha_1\ldots\alpha_k}_{\beta_1\ldots\beta_k} = k!\,\delta^{[\alpha_1}_{\beta_1}\cdots\delta^{\alpha_k]}_{\beta_k} = k!\,\delta^{\alpha_1}_{[\beta_1}\cdots\delta^{\alpha_k}_{\beta_k]}$.

⁴ In spacetime dimension n = 2k, the term $\mathcal{L}_k$ is the Euler density, whose integral over a compact manifold without boundary is topologically invariant.
Consider, for example, Gauss-Bonnet gravity in dimensions n ≥ 5 where only the couplings c k with k = 0, 1, 2 are taken to be nonzero 6 . In this case, the equations of motion yield a quadratic equation for the curvature constant K. For fixed values of c 0 and c 1 , one finds that there is always either a maximum or minimum value of c 2 , depending on the sign of c 0 , beyond which maximally symmetric vacua no longer exist. It is then natural to ask what vacua exist beyond this critical value of the coupling c 2 . Such vacua will necessarily have less than the maximal symmetry allowed in a given spacetime dimension.
In this paper, we will investigate a simple class of alternative vacua with reduced amounts of symmetry and examine how the number and existence of such vacua vary over the space of Lovelock couplings. The vacua we consider are products of maximally symmetric space(times). In n spacetime dimensions, we will write this product in the form $K^d_1 \times K^{n-d}_2$.
Here, the first factor $K^d_1$ is a Lorentzian maximally symmetric spacetime of dimension d with curvature constant K₁, while the second factor $K^{n-d}_2$ is an (n − d)-dimensional maximally symmetric Euclidean space with curvature constant K₂. The existence of such reduced symmetry vacua introduces a new set of regions and critical surfaces in coupling space. We will see how these regions either extend, or fail to extend, the region in which maximally symmetric vacua exist. Some special cases of such product vacua have already been discussed in the literature. The product vacua $M^4 \times S^3$ and more generally $K^4_1 \times S^3$ in third order Lovelock gravity were studied in [8] and [9] respectively. The Nariai and (anti-)Nariai [10,11] type factorizations $dS_2 \times S^{n-2}$ and $AdS_2 \times H^{n-2}$ were investigated in Einstein gravity [12] and in Gauss-Bonnet gravity [13], while the Bertotti-Robinson [14,15] type factorization $AdS_2 \times S^{n-2}$ was investigated in Einstein gravity [12], in generic Lovelock gravity [16], in pure Lovelock gravity theories [17], and in 5-dimensional quadratic gravity [18]. The product $K^2_1 \times K^3_2$ in 5-dimensional Gauss-Bonnet gravity was considered by [19,20]. Related work has also appeared in references [21,22], which study dynamical compactification in the broken symmetry regime of Gauss-Bonnet gravity, with vacua that are products of a 4D FRW spacetime with a compact space having a time dependent scale factor. The dynamics of possible compactifications has also been studied in [23,24,25].
This paper is organized as follows. In Section (2) we give some more details of Lovelock gravity. In order to keep our analysis of product vacua tractable we will restrict our analysis to at most cubic order interactions in the curvature. In Section (3), in order to orient the subsequent discussion, we recall the maximally symmetric and product vacua of Einstein gravity. In Section (4) we look at the maximally symmetric and product vacua of Gauss-Bonnet gravity in n = 5 and n = 6 dimensions. In Section (5), we study product vacua in third order Lovelock theory, making a further restriction of the couplings to keep the problem tractable. Finally, we offer some concluding remarks in Section (6).
Low order Lovelock theory
In practice, in order to keep our analysis of product vacua tractable, we will restrict our attention to Lovelock theories including only the first few, relatively low order interaction terms. Accordingly, we will work with the theory described by the action
$$I = \frac{1}{16\pi G_n}\int d^n x\,\sqrt{-g}\,\left(-2\Lambda_0 + R + \alpha_2 \mathcal{L}_2 + \alpha_3 \mathcal{L}_3\right), \qquad (3)$$
where we have written the cosmological and Einstein-Hilbert terms in the action in their conventional forms, while leaving the 2nd and 3rd order Lovelock terms in the compact form (1). The explicit form of the second order Gauss-Bonnet term, which is dynamically relevant in dimensions n ≥ 5, is given by
$$\mathcal{L}_2 = R^2 - 4R^\mu{}_\alpha R^\alpha{}_\mu + R^{\mu\nu}{}_{\alpha\beta}R^{\alpha\beta}{}_{\mu\nu}. \qquad (4)$$
The explicit form for the third order term, which is relevant in dimensions n ≥ 7, is unwieldy. The equations of motion of a general Lovelock theory are given by $\sum_{k=0}^{p} c_k\, G^{(k)\mu}{}_\nu = 0$, where

$$G^{(k)\mu}{}_\nu = -\frac{1}{2^{k+1}}\,\delta^{\mu\alpha_1\beta_1\ldots\alpha_k\beta_k}_{\nu\sigma_1\kappa_1\ldots\sigma_k\kappa_k}\, R^{\sigma_1\kappa_1}{}_{\alpha_1\beta_1}\cdots R^{\sigma_k\kappa_k}{}_{\alpha_k\beta_k}, \qquad (5)$$
and the expression for $G^{(1)\mu}{}_\nu$ reproduces the ordinary Einstein tensor $G^\mu{}_\nu$. For our theory (3) the equations of motion are then given by

$$\Lambda_0\,\delta^\mu_\nu + G^\mu{}_\nu + \alpha_2 G^{(2)\mu}{}_\nu + \alpha_3 G^{(3)\mu}{}_\nu = 0. \qquad (6)$$
In order to orient the discussion below, we will first set the couplings α 2 = α 3 = 0 and look at product vacua of Einstein gravity. We will then take α 2 to be nonvanishing and study the problem for Gauss-Bonnet gravity 7 . Finally, we will allow α 3 to be nonzero, although we will restrict our attention to a subclass of theories with a definite relation between the couplings α 2 and α 3 , in order to make the analysis manageable.
[Figure 1: Maximally symmetric vacua of Einstein gravity (α₂ = α₃ = 0) as a function of Λ₀: AdS for Λ₀ < 0, Minkowski (M) for Λ₀ = 0, and dS for Λ₀ > 0.]
Product vacua in Einstein gravity
In order to orient the discussion of product vacua in Lovelock theories, we first present the analysis for Einstein gravity, setting α₂ = α₃ = 0 in our theory (3). Our theory is then parameterized by the cosmological constant Λ₀. For each value of the cosmological constant, the theory has a maximally symmetric vacuum, with curvature constant K related to the cosmological constant by
$$K = \frac{2\Lambda_0}{(n-1)(n-2)}. \qquad (7)$$
The different possible ranges for the cosmological constant, Λ₀ < 0, Λ₀ = 0, and Λ₀ > 0, correspond to AdS, Minkowski, and dS vacua respectively. We illustrate this situation in Figure (1), which serves as the prototype for subsequent, more complicated figures which we will use to display our results. In this case, the figure shows the different types of maximally symmetric vacua corresponding to different values of the cosmological constant. There is no evidence of critical behavior in this case.
We now consider certain reduced symmetry vacua of Einstein gravity, taking the n-dimensional spacetime manifold to be a direct product of two maximally symmetric submanifolds $K^d_1$ and $K^{n-d}_2$ having dimensions d and n − d respectively. We will assume for the moment that n ≥ 4 and that n − 1 > d > 1, and consider the case of 1-dimensional submanifolds separately. The metric is then taken to have the form
$$ds^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu = g_{ab}(u)\,du^a du^b + g_{ij}(v)\,dv^i dv^j, \qquad (8)$$
where $g_{ab}(u)$ is the metric on $K^d_1$, which is assumed to have Lorentzian signature, and $g_{ij}(v)$ is the metric on the manifold $K^{n-d}_2$, which is taken to have Euclidean signature. Coordinate indices along $K^d_1$ have been denoted by a, b, c, ... and coordinate indices for $K^{n-d}_2$ by i, j, k, .... This construction enables us to decompose the Riemann tensor $R^{\mu\nu}{}_{\alpha\beta}$ of the full spacetime as

$$R^{ab}{}_{cd} = R^{ab}_{(1)cd}, \qquad R^{ij}{}_{kl} = R^{ij}_{(2)kl}, \qquad R^{ai}{}_{bj} = 0, \qquad (9)$$

where $R^{ab}_{(1)cd}$ and $R^{ij}_{(2)kl}$ are the Riemann tensors of $K^d_1$ and $K^{n-d}_2$, respectively, which are each assumed to have the constant curvature form

$$R^{ab}_{(1)cd} = K_1\,\delta^{ab}_{cd}, \qquad R^{ij}_{(2)kl} = K_2\,\delta^{ij}_{kl}, \qquad (10)$$

with curvature constants K₁ and K₂.

[Figure 2: Product vacua of Einstein gravity for n ≥ 4 and 2 ≤ d ≤ n − 2: AdS^d × H^{n−d} for Λ₀ < 0, M^d × E^{n−d} for Λ₀ = 0, and dS^d × S^{n−d} for Λ₀ > 0.]

Plugging into the Einstein equation then yields for these constants
$$K_1 = \frac{2\Lambda_0}{(d-1)(n-2)}, \qquad K_2 = \frac{2\Lambda_0}{(n-d-1)(n-2)}. \qquad (11)$$
We see in particular that K 1 and K 2 necessarily each have the same sign as the cosmological constant Λ 0 . The product vacua we consider, therefore, are always of the form AdS d ×H n−d for Λ 0 negative, dS d × S n−d for Λ 0 positive, and M d × E n−d for vanishing cosmological constant. This result is illustrated in Figure (2). As with the maximally symmetric vacua, there is no evidence of critical behavior for the product vacua. Such vacua exist for all values of the cosmological constant.
The cases d = 1 and d = n − 1, where one of the submanifolds K d 1 or K n−d 2 has dimension 1 and is therefore flat, require special handling. In these cases we see that one or the other of the terms in the formal solution in (11) diverges. This can be traced back to an inconsistency between the different components of the Einstein equations, which implies that there is no product vacuum unless Λ 0 = 0, as illustrated in Figure (3).
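As a quick numerical illustration (not part of the original text), the sketch below evaluates the two independent components of the Einstein equations for a d = 1 product in n = 5: they fix K₂ differently unless Λ₀ = 0.

```python
n, L0 = 5, 1.0

# uu-component:  -(n-1)(n-2) K2 / 2 + Lambda0 = 0
K2_uu = 2.0 * L0 / ((n - 1) * (n - 2))
# ij-component:  (n-2) K2 - (n-1)(n-2) K2 / 2 + Lambda0 = 0
K2_ij = 2.0 * L0 / ((n - 2) * (n - 3))

print(K2_uu, K2_ij)                 # 0.1667 vs 0.3333: inconsistent
print(abs(K2_uu - K2_ij) < 1e-12)   # False unless Lambda0 = 0
```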
Product vacua in Gauss-Bonnet gravity
Now equipped with an understanding of product vacua in Einstein gravity, we move on to consider such vacua in Gauss-Bonnet gravity, which is obtained by setting the coupling α 3 = 0 in the action (3).
[Figure 3: Product vacua with a 1-dimensional flat factor in Einstein gravity (n ≥ 3, d = 1 or d = n − 1): only R × E^{n−1} and M^{n−1} × R, both at Λ₀ = 0.]
Maximally symmetric vacua
Let us first consider the maximally symmetric vacua, having constant curvature (2). We will characterize these solutions by an effective cosmological constant Λ related to the curvature constant K by
$$\Lambda = \frac{(n-1)(n-2)}{2}\,K, \qquad (12)$$
which is the relation that holds between the curvature and cosmological constants for constant curvature solutions in Einstein gravity (7). It is well known that the equations of motion for constant curvature solutions in Gauss-Bonnet gravity reduce to a quadratic equation, which is given in terms of the effective cosmological constant by

$$\tilde{\alpha}_2\Lambda^2 + \Lambda - \Lambda_0 = 0, \qquad (13)$$

where $\tilde{\alpha}_2 = \frac{2(n-3)(n-4)}{(n-1)(n-2)}\,\alpha_2$.
The maximally symmetric solutions are then characterized by the effective cosmological constants
$$\Lambda_\pm = -\frac{1}{2\tilde{\alpha}_2}\left(1 \pm \sqrt{1 + 4\tilde{\alpha}_2\Lambda_0}\right). \qquad (14)$$
Since only real values of Λ correspond to physical vacua, the number of such solutions will depend on the cosmological constant Λ₀ and the coupling strength α₂ of the Gauss-Bonnet term. For sufficiently small values of α̃₂ there will always be two physical maximally symmetric vacua. In the limit of small Gauss-Bonnet coupling, such that |4α̃₂Λ₀| ≪ 1, these are given approximately by
$$\Lambda_- \simeq \Lambda_0, \qquad \Lambda_+ \simeq -\frac{1}{\tilde{\alpha}_2}. \qquad (15)$$
One sees that Λ₋ matches on to the vacuum of Einstein gravity in this limit, while the Λ₊ branch goes off to infinite curvature. For this reason, the corresponding branches of solutions in (14) are known as the Einstein and Gauss-Bonnet branches respectively.

[Figure 4: Maximally symmetric vacua of Gauss-Bonnet gravity. The two branches are the solutions Λ± of (14), and Λ₀^CS(>) and Λ₀^CS(<) represent the CS points when α̃₂ > 0 and α̃₂ < 0, respectively. For each sign of α̃₂ there are two maximally symmetric vacua for Λ₀ > Λ₀^CS ≡ −1/(4α̃₂), a single vacuum at Λ₀ = Λ₀^CS, and none beyond, with the branches labeled dS^(±), M^(−), and AdS^(±).]
The set of maximally symmetric vacua of Gauss-Bonnet gravity is displayed in Figure (4). To read the figure, envision fixing a value of the Gauss-Bonnet coupling α₂, or equivalently α̃₂, and asking how the number and character of the maximally symmetric vacua vary with the value of the cosmological constant Λ₀. The behavior is qualitatively the same for all values of α̃₂ > 0, which are displayed in the top half of the diagram, and also for all values of α̃₂ < 0, which are displayed on the bottom half. For α̃₂ > 0, it follows from (14) that two distinct vacua exist for all values of Λ₀ > Λ₀^CS, while at the critical value
$$\Lambda_0^{CS} = -\frac{1}{4\tilde{\alpha}_2}, \qquad (16)$$
the two branches of solutions become degenerate and a single unique maximally symmetric vacuum exists. We call this critical point the CS point because in n = 5 dimensions the theory can be re-expressed as a Chern-Simons theory when the Gauss-Bonnet coupling and cosmological constant are related in this way (see reference [26]). It also follows from (14) that no physical vacua exist for Λ 0 < Λ CS 0 . This implies that whatever vacuum solutions exist in this region of coupling space will necessarily be symmetry breaking ones.
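As a numerical illustration of (14) and the critical value (16) (with illustrative couplings, not values from the paper), a short Python sketch:

```python
import math

def gb_vacua(alpha2t, L0):
    """Effective cosmological constants of (14); alpha2t is the tilded coupling."""
    disc = 1.0 + 4.0 * alpha2t * L0
    if disc < 0:            # Lambda_0 below the CS value: no maximally symmetric vacua
        return []
    r = math.sqrt(disc)
    return [-(1.0 + r) / (2.0 * alpha2t),   # Gauss-Bonnet branch Lambda_+
            -(1.0 - r) / (2.0 * alpha2t)]   # Einstein branch Lambda_-

print(gb_vacua(0.1, -1.0))  # two vacua (Lambda_0^CS = -2.5 here)
print(gb_vacua(0.1, -3.0))  # beyond the CS point: none
```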
We can also examine the character of the maximally symmetric vacua on the two branches for Λ₀ > Λ₀^CS. One finds that the sign of the effective cosmological constant Λ₋ for the Einstein branch is precisely correlated with the cosmological constant Λ₀, so that the solutions along this branch are dS, Minkowski, or AdS depending on whether the value of Λ₀ is positive, zero, or negative. These are respectively the dS^(−), M^(−), and AdS^(−) vacua indicated in Figure (4). On the Gauss-Bonnet branch, by contrast, the sign of Λ₊ in (14) is always opposite to that of α̃₂, giving the AdS^(+) vacua for α̃₂ > 0 and the dS^(+) vacua for α̃₂ < 0 shown in the figure.
Product vacua
We now turn to vacua which, as above, are products $K^d_1 \times K^{n-d}_2$ of maximally symmetric submanifolds. The Gauss-Bonnet equations of motion for such product vacua reduce to a coupled set of quadratic equations for the curvature constants K₁ and K₂, given by

$$\alpha_2\left[\frac{(d-1)!}{(d-5)!}K_1^2 + \frac{2(d-1)!\,D!}{(d-3)!\,(D-2)!}K_1 K_2 + \frac{D!}{(D-4)!}K_2^2\right] + \frac{(d-1)!}{(d-3)!}K_1 + \frac{D!}{(D-2)!}K_2 - 2\Lambda_0 = 0, \qquad (17)$$

$$\alpha_2\left[\frac{(D-1)!}{(D-5)!}K_2^2 + \frac{2(D-1)!\,d!}{(D-3)!\,(d-2)!}K_1 K_2 + \frac{d!}{(d-4)!}K_1^2\right] + \frac{(D-1)!}{(D-3)!}K_2 + \frac{d!}{(d-2)!}K_1 - 2\Lambda_0 = 0, \qquad (18)$$

where D = n − d.
The linear equations that result from setting the Gauss-Bonnet coupling α₂ to zero were used above to obtain the product solutions in Einstein gravity (11). However, for α₂ ≠ 0 we cannot write down a general analytic solution to the equations. For sufficiently small values of d or D, however, the equations simplify and yield interesting results, and we will focus on such cases. In particular, we will focus on product vacua for Gauss-Bonnet gravity in n = 5 dimensions, which is the lowest dimension in which the Gauss-Bonnet term is relevant, and in n = 6 dimensions, where it is also the highest order Lovelock term. The subsequent term $\mathcal{L}_3$, which is cubic in the curvature, becomes relevant in n = 7 dimensions.
In n = 5 dimensions we consider the cases of 3 + 2 and 2 + 3 dimensional splits, which differ only in which factor of the product K d 1 × K n−d 2 is Lorentzian and which is Euclidean.
Taking the d = 3, D = 2 case first, the equations of motion simplify to

$$8\alpha_2 K_1 K_2 + 2(K_1 + K_2) - 2\Lambda_0 = 0, \qquad (19)$$
$$6K_1 - 2\Lambda_0 = 0.$$
The resulting curvature constants can then be written in the form
$$K_1 = \frac{\Lambda_0}{3}, \qquad K_2 = \frac{1}{1 - \Lambda_0/\Lambda_0^{CS}}\cdot\frac{2}{3}\Lambda_0, \qquad (20)$$
where $\Lambda_0^{CS} = -\frac{3}{4\alpha_2}$ is the CS value of the cosmological constant (16) in n = 5 dimensions. These values of the curvature constants K₁ and K₂ approach those for Einstein gravity (11) in the limit of small Gauss-Bonnet coupling.
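A quick check (with illustrative values α₂ = 0.1, Λ₀ = 1) that (20) indeed solves the simplified system (19):

```python
a2, L0 = 0.1, 1.0
L0cs = -3.0 / (4.0 * a2)                  # CS value in n = 5
K1 = L0 / 3.0
K2 = (2.0 * L0 / 3.0) / (1.0 - L0 / L0cs)

print(8 * a2 * K1 * K2 + 2 * (K1 + K2) - 2 * L0)  # ~0
print(6 * K1 - 2 * L0)                             # 0
```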
We display these results in Figure (5).

The case in n = 5 dimensions of a product of a maximally symmetric d = 2 dimensional Lorentzian spacetime with a maximally symmetric D = 3 dimensional Euclidean space is very similar. The equations of motion (17) and (18) again reduce to the drastically simplified system (19), but with the curvature constants swapped, so that the solutions are now given by
$$K_1 = \frac{1}{1 - \Lambda_0/\Lambda_0^{CS}}\cdot\frac{2}{3}\Lambda_0, \qquad K_2 = \frac{\Lambda_0}{3}. \qquad (21)$$
These results are displayed in Figure (6), which is very similar to Figure (5), the key difference being that moving through the CS value of the cosmological constant changes the sign of the curvature of the Lorentzian, rather than the Euclidean, part of the product in this case.
It is intriguing that the critical point occurs at Λ₀ = Λ₀^CS for both these product vacua, this being the same as the critical point that separates different regimes for maximally symmetric vacua.

We now move on to discuss product vacua for Gauss-Bonnet gravity in n = 6 spacetime dimensions, which we will see display different types of critical behavior. We will consider 3+3, 4+2 and 2+4 splits into Lorentzian and Euclidean factors in turn. In n = 5 dimensions we saw that the product vacua existed for all values of the cosmological constant, providing possible vacuum states in the broken symmetry regime. We will see that this is no longer the case in n = 6 dimensions.

For a 3 + 3 split one finds that the equations of motion (17) and (18) simplify to
$$24\alpha_2 K_1 K_2 + 2K_1 + 6K_2 - 2\Lambda_0 = 0, \qquad (22)$$
$$24\alpha_2 K_1 K_2 + 2K_2 + 6K_1 - 2\Lambda_0 = 0, \qquad (23)$$
which has the two solutions K₁ = K₂ = K±, where

$$K_\pm = \frac{\Lambda_0^c}{2}\left(1 \pm \sqrt{1 - \frac{\Lambda_0}{\Lambda_0^c}}\right) \qquad (24)$$

and $\Lambda_0^c = -\frac{1}{3\alpha_2}$ is a new critical point that arises in this system. Focusing on α₂ > 0, which is illustrated in the top half of Figure (7), there will be two physical 3 + 3 product solutions for Λ₀ > Λ₀^c and none in the regime Λ₀ < Λ₀^c, with a unique solution at the critical point. In n = 6 dimensions, one finds from (16) that Λ₀^CS = −5/(12α₂), so that with α₂ > 0 one has the ordering Λ₀^CS < Λ₀^c. It follows that in the symmetry breaking regime Λ₀ < Λ₀^CS, where no maximally symmetric vacua exist, there are also no 3 + 3 split product vacua. One also finds that, as for the maximally symmetric solutions, there is an "Einstein" branch of product solutions with K₁ = K₂ = K₋ which approaches the analogous product vacua of Einstein gravity in the limit of small Gauss-Bonnet coupling, and a "Gauss-Bonnet" branch with K₁ = K₂ = K₊ where the curvatures of both factors diverge in this limit. On the Einstein branch, the curvatures of both factors are precisely correlated with the sign of Λ₀, while on the Gauss-Bonnet branch both factors are always negatively curved. Finally, this whole structure is mirrored on the bottom half of the diagram for α₂ < 0.

[Figure 6: Product vacua in n = 5 Gauss-Bonnet gravity for the d = 2 split, showing M² × E³ and (A)dS² × S³/H³ regions on the two branches, with critical value Λ₀^CS ≡ −3/(4α₂); two maximally symmetric vacua exist for Λ₀ > Λ₀^CS and none beyond.]
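A numerical sketch (illustrative couplings) evaluating the two 3 + 3 roots (24) and checking them against (22)-(23), which reduce to 24α₂K² + 8K − 2Λ₀ = 0 when K₁ = K₂ = K:

```python
import math

a2, L0 = 0.1, -2.0
L0c = -1.0 / (3.0 * a2)            # critical point of the 3 + 3 system
disc = 1.0 - L0 / L0c              # real solutions require disc >= 0

for s in (+1.0, -1.0):
    K = 0.5 * L0c * (1.0 + s * math.sqrt(disc))
    print(K, 24 * a2 * K * K + 8 * K - 2 * L0)   # residual ~0 for both roots
```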
The situation is similar in most respects for the d = 4, D = 2 product vacua. In this case the equations (17) and (18) for the curvature constants K 1 and K 2 reduce to
$$24\alpha_2 K_1 K_2 + 6K_1 + 2K_2 - 2\Lambda_0 = 0, \qquad (25)$$
$$24\alpha_2 K_1^2 + 12K_1 - 2\Lambda_0 = 0, \qquad (26)$$
which have the solutions

$$K_1^\pm = \frac{\Lambda_0^c}{3}\,B_\pm, \qquad K_2^\pm = \Lambda_0^c\,\frac{B_\pm(B_\pm - 1)}{3B_\pm - 1}, \qquad (27)$$

where $B_\pm = 1 \pm \sqrt{1 - \Lambda_0/\Lambda_0^c}$ and the relevant critical value is now $\Lambda_0^c \equiv -\frac{3}{4\alpha_2}$, as follows from requiring that the quadratic (26) have real roots. For α₂ > 0, these solutions are therefore real provided Λ₀ > Λ₀^c, where now Λ₀^c < Λ₀^CS, so that the product vacua extend some distance into the broken symmetry regime. The curvatures K₁⁺ and K₂⁺ are always negative along the Gauss-Bonnet branch, so these solutions are AdS⁴ × H². The Einstein branch is more intricate. For Λ₀ > Λ₀^CS, the signs of the curvatures K₁⁻ and K₂⁻ are both correlated with the sign of Λ₀, as they were in the 3 + 3 split product described above. However, the denominator of the expression for K₂⁻ vanishes linearly⁸ at Λ₀ = Λ₀^CS, leading to a transition to AdS⁴ × S² in the range Λ₀^c < Λ₀ < Λ₀^CS. Precisely at the critical point Λ₀ = Λ₀^c there is a single AdS⁴ × E² solution.

⁸ One finds that the factor in the denominator of K₂⁻ in (27) can be written as

$$3B_- - 1 = \frac{-5\left(1 - \Lambda_0/\Lambda_0^{CS}\right)}{2 + 3\sqrt{1 - \Lambda_0/\Lambda_0^c}}. \qquad (28)$$

[Figure 7: Product vacua for the 3 + 3 split in n = 6 Gauss-Bonnet gravity, showing M³ × E³ and (A)dS³ × S³/H³ regions on the Einstein (−) and Gauss-Bonnet (+) branches, with the critical values Λ₀^c ≡ −1/(3α₂) and Λ₀^CS ≡ −5/(12α₂).]
The case of 2 + 4 split product vacua in n = 6 dimensions again reduces to equations (25) and (26), but now with the curvatures K₁ and K₂ swapped. The resulting configurations are shown in Figure (9). These vacua also exist a finite distance into the broken symmetry regime.

[Figure 8: Product vacua for the 4 + 2 split (n = 6, d = 4) in Gauss-Bonnet gravity, showing AdS⁴ × H²/S²/E², dS⁴ × S²/H²/E², and M⁴ × E² regions, with the critical values Λ₀^c ≡ −3/(4α₂) and Λ₀^CS ≡ −5/(12α₂).]
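A sympy sketch verifying the B± parametrization stated above: with B± = 1 ± √(1 − Λ₀/Λ₀^c) and Λ₀^c = −3/(4α₂), the pair (27) solves (25)-(26) identically. The check below is ours, confirming the reconstruction rather than quoting the original computation.

```python
import sympy as sp

a2, L0 = sp.symbols("a2 L0", positive=True)  # the identity is purely algebraic
L0c = -3 / (4 * a2)

for s in (+1, -1):
    B = 1 + s * sp.sqrt(1 - L0 / L0c)
    K1 = L0c * B / 3
    K2 = L0c * B * (B - 1) / (3 * B - 1)
    eq25 = 24 * a2 * K1 * K2 + 6 * K1 + 2 * K2 - 2 * L0
    eq26 = 24 * a2 * K1 ** 2 + 12 * K1 - 2 * L0
    print(sp.simplify(eq25), sp.simplify(eq26))   # 0 0
```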
Finally, as in Einstein gravity, the cases with 1-dimensional factors, d = 1 and d = n − 1, require special handling, although in this case we are able to do the analysis for general spacetime dimension n. Taking d = 1, the equations of motion (17) and (18) reduce to
$$\alpha_2 (n-1)(n-2)(n-3)(n-4)K_2^2 + (n-1)(n-2)K_2 - 2\Lambda_0 = 0 \qquad (29)$$

$$\alpha_2 (n-2)(n-3)(n-4)(n-5)K_2^2 + (n-2)(n-3)K_2 - 2\Lambda_0 = 0 \qquad (30)$$
These equations are inconsistent, except for the two special cases
$$\Lambda_0 = 0\,, \quad K_2 = 0 \qquad (31)$$

$$\Lambda_0 = \Lambda^{CS}_0\,, \quad K_2 = \frac{4}{(n-1)(n-2)}\,\Lambda^{CS}_0 \qquad (32)$$
which are displayed along with the corresponding results for d = n − 1 in Figure (10). This is a curious result. Recall that in Einstein gravity, as shown in Figure (3), there are no similar 1-dimensional product vacua with a non-zero value of the coupling constant. However, any value of the cosmological constant in Einstein gravity can be thought of as being a CS value in the sense considered here. It would be interesting to look at the CS limits of higher order Lovelock theories to see if any pattern emerges with respect to product vacua with flat directions⁹.
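The inconsistency of (29)-(30) away from the special points can be confirmed numerically (a sketch of mine; n and α_2 are arbitrary, and the formula for Λ_0^CS below follows from combining (29) and (32), reproducing −5/(12α_2) at n = 6):

```python
# Sketch: check that equations (29) and (30) for a 1-dimensional factor are
# simultaneously satisfied only at the special points (31) and (32).
alpha2, n = 1.0, 6

def f29(K2, L0):
    return (alpha2*(n-1)*(n-2)*(n-3)*(n-4)*K2**2 + (n-1)*(n-2)*K2 - 2*L0)

def f30(K2, L0):
    return (alpha2*(n-2)*(n-3)*(n-4)*(n-5)*K2**2 + (n-2)*(n-3)*K2 - 2*L0)

Lcs = -(n-1)*(n-2) / (8.0*alpha2*(n-3)*(n-4))    # from (29) with (32)
cases = [(0.0, 0.0),                              # eq. (31)
         (Lcs, 4.0*Lcs/((n-1)*(n-2)))]            # eq. (32)
for L0, K2 in cases:
    print(f"L0 = {L0:+.4f}, K2 = {K2:+.4f}: "
          f"res29 = {f29(K2, L0):+.1e}, res30 = {f30(K2, L0):+.1e}")
```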
Vacua in third order Lovelock gravity
We now consider the full 'low order' Lovelock theory introduced in Section (2), which includes the third order Lovelock term as well. This will be relevant in n = 7 dimensions and beyond. The equation determining the maximally symmetric vacua is now given by
$$\tilde\alpha_3\Lambda^3 + \tilde\alpha_2\Lambda^2 + \Lambda - \Lambda_0 = 0 \qquad (33)$$
where α̃_2 is as given above and α̃_3 = [4(n−3)(n−4)(n−5)(n−6)/((n−1)²(n−2)²)] α_3. Since this is a cubic equation, there will always be at least one real root and, hence, at least one physical maximally symmetric vacuum state. Therefore, third order Lovelock theory has no broken symmetry regime in coupling space. The roots of equation (33) are given by

$$\Lambda_1 = \frac{1}{3\tilde\alpha_3}\left(-\tilde\alpha_2 + A + \frac{\Delta_0}{A}\right) \qquad (34)$$

$$\Lambda_{2\pm} = -\frac{1}{6\tilde\alpha_3}\left[2\tilde\alpha_2 + A + \frac{\Delta_0}{A} \pm i\sqrt{3}\left(A - \frac{\Delta_0}{A}\right)\right] \qquad (35)$$
where A³ = ½[−Δ_1 + √(Δ_1² − 4Δ_0³)], with Δ_0 = α̃_2² − 3α̃_3 and Δ_1 = 2α̃_2³ − 9α̃_2α̃_3 − 27α̃_3²Λ_0. As noted in [28], when one takes the Gauss-Bonnet limit, α_3 → 0, the real root Λ_1 diverges, while the complex conjugate pair Λ_2± become the solutions (14).
Figure 11: Maximally symmetric vacua of third-order Lovelock gravity in the special case α̃_3 = α̃_2²/3 in n dimensions. The script "(r)" refers to the real solution (38).

The nature of the three roots (34) and (35) depends on the sign of the quantity Δ_1² − 4Δ_0³ in the following way¹⁰:
1. Δ_1² − 4Δ_0³ > 0 ⇒ one real and two complex roots,
2. Δ_1² − 4Δ_0³ = 0 ⇒ multiple real roots,
3. Δ_1² − 4Δ_0³ < 0 ⇒ three distinct real roots.  (36)
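The Cardano-type formulas (34)-(35) and the classification (36) are easy to check numerically. The following sketch (mine, with arbitrary tilded couplings denoted a2, a3) evaluates the closed-form roots and compares them with a direct polynomial solve:

```python
# Sketch: roots (34)-(35) of the cubic (33) via the quantities Delta_0,
# Delta_1, checked against numpy.roots; the sign of Delta_1^2 - 4 Delta_0^3
# gives the classification (36). Coupling values are arbitrary.
import numpy as np

a2, a3, L0 = 0.7, 0.1, -0.4
D0 = a2**2 - 3.0*a3
D1 = 2.0*a2**3 - 9.0*a2*a3 - 27.0*a3**2*L0
A  = (0.5*(-D1 + np.sqrt(complex(D1**2 - 4.0*D0**3))))**(1.0/3.0)

L1  = (-a2 + A + D0/A) / (3.0*a3)                                   # (34)
L2p = -(2.0*a2 + A + D0/A + 1j*np.sqrt(3.0)*(A - D0/A)) / (6.0*a3)  # (35)
L2m = -(2.0*a2 + A + D0/A - 1j*np.sqrt(3.0)*(A - D0/A)) / (6.0*a3)

print("closed form :", np.sort_complex(np.array([L1, L2p, L2m])))
print("numpy.roots :", np.sort_complex(np.roots([a3, a2, 1.0, -L0])))
print("disc sign   :", np.sign(D1**2 - 4.0*D0**3))  # classification (36)
```

For the values chosen, the discriminant is negative, so the three roots come out (numerically) real, in agreement with case 3 of (36).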
The CS point for the third order theory, i.e. the point at which all three roots coincide, occurs when the quantities Δ_0 = Δ_1 = 0. The couplings at the CS points are then related according to

$$\tilde\alpha_3 = \frac{\tilde\alpha_2^2}{3}\,, \qquad \Lambda_0 = \Lambda^{CS}_0 \equiv -\frac{1}{3\tilde\alpha_2} \qquad (37)$$
The effective cosmological constant at the CS point is Λ = −1/α̃_2, which can be either dS_n for α̃_2 < 0 or AdS_n for α̃_2 > 0.¹⁰

¹⁰ The first case is obvious, for it is the generic case. The second case can be understood as follows. When Δ_1² − 4Δ_0³ = 0, there are three possibilities, which can be studied by considering Δ_1 = 0, Δ_1 > 0, and Δ_1 < 0. If Δ_1 = 0, then Δ_0 = 0 and A = 0, so (34) and (35) seem indeterminate; but rewriting equation (33) as

$$\left(\Lambda + \frac{\tilde\alpha_2}{3\tilde\alpha_3}\right)^3 - \frac{\Delta_0}{3\tilde\alpha_3^2}\left(\Lambda + \frac{\tilde\alpha_2}{3\tilde\alpha_3}\right) + \frac{\Delta_1}{27\tilde\alpha_3^3} = 0$$

we see that in this case there is one triple real root. This is just the CS point. On the other hand, if Δ_1 > 0 or Δ_1 < 0, then Δ_1 = 2Δ_0^{3/2} or Δ_1 = −2Δ_0^{3/2} respectively. In either case, the complex part inside the square bracket in (35) drops out and we get three real roots, two of which are always equal. Finally, in the last case, Δ_1² − 4Δ_0³ < 0, A becomes complex and so |A|² = Δ_0; by using simple complex algebra, one can easily show that the roots (34) and (35) are indeed real.

It is difficult to investigate the full parameter space of third order Lovelock theory. In order to make progress, we will restrict our attention to the two-parameter family of theories satisfying Δ_0 = 0, which greatly simplifies our analysis of product vacua while still yielding interesting results. We can now regard the third order coupling α_3 as fixed in terms of α_2 by the first condition in (37). From (36) we see that with Δ_0 = 0 there will generically be one real root, which is found to be
$$\Lambda_1 = -\frac{1}{\tilde\alpha_2}\left[1 - \left(1 + 3\tilde\alpha_2\Lambda_0\right)^{1/3}\right] \qquad (38)$$
which can be zero for Λ_0 = 0, positive for Λ_0 > 0, or negative for Λ_0 < 0, yielding respectively an M_n, dS_n, or AdS_n vacuum. If in addition Δ_1 = 0, which yields the second condition in (37), this is then the CS case with three coinciding real roots. The corresponding maximally symmetric vacua are represented in Fig. 11. The vacua of Einstein gravity are recovered by taking the limit α_2 → 0, which, because Δ_0 = 0, also takes the third order coupling to zero.
We now turn to product vacua for third order Lovelock gravity. The equations satisfied by the curvatures K 1 and K 2 are now given by
$$\alpha_3\left[\frac{(d-1)!}{(d-7)!}K_1^3 + \frac{3(d-1)!\,D!}{(d-5)!(D-2)!}K_1^2K_2 + \frac{3(d-1)!\,D!}{(d-3)!(D-4)!}K_1K_2^2 + \frac{D!}{(D-6)!}K_2^3\right] + \alpha_2\left[\frac{(d-1)!}{(d-5)!}K_1^2 + \frac{2(d-1)!\,D!}{(d-3)!(D-2)!}K_1K_2 + \frac{D!}{(D-4)!}K_2^2\right] + \frac{(d-1)!}{(d-3)!}K_1 + \frac{D!}{(D-2)!}K_2 - 2\Lambda_0 = 0 \qquad (39)$$

$$\alpha_3\left[\frac{(D-1)!}{(D-7)!}K_2^3 + \frac{3(D-1)!\,d!}{(D-5)!(d-2)!}K_2^2K_1 + \frac{3(D-1)!\,d!}{(D-3)!(d-4)!}K_2K_1^2 + \frac{d!}{(d-6)!}K_1^3\right] + \alpha_2\left[\frac{(D-1)!}{(D-5)!}K_2^2 + \frac{2(D-1)!\,d!}{(D-3)!(d-2)!}K_2K_1 + \frac{d!}{(d-4)!}K_1^2\right] + \frac{(D-1)!}{(D-3)!}K_2 + \frac{d!}{(d-2)!}K_1 - 2\Lambda_0 = 0 \qquad (40)$$
We will also restrict our analysis to n = 7 dimensions, which is the lowest dimension in which the third order Lovelock term is relevant. We consider in turn product vacua with 5+2, 2+5, 4+3 and 3+4 splits.

Figure 14: Direct product vacua of third-order Lovelock gravity when n = 7 and d = 4.
Beginning with the d = 5, D = 2 split, we find that the equations of motion (39) and (40) reduce to
$$144\alpha_3 K_1^2 K_2 + 24\alpha_2 K_1^2 + 48\alpha_2 K_1 K_2 + 12K_1 + 2K_2 - 2\Lambda_0 = 0 \qquad (41)$$

$$120\alpha_2 K_1^2 + 20K_1 - 2\Lambda_0 = 0 \qquad (42)$$
We see that the second equation can be solved for K 1 , with the first equation then determining K 2 , giving
$$K^\pm_1 = \frac{\Lambda^{CS}_0}{5}\,C_\pm\,, \qquad K^\pm_2 = -\frac{4\Lambda^{CS}_0}{5}\,\frac{C_\pm}{C_\pm - 1} \qquad (43)$$

where C_± = 1 ± √(1 − Λ_0/Λ_0^CS) and

$$\Lambda^{CS}_0 = -\frac{5}{12\alpha_2}$$

is the value of the cosmological constant at the CS point for the third order Lovelock theory in n = 7 dimensions, and we have set α_3 = 2α_2² in accordance with our assumption that Δ_0 = 0. The character of these solutions and the range of Λ_0 covered is shown in Figure (12). The case of a 2+5 split reduces to the same set of equations, with the roles of the two curvatures K_1 and K_2 swapped. The resulting solutions are displayed in Figure (13).
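As a consistency check (my own sketch; α_2 is arbitrary and the Λ_0 samples avoid the critical denominator C_± = 1), the curvatures (43) can be substituted back into the reduced equations (41)-(42):

```python
# Sketch: the 5+2 split curvatures (43) for n = 7 third-order Lovelock
# gravity, checked against the reduced field equations (41)-(42).
# alpha3 = 2*alpha2**2 enforces the Delta_0 = 0 assumption.
import numpy as np

alpha2 = 1.0
alpha3 = 2.0 * alpha2**2
Lcs = -5.0 / (12.0 * alpha2)

def residuals(K1, K2, L0):
    r41 = (144*alpha3*K1**2*K2 + 24*alpha2*K1**2 + 48*alpha2*K1*K2
           + 12*K1 + 2*K2 - 2*L0)
    r42 = 120*alpha2*K1**2 + 20*K1 - 2*L0
    return r41, r42

for L0 in [0.4, 0.0, 0.9 * Lcs]:
    for sign in (+1.0, -1.0):
        C = 1.0 + sign * np.sqrt(1.0 - L0 / Lcs)
        if abs(C - 1.0) < 1e-12:           # skip the degenerate C = 1 case
            continue
        K1 = Lcs * C / 5.0
        K2 = -(4.0 * Lcs / 5.0) * C / (C - 1.0)        # eq. (43)
        print(f"L0 = {L0:+.3f}, branch {sign:+.0f}: K1 = {K1:+.4f}, "
              f"K2 = {K2:+.4f}, residuals = {residuals(K1, K2, L0)}")
```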
The structure of product vacua with a d = 4, D = 3 split is considerably more intricate. In this case the equations of motion reduce to
$$72\alpha_2 K_1 K_2 + 6K_1 + 6K_2 - 2\Lambda_0 = 0 \qquad (44)$$

$$144\alpha_3 K_2 K_1^2 + 48\alpha_2 K_1 K_2 + 24\alpha_2 K_1^2 + 2K_2 + 12K_1 - 2\Lambda_0 = 0 \qquad (45)$$
After including the relation α 3 = 2α 2 2 this yields the curvatures
$$K_1 = \frac{2\Lambda_0}{15\left(1 - \frac{\Lambda_0}{\Lambda^2_0}\right)}\,, \qquad K_2 = \frac{\left(1 - \frac{\Lambda_0}{\Lambda^1_0}\right)\Lambda_0}{5\left(1 - \frac{\Lambda_0}{\Lambda^{CS}_0}\right)} \qquad (46)$$

Figure 15: Direct product vacua of third-order Lovelock gravity when n = 7 and d = 3.
where the critical values of the cosmological constant at which one or the other of K 1 and K 2 change sign are given by
$$\Lambda^1_0 = -\frac{3}{4\alpha_2}\,, \qquad \Lambda^2_0 = -\frac{5}{4\alpha_2}\,, \qquad \Lambda^{CS}_0 = -\frac{5}{12\alpha_2}\,.$$
These results are displayed in Figure (14). The case of a 3 + 4 split is simply obtained by swapping the curvatures K 1 and K 2 and is displayed in Figure (15).
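The 4+3 solution (46) can likewise be checked against the reduced equations (44)-(45) (a sketch of mine; Λ_0 samples are arbitrary, chosen away from the critical denominators):

```python
# Sketch: the 4+3 split curvatures (46), checked against the reduced
# equations (44)-(45) with alpha3 = 2*alpha2**2 (Delta_0 = 0).
alpha2 = 1.0
alpha3 = 2.0 * alpha2**2
L1c, L2c, Lcs = -3/(4*alpha2), -5/(4*alpha2), -5/(12*alpha2)

for L0 in [1.0, 0.2, -0.6, -1.0]:
    K1 = 2.0*L0 / (15.0*(1.0 - L0/L2c))
    K2 = (1.0 - L0/L1c)*L0 / (5.0*(1.0 - L0/Lcs))      # eq. (46)
    r44 = 72*alpha2*K1*K2 + 6*K1 + 6*K2 - 2*L0
    r45 = (144*alpha3*K2*K1**2 + 48*alpha2*K1*K2 + 24*alpha2*K1**2
           + 2*K2 + 12*K1 - 2*L0)
    print(f"L0 = {L0:+.2f}: K1 = {K1:+.4f}, K2 = {K2:+.4f}, "
          f"r44 = {r44:+.1e}, r45 = {r45:+.1e}")
```

Both residuals vanish to machine precision, and the sign changes of K_1 and K_2 across Λ_0^1, Λ_0^2 and Λ_0^CS reproduce the transitions shown in Figure (14).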
Finally, we consider products with one dimensional factors, d = n − 1 and d = 1. Taking d = 1, we find that, as in the Gauss-Bonnet case, solutions exist only for Λ_0 = 0 with K_2 = 0 and for Λ_0 = −1/(3α̃_2) ≡ Λ_0^CS with K_2 = 6Λ_0^CS/((n−1)(n−2)). This result¹¹, along with the corresponding result for d = n − 1, is displayed in Figure (16). It is intriguing that the CS value of the couplings shows up again here. It would be interesting to understand the case of product vacua with one dimensional factors more generally.
Conclusion
Einstein gravity has maximally symmetric vacua for all values of the cosmological constant and in all spacetime dimensions. However, Lovelock theories can have symmetry breaking regions of coupling space, in which no maximally symmetric vacua exist. We have carried out a partial survey of alternative, reduced symmetry vacua in Lovelock theories that are products of lower dimensional maximally symmetric space(times), with particular interest in whether such vacua cover the symmetry breaking regions of coupling space. Our findings on this question show indications of interesting structure. Gauss-Bonnet gravity in any dimension has such a symmetry breaking region of coupling space. We looked at product vacua in n = 5 and n = 6 dimensions, finding sharply different results. While product vacua cover the entire symmetry breaking region of coupling space in n = 5 dimensions, in n = 6 dimensions such vacua cover only a small portion of the symmetry breaking region.
In n = 7 dimensions, the third order Lovelock interaction becomes physically relevant, and so long as its coupling is nonzero at least one maximally symmetric vacuum will exist. We have looked at product vacua in this theory, restricting our focus to a tractable region of coupling space. We found that 5 + 2 and 2 + 5 dimensional products again exist throughout this region. However, 4 + 3 and 3 + 4 dimensional products exist only over a portion of coupling space.
It would be interesting to extend this study further. For example, one could look at product vacua in Gauss-Bonnet gravity beyond n = 6 dimensions, i.e. setting the couplings of the relevant higher order Lovelock terms to zero. Our results in n = 5, 6 dimensions would be consistent with a number of possible patterns. For example, it might turn out that product vacua cover all of the symmetry breaking region of coupling space only in n = 5 dimensions. Alternatively, it could be that there is an alternation between even and odd dimensions, with product vacua covering the symmetry breaking region fully in odd dimensions. As the dimension gets higher, it might also be natural to consider product vacua with more than two factors. It would be particularly interesting to map out the product vacua of 4th order Lovelock theory, which will also have a symmetry breaking regime, in n = 9, 10 dimensions, where it includes all the relevant Lovelock terms. However, even if one considers only a particular subspace of the full set of couplings, the equations for the curvatures will be quartic and difficult to analyze. It is also important to note that not all maximally symmetric vacua in Lovelock gravity are stable. For example, the Gauss-Bonnet branch of vacua in Gauss-Bonnet gravity suffers from a ghost instability [5]. As noted in [9], it will be important to study the stability of product solutions such as those found here, in order to determine the true vacua of the theory.
Finally, it would be interesting to consider the potential physical relevance of transitions across critical surfaces in coupling space in which the number of maximally symmetric vacua changes. This could happen, for example, if the cosmological constant were dynamical¹². If the cosmological constant crossed into the symmetry breaking region of coupling space in n = 5 dimensional Gauss-Bonnet gravity, it might be possible to transition from an AdS_5 vacuum to an AdS_3 × S_2 vacuum as in the top half of Figure (5).
Figure 1: Maximally symmetric vacua of Einstein gravity in n dimensions.
Figure 2: Direct product vacua of Einstein gravity in n dimensions.
Figure 3: Direct product vacua of Einstein gravity in the case of one-dimensional submanifolds in n dimensions, which exist only for Λ_0 = 0.
Figure 4: Maximally symmetric vacua of Gauss-Bonnet gravity in n dimensions. Here, the plus and minus signs refer to the branches in the solution.

… the top half of Figure (4). Note that Λ_0^CS → −∞ in the limit of vanishing Gauss-Bonnet coupling, and that the Einstein branch correctly reduces to the result in Figure (1). On the Gauss-Bonnet branch, however, the effective cosmological constant is always negative, independent of the sign of Λ_0, giving the AdS_n^(+) branch of vacua. Finally, this entire structure is mirrored in a straightforward way for α̃_2 < 0 on the bottom of Figure (4).
… (5). Again, to read this diagram, envision fixing a nonzero value of the Gauss-Bonnet coupling α_2 and then consider the full range of values for the cosmological constant Λ_0. Let us focus on the top half of the diagram where α_2 > 0. As in the case of Einstein gravity, displayed above in Figure (2), there is again a single product vacuum across the full range of values for the cosmological constant. However, the curvatures of the two factor spaces are no longer precisely correlated with the sign of Λ_0 across the full range of values, as they are in the Einstein case. For Λ_0 > Λ_0^CS there is such a precise correlation, with both K_1 and K_2 being either positive, zero or negative in correspondence with the value of Λ_0. This produces, in succession for smaller values of Λ_0, product solutions of the form dS_3 × S_2, M_3 × E_2 and AdS_3 × H_2. However, the solutions (20) have a critical point at Λ_0 = Λ_0^CS, where the curvature K_2 diverges. For Λ_0 < Λ_0^CS the curvature K_2 of the Euclidean space becomes positive, opposite to the sign of Λ_0, giving a product of the form AdS_3 × S_2. This behavior is mirrored for α_2 < 0 on the bottom half of the diagram.
Figure 5: Direct product vacua of Gauss-Bonnet gravity when n = 5 and d = 3.

… symmetric vacua, as displayed in Figure (4). If, via some mechanism, the cosmological constant were dynamical, then various possibilities for vacuum transitions exist. Assume for concreteness that the Gauss-Bonnet coupling is positive. If Λ_0 then evolved through the CS point from above, a maximally symmetric AdS_5 vacuum of Figure (4) might transition into the AdS_3 × S_2 vacuum of Figure (5), a process of spontaneous compactification.
Figure 6: Direct product vacua of Gauss-Bonnet gravity when n = 5 and d = 2.
Figure 7: Direct product vacua of Gauss-Bonnet gravity when n = 6 and d = 3.
Figure 8: Direct product vacua of Gauss-Bonnet gravity when n = 6 and d = 4.
Figure 9: Direct product vacua of Gauss-Bonnet gravity when n = 6 and d = 2.
Figure 10: Direct product vacua of Gauss-Bonnet gravity in the case of one-dimensional submanifolds in n dimensions.
Figure 12: Direct product vacua of third-order Lovelock gravity when n = 7 and d = 5.
Figure 13: Direct product vacua of third-order Lovelock gravity when n = 7 and d = 2.
Figure 16: Direct product vacua of third-order Lovelock gravity in the case of one-dimensional submanifolds in n dimensions.
⁵ If instead we considered Euclidean metrics then we would have respectively Euclidean (E), spherical (S), or hyperbolic (H) spaces corresponding to these ranges of the curvature constant.

⁶ This includes the full set of allowable couplings for dimensions n = 5, 6, while for n > 6 the higher order couplings c_k with k = 3, …, p are taken to be zero in Gauss-Bonnet gravity.

⁷ In the context of the low energy effective action from string theory, it is noted in [5] that α_2 > 0, but we will consider all possible values here.

⁹ Note that warped products with one dimensional factors were found for Lovelock theories with CS couplings in [27].

¹¹ The solutions with Λ_0 = 0 are actually present for general values of α_2 and α_3.

¹² There has recently been a good deal of interest in considering the cosmological constant as a thermodynamic variable in the context of black hole physics (see e.g. [29, 30, 31] and references thereto). The thermodynamics of varying Lovelock couplings has also been considered in [32].
Acknowledgements. Ç.Ş. wishes to thank ACFI for hospitality and also thanks the Scientific and Technological Research Council of Turkey (TÜBİTAK) for financial support under the Programme BIDEB-2219.
The Einstein tensor and its generalizations. D Lovelock, J. Math. Phys. 12498D. Lovelock, "The Einstein tensor and its generalizations," J. Math. Phys. 12, 498 (1971).
Dimensionally continued topological gravitation theory in Hamiltonian form. C Teitelboim, J Zanelli, Class. Quant. Grav. 4125C. Teitelboim and J. Zanelli, "Dimensionally continued topological gravitation theory in Hamiltonian form," Class. Quant. Grav. 4, L125 (1987).
Curvature Squared Terms and String Theories. B Zwiebach, Phys. Lett. B. 156315B. Zwiebach, "Curvature Squared Terms and String Theories," Phys. Lett. B 156, 315 (1985).
Gravity Theories in More Than Four-Dimensions. B Zumino, Phys. Rept. 137109B. Zumino, "Gravity Theories in More Than Four-Dimensions," Phys. Rept. 137, 109 (1986).
String Generated Gravity Models. D G Boulware, S Deser, Phys. Rev. Lett. 552656D. G. Boulware and S. Deser, "String Generated Gravity Models," Phys. Rev. Lett. 55, 2656 (1985).
Symmetric Solutions to the Gauss-Bonnet Extended Einstein Equations. J T Wheeler, Nucl. Phys. B. 268737J. T. Wheeler, "Symmetric Solutions to the Gauss-Bonnet Extended Einstein Equa- tions," Nucl. Phys. B 268, 737 (1986).
Symmetric Solutions To The Maximally Gauss-bonnet Extended Einstein Equations. J T Wheeler, Nucl. Phys. B. 273732J. T. Wheeler, "Symmetric Solutions To The Maximally Gauss-bonnet Extended Ein- stein Equations," Nucl. Phys. B 273, 732 (1986).
Spontaneous Compactification With Quadratic and Cubic Curvature Terms. F Mueller-Hoissen, Phys. Lett. B. 163106F. Mueller-Hoissen, "Spontaneous Compactification With Quadratic and Cubic Cur- vature Terms," Phys. Lett. B 163, 106 (1985).
General Relativity with small cosmological constant from spontaneous compactification of Lovelock theory in vacuum. F Canfora, A Giacomini, R Troncoso, S Willison, arXiv:0812.4311Phys. Rev. D. 8044029hep-thF. Canfora, A. Giacomini, R. Troncoso and S. Willison, "General Relativity with small cosmological constant from spontaneous compactification of Lovelock theory in vacuum," Phys. Rev. D 80, 044029 (2009) [arXiv:0812.4311 [hep-th]].
. H Nariai, Sci. Rep. Tohoku Univ. 3462H. Nariai, Sci. Rep. Tohoku Univ. 34, 160 (1950); 35, 62 (1951)
On product space-time with 2 sphere of constant curvature. N Dadhich, gr- qc/0003026N. Dadhich, "On product space-time with 2 sphere of constant curvature," gr- qc/0003026.
Nariai, Bertotti-Robinson and anti-Nariai solutions in higher dimensions. V Cardoso, O J C Dias, J P S Lemos, hep- th/0401192Phys. Rev. D. 7024002V. Cardoso, O. J. C. Dias and J. P. S. Lemos, "Nariai, Bertotti-Robinson and anti-Nariai solutions in higher dimensions," Phys. Rev. D 70, 024002 (2004) [hep- th/0401192].
String Generated Generalizations of the Nariai Solution. D Lorenz-Petzold, Prog. Theor. Phys. 78969D. Lorenz-Petzold, "String Generated Generalizations of the Nariai Solution," Prog. Theor. Phys. 78, 969 (1987).
Uniform electromagnetic field in the theory of general relativity. B Bertotti, Phys. Rev. 1161331B. Bertotti, "Uniform electromagnetic field in the theory of general relativity," Phys. Rev. 116, 1331 (1959).
A Solution of the Maxwell-Einstein Equations. I Robinson, Bull. Acad. Pol. Sci. Ser. Sci. Math. Astron. Phys. 7351I. Robinson, "A Solution of the Maxwell-Einstein Equations," Bull. Acad. Pol. Sci. Ser. Sci. Math. Astron. Phys. 7, 351 (1959).
Lovelock black holes with maximally symmetric horizons. H Maeda, S Willison, S Ray, arXiv:1103.4184Class. Quant. Grav. 28165005gr-qcH. Maeda, S. Willison and S. Ray, "Lovelock black holes with maximally symmetric horizons," Class. Quant. Grav. 28, 165005 (2011) [arXiv:1103.4184 [gr-qc]].
Probing pure Lovelock gravity by Nariai and Bertotti-Robinson solutions. N Dadhich, J M Pons, arXiv:1210.1109J. Math. Phys. 54102501gr-qcN. Dadhich and J. M. Pons, "Probing pure Lovelock gravity by Nariai and Bertotti- Robinson solutions," J. Math. Phys. 54, 102501 (2013) [arXiv:1210.1109 [gr-qc]].
Bertotti-Robinson solutions in five-dimensional quadratic gravity. G Clement, arXiv:1311.7501Class. Quant. Grav. 3175001gr-qcG. Clement, "Bertotti-Robinson solutions in five-dimensional quadratic gravity," Class. Quant. Grav. 31, 075001 (2014) [arXiv:1311.7501 [gr-qc]].
Some exact solutions with torsion in 5-D Einstein-Gauss-Bonnet gravity. F Canfora, A Giacomini, S Willison, arXiv:0706.2891Phys. Rev. D. 7644021gr-qcF. Canfora, A. Giacomini and S. Willison, "Some exact solutions with torsion in 5- D Einstein-Gauss-Bonnet gravity," Phys. Rev. D 76, 044021 (2007) [arXiv:0706.2891 [gr-qc]].
Effectively four-dimensional spacetimes emerging from d = 5 Einstein-Gauss-Bonnet Gravity. F Izaurieta, E Rodriguez, arXiv:1207.1496Class. Quant. Grav. 30155009hep-thF. Izaurieta and E. Rodriguez, "Effectively four-dimensional spacetimes emerging from d = 5 Einstein-Gauss-Bonnet Gravity," Class. Quant. Grav. 30, 155009 (2013) [arXiv:1207.1496 [hep-th]].
Dynamical compactification in Einstein-Gauss-Bonnet gravity from geometric frustration. F Canfora, A Giacomini, S A Pavluchenko, arXiv:1308.1896Phys. Rev. D. 88664044gr-qcF. Canfora, A. Giacomini and S. A. Pavluchenko, "Dynamical compactification in Einstein-Gauss-Bonnet gravity from geometric frustration," Phys. Rev. D 88, no. 6, 064044 (2013) [arXiv:1308.1896 [gr-qc]].
Cosmological dynamics in higherdimensional Einstein-Gauss-Bonnet gravity. F Canfora, A Giacomini, S A Pavluchenko, arXiv:1409.2637Gen. Rel. Grav. 46101805gr-qcF. Canfora, A. Giacomini and S. A. Pavluchenko, "Cosmological dynamics in higher- dimensional Einstein-Gauss-Bonnet gravity," Gen. Rel. Grav. 46, no. 10, 1805 (2014) [arXiv:1409.2637 [gr-qc]].
Exact exponential solutions in Einstein-Gauss-Bonnet flat anisotropic cosmology. D Chirkov, S Pavluchenko, A Toporensky, arXiv:1401.2962Mod. Phys. Lett. A. 291450093gr-qcD. Chirkov, S. Pavluchenko and A. Toporensky, "Exact exponential solutions in Einstein-Gauss-Bonnet flat anisotropic cosmology," Mod. Phys. Lett. A 29, 1450093 (2014) [arXiv:1401.2962 [gr-qc]].
Non-constant volume exponential solutions in higher-dimensional Lovelock cosmologies. D Chirkov, S Pavluchenko, A Toporensky, arXiv:1501.04360gr-qcD. Chirkov, S. Pavluchenko and A. Toporensky, "Non-constant volume exponential solutions in higher-dimensional Lovelock cosmologies," arXiv:1501.04360 [gr-qc].
Stability analysis of the exponential solutions in Lovelock cosmologies. S A Pavluchenko, arXiv:1507.01871gr-qcS. A. Pavluchenko, "Stability analysis of the exponential solutions in Lovelock cos- mologies," arXiv:1507.01871 [gr-qc].
Black hole scan. J Crisostomo, R Troncoso, J Zanelli, hep-th/0003271Phys. Rev. D. 6284013J. Crisostomo, R. Troncoso and J. Zanelli, "Black hole scan," Phys. Rev. D 62, 084013 (2000) [hep-th/0003271].
On black strings and branes in Lovelock gravity. D Kastor, R B Mann, hep-th/0603168JHEP. 060448D. Kastor and R. B. Mann, "On black strings and branes in Lovelock gravity," JHEP 0604, 048 (2006) [hep-th/0603168].
Black hole solution in third order Lovelock gravity has no Gauss-Bonnet limit. Z Amirabi, arXiv:1311.4911Phys. Rev. D. 88887503gr-qcZ. Amirabi, "Black hole solution in third order Lovelock gravity has no Gauss-Bonnet limit," Phys. Rev. D 88, no. 8, 087503 (2013) [arXiv:1311.4911 [gr-qc]].
Enthalpy and the Mechanics of AdS Black Holes. D Kastor, S Ray, J Traschen, arXiv:0904.2765Class. Quant. Grav. 26hep-thD. Kastor, S. Ray and J. Traschen, "Enthalpy and the Mechanics of AdS Black Holes," Class. Quant. Grav. 26, 195011 (2009) [arXiv:0904.2765 [hep-th]].
Black Hole Enthalpy and an Entropy Inequality for the Thermodynamic Volume. M Cvetic, G W Gibbons, D Kubiznak, C N Pope, arXiv:1012.2888Phys. Rev. D. 8424037hep-thM. Cvetic, G. W. Gibbons, D. Kubiznak and C. N. Pope, "Black Hole Enthalpy and an Entropy Inequality for the Thermodynamic Volume," Phys. Rev. D 84, 024037 (2011) [arXiv:1012.2888 [hep-th]].
P-V criticality of charged AdS black holes. D Kubiznak, R B Mann, arXiv:1205.0559JHEP. 120733hep-thD. Kubiznak and R. B. Mann, "P-V criticality of charged AdS black holes," JHEP 1207, 033 (2012) [arXiv:1205.0559 [hep-th]].
Smarr Formula and an Extended First Law for Lovelock Gravity. D Kastor, S Ray, J Traschen, arXiv:1005.5053Class. Quant. Grav. 27235014hep-thD. Kastor, S. Ray and J. Traschen, "Smarr Formula and an Extended First Law for Lovelock Gravity," Class. Quant. Grav. 27, 235014 (2010) [arXiv:1005.5053 [hep-th]].
| [] |
[
"Rigorous Density Functional Theory for Inhomogeneous Bose-Condensed Fluids"
] | [
"A Griffin [email protected]"
] | [
"Dipartimento di Fisica, Università di Trento, I-38050 Povo, Italy",
"Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada"
] | [
"Can. Journ. Phys."
] | The density functional theory originally developed by Hohenberg, Kohn and Sham provides a rigorous conceptual framework for dealing with inhomogeneous interacting Fermi systems. We extend this approach to deal with inhomogeneous interacting Bose-condensed systems, limiting this presentation to setting up the formalism to deal with ground state (T = 0) properties. The key new feature is that one must deal with energy functionals of both the local density n(r) and the local complex macroscopic wavefunction Φ(r) associated with the Bose broken symmetry (the local condensate density is n_c(r) = |Φ(r)|²). Implementing the Kohn-Sham scheme, we reduce the problem to a gas of weakly-interacting Bosons moving in self-consistent diagonal and off-diagonal one-body potentials. Our formalism should provide the basis for studies of the surface properties of liquid ⁴He as well as the properties of Bose-condensed atomic gases trapped in external potentials. | 10.1139/p95-111 | [
"https://export.arxiv.org/pdf/cond-mat/9511101v1.pdf"
] | 119415044 | cond-mat/9511101 | 273e80a541ebff5786eb8bc7f297ec5a90801294 |
To appear in the special Brockhouse Issue of the Can. Journ. Phys. (1995)

Rigorous Density Functional Theory for Inhomogeneous Bose-Condensed Fluids

A. Griffin ([email protected])

Dipartimento di Fisica, Università di Trento, I-38050 Povo, Italy

Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada*

* Permanent address.
The density functional theory originally developed by Hohenberg, Kohn and Sham provides a rigorous conceptual framework for dealing with inhomogeneous interacting Fermi systems. We extend this approach to deal with inhomogeneous interacting Bose-condensed systems, limiting this presentation to setting up the formalism to deal with ground state (T = 0) properties. The key new feature is that one must deal with energy functionals of both the local density n(r) and the local complex macroscopic wavefunction Φ(r) associated with the Bose broken symmetry (the local condensate density is n_c(r) = |Φ(r)|²). Implementing the Kohn-Sham scheme, we reduce the problem to a gas of weakly-interacting Bosons moving in self-consistent diagonal and off-diagonal one-body potentials. Our formalism should provide the basis for studies of the surface properties of liquid ⁴He as well as the properties of Bose-condensed atomic gases trapped in external potentials.
I. INTRODUCTION
This article develops a density functional formalism for dealing with superfluid 4 He, with the ultimate goal of understanding the role of Bose-condensation in the low density surface region. The use of neutron scattering to study the atomic structure and dynamics of superfluid 4 He has been an ongoing research effort at Chalk River since the early fifties.
This world famous programme grew out of and was nurtured by the atmosphere which Bert Brockhouse helped to create. On a more personal note, I have always enjoyed being invited to give seminars at McMaster, partly for the stimulating discussions with research colleagues but also because I knew that Bert would be in the audience and after it was over, he would come over and give some encouraging comments. It is with great pleasure that I dedicate this article to Prof. Bert Brockhouse.
At a phenomenological level, the low temperature properties of bulk superfluid ⁴He are well understood in terms of Landau's picture of a weakly-interacting quasiparticle gas of phonons and rotons. At a more microscopic (atomistic) level, current discussions of superfluid ⁴He can be divided into two broad classes. One of these is the field-theoretic analysis, which is built on the fundamental role of Bose-broken symmetry. The superfluid phase coincides with the appearance of a macroscopic wavefunction given by the finite expectation value of the field operator, Φ(r) = ⟨ψ̂(r)⟩ [1]. This approach goes back to the pioneering work of Bogoliubov [2] and Penrose [3], but was first formulated in a systematic way by Beliaev [4] and Hugenholtz and Pines [5]. The current status of theories of superfluid ⁴He based on the key role of the Bose order parameter Φ(r) is reviewed in a recent book [6].
A second kind of microscopic theory is built on the use of variational many-body wavefunctions for the ground state and the low-energy excited states of liquid ⁴He. Such many-body wavefunctions were first introduced by Bijl [7] and later by Feynman [8], and in their current form have become very sophisticated (the correlated basis function approach [9]).
However, while such ground state many-particle wavefunctions (of the Jastrow-Feenberg type, for example) lead to very good estimates of the condensate fraction (about 10% at T = 0), the role of a Bose order parameter is not exhibited very explicitly (see chapter 9 of [6]).
The motivation of the present paper is to set up a formalism which can deal with the surface properties of superfluid 4 He, taking Bose-condensation into account from the beginning.
Formally, this means we need a theory of spatially inhomogeneous Bose-condensed systems.
In the last decade, there has developed a considerable literature on superfluid 4 He with free surfaces (including droplets and films) based on generalizations of the above mentioned correlated basis function approach [10]. However, as with the case of bulk liquid 4 He, such treatments of surfaces give little (if any) emphasis to the role of Bose-broken symmetry or Bose-Einstein condensation. On the other hand, the many-body wavefunctions which have been developed [11] to describe the free surface of liquid 4 He at T = 0 appear to give realistic estimates of the density profile n(r) in the surface region. From these, one finds accurate values of the binding energy of 3 He and spin-polarized H atoms bound to the surface of superfluid 4 He [12].
Such correlated basis function approaches involve a very heavy computational effort when dealing with inhomogeneous systems. An alternative, much simpler theory has developed for free surfaces of liquid ⁴He based on a density functional theory [13]. This approach is loosely inspired by the very successful theory of inhomogeneous interacting electron systems, and it is useful to recall some aspects of the latter theory. Almost 30 years ago, Hohenberg and Kohn (HK) gave a rigorous formulation [14] of the ground state properties of interacting Fermi systems, in terms of a functional of the local density n(r) = ⟨ψ̂†(r)ψ̂(r)⟩. Kohn and Sham [15] (KS) used the two exact HK theorems to implement the HK formalism in terms of finding the single-particle energies and eigenstates of free Fermions moving in appropriately defined self-consistent fields. This Hohenberg-Kohn-Sham (HKS) density functional formalism is now the accepted way of dealing with inhomogeneous Fermi systems, building on our extensive knowledge of the ground state properties of homogeneous interacting Fermi systems. The HKS approach has been generalized to deal with normal systems at finite temperatures [16] as well as BCS superconductors [17]. In the latter case, one works with functionals of the local density n(r) and the local anomalous (or off-diagonal) density Δ(r) ≡ ⟨ψ̂↑(r)ψ̂↓(r)⟩ describing spin-singlet Cooper pairs. Finally, a density functional formalism has been developed for dealing with time-dependent quantities (excited states and linear response functions) of both normal [18] and superconducting Fermi systems [19].
As we have noted, recent density functional theories of superfluid ⁴He with surfaces make analogies to the above HKS theory of inhomogeneous Fermi systems. However, there has apparently never been a careful study of how to use the HKS ideas to give a rigorous basis to a theory of inhomogeneous Bose-condensed liquids, analogous to what has been done for inhomogeneous BCS superconductors [17]. In particular, all current T = 0 density functional theories of superfluid ⁴He [13] simply assume that the energy functional only depends on the local density n(r). There is never any reference to the possibly equally important role that the local condensate density n_c(r) might play, or more generally, the local macroscopic wavefunction Φ(r) = ⟨ψ̂(r)⟩ = √(n_c(r)) e^{iS(r)}. Needless to say, it is the finite value of the order parameter Φ(r) which characterizes the superfluid phase below T_λ = 2.17 K. To avoid confusion, we note that the energy in density functional theories of liquid Helium (see third paper of ref. [13]) is often taken to be a functional of the variable Ψ(r) ≡ √(n(r)) e^{iS(r)}, which involves the total local density. While this variable is sometimes referred to as a "macroscopic wavefunction", it is clearly unrelated to the Bose order parameter Φ(r) we have introduced above. The insufficiency of theories based on functionals of only n(r) has been made especially obvious in recent work [20], which points out that the low density surface region of liquid ⁴He corresponds to a dilute inhomogeneous Bose gas with 100% Bose condensation (at T = 0). In this surface region, the key function is Φ(r), with the local density being determined by it, namely n(r) = n_c(r) ≡ |Φ(r)|².
In the present paper, we formulate a density functional theory of the HKS kind for the ground state properties of an inhomogeneous interacting Bose-condensed fluid. The key new element in our analysis (following the analogous case of BCS superconductors [17]) is the realization that one must work with functionals of both n(r) and the (complex) order parameter Φ(r). The key theorems of HK and the methods of proof are, of course, valid for Bose as well as Fermi statistics. For brevity, we shall only sketch these arguments when they involve the identical steps as in the density functional treatment of BCS superconductors.
The present analysis of ground state properties can be extended to finite temperatures (free energies) following the analogous discussion for BCS superconductors [17]. This will be reported elsewhere.
What the present paper accomplishes is to give a formally exact scheme for dealing with inhomogeneous Bose-condensed fluids which should ultimately provide a platform for specific calculations. In applying the present formalism, one must introduce approximations for the correlation energy functionals. This is a separate question, with specific problems associated with the anomalous long-range correlations in Bose-condensed systems [21], and is not treated here.
In this paper, we mainly use the surface region of superfluid ⁴He as an example of an inhomogeneous Bose-condensed system. Very recently, Bose-condensation has finally been achieved [22] in a dilute gas of ⁸⁷Rb atoms below 200 nK, using laser and evaporative cooling.
This gas was trapped in a harmonic potential well and as a result, both n(r) and n c (r) are highly inhomogeneous. Our present formalism gives a natural basis for generalizing the currently available Hartree-Fock-Bogoliubov approximations [23] for such inhomogeneous weakly interacting Bose-condensed gases.
II. HOHENBERG-KOHN FORMALISM
Our starting Hamiltonian is defined as (compare with [17])
$$\hat H_{v,\eta} \equiv \int dr\,\hat\psi^\dagger(r)\left(-\frac{\nabla^2}{2m} - \mu\right)\hat\psi(r) + \frac{1}{2}\int dr\,dr'\,\hat\psi^\dagger(r)\hat\psi^\dagger(r')v_2(r-r')\hat\psi(r')\hat\psi(r) + \int dr\,v(r)\hat\psi^\dagger(r)\hat\psi(r) + \int dr\left[\eta(r)\hat\psi^\dagger(r) + \eta^*(r)\hat\psi(r)\right] \qquad (1)$$

$$\equiv \hat H_0 - \mu\hat N + \hat V_2 + \hat V_1 + \hat V_{SB} \qquad (2)$$
Throughout the analysis, the two-particle interaction v_2(r − r′) is assumed to be fixed. Thus Ĥ_{v,η} in (1) depends only on: (a) the choice of the external diagonal single-particle potential v(r), which couples to the density; and (b) the external off-diagonal (or symmetry-breaking) potential η(r), which couples to the field operators ψ̂(r) and ψ̂†(r). The first step in the HK approach is to note that the ground state |Ψ⟩ of Ĥ_{v,η} is a functional of these external fields v(r) and η(r), and hence so are the following ground-state expectation values:
$$n(r) \equiv \langle\Psi|\hat\psi^\dagger(r)\hat\psi(r)|\Psi\rangle\,, \qquad \Phi(r) \equiv \langle\Psi|\hat\psi(r)|\Psi\rangle\,, \qquad \Phi^*(r) \equiv \langle\Psi|\hat\psi^\dagger(r)|\Psi\rangle \qquad (3)$$
We note that because of the symmetry-breaking field V̂_SB in (1), the ground state eigenstate |Ψ⟩ of Ĥ_{v,η} allows Φ(r) and Φ*(r) to be finite. Introducing a symmetry-breaking (off-diagonal) perturbation as in (1) is the standard method [1,4] of dealing with the appearance of Bose-condensation in an interacting system. It involves treating the condensate as a "reservoir" of atoms and thus leads to number non-conservation and to a ground state where Φ(r) can be finite. The underlying physics of this broken symmetry is the same for both homogeneous and inhomogeneous systems and is discussed most clearly in Ref. [1]. We recall that the essential physics for dealing with BCS superconductors [17] also involves such symmetry-breaking number non-conserving states.

The second step of HK is to note that, using their famous reductio ad absurdum argument, one can prove that, up to additive constants, v(r) and η(r) are unique functionals of n(r) and Φ(r):

$$v(r) = v[n,\Phi](r)\,, \qquad \eta(r) = \eta[n,\Phi](r) \qquad (4)$$

Since v and η fix Ĥ_{v,η} and thus also |Ψ⟩, we can finally conclude that |Ψ⟩ is a unique functional of n(r) and Φ(r):

$$|\Psi\rangle = |\Psi[n,\Phi]\rangle \qquad (5)$$
In turn, it follows from (5) that the expectation value ⟨Ψ|Ĥ_0 − μN̂ + V̂_2|Ψ⟩ is a universal functional of n(r) and Φ(r). That is to say, there is no explicit dependence of |Ψ⟩ on the specific forms assumed for v(r) and η(r), since these can be expressed as functionals of n(r) and Φ(r), as stated in (4). Summarizing this train of argument, we conclude that

$$F[n(r), \Phi(r)] \equiv \langle\Psi|\hat H_0 - \mu\hat N + \hat V_2|\Psi\rangle \qquad (6)$$
is a universal functional of n and Φ, valid for any number of particles and any external potentials v(r) and η(r). As in the HK formalism for Fermi systems, the universal functional F [n, Φ] will play a central role in our subsequent analysis. A key problem, of course, will be to find some appropriate approximation to this universal functional F [n, Φ] in superfluid 4 He.
Following HK, it is useful to define, for given v and η potentials, the energy functional
$$E_{v,\eta}[n,\Phi] \equiv F[n,\Phi] + \int dr\,v(r)n(r) + \int dr\left[\eta(r)\Phi^*(r) + \eta^*(r)\Phi(r)\right] \qquad (7)$$
where the densities n(r), Φ(r) and Φ*(r) are defined in (3). For simplicity of notation, we shall generally show functionals as depending only on Φ, but in fact, they depend on both Φ and Φ*. Following HK, we have a variational principle, i.e., one can prove that E_{v,η}[n, Φ] is a minimum at the correct values of the densities n(r) and Φ(r) produced by the external potentials v(r) and η(r), i.e.,

$$\frac{\delta E_{v,\eta}[n,\Phi]}{\delta n(r)} = 0\,, \qquad \frac{\delta E_{v,\eta}[n,\Phi]}{\delta\Phi(r)} = 0 \qquad (8)$$
III. KOHN-SHAM PROCEDURE
Following Kohn and Sham [15], there is a clever way of finding the correct values of n(r) and Φ(r) (which, according to (8), minimize E v,η [n, Φ]) by solving a simpler auxiliary problem for which the HK theorems in Section II are also valid. To understand the logic of the KS procedure in the context of our present problem, let us consider an auxiliary system
Hamiltonian defined by
$$\hat H^s_{v_s,\eta_s} \equiv \int dr\,\hat\psi^\dagger(r)\left(-\frac{\nabla^2}{2m} - \mu\right)\hat\psi(r) + \hat V_s[\hat\psi^\dagger,\hat\psi] + \int dr\,v_s(r)\hat\psi^\dagger(r)\hat\psi(r) + \int dr\left[\eta_s(r)\hat\psi^\dagger(r) + \eta_s^*(r)\hat\psi(r)\right] \qquad (9)$$
where the interaction V̂_s is a part of V̂_2 in (1) and (2), to be specified later. All the HK results of Section II apply to (9). In particular, if we denote the ground state of Ĥ^s_{v_s,η_s} as |Ψ_s⟩, then the densities
$$n_s(r) \equiv \langle\Psi_s|\hat\psi^\dagger(r)\hat\psi(r)|\Psi_s\rangle\,, \qquad \Phi_s(r) \equiv \langle\Psi_s|\hat\psi(r)|\Psi_s\rangle \qquad (10)$$
are unique functionals of the external fields v_s(r) and η_s(r). In turn, v_s and η_s, and hence |Ψ_s⟩, can be shown to be unique functionals of n_s(r) and Φ_s(r). Thus we conclude that
$$F_s[n_s(r), \Phi_s(r)] \equiv \langle\Psi_s|\hat H_0 - \mu\hat N + \hat V_s|\Psi_s\rangle \qquad (11)$$
is a universal functional of n s (r) and Φ s (r). Finally, we can define an energy functional of this auxiliary system
$$E^s_{v_s,\eta_s}[n,\Phi] \equiv F_s[n,\Phi] + \int dr\,v_s(r)n(r) + \int dr\left[\eta_s(r)\Phi^*(r) + \eta_s^*(r)\Phi(r)\right] \qquad (12)$$
which will be minimized by the correct values, n(r) = n s (r) and Φ(r) = Φ s (r), for this system.
The whole point of introducing this auxiliary system defined by (9) is that:
(a) It will be easier to solve than the actual system described by (1).
(b) By a judicious choice of the external fields v s (r) and η s (r), the densities n s (r) and Φ s (r)
of this auxiliary problem can be made to be identical to those of the real system. Since F[n(r), Φ(r)] in (6) is a universal functional of only n(r) and Φ(r), this means that we can evaluate it using results for n(r) and Φ(r) obtained from solving the auxiliary system.
In applying the KS procedure to superconductors [17], one uses a non-interacting gas of
Fermions moving in external one-body and pair potentials as the auxiliary model system.
In our interacting Bose system, this would correspond to setting V s [ψ † ,ψ] in (9) to zero.
The problem with this choice is that for a non-interacting Bose gas moving in given external potentials v s and η s , and at T = 0, one has complete Bose-Einstein condensation (see discussion after (35) for more details). This implies that n(r) and n c (r) ≡ |Φ(r)| 2 are equal, even though we know that in any interacting Bose system, the local condensate density n c (r)
is less than the local total density n(r). This problem is not addressed in density functional theories [13] of superfluid 4 He based on functionals of only the density n(r). We recall that such theories usually start with the kinetic energy of an inhomogeneous non-interacting Bose gas with a density profile n(r) identical to the fully-interacting system. Such a kinetic energy functional implies that n c (r) = n(r), which would not appear to be a very good starting point for describing superfluid 4 He.
In order to define our auxiliary Bose system in (9), we first introduce the usual decomposition of Bose quantum field operators [6]

$$\hat\psi(r) = \Phi(r) + \tilde\psi(r)\,, \qquad \hat\psi^\dagger(r) = \Phi^*(r) + \tilde\psi^\dagger(r) \qquad (13)$$
where Φ(r) is defined in (3). The non-condensate field operators ψ̃(r) and ψ̃†(r) satisfy Bose commutation relations. Using (13), the two-particle interaction V̂_2 in (1) can be rewritten as

$$\hat V_2 = \frac{1}{2}\int dr\,dr'\,v_2(r-r')\,|\Phi(r)|^2|\Phi(r')|^2 + \int dr\,dr'\,v_2(r-r')\,|\Phi(r)|^2\left[\Phi(r')\tilde\psi^\dagger(r') + \Phi^*(r')\tilde\psi(r')\right]$$

$$+ \int dr\,dr'\,v_2(r-r')\left[\tilde\psi^\dagger(r)\tilde\psi(r)|\Phi(r')|^2 + \tilde\psi^\dagger(r)\tilde\psi(r')\Phi^*(r')\Phi(r) + \tfrac{1}{2}\tilde\psi^\dagger(r)\tilde\psi^\dagger(r')\Phi(r')\Phi(r) + \tfrac{1}{2}\tilde\psi(r)\tilde\psi(r')\Phi^*(r')\Phi^*(r)\right]$$

$$+ \int dr\,dr'\,v_2(r-r')\left[\tilde\psi^\dagger(r')\tilde\psi(r')\tilde\psi(r)\Phi^*(r) + \tilde\psi^\dagger(r')\tilde\psi^\dagger(r)\tilde\psi(r')\Phi(r)\right] + \frac{1}{2}\int dr\,dr'\,v_2(r-r')\,\tilde\psi^\dagger(r)\tilde\psi^\dagger(r')\tilde\psi(r')\tilde\psi(r) \qquad (14)$$
If all the atoms were Bose-condensed, only the first term in (14) would be important. If the system is not Bose-condensed, then only the last term in (14) is present. In the well-known Bogoliubov approximation [24,25] for a dilute, weakly interacting gas, in which almost all the atoms are Bose-condensed, one only keeps terms up to quadratic in the non-condensate field operators ψ̃ and ψ̃† (since it is assumed that, in an average sense, ψ̃ ≪ Φ). That is to say, the last two terms in (14) are higher order and hence omitted. A feature of this Bogoliubov approximation is that the resulting Hamiltonian can be diagonalized exactly (see below).
We now define what we shall call (for want of a better term) the exchange-correlation energy functional F xc [n, Φ] by writing (6) in the form
$$F[n,\Phi] = F_s[n,\Phi] + \frac{1}{2}\int dr\,dr'\,v_2(r-r')\,n(r)n(r') + F_{xc}[n,\Phi] \qquad (15)$$
where F_s[n, Φ] is the energy functional of the auxiliary system defined by (11) with the interaction

$$\hat V_s = \int dr\,dr'\,v_2(r-r')\,|\Phi(r)|^2\left[\Phi(r')\tilde\psi^\dagger(r') + \Phi^*(r')\tilde\psi(r')\right] + \frac{1}{2}\int dr\,dr'\,v_2(r-r')\left[2\Phi^*(r')\Phi(r)\tilde\psi^\dagger(r)\tilde\psi(r') + \Phi(r')\Phi(r)\tilde\psi^\dagger(r)\tilde\psi^\dagger(r') + \Phi^*(r')\Phi^*(r)\tilde\psi(r)\tilde\psi(r')\right] \qquad (16)$$
V H = 1 2 dr dr ′ v 2 (r − r ′ )[|Φ(r)| 2 |Φ(r ′ )| 2 + 2|Φ(r)| 2ñ (r ′ ) +ñ(r)ñ(r ′ )],(17)
where the non-condensate local density is defined bỹ
n(r) ≡ ψ † (r)ψ(r) = n(r) − |Φ(r)| 2 .(18)
The last two terms in (17) come from theψ †ψ and (ψ †ψ ) 2 terms in (14).
Calculating the variational derivatives in (8) using F [n, Φ] in (15), one finds
δF s [n, Φ] δn(r) + v H (r) + δF xc [n, Φ] δn(r) + v(r) = 0 δF s [n, Φ] δΦ(r) + δF xc [n, Φ] δΦ(r) + η(r) = 0,(19)
where the Hartree field is defined as
v H (r) ≡ dr ′ v 2 (r − r ′ )n(r ′ ).(20)
Similarly, using (8) for the auxiliary system defined above, one finds
δF s [n, Φ] δn(r) + v s (r) = 0 δF s [n, Φ] δΦ(r) + η s (r) = 0.(21)
Combining the results in (19) and (21), we conclude that the density and order parameters of the auxiliary system will be identical with the actual system if
v s (r) = v(r) + v H (r) + δF xc [n, Φ] δn(r) = v s [n, Φ] η s (r) = η(r) + δF xc [n, Φ] δΦ(r) = η s [n, Φ].(22)
This gives v s and η s as explicit functionals of n(r) and Φ(r), once we have decided on a specific form for the functional F xc [n, Φ].
IV. BOGOLIUBOV GAS AS AUXILIARY SYSTEM
We now turn to the auxiliary system defined by (9), withV s given by (16). Using (13), one finds thatĤ
+ 1 2 dr dr ′ v 2 (r − r ′ ) 2Φ * (r ′ )Φ(r)ψ † (r)ψ(r ′ ) +Φ(r ′ )Φ(r)ψ † (r)ψ † (r ′ ) + Φ * (r ′ )Φ * (r)ψ(r)ψ(r ′ ) ,(23)
where the operator L̂ is defined by

$$\hat L \equiv -\frac{\nabla^2}{2m} + v_s(r) - \mu \qquad (24)$$
This Hamiltonian is similar in structure to the one obtained for an inhomogeneous weakly interacting Bose gas at T = 0, which has been extensively studied in the literature [24,25].
We solve it using similar techniques. In order to diagonalize (23), we first eliminate the terms linear in ψ̃ and ψ̃† by requiring that Φ(r) satisfy the equation

$$\hat L\Phi(r) + \eta_s(r) = 0 \qquad (25)$$
Eq. (25) is a sort of generalized Gross-Pitaevskii equation for Φ, but now in the context of density functional theory rather than for a dilute Bose gas. This connection is easily seen by setting F xc [n, Φ] in (15) to zero, in which case (22)
simplifies to

$$v_s(r) = v(r) + v_H(r)\,, \qquad \eta_s(r) = \eta(r) \qquad (26)$$
and (25) reduces to
$$\left[-\frac{\nabla^2}{2m} + v(r) - \mu + \int dr'\,v_2(r-r')\,n(r')\right]\Phi(r) + \eta(r) = 0 \qquad (27)$$
Setting the external fields v and η to zero, we recover the well-known Gross-Pitaevskii equation [26,24] for a dilute inhomogeneous gas at T = 0. In that case, since all the atoms are Bose-condensed, the density n(r) in (27) can be approximated by n_c(r) = |Φ(r)|², and then (27) is a closed non-linear Schrödinger equation (NLSE) for Φ(r).
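To make the NLSE concrete, here is a minimal sketch (not from the paper) that finds the ground state of the GP limit of (27) for a contact interaction v_2(r) = v_0δ(r), on a 1D grid with an illustrative harmonic trap as the external potential; all numerical values are arbitrary choices:

```python
# Minimal sketch: ground state of the Gross-Pitaevskii / NLSE limit of (27),
# contact interaction v2 = v0*delta(x), illustrative 1D harmonic trap,
# found by imaginary-time split-step propagation. hbar = m = 1.
import numpy as np

L, M, dt, v0, N_atoms = 20.0, 256, 1e-3, 0.5, 50.0
x = np.linspace(-L/2, L/2, M, endpoint=False)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(M, d=dx)
v_ext = 0.5 * x**2                       # illustrative trap potential v(r)

phi = np.exp(-x**2).astype(complex)      # initial guess for Phi(x)
phi *= np.sqrt(N_atoms / (np.sum(np.abs(phi)**2)*dx))

for _ in range(10000):
    # split-step: half kinetic, full potential + nonlinearity, half kinetic
    phi = np.fft.ifft(np.exp(-0.5*dt*0.5*k**2) * np.fft.fft(phi))
    phi *= np.exp(-dt*(v_ext + v0*np.abs(phi)**2))
    phi = np.fft.ifft(np.exp(-0.5*dt*0.5*k**2) * np.fft.fft(phi))
    phi *= np.sqrt(N_atoms / (np.sum(np.abs(phi)**2)*dx))  # renormalize

# chemical potential of the converged GP state
mu = np.sum(np.conj(phi)*(np.fft.ifft(0.5*k**2*np.fft.fft(phi))
        + (v_ext + v0*np.abs(phi)**2)*phi)).real*dx / N_atoms
print("chemical potential mu ≈", mu)
```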
Assuming that Φ(r) satisfies (25), our auxiliary system Hamiltonian (23) reduces to

$$\hat H^s_{\eta_s,v_s} = \int dr\,\eta_s^*(r)\Phi(r) + \int dr\,\tilde\psi^\dagger(r)\left[-\frac{\nabla^2}{2m} + v_s(r) - \mu\right]\tilde\psi(r) + \int dr\,dr'\,v_2(r-r')\,\Phi^*(r')\Phi(r)\tilde\psi^\dagger(r)\tilde\psi(r')$$

$$+ \frac{1}{2}\int dr\,dr'\,v_2(r-r')\left[\Phi(r')\Phi(r)\tilde\psi^\dagger(r)\tilde\psi^\dagger(r') + \Phi^*(r')\Phi^*(r)\tilde\psi(r)\tilde\psi(r')\right] \qquad (28)$$
The Hartree contribution is contained in v_s(r) [see (22)]. The third term in (28) gives the exchange contribution, while the last term involves the anomalous contributions involving ψ̃†ψ̃† and ψ̃ψ̃ characteristic of a Bose-condensed system. The quadratic expression given by (28) can be diagonalized by the usual Bogoliubov transformation [25]
$$\tilde\psi(r) = \sum_j\left[u_j(r)\hat\alpha_j - v_j^*(r)\hat\alpha_j^\dagger\right]\,, \qquad \tilde\psi^\dagger(r) = \sum_j\left[u_j^*(r)\hat\alpha_j^\dagger - v_j(r)\hat\alpha_j\right] \qquad (29)$$
where the new "quasiparticle" operators α̂_j and α̂_j† satisfy Bose commutation relations. One finds that the amplitudes u_j(r) and v_j(r) are given by the generalized Bogoliubov coupled equations:
$$\left[-\frac{\nabla^2}{2m} + v_s(r) - \mu\right]u_j(r) + \int dr'\,v_2(r-r')\,\Phi(r)\Phi^*(r')u_j(r') - \int dr'\,v_2(r-r')\,\Phi(r)\Phi(r')v_j(r') = E_j u_j(r)$$

$$\left[-\frac{\nabla^2}{2m} + v_s(r) - \mu\right]v_j(r) + \int dr'\,v_2(r-r')\,\Phi^*(r)\Phi(r')v_j(r') - \int dr'\,v_2(r-r')\,\Phi^*(r)\Phi^*(r')u_j(r') = -E_j v_j(r) \qquad (30)$$
One can prove that the eigenvalues E_j are real and that one must choose solutions such that E_j ≥ 0. For details of how (30) is derived, we refer to a similar calculation by Fetter [25] for the case of a contact interaction v_2(r) = v_0δ(r). However, we note that the long-range tail of the He-He interatomic potential is very important when dealing with the surface region of liquid Helium [20].
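For orientation, the following sketch (mine, not the paper's) specializes (30) to a uniform gas with a contact interaction, where the equations reduce to a 2×2 matrix problem per wave vector whose positive eigenvalue reproduces the textbook Bogoliubov spectrum:

```python
# Sketch: for a uniform gas with v2 = v0*delta(r), the coupled equations
# (30) per wave vector k reduce to a 2x2 matrix; the positive eigenvalue
# matches E_k = sqrt(eps_k*(eps_k + 2*v0*nc)). v0 and nc are arbitrary.
import numpy as np

v0, nc = 0.8, 1.0
for kk in [0.1, 0.5, 1.0, 3.0]:
    eps = 0.5*kk**2        # hbar = m = 1
    # the lower row carries a minus sign from the -E_j v_j(r) side of (30)
    Mbdg = np.array([[eps + v0*nc, -v0*nc],
                     [v0*nc, -(eps + v0*nc)]])
    Ek = np.max(np.linalg.eigvals(Mbdg).real)
    print(f"k = {kk}: E_k = {Ek:.5f}, "
          f"analytic = {np.sqrt(eps*(eps + 2*v0*nc)):.5f}")
```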
With u j and v j given by the solutions of (30), the Hamiltonian in (28) can be shown to reduce to
$$\hat H^s_{\eta_s,v_s} = \int dr\,\eta_s^*(r)\Phi(r) - \sum_j E_j\int dr\,|v_j(r)|^2 + \sum_j E_j\,\hat\alpha_j^\dagger\hat\alpha_j \qquad (31)$$
which describes a non-interacting gas of quasiparticles of energy E_j. The ground state |Ψ_s⟩ of this Hamiltonian is defined by α̂_j|Ψ_s⟩ = 0. Thus the ground state expectation value of (31) is given by
$$E^s_{\eta_s,v_s} = \int dr\,\eta_s^*(r)\Phi(r) - \sum_j E_j\int dr\,|v_j(r)|^2 \qquad (32)$$
Using (29), the non-condensate local density in (18) is given by (T = 0)
$$\tilde n(r) \equiv \langle\Psi_s|\tilde\psi^\dagger(r)\tilde\psi(r)|\Psi_s\rangle = \sum_j|v_j(r)|^2 \qquad (33)$$
Inserting (31) into (12) and using (25) and (18), one finds after a little algebra that
$$F_s[n,\Phi] = \int dr\,\Phi^*(r)\left[-\frac{\nabla^2}{2m} - \mu\right]\Phi(r) - \sum_j\int dr\,|v_j(r)|^2\left[E_j + v_s(r)\right] \qquad (34)$$
We recall that the Hartree contribution was separated out in (15) and consequently it is not contained in Ĥ^s_{η_s,v_s} defined in (23). Thus it is not included in the energy eigenvalues E_j given by (30), but rather appears as a separate contribution from the diagonal potential v_s(r) in (22).
The key feature of the above results is that one can have a depletion of the condensate, as shown by the finite value of ñ(r) in (33). Thus the auxiliary system defined by (23) and (22) can be used to find both Φ(r) and n(r) [using (25) and (30)] even when n_c(r) ≡ |Φ(r)|² and n(r) are quite different (as in superfluid ⁴He).
If we had chosen a non-interacting Bose gas as our KS reference system [i.e., set V̂_s = 0 in (9)], the last term in (23) would be absent. The linear terms in ψ̃ and ψ̃† can be eliminated as before by requiring that Φ(r) satisfy (25). Then Ĥ^s_{η_s,v_s} is easily diagonalized, with u_j(r) given by the solution
$$\left[-\frac{\nabla^2}{2m} + v_s(r) - \mu\right]u_j(r) = E_j u_j(r) \qquad (35)$$
and v_j(r) = 0. It immediately follows from (33) that such a KS reference system leads to no depletion, with ñ(r) = 0. This means that using a free Bose gas as a reference system inevitably leads to n_c(r) ≡ |Φ(r)|² and n(r) being identical, no matter what we choose for the functional F_xc[n, Φ]. This apparent "insufficiency" of the non-interacting Bose gas is somewhat surprising and deserves further study. It seems to be associated with the well-known fact that a weakly interacting Bose-condensed gas has qualitatively different properties than an ideal Bose gas.
It is convenient at this point to summarize the various steps in the KS procedure (a schematic numerical sketch of the full loop follows the list):
(1) One chooses some approximation for the "exchange-correlation" functional F xc [n, Φ] defined in (15), giving it as an explicit functional of the local quantities n(r) and Φ(r).
This is the big step containing the "physics", and has not been addressed in the present paper.
(2) The potentials v s (r) and η s (r) are then computed using (22) and given as functionals of n(r) and Φ(r).
(3) Evaluating v s and η s using an assumed (or trial) value of n(r), the GP-type equation (25) is solved for Φ(r), i.e.,
$$\left[-\frac{\nabla^2}{2m} + v_s[n,\Phi] - \mu\right]\Phi(r) + \eta_s[n,\Phi] = 0 \qquad (36)$$
(4) Evaluating v_s[n, Φ] using the trial n(r) and the solution for Φ(r) given in step (3), the generalized Bogoliubov equations in (30) can be solved to determine E_j, u_j(r) and v_j(r).
(5) With these results, one can calculate the non-condensate local density ñ(r) in (33) and hence finally obtain n(r) from (18), namely
$$n(r) = |\Phi(r)|^2 + \sum_j|v_j(r)|^2 \qquad (37)$$
Using this new expression for n(r), one can go back to step (3) and repeat the procedure, until self-consistent values of Φ(r) and n(r) are obtained.
(6) These values of n(r) and Φ(r) are then inserted into the energy functional of the actual Bose system of interest, as given by (7) and (15).
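The following is a schematic, runnable sketch of steps (2)-(5) (mine, not the paper's code) for a 1D toy model with a contact interaction, an illustrative harmonic v(x), F_xc = 0, and η → 0; all numerical parameters are arbitrary choices:

```python
# Schematic sketch of the KS self-consistency loop: contact interaction
# v0*delta(x), harmonic external potential, F_xc = 0 and eta -> 0, on a
# small 1D grid. hbar = m = 1. Illustrates the logic only.
import numpy as np

M, L, v0, N_tot = 80, 12.0, 0.4, 10.0
x = np.linspace(-L/2, L/2, M)
dx = x[1] - x[0]
# kinetic operator -0.5 d^2/dx^2 by finite differences
T = (np.diag(np.full(M, 1.0)) - 0.5*np.diag(np.ones(M-1), 1)
     - 0.5*np.diag(np.ones(M-1), -1)) / dx**2
v_ext = 0.5 * x**2
n = np.full(M, N_tot/L)                    # trial total density
ntilde = np.zeros(M)

for it in range(12):
    v_s = v_ext + v0*n                     # step (2), with F_xc = 0
    # step (3): Phi from the lowest mode of L-hat; mu fixed by eta -> 0
    w, U = np.linalg.eigh(T + np.diag(v_s))
    mu = w[0]
    Nc = max(N_tot - np.sum(ntilde)*dx, 0.0)
    phi = U[:, 0]/np.sqrt(dx)*np.sqrt(Nc)  # condensate wavefunction
    # step (4): generalized Bogoliubov matrix from (30), contact interaction
    h = T + np.diag(v_s - mu + v0*phi**2)
    B = np.block([[h, -v0*np.diag(phi**2)],
                  [v0*np.diag(phi**2), -h]])
    E, W = np.linalg.eig(B)
    ntilde = np.zeros(M)
    for j in np.where(E.real > 1e-8)[0]:   # keep the E_j > 0 modes
        u, v = W[:M, j].real, W[M:, j].real
        norm = np.sum(u**2 - v**2)*dx      # Bogoliubov normalization
        if norm > 1e-10:
            ntilde += v**2/norm            # step (5), eq. (33)
    n = 0.5*n + 0.5*(phi**2 + ntilde)      # eq. (37) with mixing, back to (3)

print("condensate depletion ≈", np.sum(ntilde)*dx/N_tot)
```

The mixing in the last line is a standard stabilization device for such fixed-point iterations; in a realistic calculation one would of course use a finite-range v_2 and a nonzero F_xc.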
V. CONCLUSIONS
As in the case of interacting Fermi systems [15,17], it is important to emphasize that while the auxiliary Bose system described in Section IV corresponds to a dilute, weakly interacting
Bose gas, we are only using it to find the local density n(r) and local order parameter Φ(r) of a Bose-condensed liquid. In particular, the quasiparticle excitations which are described by the generalized Bogoliubov equations of motion in (30) have, in general, no "direct" physical significance in the true system.
The above procedure gives a well-defined scheme to find (at T = 0) the energy, local density and local condensate density in superfluid ⁴He with a free surface, once one has chosen some explicit approximation for the exchange-correlation functional F_xc[n, Φ] in (15). As in the case of normal and superconducting metals, a good approximation for this functional is the key to obtaining reasonable results using the density functional formalism. We hope to discuss this problem elsewhere. In the case of superfluid ⁴He, there has been considerable work [13] on constructing functionals which only depend on the total local density n(r), in which one tries to build in known experimental information (compressibility, ground state energy, surface tension, etc.). In developing the equivalent approximations for use in our new formalism, the first thing we need to understand better is the ground-state energy of bulk liquid ⁴He as a function of the density n and the condensate density n_0. More Monte Carlo calculations would be very useful.
One immediate implication of the present theory is contained in the first term of (34), which can be rewritten in the form (for clarity, we insert ℏ)

$$\int dr\,\Phi^*(r)\left(-\frac{\hbar^2\nabla^2}{2m}\right)\Phi(r) = \int dr\,\frac{\hbar^2}{2m}\left|\nabla\sqrt{n_c(r)}\right|^2 + \frac{1}{2}\int dr\,m\,n_c(r)\,v_s^2(r) \qquad (38)$$
where we have used Φ(r) = √(n_c(r)) e^{iS(r)}, and m v_s(r) ≡ ℏ∇S(r) is the local superfluid velocity (in the ground state, we can set v_s(r) = 0). In contrast with the first term in (38), currently available density functional theories [13] always start with a term of the form
$$\int dr\,\frac{\hbar^2}{2m}\left|\nabla\sqrt{n(r)}\right|^2 \qquad (39)$$
involving the total local density n(r). This corresponds to the kinetic energy functional of an inhomogeneous non-interacting Bose gas constrained to have the correct local density of the Bose liquid under consideration. We believe that the first term in (38) has a more natural as well as more sound theoretical basis when dealing with Bose-condensed fluids, and should be used in developing improved density functional theories of superfluid 4 He [20].
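The difference between the two kinetic terms is easy to quantify. The following sketch (mine; the surface-like profile and the condensate fraction are arbitrary illustrative choices, loosely mimicking ~10% condensate in the bulk with n_c → n in the dilute tail) compares the condensate form in (38) with the total-density form (39):

```python
# Sketch: numerical comparison of the condensate kinetic term of (38),
# (hbar^2/2m)|grad sqrt(nc)|^2, with the total-density form (39),
# for an illustrative 1D surface-like density profile. hbar = m = 1.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
n  = 1.0/(1.0 + np.exp(2.0*(x - 3.0)))  # total density: bulk -> dilute tail
nc = n*(1.0 - 0.9*n)                    # ~10% condensate in bulk, nc -> n in tail

ekin = lambda d: 0.5*np.sum(np.gradient(np.sqrt(d), dx)**2)*dx
print("kinetic term of (38), |grad sqrt(nc)|^2 :", ekin(nc))
print("kinetic term of (39), |grad sqrt(n)|^2  :", ekin(n))
```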
At the present time, the only detailed discussion of the properties of a spatially inhomogeneous Bose-condensed system has been for a weakly interacting atomic gas trapped in an external potential well [23]. In such systems [22], the condensate density n_c(r) is strongly peaked at the center of the trap (due to the macroscopic occupation of the lowest quantum state) and has a very different spatial dependence from the non-condensate density ñ(r). Our density functional formalism (extended to finite temperatures) should be useful in calculating the properties of such trapped Bose-condensed gases.
$$\hat{K}_{\eta_s, v_s} = \int dr\, \Phi^*(r)\left[L\,\Phi(r) + \eta_s(r)\right] + \int dr\, \eta_s^*(r)\,\Phi(r) + \int dr\, \psi^\dagger(r)\left[L\,\Phi(r) + \eta_s(r)\right] + \int dr\, \left[\Phi^*(r)\,L + \eta_s^*(r)\right]\psi(r) + \int dr\, \psi^\dagger(r)\,L\,\psi(r)$$
ACKNOWLEDGEMENTS

I would like to acknowledge many stimulating discussions with Sandro Stringari, which led to this work. I also thank Hardy Gross and the referee for useful comments. This work was done while on sabbatical at the University of Trento, which provided financial support and a congenial atmosphere. I also thank NSERC for a research grant.
[1] P.C. Hohenberg and P.C. Martin, Ann. Phys. (N.Y.) 34, 291 (1965).
[2] N.N. Bogoliubov, J. Phys. U.S.S.R. 11, 23 (1947).
[3] O. Penrose, Phil. Mag. 42, 1373 (1951).
[4] S.T. Beliaev, Sov. Phys.-JETP 7, 289 (1958).
[5] N. Hugenholtz and D. Pines, Phys. Rev. 116, 489 (1959).
[6] A. Griffin, Excitations in a Bose-Condensed Liquid (Cambridge, N.Y., 1993).
[7] A. Bijl, Physica 7, 869 (1940).
[8] R.P. Feynman, Phys. Rev. 94, 262 (1954).
[9] E. Feenberg, Theory of Quantum Liquids (Academic, N.Y., 1969); C.E. Campbell, in Progress in Liquid Physics, ed. by C.A. Croxton (Wiley, London, 1977), ch. 6.
[10] For references, see E. Krotscheck, G.-X. Qian and W. Kohn, Phys. Rev. B 31, 4245 (1985); K.A. Gernoth, J.W. Clark, G. Senger and M.L. Ristig, Phys. Rev. B 49, 15836 (1994).
[11] See, for example, C.C. Chang and M. Cohen, Phys. Rev. A 8, 3131 (1973).
[12] I.B. Mantz and D.O. Edwards, Phys. Rev. B 20, 4518 (1979).
[13] C. Ebner and W.F. Saam, Phys. Rev. B 12, 923 (1975); S. Stringari and J. Treiner, Phys. Rev. B 36, 8369 (1987); for more recent references, see F. Dalfovo, A. Lastri, L. Pricaupenko, S. Stringari and J. Treiner, Phys. Rev. B 52, 1193 (1995).
[14] P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
[15] W. Kohn and L.J. Sham, Phys. Rev. 140, A1133 (1965).
[16] D. Mermin, Phys. Rev. 137, A1441 (1965).
[17] L.N. Oliveira, E.K.U. Gross and W. Kohn, Phys. Rev. Lett. 60, 2430 (1988).
[18] E. Runge and E.K.U. Gross, Phys. Rev. Lett. 52, 997 (1984).
[19] O.J. Wacker, R. Kümmel and E.K.U. Gross, Phys. Rev. Lett. 73, 2915 (1994).
[20] A. Griffin and S. Stringari, Phys. Rev. Lett., in press.
[21] Y.A. Nepomnyashchy and L.P. Pitaevskii, Physica B 194-196, 543 (1994).
[22] M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman and E.A. Cornell, Science 269, 198 (1995).
[23] V.V. Goldman, I.F. Silvera and A.J. Leggett, Phys. Rev. B 24, 2870 (1981); D.A. Huse and E.D. Siggia, Journ. Low Temp. Phys. 46, 137 (1982).
[24] A.L. Fetter and J.D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, N.Y., 1971), p. 492ff.
[25] A.L. Fetter, Ann. Phys. (N.Y.) 70, 67 (1972).
[26] L.P. Pitaevskii, Sov. Phys. JETP 13, 451 (1961); E.P. Gross, Nuovo Cimento 20, 454 (1961).
| [] |
[
"Number 4 On-the-Fly Power-Aware Rendering",
"Number 4 On-the-Fly Power-Aware Rendering"
] | [
"Yunjin Zhang \nState Key Lab of CAD&CG\nZhejiang University\n\n",
"Marta Ortin \nUniversidad de Zaragoza\nI3A\n",
"Victor Arellano \nUniversidad de Zaragoza\nI3A\n",
"Rui Wang [email protected] \nState Key Lab of CAD&CG\nZhejiang University\n\n",
"Diego Gutierrez \nUniversidad de Zaragoza\nI3A\n",
"Hujun Bao \nState Key Lab of CAD&CG\nZhejiang University\n\n",
"Yunjin Zhang ",
"Marta Ortin ",
"Victor Arellano ",
"Rui Wang ",
"Diego Gutierrez ",
"Hujun Bao ",
"Y Zhang ",
"M Ortin ",
"V Arellano ",
"R Wang ",
"D Gutierrez ",
"H Bao "
] | [
"State Key Lab of CAD&CG\nZhejiang University\n",
"Universidad de Zaragoza\nI3A",
"Universidad de Zaragoza\nI3A",
"State Key Lab of CAD&CG\nZhejiang University\n",
"Universidad de Zaragoza\nI3A",
"State Key Lab of CAD&CG\nZhejiang University\n"
] | [
"Eurographics Symposium on"
] | Figure 1: We propose a novel on-the-fly, power-aware framework that selects the optimal rendering configuration to maximize visual quality, while keeping GPU power consumption within a power budget. Different from existing approaches, our method requires only a few minutes of initialization executed once per platform. The figure shows results for the Hall scene, where our power-optimal settings yield images of similar quality as Maximum Quality, with significantly lower power consumption. The charts on the right show power consumption and quality error (measured with the perceptually-based SSIM metric).AbstractPower saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life, while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Different from the current state of the art, our method does not require precomputation of the whole camera-view space, nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, being transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer, and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings. | 10.1111/cgf.13483 | [
"https://arxiv.org/pdf/1807.11760v1.pdf"
] | 51,877,799 | 1807.11760 | cd6c6f6133e324753db8702a4aa721063570fc84 |
On-the-Fly Power-Aware Rendering

Volume 37 (2018), Number 4
Yunjin Zhang
State Key Lab of CAD&CG
Zhejiang University
Marta Ortin
Universidad de Zaragoza
I3A
Victor Arellano
Universidad de Zaragoza
I3A
Rui Wang [email protected]
State Key Lab of CAD&CG
Zhejiang University
Diego Gutierrez
Universidad de Zaragoza
I3A
Hujun Bao
State Key Lab of CAD&CG
Zhejiang University
Yunjin Zhang
Marta Ortin
Victor Arellano
Rui Wang
Diego Gutierrez
Hujun Bao
Y Zhang
M Ortin
V Arellano
R Wang
D Gutierrez
H Bao
Eurographics Symposium on Rendering 2018. DOI: 10.1111/cgf.13483. *Joint first authors. †Corresponding author.
Figure 1: We propose a novel on-the-fly, power-aware framework that selects the optimal rendering configuration to maximize visual quality, while keeping GPU power consumption within a power budget. Different from existing approaches, our method requires only a few minutes of initialization executed once per platform. The figure shows results for the Hall scene, where our power-optimal settings yield images of similar quality as Maximum Quality, with significantly lower power consumption. The charts on the right show power consumption and quality error (measured with the perceptually-based SSIM metric).

Abstract

Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life, while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Different from the current state of the art, our method does not require precomputation of the whole camera-view space, nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, being transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer, and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings.
Introduction
Current mobile phones and other battery-powered devices incorporate increasingly complex functionalities and applications, which in turn lead to higher power consumption. Advances in computer graphics have produced highly sophisticated real-time rendering algorithms, which are used in games, data visualization, or virtual reality. To extend the limited battery life, energy saving becomes a primary goal [AMS08, JGDAM12]. A lot of research effort has recently been oriented towards characterising the power consumption of rendering algorithms and finding strategies to control the amount of expended energy [SPP * 15, PLS11, APX14, WYM * 16].
Wang et al. [WYM * 16] proposed a state-of-the-art power-saving framework, which traded power consumption for image quality at rendering time. The system was capable of producing high-quality images, while expending significantly less energy. Unfortunately, the system required a pre-processing step of several days, which had to be performed for every different scene to be rendered. Moreover, as a consequence of such precomputation, it could not handle dynamic scenes, and required many memory accesses to fetch the stored data, hampering performance.
In this paper, we propose a novel real-time, power-saving framework that finds the optimal tradeoff between power consumption and image quality on-the-fly, with only a few minutes of initialization. This is a significantly harder problem, which in turn makes the framework useful for any new game or application without any additional initialization, and enables handling dynamic scenes for the first time. It predicts power consumption and estimates the quality error of different rendering configurations at runtime, and leverages those predictions to adjust the quality level of different shaders, in order to tune the expended energy and keep it within a user-given power budget.
The main challenge for such a real-time power-efficient rendering framework is running without affecting user experience. Key to solving this problem are our runtime power prediction and quality error estimation strategies: First, our novel power prediction model (Section 5) allows us to anticipate the power consumption for every rendering configuration, without having to measure the actual energy expended. Second, our quality error estimation (Section 6) obtains the error for all configurations without the need to render them. These two components yield extremely accurate predictions, which completely remove the need for the time consuming precomputation of the entire camera-view-space, required for every different scene in Wang et al.'s framework [WYM * 16].

We show results with an in-house, OpenGL prototype implementation that includes six different shaders, with three different quality levels for each one, which yields 729 different shader combinations. We demonstrate the flexibility of our approach by running it on two different platforms: a desktop PC and a mobile device.
Related Work
The reduction of power consumption is a growing concern in many different areas, including both algorithms and hardware architecture [KY14]. Many recent examples have been shown regarding display technology [MWDG13, CWC * 14, CCC * 16], user interfaces [DCZ09], or cloud photo enhancement [GSC * 15], to name a few. This issue is specially relevant in mobile devices with limited battery life [ILMR03]. We focus here on the particular aspects more closely related to our work: energy saving in rendering and GPU power modeling.
Energy saving in rendering. The power efficiency of several existing graphics algorithms has been extensively examined on different GPUs, as a first step towards reducing the power consumption associated to rendering [JGDAM12]. Power limitations are specially relevant in GPUs for mobile devices, and power reduction techniques such as tiling architectures and data compression have been broadly explored [AMS08]. Stravrakis et al. [SPP * 15] employ dynamic voltage scaling based on framerate, and implement an energy-aware balancing algorithm that dynamically selects the rendering parameters (geometrical complexity and texture resolution) to save power. Reducing the precision of arithmetic operations can also effectively reduce energy consumption in pixel shaders [PLS11]. With respect to hardware-based optimizations, Arnau et al. [APX14] observe that many fragments are repeatedly rendered in different frames, and exploit this redundancy using fragment memoization.
GPU power modeling. GPU power can be modeled by considering the static and dynamic power of each one of its architectural units (floating point unit, ALU, cache, memory...) [HK10]. Instead, we aim at predicting power consumption using only rendering information, in order to obtain a model directly related to scene complexity. Vatjus-Anttila et al.
[VAKH13] proposed a model for GPU power consumption taking into account the contributions of three different primitives separately (batches, triangles, and texels), and combining them as a weighted sum. Different from this approach, our model includes render passes, takes into account all primitives simultaneously, includes the number of fragment shader invocations instead of texels as a better predictor of power consumption, and adapts in real-time to changes in the scene. Besides, Vatjus-Anttila et al. need to include an estimated percentage of backfacing and depth culled primitives in order to improve the accuracy of their model. In contrast, we obtain the precise number of primitives used in each stage of the GPU pipeline and use them directly. This allows us to handle more complex, dynamic scenes, and leads to much higher prediction accuracy.
Recently, Wang et al. developed a power-optimal rendering framework for mobile devices [WYM * 16], based on Pareto frontiers in power-error space. Despite the very good results, the method requires several days of precomputation of the whole camera-view space for each scene. This is impractical, limits the application of the method to static scenes only, imposes large memory requirements, and forces all novel views produced at runtime to be interpolated, which may lead to large errors even in static scenes with large content changes between views (e.g. unoccluded objects).
In contrast, we introduce a novel real-time power prediction model and an error estimation mechanism, which can handle new games and applications without any specific precomputation. It adapts to the scene being rendered in real-time, thus being able to handle dynamic scenes and effects while providing very high accuracy for scenes with different characteristics and complexity.
Problem definition
We consider the rendering process as composed of multiple rendering passes that define the visual effects (shadow mapping, reflections, antialiasing...), as illustrated in Figure 2. Each pass is executed with a shader with a particular quality level. The input of the rendering process is thus a rendering configuration s (a vector describing the sequence of shaders corresponding to each rendering pass), and the camera parameters c (position and view). In the s vector, the i th component represents the shader quality level used for the i th pass. The contributions from all the passes are combined by a function f to generate the final image; generalizing the rendering process as a function f allows us to implicitly include forward and deferred rendering in our framework.
Table 1 sums up all the symbols used in the paper. Different rendering configurations yield results of varying visual quality. Let s_best denote the rendering settings that generate the best quality image. Similar to the recent work of Wang et al. [WYM * 16], we can define the quality error e(s, c) of any image produced by different rendering settings as

$$e(s, c) = \int_{x,y} \left\| f(s_{best}, c) - f(s, c) \right\| \, dx\, dy \quad (1)$$
where x, y define the pixel domain of the image, and || · || indicates the chosen norm.
Besides yielding varying quality errors, different s and c vectors also result in different power consumption P(s, c). In general, higher quality images require more power, which generates a tradeoff between power and error. Therefore, given a power budget P bgt , we look for a vector s such that e(s, c) is minimized, while P(s, c) remains within the budget:
$$s = \arg\min_{s} e(s, c) \quad \text{subject to} \quad P(s, c) < P_{bgt} \quad (2)$$

Different from Wang's work, we demonstrate in this paper how to predict P(s, c) and estimate e(s, c) in real-time. This is a significantly more difficult problem, since Wang's framework relied on a time-consuming precomputation (in the order of a few days) of the entire camera-view-space, to be performed for each particular game or scenario. Our implementation includes six different shaders (resolution, base shading, reflections, shadows, metals, and antialiasing), with three quality levels each, generating a total of 729 different rendering configurations.

Table 1: Symbols used throughout the paper, and their definition.

s: Rendering configuration, a vector with the shader quality level for each pass
s_i: Shader for pass i
s_best: Rendering configuration that generates best quality images
c: Camera position and view
f(s, c): Image rendering function with s and c
e(s, c): Image quality error with s and c, simplified as e(s)
P(s, c): Rendering power with s and c, simplified as P(s)
s_0: Rendering configuration where every pass uses shader quality level 0, same as s_best
s_i^l: Configuration where every pass uses shader quality level 0 except for pass i, which uses level l (l > 0)
s_i^lmax: Configuration where every pass uses shader quality level 0 except for pass i, which uses the worst quality level
k: Coefficient that relates the quality error of two shaders for the same pass
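To make the search in Equation (2) concrete, the following minimal sketch enumerates all 3^6 = 729 configurations and keeps the lowest-error one within the budget. The `predicted_power` and `estimated_error` lookups are hypothetical stand-ins for the components developed in Sections 5 and 6; they are not part of the paper's code.

```python
# A minimal sketch of the constrained search in Equation (2); the two
# dictionaries are assumed to map a configuration tuple to its predicted
# power and estimated error (hypothetical interfaces, not the paper's API).
import itertools

NUM_PASSES = 6              # resolution, base shading, reflections, shadows, metals, AA
QUALITY_LEVELS = (0, 1, 2)  # level 0 is the highest quality

def select_optimal(predicted_power, estimated_error, power_budget):
    """Return the configuration with the lowest estimated error among the
    3^6 = 729 candidates whose predicted power stays within the budget."""
    best, best_err = None, float("inf")
    for s in itertools.product(QUALITY_LEVELS, repeat=NUM_PASSES):
        if predicted_power[s] < power_budget and estimated_error[s] < best_err:
            best, best_err = s, estimated_error[s]
    return best
```

Because the candidate set is this small, exhaustive enumeration is cheap, which is one reason the runtime approach can dispense with the precomputed Pareto frontier discussed later.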
Algorithm Overview
Our algorithm is based on two key components, depicted on Figure 3: a power prediction model, and a quality error estimation mechanism. Our power prediction model fits a set of coefficients in an equation describing scene complexity, requires minimal initialization, and adapts in real-time to the content being displayed. In order to select the optimal rendering configuration within a power budget, we introduce a strategy to reuse fitted coefficients to predict power consumption in new configurations with minimal computation.
For our quality error estimation, we first compute the error of a frame with several rendering configurations (one per pass, six in total in our prototype implementation) by running the renders in the background and calculating the SSIM perceptual quality metric. We then use the obtained values to estimate the error for all the other configurations (729 in our case).
Our on-the-fly power-efficient rendering framework makes use of these two components to produce a final image with the highest possible quality, within a given power budget. Sections 5 and 6 explain the details of our power prediction model and error estimation mechanism, respectively, while Section 7 describes how the power prediction and error estimation steps are combined at runtime to obtain the optimal rendering configuration.
Power Prediction Model
We introduce our power prediction model based on scene complexity, describe the initialization and real-time fitting process, show how we can predict the power for all our rendering configurations by fitting our model for only one of them, and describe implementation details.
Our model
The rendering pipeline starts running when we command the GPU to draw a group of triangles (called a batch) already uploaded to the GPU memory. Those triangles go through several consecutive stages: first, the vertex shader executes per-vertex processing operations; then, rasterization runs per-primitive processing to cull hidden primitives; finally, the fragment shader interpolates per-fragment parameters, texturing, and coloring to generate the final pixel color [Hen]. When setting a fixed frame rate, the complexity of the scene determines the load imposed on the GPU, and power savings are achieved when the GPU is idle between consecutive frames. Our power prediction model takes into account consumption at the different stages of the GPU pipeline, according to scene complexity; it includes the number of batches b, vertex shader calls v, and fragment shader calls f (in the rest of the paper we will refer to these variables as primitives).
Similar to previous work [VAKH13], we observe that GPU power consumption follows an inverted exponential function between a minimum Pm and a maximum P M power, as the rendering load increases. Given a rendering configuration s and camera parameters c, we thus propose the following power consumption model:
$$P(s, c) = P_m + (P_M - P_m)\left(1 - e^{-\alpha}\right), \qquad \alpha = k_b \frac{b}{B} + k_v \frac{v}{V} + k_f \frac{f}{F} \quad (3)$$

Each one of the b, v, and f primitives is normalized by the number of elements that causes the GPU to saturate to its maximum capacity (B, V, and F, respectively). They are additionally weighted by coefficients k_b, k_v, and k_f, to take into account the relative impact of each one on the total power consumption. All the parameters depend on the rendering configuration s; additionally, the camera parameters c are implicitly included in the primitives b, v, and f; we omit these explicit dependencies in the rest of the paper for the sake of clarity.
Since the complete rendering process is composed of several passes, we extend our previous power model to represent each pass individually:
$$P = P_m + (P_M - P_m)\left(1 - e^{-\sum_{i=1}^{N} \alpha_i}\right), \qquad \alpha_i = k_{b_i} \frac{b_i}{B_i} + k_{v_i} \frac{v_i}{V_i} + k_{f_i} \frac{f_i}{F_i} \quad (4)$$

where N is the number of passes, and the subindex i for each variable indicates its per-pass value. Figure 4 depicts results for two example scenes (Hall and Subway) with ground truth measured power, showing how our equation yields a good prediction of power consumption: an average of only 1% error in Hall and 2% in Subway. The model proposed by Vatjus-Anttila et al. provides worse power predictions (11% error in Hall and 37% error in Subway) because it does not model the contribution of each rendering pass, it does not consider the number of fragment shader invocations, it uses an estimated correction factor to compute the number of primitives, and it does not adapt to the scene being rendered in real-time. More examples can be found in the supplementary material.
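For illustration, Equation (4) can be evaluated directly once the per-pass coefficients are known. The sketch below is only a transcription of the formula; the dictionary field names are placeholders rather than the paper's implementation.

```python
# A direct transcription of the per-pass power model in Equation (4);
# field names are illustrative placeholders.
import math

def predict_power(P_m, P_M, passes):
    """passes: one dict per rendering pass with primitives (b, v, f),
    saturation counts (B, V, F) and fitted coefficients (kb, kv, kf)."""
    alpha = sum(p["kb"] * p["b"] / p["B"]
                + p["kv"] * p["v"] / p["V"]
                + p["kf"] * p["f"] / p["F"] for p in passes)
    return P_m + (P_M - P_m) * (1.0 - math.exp(-alpha))
```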
Initialization
Given our power model, we first obtain P_m by sending an empty scene to the GPU; we then progressively increase the complexity of the scene to find the values of P_M, B_i, V_i, and F_i that saturate the GPU. This is an offline process that needs to be performed only once per hardware platform, and requires only 3 minutes, in contrast with the costly precomputation of the camera-view-space required for every scene in Wang et al.'s proposal [WYM * 16]. In addition, we also obtain the number of instructions and texel accesses for the rendering passes with each quality level, which will be explained in Section 5.4.
Real-Time fitting
A key aspect of our method is our real-time power fitting process, which allows us to obtain very high prediction accuracy without imposing a penalty in performance. When the scene starts running, we collect rendering samples † during a fitting window, and use them to fit coefficients k bi , k vi , and k f i using a linear regression on the power consumption and number of primitives (see Equation 4).
After the fitting takes place, we check periodically during runtime if the process has to be triggered again, to improve the accuracy of the prediction due to scene changes: First, we collect rendering samples during a prediction accuracy check window, and compare our predicted power consumption with the actual consumption measured in the GPU. If the average difference (computed during the frames of the prediction accuracy check window) is above a set threshold, we trigger the real-time fitting process again, and update the coefficients of our power model. This process is illustrated in Figure 5.
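One way to realize this regression is to linearize Equation (4): since the exponent is linear in the coefficients, y = -ln(1 - (P - P_m)/(P_M - P_m)) equals the sum of the α_i, and an ordinary least-squares solve recovers k_bi, k_vi, and k_fi. The sketch below follows this interpretation; the paper does not spell out its exact regression setup.

```python
# A sketch of the real-time fitting step under the linearization above;
# the sample layout is an assumption about how the fitting window is stored.
import numpy as np

def fit_coefficients(powers, primitives, P_m, P_M):
    """powers: (n_samples,) measured power per frame; primitives:
    (n_samples, 3*N) normalized counts b_i/B_i, v_i/V_i, f_i/F_i per pass."""
    y = -np.log(1.0 - (np.asarray(powers) - P_m) / (P_M - P_m))
    k, *_ = np.linalg.lstsq(np.asarray(primitives), y, rcond=None)
    return k  # concatenated k_bi, k_vi, k_fi for the N passes
```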
Reusing the fitted coefficients for other configurations
At any given moment, we are rendering the scene and fitting our power model with a single rendering configuration. However, each configuration leads to a different power consumption. Therefore, in order to find the optimal rendering configuration, we need to predict the power consumption for every one of them (729 in our prototype implementation); fitting the model for every configuration would obviously be impractical, and too computationally expensive.
To solve this problem, we leverage what our coefficients represent: k_bi, k_vi, and k_fi express the cost associated to batches, vertices, and fragments, respectively. In particular, k_bi is the cost of a rendering request and all the exchange of information between CPU and GPU required to perform the rendering task. In general, this cost is fixed for all the shader quality levels of a given pass, so we can reuse the same k_bi for all our rendering configurations. On the other hand, k_vi and k_fi are related to the number of executed instructions and texel accesses‡, so we can express the coefficients associated to pass i with shader quality level s_i as:

$$k_{v_i}(s_i) = \chi\, \mathrm{Ins}_{v_i}(s_i) + \psi\, \mathrm{Tex}_{v_i}(s_i) \quad (5)$$

$$k_{f_i}(s_i) = \chi\, \mathrm{Ins}_{f_i}(s_i) + \psi\, \mathrm{Tex}_{f_i}(s_i) \quad (6)$$

where Ins_vi(s_i) and Ins_fi(s_i) represent the average number of executed instructions for vertices and fragments in pass i, with shader quality level s_i; Tex_vi(s_i) and Tex_fi(s_i) are the average number of texel accesses for vertices and fragments in pass i with shader quality level s_i; χ and ψ are the costs associated to an instruction and a texel access. Since vertex shaders usually have the same number of instructions per triangle for each pass regardless of the quality level, and they do not access texels, we can simplify Equation 5 as k_vi = χ Ins_vi. We obtain Ins_vi, Ins_fi(s_i), and Tex_fi(s_i) during the initialization step, which is performed only once per platform (Section 5.2). We instrument the shaders while they are being loaded into the GPU to include atomic counters that automatically count the number of executed instructions and texel accesses for each primitive, and compute the average after running a dummy scene for a few minutes§. Therefore, the only unknowns are χ and ψ, which we obtain by solving the inconsistent overdetermined system of equations with a linear regression.

This strategy has one key advantage: by fitting only one rendering configuration, we obtain the coefficients k_bi, χ, and ψ, which do not depend on that particular rendering configuration. We can then reuse them together with Equations 4, 5 and 6, to obtain power predictions for all our configurations. Thus, the power consumption associated to different rendering configurations depends only on the number of executed instructions, texel accesses, and primitives. Figure 6 shows our resulting prediction reusing coefficients from a different configuration.

† A sample includes the measured power for a frame, and its corresponding number of batches, vertices, and fragments for each rendering pass.
‡ Even though any memory fetch could be issued from vertex and fragment shaders, we use the term texel access because they generally constitute the vast majority of memory fetches.
§ Any scene can be used to obtain the necessary information, as long as all the shaders in the rendering engine are executed. At 30 fps, running the scene for a couple of minutes allows us to obtain stable, averaged results.

Figure 6: Power consumption of the Hall and Subway scenes, for rendering configuration s_A = (l_0, l_2, l_1, l_1, l_2, l_2). We show ground truth measured data, predicted power with our real-time fitting, and predicted power with coefficients reused from the fitting of a different configuration s_B = (l_1, l_1, l_1, l_1, l_1, l_1).
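A minimal sketch of this step follows: the fitted coefficients of the current configuration are regressed against the instruction and texel counts gathered at initialization to recover χ and ψ (Equations 5 and 6), which then yield the coefficient of any other quality level. Names and shapes are illustrative assumptions.

```python
# A sketch of recovering the per-instruction and per-texel costs (chi, psi)
# from the fitted coefficients, following Equations (5)-(6).
import numpy as np

def solve_unit_costs(k_fitted, ins_counts, tex_counts):
    """Least-squares solve of the overdetermined system k = chi*Ins + psi*Tex."""
    A = np.column_stack([ins_counts, tex_counts])
    (chi, psi), *_ = np.linalg.lstsq(A, np.asarray(k_fitted), rcond=None)
    return chi, psi

def reuse_coefficient(chi, psi, ins, tex):
    """Predict k_vi or k_fi for another shader quality level."""
    return chi * ins + psi * tex
```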
Implementation details
Our prediction accuracy check window has a length of 10 frames, and our refitting window lasts 30 frames. These values are selected to enable a fast fitting process while collecting enough data to ensure the robustness of our fitted model. The complete fitting process and reuse of coefficients is completed in less than 1.5 s after the prediction accuracy check is launched. We set the accuracy threshold to 10% of the difference between Pm and P M . The linear regression to fit our power model takes an average of 2.6 ms, and the computations to reuse the coefficients for other configurations require 0.7 ms. To ensure that they do not interfere with the rendering process, we execute them in a separate thread on the CPU, while the GPU continues rendering the scene.
Quality Error Estimation
To select the optimal rendering configuration, we need to assess the quality error of any frame, by comparing it with its corresponding reference frame ϕr, rendered with the highest quality. Similar to Wang's budget rendering framework [WYM * 16], we use the perceptually-based Structural Similarity Index (SSIM) [WBSS04], with error given by e = 1 − SSIM.
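As a concrete illustration, the per-frame error can be computed with an off-the-shelf SSIM implementation; the snippet below uses scikit-image as a stand-in, since the paper does not specify its own SSIM code.

```python
# A minimal sketch of the quality error e = 1 - SSIM, using scikit-image's
# implementation as a stand-in for the paper's metric.
from skimage.metrics import structural_similarity as ssim

def quality_error(reference, frame):
    """reference, frame: HxWx3 uint8 arrays of the same resolution."""
    return 1.0 - ssim(reference, frame, channel_axis=-1)
```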
Let ϕ be the current frame for which we want to obtain the error, and let ϕs represent all other alternative renderings using all other rendering configurations. In Wang's previous offline approach, every high-quality reference frame ϕr, all their alternative renderings ϕs, and their associated quality errors had been precomputed in advance, based on a dense partitioning of the camera-view space of the scene. We face a much harder problem, since we aim to perform all necessary computations at runtime, on a dynamic scene. This involves, apart from obtaining the reference frame ϕr, rendering frames ϕs with all other rendering configurations, and calculating their associated quality error. In the rest of the section, we first describe how error is computed; however, given the large space of all rendering configurations, it is impossible to compute the error for all of them without visibly affecting performance. We thus introduce our approach to accurately estimate most errors, without the need to explicitly compute them.
Computing quality error
Since error computation should not interfere with the user experience, ϕr and all ϕs are rendered in the background, to a secondary frame buffer (not shown on the screen); ϕr is saved in a texture, while each ϕs is rewritten in successive renderings after its associated error has been calculated. To avoid a visible drop in the frame rate from rendering ϕr and all ϕs consecutively, we distribute the task over time. We save the rendering settings used to obtain ϕ, as well as the positions of moving objects ¶ , and restore them with an error computation frequency to render one frame in the background.
Distributing the rendering tasks over time avoids a sudden drop in performance, but in turn it makes the process excessively long for all the different rendering configurations. To overcome this, the quality error can be computed for just a small subset of rendering configurations. Since this step takes place after power prediction, such configuration subset can be selected from the configurations with higher power below the threshold, which are more likely to produce high quality images. The selection of the optimal rendering configuration would then choose the best configuration among the available ones. Alternatively, we propose an approximation to obtain estimated quality error values for all the configurations, without the need to compute all of them. This approximation is suitable for our application, since it allows us to obtain relative estimations to compare different configurations.
Estimating quality error for all rendering configurations
We make two important observations that allow us to estimate the error for all 729 rendering configurations by rendering and computing the error for only six of them (one per rendering pass).
For the following discussion, we define s_0 as the rendering configuration where every pass uses the highest shader quality (level 0, l_0), and s_i^l as the configuration where every pass uses l_0 except for pass i, which uses level l (l > 0). Our two observations are:
• First, we can approximate the quality error for a rendering configuration by adding up the error introduced by each individual pass. This means that the total error for any rendering configuration can be expressed as a sum of errors using only s_i^l rendering configurations. For example, with three rendering passes, the error for rendering configuration s = (l_2, l_0, l_1) can be obtained as:

$$e(s = (l_2, l_0, l_1)) = e(s_0^2) + e(s_2^1) \quad (7)$$
• Second, given two rendering configurations, s_i^{l_1} and s_i^{l_2}, with best quality shaders except for one pass i using shaders l_1 and l_2, their associated quality errors follow:

$$e(s_i^{l_1}) = k\, e(s_i^{l_2}) \quad (8)$$
The set of all coefficients k depends only on the rendering engine used, not on the particular scene being rendered, and thus can be computed beforehand (together with the initialization of our power model).
Combining these two observations, we can estimate the quality error for all our configurations by computing only the error for all s_i^lmax, that is, one configuration per pass. Figure 7 shows the accuracy of the estimated error using these simplifications. Note that all error computations and estimations are performed in real-time; only the k coefficients have to be obtained beforehand.

¶ In our implementation, we identify moving objects by the presence of animated skeletal meshes.
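Putting the two observations together gives a very small estimator. In the sketch below, `ratio[i][l]` denotes the assumed precomputed k factor e(s_i^l)/e(s_i^lmax) for pass i at level l, and `e_lmax[i]` is the error computed at runtime for s_i^lmax; both names are illustrative.

```python
# A sketch of the estimator built from Equations (7)-(8): per-pass errors
# add up, and levels of the same pass differ by a scene-independent factor k.
def estimate_error(config, e_lmax, ratio):
    """config: quality level per pass (0 = best, contributes no error)."""
    return sum(ratio[i][l] * e_lmax[i]
               for i, l in enumerate(config) if l > 0)
```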
Implementation details
We compute the error for a frame ϕs once every 10 frames. This error computation frequency was selected to minimize the length of the error computation and estimation process while guaranteeing that the GPU is able to keep up with the target frame rate. Alternatively, the error computation frequency can be adjusted at runtime based on the current and target frame rates.
Obtaining the SSIM index is computationally expensive, taking an average of 0.05 s. Therefore, after a frame ϕs has been rendered in the background, the quality error with SSIM is computed in parallel on a separate thread, while the GPU continues rendering the game. The GPU-CPU communication takes 0.02 seconds in the worst case, which corresponds to the exchange of data to compute the SSIM index for 2048x2048 resolution.
On-the-fly Power-Efficient Rendering
In the previous sections we have described our power prediction model and quality error estimation mechanism. We now show how those components are combined at runtime to select the optimal rendering configuration. Our periodic selection for the optimal configuration is followed by a temporal filtering to gradually transition to the new configuration, as illustrated in Figure 8. When the new configuration is set, we start the real-time fitting of our power model.
Selection of the optimal rendering configuration
Given our power predictions and quality error estimations, we aim to find the optimal rendering configuration for a given scene and camera parameters, minimizing quality error while meeting our power budget, as formulated in Equation 2. This selection process is triggered periodically, with a configuration selection frequency.
We first use our power prediction model to obtain the power consumption for the current frame with all possible rendering configurations. Wang et al. [WYM * 16] precompute the power and error for all configurations, producing a large two-dimensional power-error space. To simplify their runtime search for the optimal configuration, they also precompute the Pareto frontier to reduce their two-dimensional exploration of the power-error space to a one-dimensional search along the Pareto frontier. Instead, we predict the power consumption and estimate the error at runtime. This is difficult, as argued in the paper, but in turn it offers an additional advantage: since the power budget is known in advance, when we predict power consumption we can discard all the configurations with a power consumption higher than our budget. The costly two-dimensional search in power-error space is then reduced to a one-dimensional search in error space; this means that we can completely eliminate the need to compute the Pareto frontier.
For the configurations that meet our power budget, we estimate the quality error following the process described in Section 6: We render ϕ in the background with configurations s_0 and s_i^lmax, and compute their error (according to the error computation frequency to ensure a constant frame rate), and use Equations 7 and 8 to estimate the quality error for the rest of the configurations. Finally, since we already discarded all the configurations above the power budget, we simply need to choose the rendering configuration with the lowest quality error. This process corresponds to the purple box in Figure 8, and is illustrated by Figure 9, which shows how our rendering configurations are distributed in power-error space, and how our strategy is effective in selecting the one with lowest error within the power budget.
Since we do not rely on scene-specific precomputed data, and different scenes may have very different power requirements, which are thus not known in advance, setting the power budget as an absolute predefined value (as in Wang et al.'s proposal [WYM * 16]) is not practical. Therefore, we define the power budget as a percentage between the minimum and maximum power consumption of the scene, which is a very intuitive value to represent the trade-off between power consumption and image quality. For example, if our power budget is 40%, only configurations with predicted power lower than Pm + 0.4(P M − Pm) will be eligible.
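Expressed as code, this relative budget is a one-liner; the 0.4 fraction matches the 40% example above.

```python
# A sketch of turning the percentage budget into an absolute threshold.
def absolute_budget(P_m, P_M, fraction=0.4):
    return P_m + fraction * (P_M - P_m)
```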
Temporal filtering
To avoid a sudden change in image quality when a new rendering configuration is selected, the transition to the new configuration is performed smoothly with the temporal filtering introduced by Wang et al. [WYM * 16]. During an interpolation interval T, while the framework transitions from s old to snew, the effective rendering configuration used for rendering s eff is computed as:
$$s_{eff} = \left[\left(1 - \tfrac{t}{T}\right) s_{old} + \tfrac{t}{T}\, s_{new}\right] \quad (9)$$
where the brackets denote the closest integer and t is the time after starting the transition to the new configuration.
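A minimal sketch of Equation (9) is given below; each pass's interpolated level is rounded to the closest integer, and the clamping of t is an added safeguard not stated in the paper.

```python
# A sketch of the temporal filtering in Equation (9).
def effective_config(s_old, s_new, t, T):
    """Blend two configurations component-wise over the interval T."""
    w = min(max(t / T, 0.0), 1.0)  # clamp, in case t overshoots T
    return tuple(round((1.0 - w) * lo + w * ln)
                 for lo, ln in zip(s_old, s_new))
```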
Real-time fitting of the power model and reusing coefficients
Every time a new rendering configuration is set, our power prediction accuracy check is triggered (small orange box after temporal filtering in Figure 8). If the accuracy of our prediction is below a threshold, we refit our power model, as explained in Section 5.3. This happens twice in Figure 8, and is represented by larger orange boxes. The newly fitted coefficients are then used to update the power model for all other configurations by obtaining the cost associated to each instruction and texel access (Section 5.4 and third small orange box in Figure 8).
Implementation details
The configuration selection frequency triggers the process to select a new optimal rendering configuration 200 frames after the previous configuration was set. This frequency allows us to quickly detect changes in the scene while minimizing the impact of the associated computations. Refitting the power model and reusing the coefficients for other configurations (3.4 ms), predicting the power for all configurations (0.02 ms), and estimating the error and selecting the optimal configuration (0.03 ms) are executed on separate threads. The temporal filtering interval used for interpolation is 2 seconds.
Implementation
To show how our on-the-fly power-budget framework adapts to different hardware, we have implemented it on two different platforms: A desktop PC with an Intel Core i7-7700 and an NVIDIA Quadro P4000, and a mobile Qualcomm Snapdragon 660 (with a 8x Kryo 260 CPU and an Adreno 512 GPU).
Power Measurement
To measure the power usage of the graphics card in the desktop PC, we use the NVIDIA Management Library (NVML) [NVM15], which allows us to directly access the power usage of the GPU and its associated circuitry. The specifications report an accuracy of 5%. In our mobile device, we use an external source meter to directly supply the power of the device. We use a Keithley A2230-30-1, which provides APIs to access the instantaneous voltage and current (same setup used by Wang et al.
[WYM * 16]). During the stages of our algorithm when we have to collect rendering samples (for the power prediction accuracy check and real-time refitting), we measure the power consumption for every frame. In order to reduce variance, in our graphs we report the average power measured over 30 frames.
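On the desktop platform, the NVML readings can also be scripted, for example through the pynvml Python bindings; the sketch below only mirrors the sampling-and-averaging procedure described above and is not the paper's measurement code.

```python
# A sketch of GPU power sampling through NVML via the pynvml bindings.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def sample_power_watts():
    # nvmlDeviceGetPowerUsage reports milliwatts for the GPU and its circuitry.
    return pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0

def average_power(samples):
    """Average over a window (30 frames in the setup above) to reduce variance."""
    return sum(samples) / len(samples)
```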
Rendering Configurations
Our rendering framework runs at 30 frames per second in the desktop PC and at 10 frames per second in the mobile device, and has six passes, each one with shaders of three different quality levels; this amounts to a total of 729 different rendering configurations. The complete set of parameters and values of these shaders is given in Table 2. In particular, we have included:
Resolution: When setting the resolution of a frame, the number of fragments for other passes are proportionally scaled.
The resolution is technically not a pass, it sets the screen resolution, which affects other passes. However, it is included in the list of passes for convenience, because it has an effect on power consumption and quality error, and has to be considered as an additional degree of freedom when selecting the optimal rendering configuration.
Base Shading:
The simplest level is a cheap specular shader, which is improved with a better model for point lights in the next level. The best quality level implements microfacet-based shading.
Reflections: For objects with specular materials; it is a multipass shader where quality levels increase the number of generated secondary rays, and the kernel size for color filtering [Sta15].
Shadows: The quality level is given by the resolution of the shadow map.
Metals: It is an importance sampling algorithm where quality levels are defined by the number of samples.
Antialiasing: We rely on the FXAA morphological antialiasing to detect edges in the pixel shader [Lot09, JGY * 11].
When using Equation 4, we consider the following: i) The resolution pass has no associated primitives, only having an effect on the number of primitives used in other passes. Therefore, we do not include that pass specifically in the power model formula. And ii) The antialiasing pass works on the final image, and thus does not depend on the number of batches and vertices, it is only affected by the number of fragments.
Results and Evaluation
We have tested our power-efficient rendering framework on two different platforms (a desktop PC and a mobile device), with four scenes of different complexity, to verify its efficiency in a wide range of scenarios; refer to Table 3 for a summary of their main characteristics. In every case, we are able to maintain the predefined 30 frames per second (10 fps in the mobile device). Our framework supports free exploration of the scene, but we use predefined camera paths to facilitate comparisons and measurements with different qualities, and show the potential of our framework in the long run. For each demo, we specify the preset power budget used to guide our optimal configuration selection process. Figure 10 shows the average power consumption and average quality error of the four scenes, with maximum and minimum quality, and using our framework with the power budgets reported in this section. It can be seen how we significantly reduce power consumption, while keeping visual quality very close to the maximum.
In the following, we show images from our four scenes with the maximum and minimum quality rendering configurations, together with the result of our power-aware framework. Zoomed-in insets allow to better appreciate details, showing how our results are close to the maximum quality, at a reduced power cost (shown in the accompanying plots). In addition, the supplemental video shows the full demo, including split-screen comparisons.

Table 3: Statistics for our four demo scenes, including number of triangles, number of objects, size on disk of each scene, and duration of the demo.
Hall: This scene has a spotlight acting as a lamp and is composed of diffuse objects, except for the reflective floor and two metallic buddha statues. It has a high polygon count but very few objects, thus being useful to test scenarios with a small number of batches. Results for our framework running under a power budget of 40% are shown in Figure 1.
Sponza: We use one directional light as the Sun, and one spot light as a candle, both casting shadows rendered by our shadow pass. There are two metallic lion head ornaments (rendered with our metal pass), and the rest of the scene is diffuse (rendered with our base shading pass). The floor of the scene is slightly reflective. Figure 11, top, shows the results for a power budget of 60%.
Valley: This relatively low-poly scene is illuminated using the Sun as a directional light, without any spotlights. There are no reflective or metallic objects, hence demonstrating the effectiveness of our model on scenarios where some passes do not affect quality error. This also leads to a smaller difference between maximum and minimum power consumption; although this challenging scene limits the range for improvement, our framework still manages to save considerable energy with minimal image degradation when setting the power budget to 40% (Figure 11, middle).
Subway: This complex, high-poly scene is used to test our method in high power usage scenarios. The scene is located underground, so it has no Sun. All the lighting comes from a spotlight located in a lightbulb. The floor of the scene is reflective. The two fighting soldiers are metallic, while the remaining objects are diffuse. The soldiers are animated using skeletal meshes, allowing us to test our framework on a dynamic scene. Results with a power budget of 50% are shown in Figure 11, bottom.

Figure 11: Sponza (top), Valley (middle) and Subway (bottom) demo scenes, executed on a desktop PC. We compare the minimum and maximum quality rendering configurations against our power-optimal configuration. For Sponza, we use a power budget of 60% (which corresponds to a percentage of the difference between the Min and Max power consumptions); for Valley we use 40%, and for Subway we use 50%. Our method generates images very similar to those rendered with the maximum quality configuration, while keeping power consumption lower. Please refer to the supplementary video for the full demos.
Additionally, in the supplementary material we show the results for different power budgets applied to the Sponza scene.
To demonstrate the efficiency of our framework on mobile devices, we show additional results for the Valley scene running in our mobile phone, with a power budget of 50% ( Figure 12). We are again able to keep power consumption within our budget with image quality very close to the maximum quality.
Discussion
Our on-the-fly power-aware rendering framework successfully addresses the two key limitations of previous work: it does not require any precomputation, and it can handle dynamic scenes. We have shown results for four different scenes of different characteristics, demonstrating large power savings while maintaining image quality close to maximum quality. Analysing the optimal configurations chosen by our framework, we notice that image resolution is rarely lowered, since it leads to high quality errors. Our algorithm does not lead to any degradation of the frame rate, as we demonstrate in our supplemental video. Our power prediction model may have other applications beyond budget rendering. For example, by detecting an increase in power consumption (which indicates higher rendering complexity), it could analyse different options to avoid frame-rate drops in video games before they happen. It could also be applied to estimate the total energy consumption of a new rendering task, given a limited set of initial data. Since b, v, and f in Equation 4 can be fetched with native OpenGL queries, incorporating our power prediction model to any project is straightforward, and requires no modifications of any existing shader code.

Figure 13: Power consumption of the Hall and Sponza scenes with ground truth measured data, predicted with our power model with real-time fitting, and predicted with our power model with generic fitting.
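As an illustration of such queries, the sketch below wraps one pass with pipeline-statistics queries (here via PyOpenGL); the tokens come from the ARB_pipeline_statistics_query extension and their availability depends on the driver, so this is an assumption about the target GL stack rather than the paper's code.

```python
# A hedged sketch of fetching per-frame shader-invocation counts with
# OpenGL pipeline-statistics queries; token availability is an assumption.
from OpenGL.GL import (glGenQueries, glBeginQuery, glEndQuery,
                       glGetQueryObjectuiv, GL_QUERY_RESULT)
from OpenGL.GL.ARB.pipeline_statistics_query import (
    GL_VERTEX_SHADER_INVOCATIONS_ARB, GL_FRAGMENT_SHADER_INVOCATIONS_ARB)

def count_invocations(draw_pass):
    """Wrap one rendering pass with vertex/fragment invocation queries."""
    qv, qf = glGenQueries(2)
    glBeginQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB, qv)
    glBeginQuery(GL_FRAGMENT_SHADER_INVOCATIONS_ARB, qf)
    draw_pass()  # issue the pass's draw calls
    glEndQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB)
    glEndQuery(GL_FRAGMENT_SHADER_INVOCATIONS_ARB)
    return (glGetQueryObjectuiv(qv, GL_QUERY_RESULT),
            glGetQueryObjectuiv(qf, GL_QUERY_RESULT))
```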
As we have shown, real-time fitting of our power prediction model provides very high accuracy. However, it is also possible to fit the model with a generic dataset to obtain valid coefficients before running any specific scene. To do that, we collect rendering samples from dummy scenes with a varying number of batches, vertices, and fragments, covering the whole parameter space from Pm to P M . With these data, we fit k bi , k vi , and k f i in Equation 4 using linear regression on the power consumption and number of primitives. This offline process takes around 4 minutes for one rendering configuration, and provides a reasonable approximation of the actual power consumption (see yellow curve in Figure 13).
To perform our runtime quality error computations, we have set a frequency of 10 frames, which we have selected to be as small as possible while minimizing the impact of background rendering on the frame rate. Alternatively, this frequency could be automatically adjusted during runtime according to the current frame rate, to guarantee a given target frame rate.
Throughout the whole paper, we have defined the power-optimal configuration as the one with the lowest quality error that meets our power budget (Equation 2). Similar to Wang's work [WYM * 16], solving the analogous problem of obtaining the rendering configuration with the lowest power consumption within an error budget is straightforward:
$$s = \arg\min_{s} P(s, c) \quad \text{subject to} \quad e(s, c) < e_{bgt} \quad (10)$$
In this case, we would start our selection of the optimal configuration by estimating the error for all configurations and discarding the ones above the budget, then predicting power for the remaining configurations, and choosing the one with lowest consumption.
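Mirroring the earlier selection sketch, the dual problem only swaps which quantity is filtered and which is minimized (again assuming the same hypothetical lookups):

```python
# A sketch of the dual selection in Equation (10): discard configurations
# whose estimated error exceeds the budget, then pick the lowest power.
def select_within_error_budget(predicted_power, estimated_error, e_budget):
    feasible = [s for s in predicted_power if estimated_error[s] < e_budget]
    return min(feasible, key=predicted_power.get) if feasible else None
```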
Limitations and future work: Our framework still has some limitations that could be addressed in future work. Our power prediction model seamlessly supports dynamic scenes, by using the information of the current frame to predict power consumption. However, our error computation mechanism needs to explicitly store the positions of moving objects and restore them for background rendering. For scenes with a large number of moving objects, this could become too computationally expensive.
Our power model is based on the typical rendering pipeline with basic processing of batches, vertices, and fragments. It does not accurately model other GPU stages that could be integrated into the pipeline, such as geometry shaders or tessellation, which should be included as additional contributors to our formula. Apart from that, our model is already able to seamlessly represent the additional fragments generated by a geometry shader.
We have demonstrated the viability of our framework using a reasonable number of different shaders, under the strict constraint of real-time execution. We have not, however, exhausted all the possibilities; testing our proposal in a complex rendering engine is a very interesting direction for future work.
Our framework may produce inaccurate predictions when the rendering samples used to fit the model do not include information related to a certain pass (e.g., the Reflections pass if no reflective surfaces were being rendered at the time). However, these inaccuracies tend to last only a few frames, and the system eventually self-corrects; we have found that this does not have a relevant impact on performance in the long run.
Figure 2 :
2Example of the rendering process composed of three rendering passes: base shading, shadows, and reflections.
Figure 3 :
3Main components of our algorithm: power prediction model, and quality error estimation. Both components are combined in our on-the-fly, power-efficient rendering framework to generate the optimal rendering configuration.
Figure 4 :
4Power consumption for the Hall and Subway scenes. Our power prediction closely matches the ground truth, measured power consumption. For comparison, we also show the prediction using the model proposed by Vatjus-Anttila et al. [VAKH13].
4 depicts results for two example scenes (Hall and Subway) with ground truth measured power, showing how our equation yields a good prediction of power consumption: an average of only 1% error in Hall and 2% in Subway. The model proposed by Vatjus-Anttila et al. provides worse power predictions (11% error in Hall and 37% error in Subway) because it does not model the contribution of each rendering pass, it does not consider the number of fragment shader invocations, it uses an estimated correction factor to compute the number of primitives, and it does not adapt to the scene being rendered in real-time. More examples can be found in the supplementary material.
Figure 5 :
5Timeline showing how refitting works in our model. First, rendering samples are collected during a prediction accuracy check window.
Figure 7 :
7Computed and estimated quality error for the Subway scene, using our observations in 6.2. Left: Quality error for rendering configuration s = (l 0 , l 2 , l 2 , l 1 , l 0 , l 0 ), approximated by adding up the error from each individual pass (Equation 7). Right: Quality error of one rendering pass with medium quality (configuration s = (l 0 , l 0 , l 0 , l 1 , l 0 , l 0 )), using the observation in Equation 8 and coefficient k obtained from the Sponza and Valley scenes. Please refer to the digital version to distinguish the overlapping computed and estimated quality errors.
et al. [WYM * 16] precompute the power and error
Figure 8
8Figure 8: Timeline illustrating power consumption during rendering (measured and predicted with our model), and how our algorithm is executed. Given a configuration selection frequency, a new optimal rendering configuration is selected (purple box). Our temporal filtering is executed to transition to the new configuration. Immediately after that, the power accuracy check is performed, followed if necessary by the real-time fitting, and the reuse of fitted coefficients (orange boxes). In this example, when switching to rendering configuration B, the power check step detects an abovethreshold gap between the measured and the predicted power, so fitting and reuse are activated. However, when switching to rendering configuration C, the power check confirms that the gap is below threshold, and refitting and reuse are not launched. Please refer to the text for more details.
Figure 9: Our rendering configurations drawn in power-error space.
Figure 10: Average power consumption per frame and quality error in our four demos. Note that the quality error for maximum quality is zero.
Figure 12: Valley demo scene executed on our mobile device.
Wang et al.'s framework [WYM * 16].
Table 1 sums up all the symbols used in the paper. Different rendering configurations yield results of varying visual quality. Let s_best denote the rendering settings that generate the best quality image. Similar to the recent work of Wang et al. [WYM * 16], we can define the quality error e(s, c) of any image produced by different rendering settings as
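The defining equation for e(s, c) is cut off in this excerpt. As an illustrative stand-in consistent with the surrounding text, the sketch below scores an image rendered with settings s against the reference image rendered with s_best using a mean per-pixel difference; the metric actually used in the paper may be a perceptual one instead.

```python
import numpy as np

def quality_error(img, img_best):
    """Illustrative quality error of an image rendered with settings s,
    relative to the reference rendered with s_best: mean absolute
    per-pixel difference (an assumed stand-in for the paper's metric)."""
    diff = img.astype(np.float64) - img_best.astype(np.float64)
    return float(np.mean(np.abs(diff)))
```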
Table 2 :
2List of parameters and values forming the space of rendering settings.
Acknowledgements
We would like to thank all reviewers for their insightful comments. We also thank Bowen Yu for his contribution in the ini-
AKENINE-MÖLLER T., STROM J.: Graphics processing units for handhelds. Proceedings of the IEEE 96, 5 (May 2008), 779-789.
ARNAU J.-M., PARCERISA J.-M., XEKALAKIS P.: Eliminating redundant fragment shader executions on a mobile GPU via hardware memoization. In ISCA (2014).
[CCC * 16] CHEN W., CHEN W., CHEN H., ZHANG Z., QU H.: An energy-saving color scheme for direct volume rendering. Computers & Graphics 54 (2016), 57-64. Special Issue on CAD/Graphics 2015.
[CWC * 14] CHEN H., WANG J., CHEN W., QU H., CHEN W.: An image-space energy-saving visualization scheme for OLED displays. Computers & Graphics 38 (2014), 61-68.
DONG M., CHOI Y.-S. K., ZHONG L.: Power modeling of graphical user interfaces on OLED displays. In Proceedings of the 46th Annual Design Automation Conference (2009), ACM, pp. 652-657.
[GSC * 15] GHARBI M., SHIH Y., CHAURASIA G., RAGAN-KELLEY J., PARIS S., DURAND F.: Transform recipes for efficient cloud photo enhancement. ACM Trans. Graph. 34, 6 (Oct. 2015), 228:1-228:12.
[Hen] HENNESSY J. L., PATTERSON D. A.: Computer Organization and Design: The Hardware/Software Interface. Appendix C: Graphics and Computing GPUs, 5th ed. The Morgan Kaufmann Series in Computer Architecture and Design. Morgan Kaufmann, Elsevier, Boston.
HONG S., KIM H.: An integrated GPU power and performance model. In Proceedings of the 37th Annual International Symposium on Computer Architecture (New York, NY, USA, 2010), ISCA '10, ACM, pp. 280-289.
IYER S., LUO L., MAYO R., RANGANATHAN P.: Energy-adaptive display system designs for future mobile environments. In Proceedings of the 1st International Conference on Mobile Systems, Applications and Services (2003), ACM, pp. 245-258.
[JGDAM12] JOHNSSON B., GANESTAM P., DOGGETT M., AKENINE-MÖLLER T.: Power efficiency for software algorithms running on graphics processors. In Proceedings of the Fourth ACM SIGGRAPH / Eurographics Conference on High-Performance Graphics (2012), pp. 67-75.
[JGY * 11] JIMENEZ J., GUTIERREZ D., YANG J., RESHETOV A., DEMOREUILLE P., BERGHOFF T., PERTHUIS C., YU H., MCGUIRE M., LOTTES T., MALAN H., PERSSON E., ANDREEV D., SOUSA T.: Filtering approaches for real-time anti-aliasing. In ACM SIGGRAPH Courses (2011).
KYUNG C.-M., YOO S.: Energy-Aware System Design: Algorithms and Architectures. Springer Publishing Company, Incorporated, 2014.
LOTTES T.: FXAA, 2009. https://developer.download.nvidia.com/.
[MWDG13] MASIA B., WETZSTEIN G., DIDYK P., GUTIERREZ D.: A survey on computational displays: Pushing the boundaries of optics, computation, and perception. Computers & Graphics 37, 8 (2013), 1012-1038.
[NVM15] NVML: Nvidia management library, 2015. https://developer.nvidia.com/nvidia-management-library-nvml.
POOL J., LASTRA A., SINGH M.: Precision selection for energy-efficient pixel shaders. In Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics (2011), pp. 159-168.
[SPP * 15] STAVRAKIS E., POLYCHRONIS M., PELEKANOS N., ARTUSI A., HADJICHRISTODOULOU P., CHRYSANTHOU Y.: Toward energy-aware balancing of mobile graphics. In IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics (2015), vol. 9411, pp. 94110D-1-94110D-10.
STACHOWIAK T.: Stochastic screen-space reflections. In ACM SIGGRAPH 2015 Courses: Advances in Real-Time Rendering in Games (2015).
[VAKH13] VATJUS-ANTTILA J. M., KOSKELA T., HICKEY S.: Power consumption model of a mobile GPU based on rendering complexity. In 2013 Seventh International Conference on Next Generation Mobile Apps, Services and Technologies (Sept 2013), pp. 210-215.
WANG Z., BOVIK A. C., SHEIKH H. R., SIMONCELLI E. P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (April 2004), 600-612.
[WYM * 16] WANG R., YU B., MARCO J., HU T., GUTIERREZ D., BAO H.: Real-time rendering on a power budget. ACM Trans. Graph. 35, 4 (July 2016), 111:1-111:11.
| [] |
[
"Quasi-2D Fermi surface in the anomalous superconductor UTe 2",
"Quasi-2D Fermi surface in the anomalous superconductor UTe 2"
] | [
"A G Eaton [email protected] \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"T I Weinberger \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"N J M Popiel \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"Z Wu \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"A J Hickey \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"A Cabala \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"J Pospíšil \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"J Prokleška \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"T Haidamak \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"G Bastien \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"P Opletal \nAdvanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan\n",
"H Sakai \nAdvanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan\n",
"Y Haga \nAdvanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan\n",
"R Nowell \nNational High Magnetic Field Laboratory\n32310TallahasseeFloridaUSA\n",
"S M Benjamin \nNational High Magnetic Field Laboratory\n32310TallahasseeFloridaUSA\n",
"V Sechovský \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n",
"G G Lonzarich \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"F M Grosche \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n",
"M Vališka \nFaculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic\n"
] | [
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Advanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan",
"Advanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan",
"Advanced Science Research Center\nJapan Atomic Energy Agency\n319-1195TokaiIbarakiJapan",
"National High Magnetic Field Laboratory\n32310TallahasseeFloridaUSA",
"National High Magnetic Field Laboratory\n32310TallahasseeFloridaUSA",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Faculty of Mathematics and Physics\nDepartment of Condensed Matter Physics\nCharles University\nKe Karlovu 5, Prague 2, 121 16Czech Republic"
] | [] | Spin-triplet superconductors represent a fascinating platform with which to explore the technological potential of emergent topological excitations. While candidate triplet superconductors are rare, one especially promising material is the heavy fermion paramagnet uranium ditelluride (UTe 2 ), which has recently been found to exhibit numerous characteristics of an unconventional spin-triplet pairing state. To date, efforts to understand the microscopic details of superconductivity in UTe 2 have been severely impeded by uncertainty regarding the underlying electronic structure. Here, we directly probe the Fermi surface of UTe 2 by measuring magnetic quantum oscillations in ultra-pure crystals, as evidenced by their high superconducting transition temperature T_c ≈ 2.1 K and residual resistivity ratio (RRR) ≈ 900. We find an angular profile of quantum oscillatory frequency and amplitude that is characteristic of a quasi-2D Fermi surface, exhibiting heavy effective masses up to 78(2) m_e owing to strong correlations. We performed Fermi surface simulations guided by these data, yielding excellent correspondence between quantum oscillation measurements and our resultant Fermi surface model, which consists of two cylindrical sections of electron- and hole-type respectively. A comparison between the density of states at the Fermi level inferred from the quantum oscillations, and the normal state Sommerfeld coefficient obtained from specific heat measurements, gives excellent agreement with our Fermi surface model. Additionally, we find that both cylindrical Fermi sheets possess negligible corrugation, which may allow for their near-nesting and therefore promote magnetic fluctuations that enhance the triplet pairing mechanism. Our results place strong constraints on the possible symmetry of the superconducting order parameter in UTe 2 .
Conventional phonon-mediated superconductivity involves the pairing of two fermions in a spin-singlet configuration 1 forming a bosonic quasiparticle of total spin S = 0. Unconventional superconductors may replace the attractive role played by phonons with a magnetically-mediated pairing interaction, often yielding a d-wave symmetry of the orbital wavefunction, but still with overall S = 0 for the bound pair. 2 By contrast, the formation of superfluidity in 3 He involves a triplet pairing configuration with S = 1 and an odd-parity p-wave symmetry. 3 To date no bulk solid state analogue of this exotic state of matter has been unequivocally identified, although actinide metals such as UPt 3 and UGe 2 are promising candidates. 4, 5 The technological realisation of devices incorporating p-wave superconductivity is highly desirable, due to their expected ability to effect coherent quantum information processing. 6 For several years the layered perovskite Sr 2 RuO 4 appeared a likely host of spin-triplet superconductivity; 7 however, recent experimental observations have cast considerable doubt on this interpretation. 8 Since the discovery of unconventional superconductivity in UTe 2 in 2019, 9 characteristic features of a p-wave superconducting state in this material have been reported across numerous physical properties.
These include a negligible change in the Knight shift upon cooling through T_c as probed by nuclear magnetic resonance (NMR), 10 high upper critical fields far in excess of the Pauli paramagnetic limit, 11 high magnetic field re-entrant superconductivity, 12 chiral in-gap states measured by scanning tunneling microscopy (STM), 13 time-reversal symmetry breaking inferred from the development of a finite polar Kerr rotation angle below T_c , 14, 15 multiple point nodes detected from penetration depth measurements indicative of a chiral triplet pairing symmetry, 16 anomalous normal fluid properties consistent with Majorana surface arcs, 17 and ferromagnetic fluctuations coexisting with superconductivity measured by muon spin relaxation measurements. 18 Several theoretical studies have sought to provide a microscopic description of how the exotic magnetic and superconducting features manifest in this material. 11 However, an outstanding challenge concerns the determination of the underlying electronic structure, with the question of the geometry and topology of the material's Fermi surface having been the subject of recent debate and speculation. 11, 19-30 Two angle-resolved photoemission spectroscopy (ARPES) studies have given contrasting interpretations, with one inferring the presence of multiple small 3D Fermi surface pockets, 31 while the other identified spectral features characteristic of a large cylindrical quasi-2D Fermi surface section, along with possible heavy 3D section(s). 32 A recent de Haas-van Alphen (dHvA) effect study 33 also resolved features representative of a quasi-2D Fermi surface, but with a spectrally dominant low frequency branch not captured by density functional theory (DFT) calculations, which could be indicative of a 3D Fermi surface pocket. However, this study was limited to magnetic fields lower than the ĉ-axis upper critical field, along which direction the cylindrical axes are posited to be located, thus preventing a detailed quantitative analysis of the Fermi surface geometry and topology. Discerning the dimensionality of the UTe 2 Fermi surface is important in order to determine the symmetry of the superconducting order parameter. Here, we report direct measurements of the UTe 2 Fermi surface, probed by magnetic torque measurements of the dHvA effect up to magnetic field strengths of 28 T at various temperatures down to 19 mK. Through measurements of magnetic quantum oscillations as a function of temperature and magnetic field tilt angle, we observe Fermi surface sections with heavy cyclotron effective masses up to 78(2) m_e , where m_e is the bare electron mass. We investigated the angular dependence of the quantum oscillations and used the resulting data to perform Fermi surface simulations. Our results indicate that the UTe 2 Fermi surface is very well described by two cylindrical Fermi surface sheets of equal volumes and super-elliptical cross sections. In combination, the evolution of quantum oscillatory amplitude and frequency with the tilt angle of the magnetic field, our quantitative analysis of the oscillatory waveform and the contributions from separate Fermi sheets, and the correspondence between the density of states implied from specific heat measurements and our dHvA observations, makes the presence of any 3D Fermi surface pocket(s) extremely unlikely.
Fig. 1. Characterisation of high purity UTe 2 . a, Specific heat capacity (C_p) divided by temperature (T) of a UTe 2 single crystal measured on warming to 3 K. A single, sharp bulk superconducting transition is exhibited. (Inset) The crystal structure of UTe 2 . b, Resistivity (ρ) versus temperature squared up to 4 K for current sourced along the â direction. A superconducting transition temperature of 2.1 K is observed (defined by zero resistivity). A residual resistivity ratio (RRR, defined in the text) of 900 is found, with a residual resistivity ρ_0 ≲ 0.5 µΩ cm, indicative of very high sample purity. 34, 35 (Inset) The same dataset as the main panel extended up to 300 K.
We also measured Shubnikov-de Haas (SdH) oscillations in the contactless resistivity (see Supplementary Information). The SdH effect is generally more sensitive than the magnetic torque technique to symmetrical 3D Fermi pockets. 36 However, we find that the SdH response of UTe 2 contains no additional frequency components than those probed by dHvA (up to field strengths of 28 T, see Supplementary Information). We therefore identify, to a high level of confidence, that the Fermi surface of UTe 2 is quasi-2D in nature, and composed of two undulating cylindrical sections of hole- and electron-type, respectively. Samples were grown by the molten salt flux (MSF) technique 34 in excess uranium, to minimise the formation of uranium vacancies (see Methods for details). The MSF technique has been found to produce crystals of exceptionally high quality, 34 as demonstrated by specific heat capacity, C_p , and electrical resistivity, ρ, measurements in Figure 1. For this batch of crystals on which quantum oscillation studies were performed, we observe a superconducting transition temperature (T_c) of 2.1 K and residual resistivity ratios (RRR) of up to 900. The RRR is defined as ρ(300 K)/ρ_0 , where ρ_0 is the residual 0 K resistivity expected for the normal state in the absence of superconductivity, fitted by the dashed line (linear in T^2) in Fig. 1b. By comparison, samples grown by the chemical vapour transport method tend only to exhibit an RRR of ≈ 88 at best, 35 and a typical T_c of ≈ 1.6 K. 11 Furthermore, in this study we resolve quantum oscillatory frequencies up to 18.5 kT, implying a mean free path of itinerant quasiparticles of at least 1900 Å (see Supplementary Information for calculation), further underlining the pristine quality of this new generation of UTe 2 samples. Figure 2 shows quantum oscillations measured in the magnetic torque of UTe 2 . The os- | null | [
"https://export.arxiv.org/pdf/2302.04758v2.pdf"
] | 256,697,525 | 2302.04758 | ae293291472a6115497e408c228e4460d79f77d3 |
Quasi-2D Fermi surface in the anomalous superconductor UTe 2
May 24, 2023
A G Eaton [email protected]
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
T I Weinberger
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
N J M Popiel
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
Z Wu
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
A J Hickey
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
A Cabala
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
J Pospíšil
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
J Prokleška
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
T Haidamak
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
G Bastien
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
P Opletal
Advanced Science Research Center
Japan Atomic Energy Agency
319-1195 Tokai, Ibaraki, Japan
H Sakai
Advanced Science Research Center
Japan Atomic Energy Agency
319-1195 Tokai, Ibaraki, Japan
Y Haga
Advanced Science Research Center
Japan Atomic Energy Agency
319-1195 Tokai, Ibaraki, Japan
R Nowell
National High Magnetic Field Laboratory
32310 Tallahassee, Florida, USA
S M Benjamin
National High Magnetic Field Laboratory
32310 Tallahassee, Florida, USA
V Sechovský
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
G G Lonzarich
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
F M Grosche
Cavendish Laboratory
University of Cambridge
JJ Thomson Avenue, CB3 0HE Cambridge, United Kingdom
M Vališka
Faculty of Mathematics and Physics
Department of Condensed Matter Physics
Charles University
Ke Karlovu 5, Prague 2, 121 16, Czech Republic
Quasi-2D Fermi surface in the anomalous superconductor UTe 2
May 24, 2023
* To whom correspondence should be addressed.
Spin-triplet superconductors represent a fascinating platform with which to explore the technological potential of emergent topological excitations. While candidate triplet superconductors are rare, one especially promising material is the heavy fermion paramagnet uranium ditelluride (UTe 2 ), which has recently been found to exhibit numerous characteristics of an unconventional spin-triplet pairing state. To date, efforts to understand the microscopic details of superconductivity in UTe 2 have been severely impeded by uncertainty regarding the underlying electronic structure. Here, we directly probe the Fermi surface of UTe 2 by measuring magnetic quantum oscillations in ultra-pure crystals, as evidenced by their high superconducting transition temperature T_c ≈ 2.1 K and residual resistivity ratio (RRR) ≈ 900. We find an angular profile of quantum oscillatory frequency and amplitude that is characteristic of a quasi-2D Fermi surface, exhibiting heavy effective masses up to 78(2) m_e owing to strong correlations. We performed Fermi surface simulations guided by these data, yielding excellent correspondence between quantum oscillation measurements and our resultant Fermi surface model, which consists of two cylindrical sections of electron- and hole-type respectively. A comparison between the density of states at the Fermi level inferred from the quantum oscillations, and the normal state Sommerfeld coefficient obtained from specific heat measurements, gives excellent agreement with our Fermi surface model. Additionally, we find that both cylindrical Fermi sheets possess negligible corrugation, which may allow for their near-nesting and therefore promote magnetic fluctuations that enhance the triplet pairing mechanism. Our results place strong constraints on the possible symmetry of the superconducting order parameter in UTe 2 .
Conventional phonon-mediated superconductivity involves the pairing of two fermions in a spin-singlet configuration 1 forming a bosonic quasiparticle of total spin S = 0. Unconventional superconductors may replace the attractive role played by phonons with a magnetically-mediated pairing interaction, often yielding a d-wave symmetry of the orbital wavefunction, but still with overall S = 0 for the bound pair. 2 By contrast, the formation of superfluidity in 3 He involves a triplet pairing configuration with S = 1 and an odd-parity p-wave symmetry. 3 To date no bulk solid state analogue of this exotic state of matter has been unequivocally identified, although actinide metals such as UPt 3 and UGe 2 are promising candidates. 4,5 The technological realisation of devices incorporating p-wave superconductivity is highly desirable, due to their expected ability to effect coherent quantum information processing. 6 For several years the layered perovskite Sr 2 RuO 4 appeared a likely host of spin-triplet superconductivity; 7 however, recent experimental observations have cast considerable doubt on this interpretation. 8 Since the discovery of unconventional superconductivity in UTe 2 in 2019, 9 characteristic features of a p-wave superconducting state in this material have been reported across numerous physical properties. These include a negligible change in the Knight shift upon cooling through T_c as probed by nuclear magnetic resonance (NMR), 10 high upper critical fields far in excess of the Pauli paramagnetic limit, 11 high magnetic field re-entrant superconductivity, 12 chiral in-gap states measured by scanning tunneling microscopy (STM), 13 time-reversal symmetry breaking inferred from the development of a finite polar Kerr rotation angle below T_c , 14,15 multiple point nodes detected from penetration depth measurements indicative of a chiral triplet pairing symmetry, 16 anomalous normal fluid properties consistent with Majorana surface arcs, 17 and ferromagnetic fluctuations coexisting with superconductivity measured by muon spin relaxation measurements. 18 Several theoretical studies have sought to provide a microscopic description of how the exotic magnetic and superconducting features manifest in this material. 11 However, an outstanding challenge concerns the determination of the underlying electronic structure, with the question of the geometry and topology of the material's Fermi surface having been the subject of recent debate and speculation. 11,[19][20][21][22][23][24][25][26][27][28][29][30] Two angle-resolved photoemission spectroscopy (ARPES) studies have given contrasting interpretations, with one inferring the presence of multiple small 3D Fermi surface pockets, 31 while the other identified spectral features characteristic of a large cylindrical quasi-2D Fermi surface section, along with possible heavy 3D section(s). 32 A recent de Haas-van Alphen (dHvA) effect study 33 also resolved features representative of a quasi-2D Fermi surface, but with a spectrally dominant low frequency branch not captured by density functional theory (DFT) calculations, which could be indicative of a 3D Fermi surface pocket. However, this study was limited to magnetic fields lower than the ĉ-axis upper critical field, along which direction the cylindrical axes are posited to be located, thus preventing a detailed quantitative analysis of the Fermi surface geometry and topology.
Discerning the dimensionality of the UTe 2 Fermi surface is important in order to determine the symmetry of the superconducting order parameter.
Here, we report direct measurements of the UTe 2 Fermi surface, probed by magnetic torque measurements of the dHvA effect up to magnetic field strengths of 28 T at various temperatures down to 19 mK. Through measurements of magnetic quantum oscillations as a function of temperature and magnetic field tilt angle, we observe Fermi surface sections with heavy cyclotron effective masses up to 78(2) m_e , where m_e is the bare electron mass. We investigated the angular dependence of the quantum oscillations and used the resulting data to perform Fermi surface simulations. Our results indicate that the UTe 2 Fermi surface is very well described by two cylindrical Fermi surface sheets of equal volumes and super-elliptical cross sections. In combination, the evolution of quantum oscillatory amplitude and frequency with the tilt angle of the magnetic field, our quantitative analysis of the oscillatory waveform and the contributions from separate Fermi sheets, and the correspondence between the density of states implied from specific heat measurements and our dHvA observations, makes the presence of any 3D Fermi surface pocket(s) extremely unlikely.
2 3 T < m 0 H < 2 8 T b 0°8°2 0°q ∆t ( a r b . ) m 0 H ( T ) 0°8°2 0°a T = 2 0 m K 7 4°∆ t ( a r b . ) m 0 H ( T ) d 7 4°F
F T a m p . ( a r b . )
F r e q u e n c y ( k T )
2 5 T < m 0 H < 2 8 T 7 4°e t ( a r b . ) m 0 H ( T ) 7 4°0°c
Fig. 2. Angular evolution of quantum oscillatory frequencies and amplitude. a, Oscillatory component of magnetic torque (Δτ) at various angles and b, their corresponding Fourier frequency spectra. Angles were calibrated to within 2° of uncertainty. 0° corresponds to magnetic field applied along the ĉ direction; 90° corresponds to field applied along the â direction. At 0° a singular frequency peak of high amplitude is observed at F = 3.5 kT (with a second harmonic peak at 2F = 7.0 kT). Upon rotating away from ĉ towards â this peak splits and the oscillatory amplitude diminishes markedly. c, Raw magnetic torque signal (without background subtraction), τ, at 0° and 74°.

This angular evolution of the dHvA effect (a singular, relatively low frequency oscillatory component along a high symmetry direction subsequently evolving under rotation to yield much higher frequencies of smaller amplitude) is characteristic of a cylindrical quasi-2D Fermi surface. This is due to there being negligible smearing of the quantum oscillatory phase when averaging over k_z along the cylindrical axis. Hence, one observes a large quantum oscillatory amplitude for magnetic field oriented in this direction (in this case the ĉ direction). Then, as the field is tilted away from the axis of the cylinder, the oscillatory amplitude falls considerably as successive Landau tubes of allowed energy levels intersect a smaller number of states as they pass through the Fermi surface. Furthermore, the oscillatory frequency increases at higher angles because the cross-sectional area of the Fermi surface normal to the magnetic field grows as 1/cos θ (ref. 36). Thus, our observation of the progression from large amplitude, low frequency quantum oscillations with field oriented along ĉ evolving to small amplitude, high frequency oscillations for field applied close to â is strongly indicative of UTe 2 possessing cylindrical quasi-2D Fermi surface sections, axially collinear with the ĉ direction.

Figure 3 shows the evolution in temperature of Δτ for magnetic field oriented along the ĉ direction. The quantum oscillation amplitude diminishes rapidly at elevated temperatures, with the signal at 200 mK being an order of magnitude smaller than at 19 mK. Fig. 3c fits the quantum oscillatory amplitude to the temperature dependence of the Lifshitz-Kosevich formula 36 (see Methods for details), yielding a heavy effective cyclotron mass, m*, of 41(2) m_e , consistent with observations reported in ref. 33. For an inclined angle of the magnetic field we observe heavier effective masses up to 78(2) m_e (see Supplementary Information).
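For an ideal straight cylinder, the extremal orbit area normal to the field scales as 1/cos θ, so the dHvA frequency follows F(θ) = F(0)/cos θ. A minimal numerical sketch of this law (frequency in kT; the angles chosen below simply mirror the measurement angles of Fig. 2):

```python
import numpy as np

def quasi_2d_frequency(F0, theta_deg):
    """dHvA frequency of an ideal (uncorrugated, circular) cylinder:
    F(theta) = F(0)/cos(theta), because the extremal cross-section
    normal to the field grows as 1/cos(theta)."""
    theta = np.radians(theta_deg)
    return F0 / np.cos(theta)

# e.g. the 3.5 kT branch observed for field along the cylinder axis:
for ang in (0, 8, 20, 74):
    print(ang, round(quasi_2d_frequency(3.5, ang), 2))  # kT
```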
We plot the angular evolution of quantum oscillatory frequency with magnetic field tilt angle in Figure 4 for both the ĉ to â and ĉ to b̂ rotation planes. Due to experimental limitations, our ĉ to b̂ measurements were constrained to within 45° of rotation. We also plot the dHvA frequency simulation for our calculated Fermi surface sections (see Supplementary Information for simulation details), and find remarkably good correspondence between measurement and theory for all frequency branches. Thus, we find that the dHvA profile of UTe 2 is excellently described by two quasi-2D Fermi sheets, of 'squircular' (super-elliptical) cross-section, with the hole- (electron-) type sheet centred at the X (Y) point of the Brillouin zone (Fig. 4c).
We note that in performing DFT calculations (see Supplementary Information) we were unable to capture the angular profile of the low, spectrally dominant frequency branch in the ĉ to â rotation plane. This branch initially decreases in frequency as the field is tilted away from ĉ, reaching a minimum at around 25°, before then increasing rapidly close to â. This is in sharp contrast to the expectation for a cylindrical Fermi surface section of circular cross-section, for which the corresponding frequencies should increase monotonically under rotation away from the cylindrical axis. This property of the quantum oscillatory spectra thus places strong constraints on the geometry: it requires cylindrical Fermi sheets with super-elliptical rather than circular cross-sections, with significant amounts of warping (or undulation) along their lengths, but with a singular cross-sectional area present at extremal points normal to ĉ. We find that this simulation excellently describes the evolution of the three frequency branches observed at intermediate angles (Fig. 4a).
We can compare the density of states at the Fermi energy inferred from these quantum oscillation experiments with that determined by measurements of the linear specific heat coefficient.
Our specific heat measurements (in ambient magnetic field) give a residual normal state Sommerfeld coefficient γ_N = 121(1) mJ mol−1 K−2, consistent with prior reports. 11 Assuming that the quasiparticles of both the hole- and electron-sheets have m* = 41(2) m_e (Fig. 3c), we numerically calculated the contribution to the density of states from both Fermi surface sections (see Methods for calculation). We find that our simulated electron-type sheet should contribute ≈ 61.8 mJ mol−1 K−2, while the hole-type section should give ≈ 58.7 mJ mol−1 K−2. Together the two cylinders thus comprise a total density of states at the Fermi energy corresponding to a Sommerfeld ratio of ≈ 120.5 mJ mol−1 K−2, in excellent agreement with the heat capacity measurements of γ_N. We therefore conclude that these two sheets are very likely the only Fermi surface sections present in UTe 2 .
A number of studies have sought to reconcile anomalous aspects of the superconducting and normal state properties of UTe 2 by invoking models that assume the presence of one or more 3D Fermi surface sections. [22][23][24][25][26][27][28][29][30] The distinction between having a 3D or quasi-2D Fermi surface is important, as this sets strong constraints on the possible irreducible representations of the point group symmetries of the superconducting order parameter. 37 In the absence of direct observations clarifying the Fermi surface dimensionality, these models appeared promising due to their apparent ability to explain physical properties such as the slight anisotropy of the material's electrical conductivity tensor. 27 However, our angle-dependent dHvA measurements and corresponding Fermi surface simulation clearly resolve that the Fermi surface of UTe 2 comprises two cylindrical sections, possessing quasiparticle effective masses that fully account for the linear specific heat coefficient.
Therefore, any potential 3D Fermi surface sections must be very small in size such that their quantum oscillatory frequencies would be very low, or have very high effective masses to necessitate measurement temperatures considerably lower than 19 mK, or possess very high curvature around their entire surface so as to minimise the phase coherence of intersections with successive Landau tubes. However, as the contribution to the density of states per Fermi surface section is directly proportional to the effective mass of the quasiparticles hosted by that section (see Methods), it therefore seems unlikely that UTe 2 possesses any 3D Fermi pockets with markedly heavier effective masses than the cylindrical sections we observe in our temperature-dependent measurements. We note that the pronounced undulations along ĉ of our calculated Fermi surface cylinders (that one could perhaps describe as 'warped squircular prisms') may account for several effects previously attributed to a 3D Fermi surface component, such as the relatively modest anisotropy of the electrical conductivity tensor. 11,27 We add further confidence to our interpretation of the dHvA data by a quasi-2D Fermi sur-

At the first-order metamagnetic transition obtained at μ0H ≈ 35 T for H ∥ b̂, a Fermi surface reconstruction has been proposed to occur 11 due to reports of a change of sign of the Seebeck coefficient and a discontinuity in the carrier density as determined by Hall effect measurements. 51
This raises the interesting possibility that the high field re-entrant superconducting phase, 12 which is acutely angle-dependent and appears to persist to at least 70 T, 52 may be markedly different in character compared to the superconductivity found below 35 T. We note that our Fermi surface simulations predict the occurrence of a Yamaji angle 53 at the magnetic field orientation where the high field re-entrant superconducting phase is most pronounced (see Supplementary Fig. S17), similar to prior reports. 52, 54 This implies that a sharp peak in the density of states may underpin the microscopic mechanism driving this exotic superconducting state. A further quantum oscillation study beyond the scope of this work, in the experimentally challenging temperature-field regime of T < 100 mK and μ0H > 35 T, would therefore be of great interest in comparing the underlying fermiology of the magnetic field re-entrant superconducting phase with that of the lower field Fermi surface we uncover here.
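For a weakly corrugated cylinder, interlayer averaging makes the quantum oscillation amplitude vary as J_0(k_F c* tan θ), so the quasi-2D density of states peaks at the Yamaji angles where the argument hits the zeros of the Bessel function J_0. The sketch below computes these angles; k_F and the interlayer spacing are assumed illustrative values, not fitted UTe 2 parameters.

```python
import numpy as np
from scipy.special import jn_zeros

def yamaji_angles(k_F, c_star, n_max=3):
    """Yamaji angles of a weakly warped cylinder: peaks in the density
    of states occur where k_F * c_star * tan(theta) equals a zero of J_0.
    k_F: mean in-plane Fermi wavevector (1/m); c_star: interlayer
    spacing (m). Returns the first n_max Yamaji angles in degrees."""
    zeros = jn_zeros(0, n_max)  # 2.405, 5.520, 8.654, ...
    return np.degrees(np.arctan(zeros / (k_F * c_star)))

# Assumed illustrative inputs:
print(yamaji_angles(k_F=2e9, c_star=14e-10))
```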
In conclusion, our quantum oscillation study on pristine quality crystals has revealed the quasi-2D nature of the Fermi surface in UTe 2 . Performing dHvA and SdH measurements in magnetic field strengths greater than the ĉ-axis upper critical field has enabled us to compute the Fermi surface geometry, which consists of two cylindrical sheets of super-elliptical cross-section with a significant degree of undulation along their lengths. We numerically calculated the contribution to the density of states for our computed Fermi surface, and find that it fully accounts for the normal state Sommerfeld ratio determined from specific heat measurements.
Our findings indicate that the Fermi surface of UTe 2 possesses a negligible degree of small-scale corrugation, implying that the Fermi sheets may nest very closely together, thereby favouring magnetic fluctuations that enhance the spin-triplet superconducting pairing mechanism.
Methods
Sample preparation
High quality single crystals of UTe 2 were grown using the molten salt flux technique adapted from ref. 34 We used an equimolar mixture of powdered NaCl (99.99%) and KCl (99.999%) salts as a flux, which had been dried at 200°C for 24 hours. Natural uranium metal with an initial purity of 99.9% was further refined using the solid state electrotransport (SSE) method 55 under ultra-high vacuum (∼10 −10 mbar); by passing a high electrical current of 400 A through the initial uranium metal, impurities can be removed extremely effectively. Following SSE treatment, a piece of purified uranium of typical mass ≈ 0.35 g was etched using nitric acid to remove surface oxides. It was subsequently placed in a carbon crucible of inner diameter 13 mm together with pieces of tellurium (99.9999%) with the molar ratio of 1:1.71; subsequently, the equimolar mixture of NaCl and KCl was added. The molar ratio of uranium to the NaCl/KCl mixture was 1:60. The whole process was performed under a protective argon atmosphere in a glovebox. The carbon crucible was plugged by quartz wool, placed in a quartz tube, and heated up to 200°C under dynamic high vacuum (∼10 −6 mbar) for 12 hours. Then it was sealed and placed in a furnace. It was initially heated to 450°C in 24 hours, and left there for a further 24 hours. Then it was heated to 950°C at a rate of 0.35°C/min and kept there for an additional 24 hours. Afterwards the temperature was slowly decreased at a rate of 0.03°C/min down to 650°C, maintained there for 24 hours, and then cooled down to room temperature during the following 24 hours.
After the growth process, the ampoules were crushed and the contents of the carbon crucibles were immersed in water where the salts rapidly dissolved. Bar-shaped crystals were manually removed from the solution, rinsed with acetone, and stored under an argon atmosphere prior to characterisation and quantum oscillation studies. The longest edge of the produced single crystals was typically 3-12 mm (along the â direction), with widths 0.5-1.2 mm (along b̂) and thicknesses around 0.2-1 mm (along ĉ).
Capacitive torque magnetometry measurements
Torque magnetometry measurements were performed at the National High Magnetic Field Laboratory, Tallahassee, Florida, USA. Measurements were taken in SCM4 fitted with a dilution refrigerator sample environment. Single crystal samples were oriented with a Laue x-ray diffractometer. We note that our angular data obtained in the ĉ-â plane are calibrated to within ≈ 2° of experimental uncertainty; however, a possible azimuthal offset in the ĉ-b̂ angles means that these data should only be taken to be accurate to within ≈ 5°. Samples were mounted on beryllium copper cantilevers suspended above a copper base plate, thereby forming a capacitive circuit component. The capacitance of the cantilever-base plate system was measured as a function of applied magnetic field strength by a General Radio analogue capacitance bridge in conjunction with a phase sensitive detector. This configuration of cantilever and base plate was mounted on a custom-built rotatable housing unit, allowing for the angular dependence of the dHvA effect to be studied.
In analysing the measured torque data, the oscillatory component was isolated from the background magnetic torque by subtracting a smooth monotonic polynomial fit by the local regression technique. 56 The main benefit of this technique over simply subtracting a polynomial fitted over the whole field range is that the LOESS window over which the averaging occurs can be modified; for oscillations of faster (slower) frequency, smaller (larger) LOESS windows will achieve a better isolation of the dHvA effect signal. This averaging window then slides along the entire curve to extract the oscillatory component from the background magnetic torque.
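A minimal sketch of this background-subtraction step, using the LOWESS implementation in statsmodels as a stand-in for the local regression routine actually employed; the window fraction frac plays the role of the sliding LOESS window described above.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def subtract_background(field, torque, frac=0.3):
    """Isolate the oscillatory dHvA component by subtracting a smooth
    LOESS fit of the torque background. Smaller `frac` (narrower local
    window) tracks faster oscillations; larger values suit slower ones."""
    background = lowess(torque, field, frac=frac, return_sorted=False)
    return torque - background
```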
Lifshitz-Kosevich temperature study
Figure 3a shows quantum oscillations measured at various temperatures, with the quantum oscillatory amplitude being strongly diminished at elevated temperature. We extract an effective cyclotron mass, m*, by fitting the temperature dependence of the FFT amplitudes to the Lifshitz-Kosevich formula for temperature damping; this fit is plotted in Fig. 3c.
The temperature damping coefficient, $R_T$, may be written as: 36

$$R_T = \frac{X}{\sinh X}, \tag{1}$$

where

$$X = \frac{2\pi^2 k_\mathrm{B} T m^*}{e \hbar \bar{B}}, \tag{2}$$

in which $e$ is the elementary charge, $\hbar$ is the reduced Planck constant, $k_\mathrm{B}$ is the Boltzmann constant, $T$ is the temperature, and $\bar{B}$ is the average magnetic field strength of the inverse field range used to compute the FFTs. Thus $m^*$ can be found by fitting the quantum oscillatory amplitude to Eqn. 1 as a function of temperature.
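A minimal sketch of this fit of Eqns (1)-(2), assuming a fixed mean field of the FFT window (the 25 T used here is an assumption) and synthetic amplitudes standing in for measured FFT peak heights.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import e, hbar, k as k_B, m_e

def lk_amplitude(T, m_star, A0, B_mean=25.0):
    """Lifshitz-Kosevich temperature damping, Eqns (1)-(2):
    R_T = X/sinh(X), X = 2 pi^2 k_B T m* / (e hbar B_mean).
    m_star is in units of the bare electron mass; B_mean (tesla) is the
    average field of the inverse-field FFT window."""
    X = 2 * np.pi**2 * k_B * T * m_star * m_e / (e * hbar * B_mean)
    return A0 * X / np.sinh(X)

T_data = np.array([0.019, 0.05, 0.1, 0.2])     # K
amp_data = lk_amplitude(T_data, 41.0, 1.0)     # would be measured values
(m_fit, a_fit), _ = curve_fit(lk_amplitude, T_data, amp_data, p0=(30.0, 1.0))
print(m_fit)  # ~41, in units of m_e
```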
Evaluation of the density of states at the Fermi level
The quasiparticle density of states at the Fermi level, $g(E_F)$, may be expressed 57 in terms of the linear specific heat coefficient extrapolated from the normal state, $\gamma_\mathrm{N}$, as

$$g(E_F) = \frac{3\gamma_\mathrm{N}}{\pi^2 k_\mathrm{B}^2}. \tag{3}$$
We can compare this with the density of states predicted for a given Fermi surface geometry as measured by the dHvA effect. 36 For a Fermi surface section with surface element $\mathrm{d}S$, which hosts quasiparticles of effective mass $m^*$ that have Fermi velocity $v_F$, we can write

$$g(E_F) = \frac{1}{4\pi^3 \hbar} \int \frac{\mathrm{d}S}{v_F}. \tag{4}$$
For the simple geometric case of a cylindrical Fermi surface, of radius $k_F = \sqrt{k_x^2 + k_y^2}$ and height $h$, combining these two expressions gives a contribution (per cylinder) to the linear specific heat coefficient of

$$\gamma_\mathrm{N} = \frac{k_\mathrm{B}^2 \, m^* h \, V_\mathrm{m}}{6\hbar^2}, \tag{5}$$

for a metal of molar volume $V_\mathrm{m}$. Therefore, comparing this simple case with our dHvA data, for the 3.5 kT quantum oscillatory frequency observed for magnetic field applied along the ĉ direction (Figs. 2, 3), and assuming that both cylinders have the same (single) effective mass we found in our Lifshitz-Kosevich temperature study (Fig. 3), we can estimate a contribution per (circular) cylinder to $\gamma_\mathrm{N}$ of ≈ 51.5 mJ mol−1 K−2.
Taking this result for the simple case of ideal, circularly cross-sectional cylindrical Fermi surface sections with no warping, we then numerically calculated the actual surface area of the squircular, warped Fermi sheets we generated in our Fermi surface simulations (Fig. 4). We found that the electron-type section possesses a surface area 1.20 times bigger than the case of the simple circular cross-sectional cylinder of the same cross-sectional area normal to ĉ (corresponding to a dHvA frequency of 3.5 kT). For the hole-type sheet, we found that its surface area is bigger by a factor of 1.14. Therefore, we obtained values of ≈ 61.8 mJ mol−1 K−2 for the electron sheet and ≈ 58.7 mJ mol−1 K−2 for the hole sheet, giving a total contribution to γ_N of ≈ 120.5 mJ mol−1 K−2.
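Eqn (5) can be checked numerically. In the sketch below, the cylinder length and the molar volume are rough assumed values for UTe 2 (they are not quoted in the text), chosen only to illustrate that a cylinder hosting 41 m_e quasiparticles, of roughly full-zone length, yields a contribution of order 50 mJ mol−1 K−2 per cylinder.

```python
from scipy.constants import k as k_B, hbar, m_e, pi

def gamma_cylinder(m_star, height, V_m):
    """Sommerfeld contribution of one Fermi cylinder, Eqn (5):
    gamma_N = k_B^2 * m* * h * V_m / (6 hbar^2), in J mol^-1 K^-2.
    m_star in units of m_e; height = cylinder length in reciprocal
    space (1/m); V_m = molar volume (m^3/mol)."""
    return k_B**2 * (m_star * m_e) * height * V_m / (6 * hbar**2)

# Assumed illustrative inputs: cylinder length taken as the full Brillouin
# zone height of the body-centred cell (twice 2*pi/c, c ~ 13.97 A), and a
# molar volume of ~5.4e-5 m^3/mol.
h = 2 * 2 * pi / 13.97e-10
V_m = 5.4e-5
print(gamma_cylinder(41, h, V_m) * 1e3)  # ~52 mJ mol^-1 K^-2 per cylinder
```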
We note that this treatment is only approximate, as we assume a constant v_F along the entire surface of the Fermi sheets, using the value obtained from our measurement of m* for magnetic field applied along the c direction. At inclined magnetic field tilt angles a range of effective masses is observed, from as low as 32 m_e reported in ref. 33, up to the mass of 78 m_e we find in Supplementary Fig. S14. Therefore, this comparison between the density of states implied by specific heat capacity measurements and that inferred from observations of the dHvA effect is only approximate, in the absence of a full determination of the profile of v_F along the entire surface of the Fermi surface sections. In our calculations we used the cyclotron effective mass measured for magnetic field parallel to the axis of the cylinders, as at this orientation the quantum oscillatory amplitude is largest and thus we are sampling a large proportion of the variation of v_F along the cylinders' surfaces. Given the close correspondence between the values of γ_N measured by specific heat experiments and calculated from our dHvA data and Fermi surface simulations, this adds strong confidence to our Fermi surface simulations and to the interpretation of the dHvA data that these two quasi-2D sections likely comprise the only Fermi surface sheets present in UTe₂.
Poor correspondence between dHvA effect data and DFT-calculated Fermi surfaces
We performed density functional theory (DFT) calculations for UTe₂ using the full-electron, linearised augmented plane-wave package Wien2k1 (computational details are given below). Here, we plot characteristic Fermi surface calculations for a range of U values from U = 1 eV up to U = 16 eV. We compare the expected angular evolution of the dHvA effect of these calculated Fermi surfaces with the measurements performed in this study and reported in ref. 3. It is clear that none of these calculated Fermi surfaces fully account for the observed dHvA frequency evolution. Hence, this motivated us to perform Fermi surface simulations guided by the dHvA data, as detailed below.
Visualisation of DFT-generated Fermi surfaces was performed using XCrySDen4 and the corresponding quantum oscillation frequencies were extracted using the SKEAF extremal area program.5
Fig. S1. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 1.0 eV (c,d). Low quantum oscillatory frequencies are expected, very different from those observed by experiment.
Fermi surface parameterisation
The Fermi surface simulations detailed in the Methods section were generated using a Cartesian reciprocal space basis:
$$\mathbf{k}_x = 2\pi(1/a,\, 0,\, 0), \quad \mathbf{k}_y = 2\pi(0,\, 1/b,\, 0), \quad \mathbf{k}_z = 2\pi(0,\, 0,\, 1/c). \qquad (1)$$
This can be related to the actual k-space basis as:
$$\mathbf{b}_1 = \mathbf{k}_y + \mathbf{k}_z, \quad \mathbf{b}_2 = \mathbf{k}_x + \mathbf{k}_z, \quad \mathbf{b}_3 = \mathbf{k}_x + \mathbf{k}_y. \qquad (2)$$
Fig. S7. The reciprocal space basis used to parameterise the Fermi surface. A Cartesian coordinate system was used since it has an intuitive relation with the cylindrical polars used to parameterise the Fermi surface.
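For concreteness, the basis of Eqn. 1 and the reciprocal vectors of Eqn. 2 can be written out as numpy arrays, as below. The pairing of Cartesian vectors in Eqn. 2 is our reconstruction of a garbled source line (consistent with a body-centred orthorhombic lattice) and should be treated as an assumption.

```python
# Cartesian reciprocal basis (Eqn. 1) and reconstructed reciprocal
# lattice vectors (Eqn. 2); the b1-b3 pairing is an assumption.
import numpy as np

a, b, c = 4.123, 6.086, 13.812  # Angstrom
kx = 2 * np.pi * np.array([1 / a, 0, 0])
ky = 2 * np.pi * np.array([0, 1 / b, 0])
kz = 2 * np.pi * np.array([0, 0, 1 / c])

b1, b2, b3 = ky + kz, kx + kz, kx + ky  # reciprocal lattice vectors (assumed)
```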
Fermi surface simulations
As discussed above, DFT calculations were unable to accurately describe the measured quantum oscillation data. On close inspection, the dHvA effect data exhibit some key features:
1. At 0°, all frequencies collapse onto a single point, indicating that the area of all surfaces when viewed down the k_z-axis must be extremely similar.
2. When initially rotating away from the c direction, the frequencies split into three branches, with one branch that decreases in frequency, which would appear at odds with a cylindrical Fermi surface of circular cross-section.
3. At angles close to 90° the frequencies go as 1/cos θ, with only one (fast) frequency observable at 74°, indicative of cylindrical Fermi surfaces of similar area again.
Initially, the behaviour of the quantum oscillation data at 0 • and high angles seems to contradict the behaviour seen at intermediate angles. However, from the angle dependence of the oscillations and from DFT calculations, several features of the Fermi surface can be deduced.
DFT calculations suggest that the Fermi surface does indeed consist of two 'squircular' cylinders. The 0° behaviour indicates that both cylinders must have very similar areas, but also that their area cannot be strongly warped as a function of k_z, as otherwise this would result in a number of frequencies at 0°. However, the mid-angle frequencies show neck-and-belly behaviour as well as a branch that decreases in frequency, which would indicate warping along k_z. To reconcile these behaviours, it is noted that the symmetry of the Brillouin zone allows spatial warping in the k_x-k_y plane. What this means is that the Fermi surface consists of cylinders of constant area which follow a sinusoidally oscillating path in the k_x/k_y plane as a function of k_z.
In DFT, changing U in the range 2-16 eV modifies the direction and amplitude of the above warping, likely due to changes in the hybridisation of the bands. However, the DFT results could not completely capture the angular frequency evolution observed in the quantum oscillation data (see Supplementary Figures S1-6).
Instead, an approach similar to the work of Bergemann et al. 6 was adopted. For UTe 2 , the squared-off shape of the cylinders means that the Fermi surface can be described as a superposition of super-ellipses defined as:
$$\left|\frac{k_x}{A}\right|^{n} + \left|\frac{k_y}{B}\right|^{n} = 1. \qquad (3)$$
Therefore, the surface vectors can be defined as
$$k_x(\theta) = \sum_n A_n\,|\cos\theta|^{2/n}\,\mathrm{sgn}(\cos\theta), \quad k_y(\theta) = \sum_n B_n\,|\sin\theta|^{2/n}\,\mathrm{sgn}(\sin\theta), \quad \theta \in [0, 2\pi], \qquad (4)$$
where A_n and B_n are the semi-diameters in the k_x and k_y directions respectively. The centres of the super-ellipses trace out a sinusoidal path in reciprocal space that can be parameterised as
$$x_c(\phi) = W_x\cos(\phi), \quad y_c(\phi) = W_y\cos(\phi), \quad z_c(\phi) = \phi, \quad \phi \in [-\pi, \pi], \qquad (5)$$
where W_x and W_y are the warping parameters in the k_x and k_y directions respectively. Physically, this may correspond to hybridisation between U and Te orbitals governed by the respective interatomic distances, similar to behaviour seen in YFe₂Ge₂.7 Note that since z_c is defined on the range [−π, π], the final areas must be rescaled according to the real extent of the k_z direction.
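A minimal sketch of this parameterisation is given below: a super-ellipse cross-section (Eqn. 4) riding on the sinusoidal centre path (Eqn. 5). For simplicity a single exponent n = 5 is used (the dominant contribution quoted below), and the amplitudes passed in are placeholders.

```python
# Super-ellipse cross-section (Eqn. 4) translated along the warped
# centre path (Eqn. 5); combined as in Eqn. 6 with zero offset.
import numpy as np

def super_ellipse(theta, A, B, n=5):
    kx = A * np.abs(np.cos(theta))**(2 / n) * np.sign(np.cos(theta))
    ky = B * np.abs(np.sin(theta))**(2 / n) * np.sign(np.sin(theta))
    return kx, ky

def centre_path(phi, Wx, Wy):
    return Wx * np.cos(phi), Wy * np.cos(phi), phi

def surface(theta, phi, A, B, Wx, Wy, n=5):
    TH, PH = np.meshgrid(theta, phi)
    px, py = super_ellipse(TH, A, B, n)
    cx, cy, cz = centre_path(PH, Wx, Wy)
    return px + cx, py + cy, cz
```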
With these parametric equations defined, the simulated Fermi surface can be obtained by defining the surface
$$\mathbf{S}(\theta, \phi) = \mathbf{c}(\phi) + \mathbf{p}(\theta) + \mathbf{o}, \qquad (6)$$
where
$$\mathbf{c} = (x_c,\, y_c,\, z_c), \qquad (7)$$
and
$$\mathbf{p} = (k_x,\, k_y,\, 0). \qquad (8)$$
Here **o** is an offset from the centre of the Brillouin zone such that the cylinders are centred at the X and Y high-symmetry points. For these simulations the unit cell was assumed to have dimensions:8
$$a = 4.123\ \text{Å}, \quad b = 6.086\ \text{Å}, \quad c = 13.812\ \text{Å}. \qquad (9)$$
The 0° dHvA data fix the areas of each cylinder at constant values 𝒜_{h/e}, while the symmetry properties of the Brillouin zone allow the hole-like cylinder to only be warped in the k_x direction, whereas for the electron-like cylinder warping is only allowed in the k_y direction.
This means that once the area for each cylinder has been determined there are only two free parameters that can be varied to fit the data: the in-plane aspect ratio,
$$A_{e/h}/B_{e/h}, \quad \text{where} \quad A_{e/h}\,B_{e/h} \propto \mathcal{A}_{e/h}, \qquad (10)$$
and the warping amplitude,
$$W_{x/y} \quad \text{for } e/h. \qquad (11)$$
For the hole-like cylinder, values of:
$$A_h = 1.92, \quad B_h = 2.12, \quad W_x = 0.52, \quad W_y = 0 \qquad (12)$$
were determined, whereas for the electron-like cylinder it was found that:
$$A_e = 1.55, \quad B_e = 2.59, \quad W_x = 0, \quad W_y = -0.15. \qquad (13)$$
With these parameters, Fermi surfaces were generated and visualised using PyVista.9 Simulated frequencies were determined according to a similar methodology to SKEAF,5 although the closed-cylindrical topology of each surface makes determining extremal frequencies significantly easier. Since there is only one warping parameter, each cylinder can contribute at most only two extremal areas. Extremal areas were determined by shifting each cylinder to the origin and creating a supercell of the cylinder (extending 20 Brillouin zones). Slicing planes were then placed at regular intervals along the k_z direction, where the angle of the slicing planes could be varied continuously through 90° towards either the k_x or k_y direction. The maximal and minimal areas, 𝒜, of the intersection of each slicing plane with each cylinder could then be determined, and hence the frequency contribution, F, of each cylinder was calculated according to the Onsager relation,10
$$F = \frac{\hbar \mathcal{A}}{2\pi e}. \qquad (14)$$
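The sketch below shows the simplest instance of this procedure, for field along k_z: the cross-sectional area of one super-ellipse slice is computed by the shoelace formula and converted to a dHvA frequency via Eqn. 14. The text does not state the units of the fitted semi-diameters, so nm⁻¹ is assumed here purely for illustration, and the printed frequency is therefore indicative only.

```python
# Slice area via the shoelace formula, then Onsager (Eqn. 14).
import numpy as np
from scipy.constants import hbar, e

def shoelace(kx, ky):
    return 0.5 * np.abs(np.dot(kx, np.roll(ky, 1)) - np.dot(ky, np.roll(kx, 1)))

theta = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
A_h, B_h, n = 1.92e9, 2.12e9, 5   # hole cylinder, Eqn. 12 (nm^-1 assumed)
kx = A_h * np.abs(np.cos(theta))**(2 / n) * np.sign(np.cos(theta))
ky = B_h * np.abs(np.sin(theta))**(2 / n) * np.sign(np.sin(theta))

area = shoelace(kx, ky)                    # m^-2
F = hbar * area / (2 * np.pi * e)          # dHvA frequency (T)
print(f"F = {F/1e3:.2f} kT")               # indicative value only
```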
It should be noted that the quantum oscillation data are degenerate with respect to inversions of the warping parameter. That is to say, making W_{x/y} negative will invert the warping of the cylinders while leaving the simulated oscillation pattern the same (see Supplementary Figure S8). In this work we chose the warping to best emulate the Fermi surfaces seen for U = 8 eV, as used in several other UTe₂ works, in addition to studies of several uranium oxides.11-13
Inverted-warping Fermi surface
When the warping parameters are inverted an alternate Fermi surface is produced, with simulated dHvA effect frequencies that identically match the quantum oscillation data (see Fig. 4 of main text). The shape of these surfaces more closely resembles the DFT-generated Fermi surfaces of U = 1.7-2.0 eV than those of other U parameters.
Quantum oscillations in the contactless resistivity of UTe 2
In addition to our dHvA effect study presented in the main text, we also measured quantum oscillations from the Shubnikov-de Haas (SdH) effect. Contactless resistivity was measured by utilizing a proximity diode oscillator (PDO) circuit, with the change in frequency of this circuit as a function of magnetic field being related to the change in resistivity of a UTe₂ crystal (for further details of this measurement technique, see for example refs. 14-16). Figure S9 shows the contactless resistivity of UTe₂ for field oriented along the c direction (purple curves) and at a tilt angle of 51° away from c (green curves). For field aligned along c, a monofrequency oscillatory signal is clearly resolved, with no other frequency peaks resolvable above the noise floor. This is in very good agreement with the torque data presented in the main text.
We note that a recent field-modulation study of the dHvA signal of UTe₂ along the c direction reported multiple frequency branches for this orientation.17 The authors posited that the disagreement between their result and that of our torque study is likely due to a misalignment in one of the experiments. For our contactless resistivity study presented here, we replicate the monofrequency waveform of Fig. 3 of the main text very well. We note that the alignment of the crystal for the PDO experiment was assisted by choosing a platelet-shaped sample with a dominant (001) face. This was then secured onto a planar PDO coil such that the base plate of the measurement coil, the plane of which runs orthogonally to the crystallographic c direction, could be firmly secured onto the rotator platform of the measurement probe. With this platform oriented normal to the applied field, the field is then aligned along the c direction. Angular orientation was calibrated by use of a Hall sensor.
We note that the resolution of our PDO measurement was not as sensitive as the comparative torque measurement at θ = 0° in the main text, as the second harmonic is not discernible.
This may be due to some technical differences in the optimisation of the two measurement techniques, or due to a difference in sample quality, or a combination of both. Despite the lower temperature of the 51° measurement, it is not sensitive enough to resolve any frequency components. This is consistent with the rapidly increasing frequency profile of our quasi-2D Fermi surface model, accompanied by a sharp diminution of oscillatory amplitude, as the field is tilted further away from the axis of the cylinders (as demonstrated by Fig. 2 of the main text); thus, the oscillatory amplitude appears to have fallen below the comparatively high noise floor of the PDO measurement. Our SdH measurements therefore do not indicate the presence of any other Fermi sheets beyond the two quasi-2D cylindrical sections that are captured by our Fermi surface model and presented in Fig. 4 of the main text.
Estimation of the mean free path
We can estimate a lower bound for the mean free path of the samples investigated in this study by considering the (real-space) cyclotron orbits that give rise to the observed magneto-quantum oscillations. Assuming an approximately circular cross-sectional surface area normal to the applied magnetic field, then 𝒜 = πk_F², where k_F is the Fermi wave-vector. Consider now the cyclotron motion of an electrically charged quasiparticle in the presence of a magnetic field B, which may be described by
$$\frac{m^* v_F^2}{r} = e\, v_F B, \qquad (15)$$
where r is the radius of the cyclotron orbit and v_F is the Fermi velocity. As ℏk_F ≡ m*v_F, we can substitute this into the above and, with use of Eqn. 14, we may express r as
$$r = \frac{1}{B}\sqrt{\frac{2\hbar F}{e}}. \qquad (16)$$
Therefore, the observation of a quantum oscillation frequency of 18.5 kT (in Fig. 2 of the main text) at a magnetic field of 26 T implies a cyclotron orbit of radius r ≈ 1900 Å. Thus, this gives an approximate lower bound on the mean free path of the sample.
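Evaluating Eqn. 16 with these numbers is a two-line calculation, sketched below; it reproduces the ~1900 Å radius quoted above.

```python
# Cyclotron orbit radius from Eqn. 16 for the 18.5 kT orbit at 26 T.
import numpy as np
from scipy.constants import hbar, e

F, B = 18.5e3, 26.0                   # dHvA frequency (T), field (T)
r = np.sqrt(2 * hbar * F / e) / B     # cyclotron radius (m)
print(f"r = {r*1e10:.0f} Angstrom")   # ~1900 Angstrom
```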
We can compare this value deduced from cyclotron orbit arguments with an estimation of the mean free path, ℓ, expected from Drude theory.18 For a metal with carrier density n, ℓ may be expressed as:
$$\ell = \frac{m^* v_F}{n e^2 \rho_0} = \frac{\hbar k_F}{n e^2 \rho_0}. \qquad (17)$$
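The sketch below evaluates Eqn. 17 with the parameter values quoted just below (n = 1.6 × 10²² cm⁻³ and ρ₀ = 0.5 µΩ cm). The text does not state which orbit's k_F was used, so taking it from the 3.5 kT frequency via k_F = √(2eF/ℏ) is our assumption; the result comes out of order ~2000 Å, consistent with the quoted value.

```python
# Drude mean free path, Eqn. 17, with quoted n and rho_0.
from numpy import sqrt
from scipy.constants import hbar, e

n = 1.6e28                              # carrier density (m^-3)
rho0 = 0.5e-8                           # residual resistivity (Ohm m)
k_F = sqrt(2 * e * 3.5e3 / hbar)        # assumed: from the 3.5 kT orbit
ell = hbar * k_F / (n * e**2 * rho0)
print(f"l ~ {ell*1e10:.0f} Angstrom")   # of order 2000 Angstrom
```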
Taking n from a prior Hall effect study,19 which found for μ₀H < 35 T that n = 1.6 × 10²² cm⁻³, a sample with ρ₀ ≲ 0.5 µΩ cm (Fig. 1b) has ℓ ∼ 2000 Å. Thus, this approximation underlines the pristine quality of the UTe₂ single crystals investigated in this study.
Identification of two distinct Fermi surface sections contributing to the observed oscillatory waveform
The high resolution of the quantum oscillation data measured in this study allows us to perform a detailed quantitative analysis, to better understand the geometrical properties of the UTe₂ Fermi surface. The arguments presented in the main text, and the discussion of our Fermi surface simulation given above, present a strong argument in favour of the UTe₂ Fermi surface being composed of two quasi-2D cylindrical sheets. Prior analyses of cylindrical Fermi surfaces in hole-doped cuprates have carefully considered the amount of corrugation that may be present along the sides of the cylinders,20-23 which can have important implications for Fermi surface nesting vectors.24,25 Our simulated Fermi surface (Fig. 4 of the main text) possesses pronounced undulations along the length of the cylinders. However, on a smaller scale, our model predicts that the cylinders are very smooth, with negligible crinkles or corrugations along the surfaces. Furthermore, the model expects a singular frequency component to be observed for field along c. This would imply that both cylinders have the same cross-sectional area and, again, that there are only negligibly small ripples or corrugations up their lengths: only the large-scale undulations, which go through one complete period for each Brillouin zone, without adding additional extremal orbits for field along c.
To assess the validity of these geometrical assumptions, we can closely inspect the dHvA signal for field applied collinear to the cylindrical axes (the c direction). For the general case of two cylindrical Fermi sheets, labelled α and β, of unspecified cross-sectional area and degree of corrugation, the theoretically expected20,21,26,27 quantum oscillatory waveform for magnetic field, B, applied parallel to their axes may be approximated as:
$$\Delta\tau = \sum_{i = \alpha, \beta} \Delta\tau_{i,0} \cdot R_T R_D \cdot J_0\!\left(\frac{2\pi\,\Delta F_i}{B}\right) \cdot \cos\!\left(\frac{2\pi F_i}{B}\right), \qquad (18)$$
where Δτ_{i,0} is the amplitude in the infinite-field limit, R_T is the temperature damping coefficient (computed from the data presented in Fig. 3 of the main text), R_D is the Dingle damping coefficient28 of the form R_D = exp(−Λ_i/B) for damping factor Λ_i, ΔF_i is the depth of corrugation (in frequency-space) of the i-th Fermi sheet, and F_i is the oscillatory frequency corresponding to a cross-sectional area 𝒜_i by the Onsager relation.10 J_0 denotes a zeroth order Bessel function of the first kind, included to capture the extent of any possible corrugation along the lengths of the cylinders, which would result in interference due to phase smearing. We perform an unconstrained fit to Eqn. 18 in Fig. S10, which yields values of F_α = 3470(15) T, ΔF_α = 13(4) T, F_β = 3485(13) T, and ΔF_β = 7(6) T. This implies that both cylinders have identical cross-sectional areas (within uncertainty), and that the presence of corrugation is negligible, as ΔF/F ≈ 0.004. Therefore, we conclude that the description provided by our Fermi surface simulation, of quasi-2D cylindrical sections with identical cross-sectional areas and negligibly small corrugations, is well supported by this analysis.
We note that this treatment is only approximate, as it assumes cylinders having circular cross-sections, rather than the super-elliptical cross-sections we find in our Fermi surface simulations; hence, we have restricted this analysis solely to the 0° data. A similar analysis at inclined angles, beyond the scope of this work, fully accounting for the squircular nature of the cylinders, could be illuminating in evaluating the effects of a possible Yamaji angle in the vicinity of the orientation at which the very high magnetic field re-entrant superconducting phase is located29,30 (see Fig. S17).
Fig. S12. Determining T_c from resistivity measurements. The same electrical resistivity data as that in Fig. 1 of the main text, here plotted linearly in temperature close to the superconducting transition. A T_c of 2.1 K is clearly resolved, as determined by zero resistivity.
Fig. S15. Measurements of the high frequency, small amplitude dHvA signal at θ = 74°. a, Δτ from three successive magnetic field sweeps for the field tilted 74° from the c direction, plotted linearly in inverse field. To maximise the ratio of signal-to-noise, the magnetic field was swept slowly at a rate of 0.05 T/min for each curve. The cyan curve is the averaged, smoothed waveform. b, The corresponding FFTs of the data in (a). All curves show a clear peak at 18.5 kT on top of 1/f background noise. A noise profile of 1/f is to be expected due to the measurement being performed linearly in time (and hence field), while the Fourier analysis is conducted for oscillations that are periodic in inverse field. It is the average of these three individual sweeps that is plotted in Fig. 2 of the main text.
Fig. S16. Long field sweep for field oriented along the c direction. a, Δτ at 100 mK. Note that the signal-to-noise of the data presented in this figure is notably poorer than that of the data presented in Fig. 3 of the main text. This is due to the data here being obtained at a considerably faster sweep rate of 0.5 T/min, compared to 0.05 T/min for the Δτ curves in Fig. 3. b, Raw torque signal. c, FFT of the data in (a) over the field interval 16 T ≤ μ₀H ≤ 28 T, and d, over 25 T ≤ μ₀H ≤ 28 T. No slower frequency components are resolved over this wide field range.
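Returning to the two-component waveform of Eqn. 18, the sketch below fits synthetic torque data with scipy, using the published best-fit frequencies as the ground truth. The amplitudes and noise level are placeholders, and the temperature and Dingle damping factors are absorbed into the amplitudes for simplicity.

```python
# Two-component fit of Eqn. 18 (phase-smearing Bessel factor included)
# on synthetic data; F_alpha, F_beta taken as 3470 T and 3485 T.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

B = np.linspace(20.0, 28.0, 6000)

def model(B, a1, F1, dF1, a2, F2, dF2):
    out = 0.0
    for a, F, dF in ((a1, F1, dF1), (a2, F2, dF2)):
        out = out + a * j0(2 * np.pi * dF / B) * np.cos(2 * np.pi * F / B)
    return out

data = model(B, 1.0, 3470, 13, 0.8, 3485, 7) + 0.02 * np.random.randn(B.size)
p0 = [0.9, 3468, 12, 0.9, 3487, 8]   # oscillatory fits need a close initial guess
popt, _ = curve_fit(model, B, data, p0=p0)
print("F_alpha, F_beta =", popt[1], popt[4])
```

Because the model is rapidly oscillatory in 1/B, the fit only converges from initial frequency guesses already within a fraction of a period of the true values, which is why a tight p0 is supplied.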
Figure 2 shows quantum oscillations measured in the magnetic torque of UTe₂. The oscillatory component of the signal, Δτ, was isolated from the background magnetic torque by subtracting a smooth monotonic local polynomial regression fit (see Methods). In panel (a) we find that when a magnetic field, H, is applied along the c direction (defined here as 0°), a single dominant oscillatory frequency is observed.
Fig. 2c compares the raw torque signal, τ, at 0° and 74°. Both curves have been translated (without rescaling) to the same value at 26 T, for ease of comparison. A clear oscillatory component is visible in the 0° curve, whereas the 74° data appear very smooth. Figs. 2d,e give a zoomed-in view of Δτ at 74°, in which a very small amplitude (over an order of magnitude smaller than at 0°), high frequency component of 18.5 kT is clearly present. In the Supplementary Information we perform a quantitative analysis of the 0° waveform, which reveals the oscillatory contribution of two distinct Fermi surface sections of identical cross-sectional area.
Fig. 3. Singular quantum oscillatory frequency and heavy effective cyclotron mass along the c direction. a, Oscillatory component of magnetic torque for field applied very close (within 2°) to the c direction and b, corresponding FFT amplitudes for incremental temperatures as indicated. A dominant frequency component of 3.5 kT (and second harmonic at 7.0 kT) is observed. c, FFT amplitude as a function of temperature (coloured points). The solid line is a fit to the Lifshitz-Kosevich formula,36 which fits the data well and yields a heavy effective cyclotron mass, m*, of 41(2) m_e, where m_e is the bare electron mass.
Fig. 4. The Fermi surface of UTe₂. Angular dependence of the dHvA effect for (a) the c to a rotation plane, and (b) the c to b rotation plane. Blue and gold symbols are simulated frequencies (see Supplementary Information) for extremal orbit areas of our calculated electron- (e⁻) and hole-type (h⁺) Fermi surface sections, respectively; red and purple symbols represent dHvA data from this study and ref. 33. c, Side- and top-view of our simulated Fermi surface cylinders, with high symmetry points indicated. Again, blue (gold) represents electron- (hole-) type sections. d, Extended-zone view of the UTe₂ Fermi surface.
These observations constrained the possible Fermi surface geometry, and motivated our computation of data-led Fermi surface simulations (see Supplementary Information for simulation details). These simulations yielded the Fermi surface displayed in Fig. 4.
We corroborated our Fermi surface model by fitting the 0°, 19 mK Δτ curve from Fig. 3 as the sum of two distinct oscillatory components representing the hole- and electron-type sections, respectively (see Supplementary Information). The fit yields two sinusoids of identical frequencies (within error). Notably, there is only a very slight phase-smearing contribution, indicative of negligible warping along the lengths of the cylinders. Performing a Fourier analysis of the residual curve obtained by subtracting the fit from the data (Supplementary Fig. S10) reveals no additional frequency components that may have been obscured by the large peak in the 0° FFT in Fig. 2b. Therefore, we conclude that our dHvA data are very well interpreted based on UTe₂ possessing a heavy, quasi-2D, charge-compensated Fermi surface.
Our finding of a quasi-2D Fermi surface in UTe₂ has important implications for determining the symmetry of the superconducting gap structure. Evidence indicating the presence of point nodes has been reported from thermal conductivity measurements,38 with subsequent NMR39 and scanning SQUID40 studies interpreting the gap structure within the D_2h point group as being of B_3u character, with point nodes along the a direction. However, recent NMR measurements41 performed on high quality MSF crystals argue strongly in favour of a highly anisotropic full gap, of the A_u representation. The quasi-2D cylinders of our Fermi surface model would be consistent with either a B_3u or A_u representation. Further experimental and theoretical studies to distinguish between these symmetries, or the possibility of a non-unitary combination of both,16,39,42,43 are urgently called for to provide a complete microscopic understanding of the superconducting order parameter in UTe₂.
It is interesting to consider how different the Fermi surface of UTe₂ is from the complex multi-sheet Fermi surfaces found in the ferromagnetic superconductors URhGe, UGe₂, and UCoGe, the latter two of which have small 3D pockets.44 Contrastingly, the possession of a relatively simple quasi-2D Fermi surface comprising charge-compensated cylindrical components is remarkably similar to other unconventional superconductors including the Fe-pnictides,45,46 underdoped high-T_c cuprates,47,48 and Pu-based superconductors.49 It has been suggested19 that the near nesting between quasi-2D Fermi surface sections favours spin fluctuations in UTe₂ and may thereby strengthen the spin-triplet pairing mechanism. Thus, given the pronounced undulation but negligible degree of corrugation of the UTe₂ Fermi sheets, this is likely the reason why UTe₂ exhibits such a markedly higher T_c than its ferromagnetic U-based cousins. Given the multitude of theoretical studies into the effects of d-wave pairing symmetry hosted by such a Fermi surface in the case of e.g. the cuprates,50 it is therefore interesting to consider what similarities and differences may be found when considering instead a p-wave symmetry.
Acknowledgements
We are grateful to J. Chen, D. Shaffer, D.V. Chichinadze, S.S. Saxena, C.K. de Podesta, P. Coleman, T. Helm, A.B. Shick, and W. Luo for fruitful discussions. We thank T.J. Brumm, T.P. Murphy, A.F. Bangura, D. Graf, S.T. Hannahs, S.W. Tozer, E.S. Choi, and L. Jiao for technical advice and assistance. This project was supported by the EPSRC of the UK (grant no. EP/X011992/1). A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779* and the State of Florida. Crystal growth and characterization were performed in MGML (mgml.eu), which is supported within the program of Czech Research Infrastructures (project no. LM2018096). We acknowledge financial support by the Czech Science Foundation (GACR), project No. 22-22322S. The EMFL supported dual-access to facilities at MGML, Charles University, Prague, under the European Union's Horizon 2020 research and innovation programme through the ISABEL project (No. 871106). A part of this work was also supported by the JAEA REIMEI Research Program. DFT calculations were performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). This project also used the ARCHER2 UK National Supercomputing Service (https://www.archer2.ac.uk). A.G.E. acknowledges support from a QuantEmX grant from ICAM and the Gordon and Betty Moore Foundation through Grant GBMF5305; from the Henry Royce Institute for Advanced Materials through the Equipment Access Scheme enabling access to the Advanced Materials Characterisation Suite at Cambridge, grant numbers EP/P024947/1, EP/M000524/1 & EP/R00661X/1; and from Sidney Sussex College (University of Cambridge). T.I.W. and A.J.H. acknowledge support from EPSRC studentships (EP/R513180/1 & EP/M506485/1). N.J.M.P. and Z.W. acknowledge studentship support from the Cambridge Trust (www.cambridgetrust.org). Z.W. also acknowledges studentship support from the Chinese Scholarship Council (www.chinesescholarshipcouncil.com).
Electronic structures were computed on a 17 × 17 × 17 Monkhorst-Pack k-mesh within the Brillouin Zone of the primitive unit cell using the Generalised Gradient Approximation (GGA) exchange-correlation potential. A variable Hubbard parameter (U) was utilised, while the static magnetic moment on the uranium ions was constrained to zero. The effects of spin-orbit coupling (SOC) were taken into account. DFT+U+SOC results were consistent with previous work.2,3 When correlations are neglected, DFT produces an insulating ground state in UTe₂. However, applying a moderate repulsive potential to the U-5f electrons through a Hubbard U induces an insulator-to-metal transition at U ∼ 1 eV.2 The geometry and topology of the computed Fermi surfaces were examined for U = 1-16 eV. For values of U below 1.7 eV a 3D, toroidal, electron-like contribution to the Fermi surface is recovered, while the hole-like surface consists of a heavily warped cylinder. Above 1.7 eV the toroidal Fermi surface splits into another cylinder whereas the hole-like sheet becomes less warped, resulting in a quasi-2D Fermi surface consisting of two cylinders centred at the X and Y high-symmetry points.
Fig. S2. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 1.5 eV (c,d). Multiple frequency branches are expected, in both rotation planes, that are not observed by experiment.
Fig. S3. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 2.0 eV (c,d). A large proportion of the angular profile in one rotation plane is captured by this Fermi surface calculation. However, the low, spectrally dominant frequency branch in the other rotation plane is not accounted for.
Fig. S4. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 8.0 eV (c,d). Again, the low, spectrally dominant frequency branch is not accounted for.
Fig. S5. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 12.0 eV (c,d). Again, the low, spectrally dominant frequency branch is not accounted for.
Fig. S6. The simulated quantum oscillation frequencies (a,b) from the DFT-calculated Fermi surface with U = 16.0 eV (c,d). Again, the low, spectrally dominant frequency branch is not accounted for.
Note that k_x and k_y are defined according to orthogonal basis vectors, not the reciprocal lattice vectors (see Supplementary Figure S7). The exact shape of the Fermi surface in the k_x-k_y plane does not have a strong influence on the simulated oscillations, and so the combination of super-ellipses was chosen such that it reproduced the in-plane shape of the U = 2-16 eV DFT calculations. The dominant super-ellipse contribution comes from n = 5.
Fig. S8. An example degenerate Fermi surface. a, Side-on view of the Fermi surface. b, Top-down view of the Fermi surface. c, Extension of the cylinders outside of the first Brillouin zone.
Fig. S9. Quantum oscillations in the contactless resistivity of UTe₂. a, PDO frequency for a coil connected to a UTe₂ crystal with magnetic field oriented along the c direction. The inset gives a zoomed view at high field, showing that the 3.5 kT oscillations are clearly visible in the raw signal before background subtraction. b, Background-subtracted PDO signal for magnetic field oriented along the c direction (θ = 0°) and tilted 51° away from c. The 51° data have been offset by 80 Hz for clarity. The subtraction procedure for both angles was performed identically. c, FFT spectra of the data in panel (b).
Fig. S10. Isolating the dHvA contributions of two quasi-2D Fermi surface sections of equal area. a, The 19 mK dHvA data from Fig. 3 of the main text, for magnetic field oriented along c, is reproduced here (black curve). Red solid and blue dashed curves are the α and β components, respectively, of Eqn. 18, obtained by performing an unconstrained fit to the data. Ω_{α,β} represent the oscillatory contributions from each of the two distinct cylindrical Fermi surface sections. The purple curve is the sum of these two components (Ω_α + Ω_β), which fits the measured data very well. b, FFTs of the curves in (a), along with the FFT of the residual curve obtained by subtracting the fit from the data (grey dashed line).
Fig. S10. (cont.) c, The same FFT spectra as (b), plotted here on a logarithmic amplitude axis. No clear frequency peaks are distinguishable from the noise after subtracting the dominant component, except for the second harmonic at 7.0 kT. Therefore, the two cylindrical sections (labelled here as α and β) appear to be the sole Fermi surface sections with closed cyclotron orbits normal to c.
Fig. S11. UTe₂ single crystal. Photograph of a typical UTe₂ crystal prepared by the molten salt flux method. Our UTe₂ samples tend to crystallise with flat (001) and (011) surfaces, with the direction of longest extent (left-right in this image) being the a direction.
Fig. S13. Determining T_c from magnetisation measurements. dc magnetisation, M, of a UTe₂ sample from the same growth batch as those used in our quantum oscillation measurements. The onset of superconductivity is clearly resolved at T = 2.1 K, indicating a single bulk transition at this temperature. A small magnetic field of 1 mT was applied along the a direction; the sample was affixed to a quartz sample holder by cryogenic varnish and measured in a Quantum Design MPMS.
Fig. S14. Heavy quasiparticle effective masses. a, Oscillatory component of magnetic torque at θ = 45° away from the c direction over the field range 26-28 T, at various temperatures between 27 mK and 140 mK as indicated. b, Fast Fourier transforms (FFTs) of the data in (a). Two prominent frequency branches are observed, at 5.4 kT and 4.4 kT, respectively. c, FFT amplitudes from (b) plotted versus temperature. Lines are fits to the Lifshitz-Kosevich theory of temperature damping for magnetic quantum oscillations26 (see Methods); these fits yield cyclotron effective masses of 78(2) m_e for the 5.4 kT branch, and 57(3) m_e for the 4.4 kT branch; m_e denotes the bare electron mass.
Fig. S17. Prediction by our Fermi surface simulation of a Yamaji angle coinciding with the orientation of very high magnetic field re-entrant superconductivity. It has previously been reported that for magnetic fields in excess of 40 T applied in a narrow angular range tilted approximately 60° from the c direction towards the b direction, re-entrant superconductivity is observed up to at least 70 T.29,30 Interestingly, we note that our Fermi surface simulation predicts an intersection of three frequency branches (a crossing of the maxima and minima of the hole sheet with the maxima of the electron sheet) to occur in close proximity to this angle, at which the re-entrant superconducting phase is most pronounced.30 The lines in the main panel are given as a guide to the eye; the inset is the same simulation as plotted in Fig. 4 of the main text.
Samples were grown by the molten salt flux (MSF) technique34 in excess uranium, to minimise the formation of uranium vacancies (see Methods for details). The MSF technique has been found to produce crystals of exceptionally high quality,34 as demonstrated by specific heat capacity, C, and electrical resistivity, ρ, measurements in Figure 1. For this batch of crystals, on which quantum oscillation studies were performed, we observe a superconducting transition temperature (T_c) of 2.1 K and residual resistivity ratios (RRR) of up to 900. The RRR is defined as ρ(300 K)/ρ₀, where ρ₀ is the residual 0 K resistivity expected for the normal state in the absence of superconductivity, fitted by the dashed line (linear in T²) in Fig. 1b. By comparison, samples grown by the chemical vapour transport method tend only to exhibit an RRR of ≈ 88 at best,35 and a typical T_c of ≈ 1.6 K.11 Furthermore, in this study we resolve quantum oscillatory frequencies up to 18.5 kT, implying a mean free path of itinerant quasiparticles of at least 1900 Å (see Supplementary Information for calculation), further underlining the pristine quality of this new generation of UTe₂ samples.
Fig. 1. Characterisation of high purity UTe₂. a, Specific heat capacity (C) divided by temperature (T) of a UTe₂ single crystal measured on warming to 3 K. A single, sharp bulk superconducting transition is exhibited. (Inset) The crystal structure of UTe₂. b, Resistivity (ρ) versus temperature squared up to 4 K for current sourced along the a direction. A superconducting transition temperature of 2.1 K is observed (defined by zero resistivity). A residual resistivity ratio (RRR, defined in the text) of 900 is found, with a residual resistivity ρ₀ ≲ 0.5 µΩ cm, indicative of very high sample purity.34,35 (Inset) The same dataset as the main panel extended up to 300 K.
This close correspondence between specific heat measurements and our dHvA observations makes the presence of any 3D Fermi surface pocket(s) extremely unlikely. We also measured Shubnikov-de Haas (SdH) oscillations in the contactless resistivity (see Supplementary Information). The SdH effect is generally more sensitive than the magnetic torque technique to symmetrical 3D Fermi pockets.36 However, we find that the SdH response of UTe₂ contains no additional frequency components than those probed by dHvA (up to field strengths of 28 T, see Supplementary Information). We therefore identify, to a high level of confidence, that the Fermi surface of UTe₂ is quasi-2D in nature, and composed of two undulating cylindrical sections of hole- and electron-type, respectively.
44. Aoki, D., Ishida, K. & Flouquet, J. Review of U-based Ferromagnetic Superconductors: Comparison between UGe₂, URhGe, and UCoGe. J. Phys. Soc. Jpn. 88, 022001 (2019).
45. Dong, J. et al. Competing orders and spin-density-wave instability in La(O₁₋ₓFₓ)FeAs. EPL 83, 27006 (2008).
46. Carrington, A. Quantum oscillation studies of the Fermi surface of iron-pnictide superconductors. Rep. Prog. Phys. 74, 124507 (2011).
47. Sebastian, S. E. et al. Compensated electron and hole pockets in an underdoped high-T_c superconductor. Phys. Rev. B 81, 214524 (2010).
48. Proust, C. & Taillefer, L. The remarkable underlying ground states of cuprate superconductors. Annu. Rev. Condens. Matter Phys. 10, 409-429 (2019).
49. Maehira, T., Hotta, T., Ueda, K. & Hasegawa, A. Electronic Structure and the Fermi Surface of PuCoGa₅ and NpCoGa₅. Phys. Rev. Lett. 90, 207007 (2003).
50. Lee, P. A., Nagaosa, N. & Wen, X.-G. Doping a Mott insulator: Physics of high-temperature superconductivity. Rev. Mod. Phys. 78, 17-85 (2006).
51. Niu, Q. et al. Evidence of Fermi surface reconstruction at the metamagnetic transition of the strongly correlated superconductor UTe₂. Phys. Rev. Res. 2, 033179 (2020).
52. Helm, T. et al. Suppressed magnetic scattering sets conditions for the emergence of 40 T high-field superconductivity in UTe₂ (2022). arXiv:2207.08261.
53. Yamaji, K. On the Angle Dependence of the Magnetoresistance in Quasi-Two-Dimensional Organic Superconductors. J. Phys. Soc. Jpn. 58, 1520-1523 (1989).
54. Honda, F. et al. Pressure-induced Structural Phase Transition and New Superconducting Phase in UTe₂. J. Phys. Soc. Jpn. 92, 044702 (2023).
55. Haga, Y. et al. Purification of Uranium Metal using the Solid State Electrotransport Method under Ultrahigh Vacuum. Jpn. J. Appl. Phys. 37, 3604 (1998).
56. Cleveland, W. S. & Devlin, S. J. Locally weighted regression: An approach to regression analysis by local fitting. J. Am. Stat. Assoc. 83, 596-610 (1988).
57. Ashcroft, N. W. & Mermin, D. N. Solid State Physics (Harcourt College Publishers, San Diego, CA, 1976).
1. Bardeen, J., Cooper, L. N. & Schrieffer, J. R. Theory of Superconductivity. Phys. Rev. 108, 1175-1204 (1957).
2. Monthoux, P., Pines, D. & Lonzarich, G. G. Superconductivity without phonons. Nature 450, 1177-1183 (2007).
3. Leggett, A. J. A theoretical description of the new phases of liquid ³He. Rev. Mod. Phys. 47, 331-414 (1975).
4. Schemm, E., Gannon, W., Wishne, C., Halperin, W. & Kapitulnik, A. Observation of broken time-reversal symmetry in the heavy-fermion superconductor UPt₃. Science 345, 190-193 (2014).
5. Saxena, S. S. et al. Superconductivity on the border of itinerant-electron ferromagnetism in UGe₂. Nature 406, 587-592 (2000).
6. Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Das Sarma, S. Non-abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083-1159 (2008).
7. Mackenzie, A. P. & Maeno, Y. The superconductivity of Sr₂RuO₄ and the physics of spin-triplet pairing. Rev. Mod. Phys. 75, 657-712 (2003).
8. Pustogow, A. et al. Constraints on the superconducting order parameter in Sr₂RuO₄ from oxygen-17 nuclear magnetic resonance. Nature 574, 72-75 (2019).
9. Ran, S. et al. Nearly ferromagnetic spin-triplet superconductivity. Science 365, 684-687 (2019).
10. Nakamine, G. et al. Anisotropic response of spin susceptibility in the superconducting state of UTe₂ probed with ¹²⁵Te-NMR measurement. Phys. Rev. B 103, L100503 (2021).
11. Aoki, D. et al. Unconventional superconductivity in UTe₂. J. Phys. Condens. Matter 34, 243002 (2022).
12. Ran, S. et al. Extreme magnetic field-boosted superconductivity. Nat. Phys. 15, 1250-1254 (2019).
13. Jiao, L. et al. Chiral superconductivity in heavy-fermion metal UTe₂. Nature 579, 523-527 (2020).
14. Hayes, I. M. et al. Multicomponent superconducting order parameter in UTe₂. Science 373, 797-801 (2021).
15. Wei, D. S. et al. Interplay between magnetism and superconductivity in UTe₂. Phys. Rev. B 105, 024521 (2022).
16. Ishihara, K. et al. Chiral superconductivity in UTe₂ probed by anisotropic low-energy excitations (2021). arXiv:2105.13721.
17. Bae, S. et al. Anomalous normal fluid response in a chiral superconductor UTe₂. Nat. Commun. 12, 2644 (2021).
18. Sundar, S. et al. Coexistence of ferromagnetic fluctuations and superconductivity in the actinide superconductor UTe₂. Phys. Rev. B 100, 140502 (2019).
19. Xu, Y., Sheng, Y. & Yang, Y.-f. Quasi-Two-Dimensional Fermi Surfaces and Unitary Spin-Triplet Pairing in the Heavy Fermion Superconductor UTe₂. Phys. Rev. Lett. 123, 217002 (2019).
20. Ishizuka, J., Sumita, S., Daido, A. & Yanase, Y. Insulator-Metal Transition and Topological Superconductivity in UTe₂ from a First-Principles Calculation. Phys. Rev. Lett. 123, 217001 (2019).
21. Shaffer, D. & Chichinadze, D. V. Chiral superconductivity in UTe₂ via emergent C₄ symmetry and spin-orbit coupling. Phys. Rev. B 106, 014502 (2022).
22. Machida, K. Theory of Spin-polarized Superconductors: An Analogue of Superfluid ³He A-phase. J. Phys. Soc. Jpn. 89, 033702 (2020).
23. Yarzhemsky, V. & Teplyakov, E. Time reversal symmetry and the structure of Cooper pair wavefunction in topological superconductor UTe₂. Phys. Lett. A 384, 126724 (2020).
24. Kittaka, S. et al. Orientation of point nodes and nonunitary triplet pairing tuned by the easy-axis magnetization in UTe₂. Phys. Rev. Research 2, 032014 (2020).
25. Shishidou, T., Suh, H. G., Brydon, P. M. R., Weinert, M. & Agterberg, D. F. Topological band and superconductivity in UTe₂. Phys. Rev. B 103, 104504 (2021).
26. Machida, K. Nonunitary triplet superconductivity tuned by field-controlled magnetization: URhGe, UCoGe, and UTe₂. Phys. Rev. B 104, 014514 (2021).
27. Eo, Y. S. et al. c-axis transport in UTe₂: Evidence of three-dimensional conductivity component. Phys. Rev. B 106, L060505 (2022).
28. Shick, A. B., Fujimori, S.-i. & Pickett, W. E. UTe₂: A nearly insulating half-filled j = 5/2 5f³ heavy-fermion metal. Phys. Rev. B 103, 125136 (2021).
29. Moriya, Y., Matsushita, T., Yamada, M. G., Mizushima, T. & Fujimoto, S. Intrinsic Anomalous Thermal Hall Effect in the Unconventional Superconductor UTe₂ (2022). arXiv:2205.11848.
30. Choi, H. C., Lee, S. H. & Yang, B.-J. Correlated normal state fermiology and topological superconductivity in UTe₂ (2022). arXiv:2206.04876.
31. Fujimori, S.-i. et al. Electronic Structure of UTe₂ Studied by Photoelectron Spectroscopy. J. Phys. Soc. Jpn. 88, 103701 (2019).
32. Miao, L. et al. Low Energy Band Structure and Symmetries of UTe₂ from Angle-Resolved Photoemission Spectroscopy. Phys. Rev. Lett. 124, 076401 (2020).
33. Aoki, D. et al. First Observation of the de Haas-van Alphen Effect and Fermi Surfaces in the Unconventional Superconductor UTe₂. J. Phys. Soc. Jpn. 91, 083704 (2022).
34. Sakai, H. et al. Single crystal growth of superconducting UTe₂ by molten salt flux method. Phys. Rev. Materials 6, 073401 (2022).
35. Rosa, P. F. S. et al. Single thermodynamic transition at 2 K in superconducting UTe₂ single crystals. Commun. Mater. 3, 33 (2022).
36. Shoenberg, D. Magnetic Oscillations in Metals (Cambridge University Press, Cambridge, UK, 1984).
37. Sato, M. & Ando, Y. Topological superconductors: a review. Rep. Prog. Phys. 80, 076501 (2017).
38. Metz, T. et al. Point-node gap structure of the spin-triplet superconductor UTe₂. Phys. Rev. B 100, 220504 (2019).
39. Fujibayashi, H. et al. Superconducting Order Parameter in UTe₂ Determined by Knight Shift Measurement. J. Phys. Soc. Jpn. 91, 043705 (2022).
40. Iguchi, Y. et al. Microscopic Imaging Homogeneous and Single Phase Superfluid Density in UTe₂. Phys. Rev. Lett. 130, 196003 (2023).
41. Matsumura, H. et al. Large Reduction in the a-axis Knight Shift on UTe₂ with T_c = 2.1 K (2023). arXiv:2305.01200.
42. Anderson, P. W. & Morel, P. Generalized Bardeen-Cooper-Schrieffer States and the Proposed Low-Temperature Phase of Liquid He³. Phys. Rev. 123, 1911-1934 (1961).
43. Miyake, K. Theory of Pairing Assisted Spin Polarization in Spin-Triplet Equal Spin Pairing: Origin of Extra Magnetization in Sr₂RuO₄ in Superconducting State. J. Phys. Soc. Jpn. 83, 053701 (2014).
Supplementary information for: Quasi-2D Fermi surface in the anomalous superconductor UTe₂
A. G. Eaton, T. I. Weinberger, N. J. M. Popiel, Z. Wu, A. J. Hickey, A. Cabala, J. Pospíšil, J. Prokleška, T. Haidamak, G. Bastien, P. Opletal, H. Sakai, Y. Haga, R. Nowell, S. M. Benjamin, V. Sechovský, G. G. Lonzarich, F. M. Grosche, and M. Vališka. Correspondence to: [email protected]
1. Blaha, P. et al. WIEN2k: An APW+lo program for calculating the properties of solids. J. Chem. Phys. 152, 074101 (2020).
2. Aoki, D. et al. Unconventional superconductivity in UTe₂. J. Phys. Condens. Matter 34, 243002 (2022).
3. Aoki, D. et al. First Observation of the de Haas-van Alphen Effect and Fermi Surfaces in the Unconventional Superconductor UTe₂. J. Phys. Soc. Jpn. 91, 083704 (2022).
4. Kokalj, A. XCrySDen: a new program for displaying crystalline structures and electron densities. J. Mol. Graph. Model. 17, 176-179 (1999).
5. Rourke, P. & Julian, S. Numerical extraction of de Haas-van Alphen frequencies from calculated band energies. Comput. Phys. Commun. 183, 324-332 (2012).
6. Bergemann, C., Mackenzie, A. P., Julian, S. R., Forsythe, D. & Ohmichi, E. Quasi-two-dimensional Fermi liquid properties of the unconventional superconductor Sr₂RuO₄. Adv. Phys. 52, 639-725 (2003).
7. Baglo, J. et al. Fermi Surface and Mass Renormalization in the Iron-Based Superconductor YFe₂Ge₂. Phys. Rev. Lett. 129, 046402 (2022).
8. Hutanu, V. et al. Low-temperature crystal structure of the unconventional spin-triplet superconductor UTe₂ from single-crystal neutron diffraction. Acta Crystallogr. B 76, 137-143 (2020).
9. Sullivan, C. B. & Kaszynski, A. PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit (VTK). J. Open Source Softw. 4, 1450 (2019).
10. Onsager, L. Interpretation of the de Haas-van Alphen effect. Philos. Mag. 43, 1006-1008 (1952).
11. Xu, Y., Sheng, Y. & Yang, Y.-f. Quasi-Two-Dimensional Fermi Surfaces and Unitary Spin-Triplet Pairing in the Heavy Fermion Superconductor UTe₂. Phys. Rev. Lett. 123, 217002 (2019).
12. Shim, J. H., Haule, K. & Kotliar, G. X-ray absorption branching ratio in actinides: LDA+DMFT approach. EPL 85, 17007 (2009).
13. Yin, Q. et al. Electronic correlation and transport properties of nuclear fuel materials. Phys. Rev. B 84, 195111 (2011).
14. Altarawneh, M. M., Mielke, C. H. & Brooks, J. S. Proximity detector circuits: An alternative to tunnel diode oscillators for contactless measurements in pulsed magnetic field environments. Rev. Sci. Inst. 80, 066104 (2009).
15. Liu, H. et al. f-electron hybridised Fermi surface in magnetic field-induced metallic YbB₁₂. npj Quantum Mater. 7, 12 (2022).
16. Semeniuk, K. et al. Truncated mass divergence in a Mott metal (2022).
17. Aoki, D. et al. de Haas-van Alphen Oscillations for the Field Along c-axis in UTe₂ (2023). arXiv:2304.07678.
18. Ashcroft, N. W. & Mermin, D. N. Solid State Physics (Harcourt College Publishers, San Diego, CA, 1976).
19. Niu, Q. et al. Evidence of Fermi surface reconstruction at the metamagnetic transition of the strongly correlated superconductor UTe₂. Phys. Rev. Res. 2, 033179 (2020).
20. Harrison, N. & McDonald, R. D. Determining the in-plane Fermi surface topology in high-T_c superconductors using angle-dependent magnetic quantum oscillations. J. Phys.: Condens. Matter 21, 192201 (2009).
21. Sebastian, S. E. et al. Compensated electron and hole pockets in an underdoped high-T_c superconductor. Phys. Rev. B 81, 214524 (2010).
22. Sebastian, S. E. et al. Normal-state nodal electronic structure in underdoped high-T_c copper oxides. Nature 511, 61-64 (2014).
23. Sebastian, S. E. & Proust, C. Quantum oscillations in hole-doped cuprates. Annu. Rev. Condens. Matter Phys. 6, 411-430 (2015).
24. Dessau, D. S. et al. Key features in the measured band structure of Bi₂Sr₂CaCu₂O₈₊δ: Flat bands at E_F and Fermi surface nesting. Phys. Rev. Lett. 71, 2781-2784 (1993).
25. Johannes, M. D. & Mazin, I. I. Fermi surface nesting and the origin of charge density waves in metals. Phys. Rev. B 77, 165135 (2008).
26. Shoenberg, D. Magnetic Oscillations in Metals (Cambridge University Press, Cambridge, UK, 1984).
27. Champel, T. & Mineev, V. P. de Haas-van Alphen effect in two- and quasi-two-dimensional metals and superconductors. Philos. Mag. B 81, 55-74 (2001).
28. Dingle, R. B. Some magnetic properties of metals II. The influence of collisions on the magnetic behaviour of large systems. Proc. R. Soc. Lond. A 211, 517-525 (1952).
29. Ran, S. et al. Extreme magnetic field-boosted superconductivity. Nat. Phys. 15, 1250-1254 (2019).
30. Helm, T. et al. Suppressed magnetic scattering sets conditions for the emergence of 40 T high-field superconductivity in UTe₂ (2022). arXiv:2207.08261.
Counterfactual Analysis in Dynamic Latent-State Models

Martin Haugh (Imperial College) and Raghav Singal (Dartmouth College)
Correspondence: MH <[email protected]>, RS <[email protected]>

5 May 2023 · arXiv:2205.13832 · https://export.arxiv.org/pdf/2205.13832v4.pdf
Abstract. We provide an optimization-based framework to perform counterfactual analysis in a dynamic model with hidden states. Our framework is grounded in the "abduction, action, and prediction" approach to answer counterfactual queries and handles two key challenges where (1) the states are hidden and (2) the model is dynamic. Recognizing the lack of knowledge on the underlying causal mechanism and the possibility of infinitely many such mechanisms, we optimize over this space and compute upper and lower bounds on the counterfactual quantity of interest. Our work brings together ideas from causality, state-space models, simulation, and optimization, and we apply it on a breast cancer case study. To the best of our knowledge, we are the first to compute lower and upper bounds on a counterfactual query in a dynamic latent-state model.
Introduction
Counterfactual analysis, falling on the third rung of Pearl's ladder of causation (Pearl & Mackenzie, 2018), is a fundamental problem in causality. It requires us to imagine a world where a certain policy was enacted with a corresponding outcome given that a different policy and outcome were actually observed. It is performed via the 3step framework of abduction (conditioning on the observed data), action (changing the policy), and prediction (computing the counterfactual quantity of interest (CQI)), and has wide-ranging applications (Pearl, 2009a;b).
As a concrete application in healthcare and legal reasoning, consider someone who recently died from breast cancer. The exact progression of her disease is unknown. What is known, however, is that over a period of time prior to her diagnosis, her insurance company adopted a strategy of denying her regular scans (e.g., mammograms) even though these scans should have been covered by her policy. Had these scans gone ahead, the cancer may have been found earlier and the patient's life saved. Now a court wants to know the probability that her life would have been saved had the routine scans been permitted.
On top of the challenges posed by standard counterfactual analysis, there are two that are particular to such a setting. First, it is possible the underlying state of the patient (e.g., stage of cancer) is hidden / latent and we only observe a noisy signal depending on the accuracy of the scan (e.g., sensitivity and specificity of a mammogram). Second, the underlying model is dynamic as the patient's state evolves over time. As such, our goal in this work is to perform counterfactual analysis in dynamic latent-state models.

Two streams of work are closely related to ours. The first relates to works on constructing bounds on CQIs (Balke & Pearl, 1994; Tian & Pearl, 2000; Kaufman et al., 2005; Cai et al., 2008; Pearl, 2009b; Mueller et al., 2021; Zhang et al., 2021). These papers focus on static models. We note that despite some similarities of our work with Zhang et al. (2021), the two approaches are quite different. In particular, while both papers recognize the relevance of polynomial optimization for bounding CQIs, Zhang et al. (2021) do not solve polynomial optimization problems but instead propose Monte-Carlo algorithms as a work-around. In contrast, we actually solve polynomial optimization problems via sample average approximations (SAAs), which we generate via Monte-Carlo. As such, Monte-Carlo serves as an "input" to our polynomial programs whereas Zhang et al. (2021) use it as a "substitute" for polynomial programs. As mentioned above, another difference is our focus on dynamic models whereas Zhang et al. (2021) focus on static models. The second stream is more recent and concerns counterfactual analysis in dynamic models (Buesing et al., 2019; Oberst & Sontag, 2019; Lorberbom et al., 2021; Tsirtsis et al., 2021). Except Buesing et al. (2019), none of these works allows for latent states. In addition, these works perform counterfactual analysis by embedding assumptions that are strong enough to restrict the underlying set of causal mechanisms to a singleton. In particular, Buesing et al. (2019) explicitly fix a single causal mechanism whereas Oberst & Sontag (2019) and Tsirtsis et al. (2021) invoke counterfactual stability and implicitly fix the causal mechanism (via the Gumbel-max distribution). Lorberbom et al. (2021) extend the Gumbel-max approach but their choice of causal mechanism is the one that minimizes variance when estimating the CQI. In summary, none of these approaches explicitly account for all possible causal mechanisms, and therefore, they do not consider the construction of lower and upper bounds on the CQI, which is our focus.
Our key contribution is to provide a principled framework for counterfactual analysis in dynamic latent-state models. We define our problem in §2 and discuss counterfactual stability in §3. In §4, we present our solution approach and we describe our numerics in §5. We conclude in §6.
Problem Definition
We first define the underlying dynamic latent-state model ( §2.1) and then describe the counterfactual analysis problem ( §2.2). We will use a breast cancer application as a vehicle for explaining ideas throughout but it should be clear our framework is quite general.
A Dynamic Latent-State Model
The model, visualized in Figure 1, has $T$ discrete periods. In each period $t$, the system is in a hidden state $H_t \in H$ (finite). As a stochastic function of $H_t$ and the policy $X_t \in X$ (finite), we observe an emission $O_t \in O$ (finite). The emission probability is denoted by $e_{hxi} := \mathbb{P}(O_t = i \mid H_t = h, X_t = x)$ for all $(h, x, i)$. This is followed by the state $H_t$ transitioning to $H_{t+1}$ with transition probability $q_{hih'} := \mathbb{P}(H_{t+1} = h' \mid H_t = h, O_t = i)$.

In the breast cancer application, the time periods map to the frequency of mammograms (e.g., 6 months) and the hidden state $H_t \in \{1, \ldots, 7\}$ denotes the patient's condition. State 1 equates to the patient being healthy whereas states 2 and 3 correspond to undiagnosed in-situ and invasive breast cancer, respectively. States 4 and 5 correspond to diagnosed in-situ and invasive breast cancer respectively, with the understanding that the cancer treatment has begun (since it has been diagnosed). States 6 and 7 are absorbing and denote recovery from cancer (due to treatment) and death from cancer, respectively. The observation $O_t \in \{1, \ldots, 7\}$ captures the mammogram result. A value of 1 means no screening took place, whereas 2 denotes a negative screening result (possibly a false negative). A value of 3 corresponds to a positive mammogram result, but followed by a negative biopsy (i.e., the patient is healthy and the mammogram produced a false positive). Observations 4 and 5 map to correctly diagnosed in-situ and invasive cancer respectively, i.e., a positive mammogram followed by a positive biopsy. Observations 6 and 7 are used to denote patient recovery and death from breast cancer, respectively. The variable $X_t \in \{0, 1\}$ models the insurance company's coverage policy for the mammograms, with 0 denoting the company covers it and 1 denoting the company (incorrectly) denies the coverage. If the coverage is denied, then the mammogram is not performed and hence, the observation cannot be 2, 3, 4, or 5. (In this application, the $X_t$'s are deterministic but in general, they could be the result of a randomized policy.)
Our model is therefore a generalization of a hidden Markov model (HMM) since H t+1 depends not only on H t but also on O t . The dependence on O t is needed to capture the fact that if cancer was detected during period t and treatment began at that point of time, i.e., O t ∈ {4, 5}, then H t+1 depends on the fact that the treatment began in period t. For example, if H t = 2 (in-situ cancer) and O t = 4 (in-situ diagnosed and hence, treatment began), then H t+1 would be different compared to when H t = 2 and O t = 2 (false negative and hence, treatment did not begin).
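To make these dynamics concrete, the following minimal sketch forward-simulates one trajectory of the model. The primitives used here are hypothetical toy arrays (not the calibrated primitives of §E.1), with the indexing conventions `E[h, x, i]` for the emission probabilities and `Q[h, i, h']` for the transition probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(p, E, Q, policy):
    """Forward-simulate (h_1:T, o_1:T) under a deterministic policy x_1:T.

    p : (H,) initial-state distribution
    E : (H, X, O) emission probabilities, E[h, x, i] = P(O_t = i | H_t = h, X_t = x)
    Q : (H, O, H) transition probabilities, Q[h, i, h2] = P(H_{t+1} = h2 | H_t = h, O_t = i)
    """
    T = len(policy)
    h = rng.choice(len(p), p=p)                        # H_1 ~ p
    hs, obs = [], []
    for t in range(T):
        o = rng.choice(E.shape[2], p=E[h, policy[t]])  # O_t | H_t, X_t
        hs.append(h)
        obs.append(o)
        if t < T - 1:
            h = rng.choice(Q.shape[2], p=Q[h, o])      # H_{t+1} | H_t, O_t
    return np.array(hs), np.array(obs)

# Hypothetical toy primitives (2 states, 2 emissions, 2 actions); these are
# illustrative placeholders, not the calibrated breast cancer primitives.
p = np.array([0.9, 0.1])
E = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.3, 0.7], [0.4, 0.6]]])
Q = np.array([[[0.9, 0.1], [0.7, 0.3]],
              [[0.2, 0.8], [0.1, 0.9]]])
hs, obs = simulate_path(p, E, Q, policy=np.zeros(5, dtype=int))
```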
Remark 1. Ayer et al. (2012) employed a similar model
for determining an optimal screening strategy for breast cancer but as their goal was to optimize over screening strategies, their model was a partially observable Markov decision process (POMDP). In contrast, our goal is not to find an optimal strategy but to evaluate CQIs. As such, our model is not a POMDP although it is easily related to a POMDP setting. For example, we can view the insurance company's observed coverage strategy and the counterfactual strategy where coverage is always provided, as being feasible strategies from a POMDP. Finally, we also note that a practical justification for our model comes from the simulation model used by the National Cancer Institute (UWBCS, 2013).
The Counterfactual Analysis Problem
We now use our dynamic model to state the counterfactual analysis problem.
Observed data. Suppose we observe emissions o 1:T with the underlying policy being x 1:T . The true hidden states h 1:T are not observed. In the context of breast cancer, the observations for a particular patient might be as follows:
\[
\underbrace{o_1, \ldots, o_{\tau_s - 1}}_{\in \{2,3\}},\;\; \underbrace{o_{\tau_s}, \ldots, o_{\tau_e}}_{=1},\;\; \underbrace{o_{\tau_e + 1}, \ldots, o_{\tau_d - 1}}_{\in \{4,5\}},\;\; \underbrace{o_{\tau_d : T}}_{=7}. \tag{1}
\]
That is, the patient was screened ($x_{1:\tau_s - 1} = 0$) and appeared healthy ($o_{1:\tau_s - 1} \in \{2,3\}$) up to and including time $\tau_s - 1$. Coverage was denied during periods $\tau_s$ to $\tau_e$, i.e., $x_{\tau_s : \tau_e} = 1$ (the second segment in (1)). Hence, screening was not performed during those periods ($o_{\tau_s : \tau_e} = 1$). As soon as the coverage for screening was re-approved (period $\tau_e + 1$ and hence, $x_{\tau_e + 1} = 0$), the patient was found to have cancer (either in-situ or invasive) and the corresponding treatment began; thus, $o_{\tau_e + 1} \in \{4,5\}$. Unfortunately, the patient died at $\tau_d$.
CQI.
We focus on the well-known probability of necessity (PN) (Pearl, 2009a) as our CQI. It is the probability the patient would have not died (counterfactual state $\widetilde H_T \neq 7$) had the screening been covered in every period (intervention policy $\tilde x_{1:T} = 0$) given the observed data $(o_{1:T}, x_{1:T})$. ("Tilde" notation denotes quantities in the counterfactual world.) The interpretation of $\tilde x_{1:T}$ is straightforward as it is fixed exogenously. The counterfactual state $\widetilde H_T$ is obtained via the 3 steps of abduction, action, and prediction (Pearl, 2009b).
Step 1 (abduction) involves conditioning on the observed data (o 1:T , x 1:T ) to form a posterior belief over the hidden states.
Step 2 (action) changes the policy from $x_{1:T}$ to $\tilde x_{1:T}$ and brings us to the counterfactual world $\widetilde M$.
Step 3 (prediction) computes PN in the counterfactual model:
\[
\mathrm{PN} = \mathbb{P}(\widetilde H_T \neq 7), \tag{2}
\]
with the understanding that the event $\{\widetilde H_T \neq 7\}$ is conditional on $(o_{1:T}, x_{1:T})$. Though we focus on PN, it is easy to extend our framework to a broad class of CQIs, as the abduction and action steps do not depend on the CQI.
Given our focus on counterfactual analysis, we will assume the primitives $(p, E, Q)$ are known. We discuss their calibration to real-world data in §5 and emphasize that even with known $(p, E, Q)$, counterfactual analysis is challenging. This is because we are interested in counterfactuals at an individual level (i.e., conditioning on the patient-level data $(o_{1:T}, x_{1:T})$ via abduction), as opposed to the population level. A population-level counterfactual analysis would ignore the first step of abduction but simply change the policy to $\tilde x_{1:T}$ to predict the CQI (by simulating the resulting model and obtaining a Monte-Carlo estimate of PN or doing it in closed-form if analytically tractable). However, this is very different from the task at hand, which falls on the highest rung of Pearl's ladder of causation (Pearl & Mackenzie, 2018). For instance, consider a patient who dies immediately after the coverage was denied versus a patient who dies a couple of years after the coverage was denied. Clearly, the first patient had a more "aggressive" cancer and hence we expect that her PN would be lower. By conditioning on individual-level data $(o_{1:T}, x_{1:T})$, we are able to account for such differences. However, it makes the problem considerably more challenging.
In our dynamic latent-state model, each of the three steps of abduction, action, and prediction presents its own set of challenges, which we discuss when presenting our methodology in §4. Before doing so, we discuss the notion of counterfactual stability (CS), which has become a popular approach in some settings (Oberst & Sontag, 2019).
Limitations of Counterfactual Stability
Instead of discussing CS in our dynamic latent-state model, we do so using the following simple model:
$X \to Y$. Suppose we observe an outcome $Y = y$ under policy $X = x$. With $Y_x := Y \mid (X = x)$, CS requires that the counterfactual outcome under an interventional policy $\tilde x$ (denoted by $\widetilde Y := Y_{\tilde x} \mid Y_x = y$) cannot be $y'$ (for $y' \neq y$) if $\mathbb{P}(Y_{\tilde x} = y)/\mathbb{P}(Y_x = y) \geq \mathbb{P}(Y_{\tilde x} = y')/\mathbb{P}(Y_x = y')$. In words, CS states that if $y$ was observed and this outcome becomes relatively more likely than $y'$ under the intervention, then the counterfactual outcome $\widetilde Y$ cannot be $y'$.
Example 1. Consider $X \in \{0, 1\}$ (medical treatment) and $Y \in \{\text{bad}, \text{better}, \text{best}\}$ (patient outcome), with $\mathbb{P}(Y_0 = \text{bad}) = 0.2$, $\mathbb{P}(Y_0 = \text{better}) = 0.3$, $\mathbb{P}(Y_0 = \text{best}) = 0.5$ and $\mathbb{P}(Y_1 = \text{bad}) = 0.2$, $\mathbb{P}(Y_1 = \text{better}) = 0.2$, $\mathbb{P}(Y_1 = \text{best}) = 0.6$ (we revisit this example in §C.2). Suppose we observe $Y = \text{better}$ under no treatment ($x = 0$) and consider the counterfactual outcome $\widetilde Y$ under treatment ($\tilde x = 1$). Since
\[
\frac{\mathbb{P}(Y_1 = \text{better})}{\mathbb{P}(Y_0 = \text{better})} = \frac{0.2}{0.3} < \frac{0.2}{0.2} = \frac{\mathbb{P}(Y_1 = \text{bad})}{\mathbb{P}(Y_0 = \text{bad})},
\]
CS does not rule out $\widetilde Y = \text{bad}$: a patient who did "better" without treatment may, under CS, counterfactually do worse with treatment. This clashes with the natural pathwise monotonicity (PM) intuition that "the counterfactual outcome $\widetilde Y$ should not be worse under treatment ($\tilde x = 1$) than under no treatment ($x = 0$)"; we formalize PM in §C.2.
Counterfactual Analysis via Optimization
We now present our solution methodology for the counterfactual analysis problem introduced in §2. We first discuss the underlying SCM (§4.1), which is a precursor to defining the counterfactual model $\widetilde M$ (§4.2), which feeds into our optimization framework for counterfactual analysis (§4.3).

The Underlying SCM

To understand the SCM (Figure 2), consider $O_t$ for any $t$, which is a stochastic function of its parents $(H_t, X_t)$. The stochasticity is driven by the exogenous noise vector $V_t := [V_{thx}]_{h,x}$:
\[
O_t = f(H_t, X_t, V_t) = \sum_{h,x} f_{hx}(V_{thx}) \, \mathbb{I}\{H_t = h, X_t = x\}, \tag{3a}
\]
where $f_{hx}(\cdot)$ is defined using the emission distribution $[e_{hxi}]_i$ and $V_{thx} \sim \mathrm{Unif}[0,1]$ wlog. Similarly, for $t > 1$, recognizing that each $H_{thi} := H_t \mid (H_{t-1} = h, O_{t-1} = i)$ is a distinct random variable for all $(h, i)$, we associate each $H_{thi}$ with its own noise variable $U_{thi}$:
\[
H_t = g(H_{t-1}, O_{t-1}, U_t) = \sum_{h,i} g_{hi}(U_{thi}) \, \mathbb{I}\{H_{t-1} = h, O_{t-1} = i\}, \tag{3b}
\]
where $U_t := [U_{thi}]_{h,i}$, $g_{hi}(\cdot)$ is defined using the transition distribution $[q_{hih'}]_{h'}$, and $U_{thi} \sim \mathrm{Unif}[0,1]$ wlog.
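To illustrate the structural equations, here is a minimal sketch of the inverse-transform construction that can play the role of $f_{hx}$ in (3a) (and, identically, of $g_{hi}$ in (3b)): a single $\mathrm{Unif}[0,1]$ noise draw is pushed through the inverse CDF of a categorical distribution, so the marginal of the output matches the given emission distribution. The distribution `e_hx` below is a hypothetical placeholder.

```python
import numpy as np

def make_inverse_transform(pmf):
    """Return f(v): a Unif[0,1] draw v is mapped through the inverse CDF of
    the given categorical PMF, as the functions f_hx and g_hi do in (3a)-(3b)."""
    cdf = np.cumsum(pmf)
    return lambda v: int(np.searchsorted(cdf, v))

# Hypothetical emission distribution [e_{hxi}]_i for one fixed (h, x) pair.
e_hx = np.array([0.1, 0.6, 0.3])
f_hx = make_inverse_transform(e_hx)

rng = np.random.default_rng(1)
v = rng.uniform(size=100_000)                            # exogenous noise V_{thx}
samples = np.array([f_hx(vi) for vi in v])
print(np.bincount(samples, minlength=3) / len(samples))  # approx. recovers e_hx
```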
The representation in (3a) allows us to model $[O_{thx}]_{h,x}$ and capture any dependence structure among these random variables by specifying the joint multivariate distribution of $V_t$. Since the univariate marginals of $V_t$ are known ($\mathrm{Unif}[0,1]$), specifying the multivariate distribution amounts to specifying the dependence structure or copula. (Of course, the same comment applies to (3b) and $U_t$ as well.) For example, if the $V_{thx}$'s are mutually independent (the independence copula) and we have $(H_t = h', X_t = x')$, then inferring the conditional distribution of $V_{th'x'}$ will tell us nothing about the $V_{thx}$'s for $(h, x) \neq (h', x')$. Alternatively, if $V_{thx} = V_{th'x'}$ for all pairs $(h, x)$ and $(h', x')$, then this models perfect positive dependency (the comonotonic copula) and inferring the conditional distribution of $V_{th'x'}$ amounts to simultaneously inferring the conditional distribution of all the $V_{thx}$'s. We emphasize that we must work with the exogenous vectors $(U_t, V_t)$ when doing a counterfactual analysis since different joint distributions of $(U_t, V_t)$ will lead to (possibly very) different values of PN. If we are not doing a counterfactual analysis and only care about the joint distribution of a (subset of) $(O_{1:T}, H_{1:T})$, then our analysis will only depend on the joint distribution of the $(U_t, V_t)$'s via their known univariate marginals. We note the $U_t$'s and $V_t$'s must be mutually independent in order for the SCM to be consistent with the dependence / independence relationships implied by the model of Figure 1.
In our dynamic model, the emissions and the state transitions are time-independent. Thus, it is natural to also assume the copulas underlying $V_t$ and $U_t$ are time-independent. We refer to this property as time invariance.
As such, we define the notation $O_{hx} := O_t \mid (H_t = h, X_t = x)$ and $H_{hi} := H_{t+1} \mid (H_t = h, O_t = i)$. Then, $e_{hxi} = \mathbb{P}(O_{hx} = i)$ and $q_{hih'} = \mathbb{P}(H_{hi} = h')$.
While the copula view is useful from a conceptual point of view (since specifying copulas for $U_t$ and $V_t$ amounts to specifying an SCM), it is more convenient to work with an alternative construction of the SCM. This is because in discrete state-space models, there will be infinitely many joint distributions of $V_t$ (and $U_t$) that induce the same joint distribution of $[O_{hx}]_{h,x}$ (and $[H_{hi}]_{h,i}$); see §C. We therefore prefer to work directly with these latter joint distributions. Towards this end, define
\[
\theta_{\tilde h \tilde x, hx}(\tilde i, i) := \mathbb{P}(O_{\tilde h \tilde x} = \tilde i,\, O_{hx} = i) \tag{4a}
\]
\[
\pi_{\tilde h \tilde i, hi}(\tilde h', h') := \mathbb{P}(H_{\tilde h \tilde i} = \tilde h',\, H_{hi} = h') \tag{4b}
\]
and observe that
\[
\theta_{hx,hx}(i, i) = e_{hxi} \quad \forall (h, x, i) \tag{5a}
\]
\[
\pi_{hi,hi}(h', h') = q_{hih'} \quad \forall (h, i, h'). \tag{5b}
\]
This holds because $\pi_{hi,hi}(h', h') = \mathbb{P}(H_{hi} = h', H_{hi} = h') = \mathbb{P}(H_{hi} = h') = q_{hih'}$. We also have symmetry, i.e.,
\[
\theta_{\tilde h \tilde x, hx}(\tilde i, i) = \theta_{hx, \tilde h \tilde x}(i, \tilde i) \quad \forall (\tilde h, \tilde x, \tilde i),\ \forall (h, x, i) \tag{6a}
\]
\[
\pi_{\tilde h \tilde i, hi}(\tilde h', h') = \pi_{hi, \tilde h \tilde i}(h', \tilde h') \quad \forall (\tilde h, \tilde i, \tilde h'),\ \forall (h, i, h'). \tag{6b}
\]
This is because $\pi_{\tilde h \tilde i, hi}(\tilde h', h') = \mathbb{P}(H_{\tilde h \tilde i} = \tilde h', H_{hi} = h') = \mathbb{P}(H_{hi} = h', H_{\tilde h \tilde i} = \tilde h') = \pi_{hi, \tilde h \tilde i}(h', \tilde h')$.
We only defined the "pairwise marginals" in (4) but we will define the full joint PMFs in (9). We are now ready to discuss the counterfactual model.
The Counterfactual Model $\widetilde M$
Recall from §2 that $\widetilde M$ is obtained after the two steps of abduction (conditioning on the observed data $(o_{1:T}, x_{1:T})$) and action (changing the policy from $x_{1:T}$ to $\tilde x_{1:T}$). Understanding the dynamics underlying $\widetilde M$ is non-trivial, primarily due to the abduction step, where the goal is to obtain the posterior distribution of the hidden path $H_{1:T}$. It is not possible to provide a closed-form expression for this distribution, but we can use filtering / smoothing methods to describe the posterior dynamics of $H_{1:T}$. (See §B for details.)
We can therefore use these dynamics to generate $B$ Monte-Carlo samples $[h_{1:T}(b)]_{b=1}^B$ from the posterior, i.e., from the distribution of $H_{1:T} \mid (o_{1:T}, x_{1:T})$. Then, by conditioning on each sample $b$, it is possible to characterize $\widetilde M$. In particular, denote by $\widetilde M(b) \equiv (\tilde p(b), [\widetilde E^{(t)}(b)]_t, [\widetilde Q^{(t)}(b)]_t)$ the counterfactual model corresponding to posterior sample $h_{1:T}(b)$. Similar to the primitives $(p, E, Q)$ in §2, the counterfactual primitives $(\tilde p(b), [\widetilde E^{(t)}(b)]_t, [\widetilde Q^{(t)}(b)]_t)$ correspond to initial state, emission, and transition distributions. As $H_1$ in Figure 1 has no parents, $\tilde p(b)$ is such that the counterfactual hidden state in period 1 equals the posterior sample $h_1(b)$, i.e., $\tilde h_1(b) = h_1(b)$. In contrast with $E$ and $Q$, both $\widetilde E^{(t)}(b)$ and $\widetilde Q^{(t)}(b)$ are time-dependent (note the superscript "$(t)$"). This is because the period-$t$ counterfactual emission $\widetilde E^{(t)}(b) := [\tilde e^{(t)}_{\tilde h \tilde x \tilde i}(b)]_{\tilde h, \tilde x, \tilde i}$ and transition $\widetilde Q^{(t)}(b) := [\tilde q^{(t)}_{\tilde h \tilde i \tilde h'}(b)]_{\tilde h, \tilde i, \tilde h'}$ probabilities are as follows:
\[
\tilde e^{(t)}_{\tilde h \tilde x \tilde i}(b) = \mathbb{P}(O_{\tilde h \tilde x} = \tilde i \mid O_{h_t(b) x_t} = o_t) \tag{7a}
\]
\[
\tilde q^{(t)}_{\tilde h \tilde i \tilde h'}(b) = \mathbb{P}(H_{\tilde h \tilde i} = \tilde h' \mid H_{h_t(b) o_t} = h_{t+1}(b)). \tag{7b}
\]
(The $O_{hx}$ and $H_{hi}$ notation is defined above (4).) The dependence on $t$ is through the observed data $(o_t, x_t)$ and the posterior samples $(h_t(b), h_{t+1}(b))$. As such, for each posterior path $b$, $\widetilde M(b)$ is a time-dependent dynamic latent-state model. If we knew $\widetilde E^{(t)}(b)$ and $\widetilde Q^{(t)}(b)$, then we could simulate $\widetilde M(b)$ to obtain a Monte-Carlo estimate of our CQI by averaging the CQI over the $B$ posterior sample paths. However, $\widetilde E^{(t)}(b)$ and $\widetilde Q^{(t)}(b)$ are unknown as they depend on the joint distributions of $U_t$ and $V_t$.
Towards this end, we can combine (7) with (4) to obtain
\[
\tilde e^{(t)}_{\tilde h \tilde x \tilde i}(b) = \frac{\theta_{\tilde h \tilde x, h_t(b) x_t}(\tilde i, o_t)}{e_{h_t(b) x_t o_t}}, \qquad
\tilde q^{(t)}_{\tilde h \tilde i \tilde h'}(b) = \frac{\pi_{\tilde h \tilde i, h_t(b) o_t}(\tilde h', h_{t+1}(b))}{q_{h_t(b) o_t h_{t+1}(b)}},
\]
which express the unknown and time-dependent emission and transition distributions in terms of the unknown $\boldsymbol\theta$ and $\boldsymbol\pi$ that are time-independent.
Polynomial Optimization
We now propose an optimization model where we treat the unknowns $(\boldsymbol\theta, \boldsymbol\pi)$ as decisions and maximize (minimize) the CQI to obtain an upper bound (lower bound). We present our optimization model in terms of the objective and constraints, followed by a discussion on how we can enforce CS and PM (if indeed they were deemed appropriate).
Lemma 1. We have
\[
\mathrm{PN} = 1 - \lim_{B \to \infty} \frac{1}{B} \sum_{b=1}^B \mathbb{P}_{\widetilde M(b)}(\widetilde H_T = 7).
\]
We next express $\mathbb{P}_{\widetilde M(b)}(\widetilde H_T)$ in terms of $(\boldsymbol\theta, \boldsymbol\pi)$ from (4).
Lemma 2. For $t \in \{T, T-1, \ldots, 2\}$, $\mathbb{P}_{\widetilde M(b)}(\tilde h_t) := \mathbb{P}_{\widetilde M(b)}(\widetilde H_t = \tilde h_t)$ obeys the following recursion (over $t$):
\[
\mathbb{P}_{\widetilde M(b)}(\tilde h_t) = \sum_{\tilde h_{t-1}, \tilde o_{t-1}} \frac{\pi_{\tilde h_{t-1} \tilde o_{t-1},\, h_{t-1}(b) o_{t-1}}(\tilde h_t, h_t(b))}{q_{h_{t-1}(b)\, o_{t-1}\, h_t(b)}} \times \frac{\theta_{\tilde h_{t-1} \tilde x_{t-1},\, h_{t-1}(b) x_{t-1}}(\tilde o_{t-1}, o_{t-1})}{e_{h_{t-1}(b)\, x_{t-1}\, o_{t-1}}} \times \mathbb{P}_{\widetilde M(b)}(\tilde h_{t-1}).
\]
The recursion breaks at $t = 1$:
\[
\mathbb{P}_{\widetilde M(b)}(\tilde h_1) = \begin{cases} 1 & \text{if } \tilde h_1 = h_1(b) \\ 0 & \text{otherwise.} \end{cases}
\]
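For concreteness, the sketch below numerically evaluates the Lemma 2 recursion for one posterior sample $b$ given candidate (numeric) pairwise PMFs. The dictionaries `theta` and `pi`, keyed by the counterfactual and factual index triples, are hypothetical data structures for illustration, not the decision variables actually handed to the solver.

```python
import numpy as np
from itertools import product

def prob_death_given_sample(theta, pi, E, Q, h_b, o, x, x_cf, death):
    """Evaluate P_{M~(b)}(H~_T = death) via the Lemma 2 recursion for one
    posterior sample h_b = (h_1(b), ..., h_T(b)); everything is 0-indexed.
    theta[(h~, x~, i~), (h, x, i)] and pi[(h~, i~, h~'), (h, i, h')] hold
    candidate numeric pairwise PMFs. By Lemma 1, PN is 1 minus the average
    of this quantity over the B posterior samples."""
    n_h, n_o = Q.shape[0], Q.shape[1]
    T = len(h_b)
    P = np.zeros(n_h)
    P[h_b[0]] = 1.0                                    # base case: h~_1 = h_1(b)
    for t in range(1, T):
        P_next = np.zeros(n_h)
        for h_new in range(n_h):
            total = 0.0
            for h_prev, o_prev in product(range(n_h), range(n_o)):
                total += (pi[(h_prev, o_prev, h_new), (h_b[t-1], o[t-1], h_b[t])]
                          / Q[h_b[t-1], o[t-1], h_b[t]]
                          * theta[(h_prev, x_cf[t-1], o_prev), (h_b[t-1], x[t-1], o[t-1])]
                          / E[h_b[t-1], x[t-1], o[t-1]]
                          * P[h_prev])
            P_next[h_new] = total
        P = P_next
    return P[death]
```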
Putting together Lemmas 1 and 2 allows us to express PN in terms of the various primitives, all of which except $(\boldsymbol\theta, \boldsymbol\pi)$ are known (or can be sampled). Thus, we use the notation $\mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b)$. As soon as we fix $(\boldsymbol\theta, \boldsymbol\pi)$, we can estimate PN. However, it is unclear a priori at what values we should fix $(\boldsymbol\theta, \boldsymbol\pi)$. We might have some information on the structure of $(\boldsymbol\theta, \boldsymbol\pi)$ that can help us shrink their feasibility space, but in general, there can be many $(\boldsymbol\theta, \boldsymbol\pi)$'s that are "valid". To overcome this lack of knowledge, we take an agnostic view and compute bounds on PN. The upper (lower) bound is computed by maximizing (minimizing) $\mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b)$ over the set of $(\boldsymbol\theta, \boldsymbol\pi)$ that are "valid". Denoting by $\mathcal F$ the set of "valid" $(\boldsymbol\theta, \boldsymbol\pi)$ (discussed below), we define
\[
\mathrm{PN}^{\mathrm{ub}}(B) := \max_{(\boldsymbol\theta, \boldsymbol\pi) \in \mathcal F} \mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b) \tag{8a}
\]
\[
\mathrm{PN}^{\mathrm{lb}}(B) := \min_{(\boldsymbol\theta, \boldsymbol\pi) \in \mathcal F} \mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b). \tag{8b}
\]
Both optimizations in (8) are sample average approximations (SAAs) based on the $B$ posterior samples.

Proposition 1. As $B \to \infty$, $\mathrm{PN}^{\mathrm{lb}}(B)$ and $\mathrm{PN}^{\mathrm{ub}}(B)$ converge to limits, which we denote by $\mathrm{PN}^{\mathrm{lb}}$ and $\mathrm{PN}^{\mathrm{ub}}$, respectively.

To describe the feasible set $\mathcal F$, it is convenient to re-index the $K := |H||X|$ random variables $[O_{hx}]_{h,x}$ and the $M := |H||O|$ random variables $[H_{hi}]_{h,i}$:
\[
O_k \equiv O_{hx}, \quad e_{ki} \equiv e_{hxi}, \qquad H_m \equiv H_{hi}, \quad q_{mh'} \equiv q_{hih'}.
\]
We have $k \in [K] := \{1, \ldots, K\}$ and $m \in [M] := \{1, \ldots, M\}$. In addition to the pairwise marginals in (4), define the full joint PMFs
\[
\theta_{1,\ldots,K}(i_1, \ldots, i_K) := \mathbb{P}(O_1 = i_1, \ldots, O_K = i_K) \tag{9a}
\]
\[
\pi_{1,\ldots,M}(h_1, \ldots, h_M) := \mathbb{P}(H_1 = h_1, \ldots, H_M = h_M). \tag{9b}
\]
Note that we only have one joint $\theta_{1,\ldots,K}$ among $K$ random variables, in contrast to multiple pairwise marginals $[\theta_{k\ell}]_{(k,\ell)}$. Each of these joint PMFs is a decision variable in the optimization (in addition to the pairwise decision variables) and must obey the following set of constraints. First, the 1-dimensional marginals of $\boldsymbol\theta$ and $\boldsymbol\pi$ must equal the given 1-dimensional marginals $[e_{ki}]_{(k,i)}$ and $[q_{mh}]_{(m,h)}$:
\[
\sum_{\{i_1,\ldots,i_K\} \setminus \{i_k\}} \theta_{1,\ldots,K}(i_1, \ldots, i_K) = e_{k i_k} \quad \forall i_k \in O,\ k \in [K] \tag{10a}
\]
\[
\sum_{\{h_1,\ldots,h_M\} \setminus \{h_m\}} \pi_{1,\ldots,M}(h_1, \ldots, h_M) = q_{m h_m} \quad \forall h_m \in H,\ m \in [M]. \tag{10b}
\]
Recall that $(Q, E)$, i.e., the right-hand sides of (10), are known. Moreover, since $Q$ and $E$ themselves define 1-dimensional probability distributions and therefore sum to 1, (10) ensures the same will be true of both the joint PMFs, i.e., they will also sum to 1. Second, we must link the pairwise marginals to the joints:
\[
\theta_{k\ell}(i_k, i_\ell) = \sum_{\{i_1,\ldots,i_K\} \setminus \{i_k, i_\ell\}} \theta_{1,\ldots,K}(i_1, \ldots, i_K) \tag{11a}
\]
\[
\pi_{mn}(h_m, h_n) = \sum_{\{h_1,\ldots,h_M\} \setminus \{h_m, h_n\}} \pi_{1,\ldots,M}(h_1, \ldots, h_M). \tag{11b}
\]
(11a) holds for all $i_k, i_\ell \in O$ and $k, \ell \in [K]$ s.t. $k < \ell$, whereas (11b) holds for all $h_m, h_n \in H$ and $m, n \in [M]$ s.t. $m < n$. The "$k < \ell$" and "$m < n$" conditions avoid unnecessary duplication (recall (5) and (6)). Finally, we need to ensure non-negativity:
\[
\boldsymbol\theta, \boldsymbol\pi \geq 0, \tag{12}
\]
where we now use $(\boldsymbol\theta, \boldsymbol\pi)$ to denote all of the corresponding, i.e., joint and pairwise, decision variables.
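To make the constraint set concrete, the following small sketch (with hypothetical marginals) builds the joint PMF $\theta_{1,\ldots,K}$ induced by the independence copula and checks that it satisfies (10a) and (11a); any candidate $(\boldsymbol\theta, \boldsymbol\pi)$ handed to the solver must pass such checks.

```python
import numpy as np

# Hypothetical 1-dimensional marginals [e_{ki}]_i for K = 3 re-indexed variables.
e = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.1, 0.9])]
K = len(e)

# Independence-copula joint: theta[i1, i2, i3] = prod_k e_k[i_k].
theta_joint = e[0][:, None, None] * e[1][None, :, None] * e[2][None, None, :]

# (10a): marginalizing out all coordinates but k recovers the given marginal e_k.
for k in range(K):
    axes = tuple(a for a in range(K) if a != k)
    assert np.allclose(theta_joint.sum(axis=axes), e[k])

# (11a): the pairwise marginal theta_{kl} is the joint summed over the rest;
# under independence it factorizes into an outer product.
theta_01 = theta_joint.sum(axis=2)
assert np.allclose(theta_01, np.outer(e[0], e[1]))
```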
Let $\mathcal F$ be the feasible region over $(\boldsymbol\theta, \boldsymbol\pi)$ defined by the constraints (10), (11), and (12). Observe that $\mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi)$ is a polynomial in $(\boldsymbol\theta, \boldsymbol\pi)$ (cf. Lemmas 1 and 2) and the constraints in $\mathcal F$ are linear. Thus, each of the problems in (8) falls within the class of polynomial optimization (Anjos & Lasserre, 2011). Denoting by $\mathrm{PN}^*$ the PN under the true (unknown) $(\boldsymbol\theta, \boldsymbol\pi)$, we obtain the following inequalities.
Proposition 2. $\mathrm{PN}^{\mathrm{lb}} \leq \mathrm{PN}^* \leq \mathrm{PN}^{\mathrm{ub}}$.
Enforcing CS and PM via linear constraints. Suppose that at some time, the patient was in state $h$, the emission was $i$, followed by a transition to state $h'$. This maps to the realization $H_{hi} = h'$. For $\tilde h' \neq h'$, CS requires that if $\mathbb{P}(H_{\tilde h \tilde i} = h')/\mathbb{P}(H_{hi} = h') \geq \mathbb{P}(H_{\tilde h \tilde i} = \tilde h')/\mathbb{P}(H_{hi} = \tilde h')$, then $\mathbb{P}(H_{\tilde h \tilde i} = \tilde h' \mid H_{hi} = h') = 0$. Observe that the "if" condition is equivalent to $q_{\tilde h \tilde i h'}/q_{hih'} \geq q_{\tilde h \tilde i \tilde h'}/q_{hi\tilde h'}$ and the LHS of "then" equals $\pi_{\tilde h \tilde i, hi}(\tilde h', h')/q_{hih'}$. Hence, for hidden-state transitions, CS can be modeled by adding the following linear constraints for all $(h, i, h', \tilde h, \tilde i, \tilde h')$:
\[
\pi_{\tilde h \tilde i, hi}(\tilde h', h') = 0 \quad \text{if } \frac{q_{\tilde h \tilde i h'}}{q_{hih'}} \geq \frac{q_{\tilde h \tilde i \tilde h'}}{q_{hi \tilde h'}}. \tag{13a}
\]
Similarly, for emissions, CS can be modeled by adding the following linear constraints for all $(h, x, i, \tilde h, \tilde x, \tilde i)$:
\[
\theta_{\tilde h \tilde x, hx}(\tilde i, i) = 0 \quad \text{if } \frac{e_{\tilde h \tilde x i}}{e_{hxi}} \geq \frac{e_{\tilde h \tilde x \tilde i}}{e_{hx \tilde i}}. \tag{13b}
\]
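As an illustration of how (13a) translates into sparsity, the sketch below enumerates the index tuples whose $\boldsymbol\pi$ variables a CS-constrained model would fix to 0. Skipping tuples with zero denominators is our own simplification for this illustration.

```python
import numpy as np
from itertools import product

def cs_zero_set_transitions(Q):
    """Enumerate tuples ((h~, i~, h~'), (h, i, h')) with h~' != h' for which
    (13a) forces pi_{h~ i~, h i}(h~', h') = 0; Q[h, i, h'] = q_{hih'}.
    Tuples with a zero denominator are skipped (our simplification here)."""
    n_h, n_o = Q.shape[0], Q.shape[1]
    zeros = []
    for ht, it, h, i in product(range(n_h), range(n_o), range(n_h), range(n_o)):
        for hpt, hp in product(range(n_h), range(n_h)):
            if hpt == hp:
                continue
            if Q[h, i, hp] > 0 and Q[h, i, hpt] > 0:
                # (13a): q_{h~ i~ h'} / q_{h i h'} >= q_{h~ i~ h~'} / q_{h i h~'}
                if Q[ht, it, hp] / Q[h, i, hp] >= Q[ht, it, hpt] / Q[h, i, hpt]:
                    zeros.append(((ht, it, hpt), (h, i, hp)))
    return zeros
```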
Hence, we can characterize the space of all SCMs that obey CS, which is in contrast to picking just one such SCM (Oberst & Sontag, 2019). Enforcing CS naturally leads to tighter bounds, but the bounds may not be "legitimate" if the true $(\boldsymbol\theta, \boldsymbol\pi)$ does not satisfy CS. Denoting by $\mathrm{PN}^{\mathrm{ub}}_{\mathrm{cs}}$ and $\mathrm{PN}^{\mathrm{lb}}_{\mathrm{cs}}$ the bounds obtained by adding the CS constraints (13) to the optimizations in (8), we have the following result.
Proposition 3. $\mathrm{PN}^{\mathrm{lb}} \leq \mathrm{PN}^{\mathrm{lb}}_{\mathrm{cs}} \leq \mathrm{PN}^{\mathrm{ub}}_{\mathrm{cs}} \leq \mathrm{PN}^{\mathrm{ub}}$.
PM can also be enforced via linear constraints. To see this, suppose the patient has in-situ cancer in period $t$ which is left untreated (no screening or a false negative, i.e., $i \in \{1, 2\}$) and remains in-situ in period $t+1$ ($h = 2$, $h' = 2$). PM then requires that, had the cancer instead been diagnosed and treated in period $t$ ($\tilde i = 4$, with $\tilde h \in \{2, 4\}$), the patient could not have transitioned to a worse state (invasive or death):
\[
\mathbb{P}(H_{\tilde h \tilde i} = \tilde h' \mid H_{hi} = h') = 0 \quad \text{for } h = 2,\ i \in \{1,2\},\ h' = 2,\ \tilde h \in \{2,4\},\ \tilde i = 4,\ \tilde h' \in \{5,7\}.
\]
There can be multiple such cases to consider and we can enforce all the PM constraints by setting the corresponding $\pi_{\tilde h \tilde i, hi}(\tilde h', h')$ variables equal to 0, since $\pi_{\tilde h \tilde i, hi}(\tilde h', h') = \mathbb{P}(H_{hi} = h')\,\mathbb{P}(H_{\tilde h \tilde i} = \tilde h' \mid H_{hi} = h')$.
As with CS (Proposition 3), PM will result in bounds $\mathrm{PN}^{\mathrm{ub}}_{\mathrm{pm}}$ and $\mathrm{PN}^{\mathrm{lb}}_{\mathrm{pm}}$ tighter than $\mathrm{PN}^{\mathrm{ub}}$ and $\mathrm{PN}^{\mathrm{lb}}$.
Algorithm 1 Counterfactual analysis via optimization
Require: $(E, Q)$, $(o_{1:T}, x_{1:T})$, $B$, $\tilde x_{1:T}$
1: $h_{1:T}(b) \sim H_{1:T} \mid (o_{1:T}, x_{1:T})$ for all $b = 1, \ldots, B$
2: $\mathrm{PN}^{\mathrm{ub}}(B) = \max_{(\boldsymbol\theta, \boldsymbol\pi) \in \mathcal F} \mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b)$
3: $\mathrm{PN}^{\mathrm{lb}}(B) = \min_{(\boldsymbol\theta, \boldsymbol\pi) \in \mathcal F} \mathrm{PN}(\boldsymbol\theta, \boldsymbol\pi \mid [h_{1:T}(b)]_b)$
4: return $(\mathrm{PN}^{\mathrm{lb}}(B), \mathrm{PN}^{\mathrm{ub}}(B))$
We summarize our developments in Algorithm 1, which outputs the bounds $(\mathrm{PN}^{\mathrm{lb}}(B), \mathrm{PN}^{\mathrm{ub}}(B))$. (We can output $(\mathrm{PN}^{\mathrm{lb}}_{\mathrm{cs}}(B), \mathrm{PN}^{\mathrm{ub}}_{\mathrm{cs}}(B))$ and $(\mathrm{PN}^{\mathrm{lb}}_{\mathrm{pm}}(B), \mathrm{PN}^{\mathrm{ub}}_{\mathrm{pm}}(B))$ as well by solving the same optimization problems but with additional linear constraints.) Line 1 (sampling) can be executed efficiently (cf. §B), and we discuss three computational considerations behind solving the polynomial optimizations (lines 2 and 3).
First, though the constraints are linear, the objective is polynomial, making it a non-trivial non-convex optimization problem. To solve it, we leverage state-of-the-art developments in optimization. In particular, we use the BARON solver (Sahinidis, 2023), which relies on a polyhedral branch-and-cut approach, allowing it to achieve global optima (Tawarmalani & Sahinidis, 2005). We found it to work well in our numeric experiments ( §5).
Second, in terms of the problem size, it follows from (4) and (9) that we have at most $|H|^4|O|^2 + |H|^2|O|^2|X|^2$ pairwise variables and $|O|^{|H||X|} + |H|^{|H||O|}$ joint variables. Similarly, it follows from (10) and (11) that the feasible region $\mathcal F$ is defined by at most $|O||H||X| + |H|^2|O| + |O|^2|H|^2|X|^2 + |H|^4|O|^2$ constraints. However, these are merely upper bounds and we can exploit the sparsity inherent in the underlying application (along with additional variable and constraint elimination) to drastically reduce the problem size. For instance, in our breast cancer application, we have $(|H|, |O|, |X|) = (7, 7, 2)$, with the above formulae giving over $10^{41}$ variables and $10^5$ constraints. After we exploit sparsity (discussed in §5), they are reduced to 16,124 and 610, respectively. Further, as CS and PM can be modeled by setting appropriate variables to 0, they allow for further sparsity as we can delete those variables.
Third, observe that a naive expansion of the recursion in Lemma 2 results in a number of terms that is exponential in T , which would result in memory issues for moderate to large values of T . Nonetheless, as we elaborate in §D.1, it is possible to remove this exponential dependence on T by a reformulation of the optimization problem. This comes at the cost of introducing polynomial constraints. Nonetheless, this reformulation allowed us to obtain high-quality solutions in the breast cancer setting with as many as T = 100 periods ( §D.3). In contrast, we run into memory issues for T as small as 11 with the original formulation. In fact, we discuss an alternative approach at the end of §D.1. This approach allows us to compute the objective function efficiently without having to add any additional constraints. Unfortunately, the BARON solver does not allow us to use this approach and so we leave this issue for future research.
Numerical Experiments
We now apply our approach to the breast cancer application we described in §1.
Setup. We described the elements of the underlying dynamic latent-state model $M \equiv (p, E, Q)$ in §2. It has a total of 7 states, 7 emissions, and 2 actions. Given patient-level data $(o_{1:T}, x_{1:T})$, we wish to estimate the PN as defined in (2). The primitives $(p, E, Q)$ are calibrated to real data using a mix of sources, which we discuss in §E.1.
We consider two paths with the first path defined as:
\[
\underbrace{o_1}_{=2},\;\; \underbrace{o_2, \ldots, o_{T-1}}_{=1},\;\; \underbrace{o_T}_{=7}; \qquad \underbrace{x_1}_{=0},\;\; \underbrace{x_2, \ldots, x_{T-1}}_{=1},\;\; \underbrace{x_T}_{=0}.
\]
That is, we observe a negative test result in period 1, after which screening was not performed for $T - 2$ periods (the middle segments above). The patient died from breast cancer in period $T$. Note that under this path, given the calibrated primitives in §E.1, it has to be the case that $h_{T-1} = 3$ (undiagnosed invasive) since a transition from state 2 (undiagnosed in-situ) to 7 is impossible. Further, the transition from 3 to 7 is not unlikely ($q_{317} \approx 0.15$). The second path is similar but with one difference: screening was performed in period $T - 1$ and invasive cancer was detected:
\[
\underbrace{o_1}_{=2},\;\; \underbrace{o_2, \ldots, o_{T-2}}_{=1},\;\; \underbrace{o_{T-1}}_{=5},\;\; \underbrace{o_T}_{=7}; \qquad \underbrace{x_1}_{=0},\;\; \underbrace{x_2, \ldots, x_{T-2}}_{=1},\;\; \underbrace{x_{T-1}, x_T}_{=0}.
\]
Hence, in contrast with path 1, the final transition from invasive to death was under treatment with probability q 357 ≈ 0.01 ( §E.1), which is much smaller than q 317 from above. Given that this low probability transition did occur, this suggests the patient had an "aggressive" cancer in path 2. As such, regardless of what the optimal θ θ θ and π are, the chances of survival on the counterfactual path would be low because of this "aggressive" nature of the cancer. This doesn't hold on path 1 as the cancer was less "aggressive".
We vary $T \in \{4, \ldots, 10\}$, with a larger value of $T$ suggesting the cancer may have progressed more slowly. We compute PN bounds using our framework (Algorithm 1), which we implemented in MATLAB (MATLAB, 2021). The feasibility set $\mathcal F$ over $(\boldsymbol\theta, \boldsymbol\pi)$ corresponds to (10), (11), and (12). To solve the polynomial optimizations, we use the MATLAB-BARON interface (Sahinidis, 2023) with CPLEX (IBM, 2017) as the "LP / MIP solver". It solved each of our problem instances to global optimality within minutes / hours (depending on $T$), with an "absolute termination tolerance" of 0.01 (on an Intel Xeon E5 processor with 16 GB RAM). Optimizations for $T = 10$ took the longest time on average (∼2 hours). We generated $B = 100$ samples using our sampling method in §B. It took less than a second and we found $B = 100$ was large enough to produce stable results for our SAA. We ensured this stability by computing our results for 20 seeds (for each (path, $T$) pair) and verifying the standard deviations to be small. Though stability over the seeds is important, our PN estimates may still be biased for a finite $B$ (recall Proposition 1 only holds asymptotically). As a check, we also generated results for $B = 500$ and observed them to be very similar to the ones for $B = 100$. As noted below Algorithm 1, the sparse structure of $E$ and $Q$ drastically reduces the size of the problem. For example, when considering the $\pi_{\tilde h \tilde i, hi}(\tilde h', h')$ variables, we rule out the ones that map to impossible $(h, i, h')$ or $(\tilde h, \tilde i, \tilde h')$ combinations (refer to §E.1.2).
The same observation also applies to all the joint variables (details in §E.2).

[Figure 3 (results for the two observed paths as $T$ varies; caption): We show the average over 20 seeds and note that the standard deviation (s.d.) for every data point is smaller than 0.01. Given this small magnitude, we omit the ±1 s.d. bars to reduce the clutter in the sub-figures. Note that to simulate the naive estimate and the two copulas (independence and comonotonic), we use $10^4$ Monte-Carlo samples. These simulations were fast (a couple of minutes). Using $10^4$ samples is in contrast to the 100 samples we use for SAA, and we did so to ensure a low s.d.]
We also show the PN estimate when we perform counterfactual simulations using the two copulas discussed in §C (independence and comonotonic; further details on the comonotonic copula specific to the breast cancer model are in §E.4). Finally, the naive estimate completely ignores the information in the observations, i.e., it does not execute the abduction step and is therefore an invalid estimate of PN.
To simplify matters, we adopt an all-or-nothing approach whereby either CS is imposed for both hidden-state transitions and observations or not at all. We do the same for PM. Of course, it is possible to consider various combinations, e.g., imposing PM for hidden-state transitions only or imposing CS only for the observations, etc. This is also true of our copulas when we estimate PN for a particular SCM. In Figure 3, for example, the independence (comonotonic) curve corresponds to assuming the independence (comonotonic) copula for both hidden-state transitions and observations. But we could of course have assumed one copula for the hidden-state transitions and an entirely different one for the observations. Each such combination of copulas would yield a different SCM and therefore a feasible value of PN.
The naive estimate is independent of the observed path and can fall outside the bounds. This makes sense as it does not perform abduction but simply simulates the original model $M$ under the intervention policy $\tilde x_{1:T} = 0$. The naive estimates are very close to 1 as dying of breast cancer in any 5-year period (each period maps to 6 months, so $T = 10$ maps to 5 years) is highly unlikely.
For path 1, we obtain relatively tight bounds, with PN always above 0.85. This means that in the counterfactual world, the patient would not have died with high probability, consistent with our discussion around $q_{317}$ above.
Even in the absence of any additional structure such as CS or PM, the gap between the lower and upper bounds is within ∼10 percentage points. The gap gets tighter with CS (within ∼5 percentage points) and PM (within ∼1 percentage point!). The fact that the LB and UB under CS do not coincide resolves the open question of Oberst & Sontag (2019) regarding the uniqueness of the Gumbel-max mechanism w.r.t. CS: it is not unique. It is not surprising that the comonotonic estimate falls close to the PM bounds. Interestingly, the estimated PN for the two copulas roughly covers the range of possibilities in terms of the bounds (Figure 3(a)).
For path 2, the lower bounds are close to 0. This aligns with the fact that despite being diagnosed in period T − 1 (and hence, provided treatment), the patient eventually died (which suggests that the patient had an "aggressive" cancer). The bounds without CS and PM are relatively loose, simply reflecting the lack of knowledge to reason in a counterfactual world. As soon as we inject knowledge via CS or PM, the bounds become much tighter.
The experiments discussed so far are for up to T = 10 and we run into memory issues for T > 10 (recall the discussion at the end of §4.3). Nonetheless, as we show in §D, we can enhance the scalability of the polynomial optimizations in (8) via a reformulation and an approximation. In fact, as we demonstrate via numerics, these ideas allow us to obtain high-quality solutions for T as large as 100 in just a few hours of compute time.
Concluding Remarks
We have provided a framework for performing counterfactual analysis in dynamic latent-state models and, in particular, computing lower and upper bounds on CQIs. There are several interesting directions for future research. First, we would like to handle the objective function in the optimization more efficiently, as discussed at the end of §4.3. Specifically, BARON's solver appears to explicitly expand the objective function, which results in a number of terms that is exponential in $T$. We were able to finesse this issue in §D via a reformulation, but we suspect the approach outlined at the end of §4.3 might provide a better solution. All told, it may therefore be worthwhile developing an optimization algorithm specifically tailored to the problem (a polynomial objective with linear constraints) rather than using an off-the-shelf solver. Another possible direction is exploring the use of variance reduction methods and other Monte-Carlo techniques to improve our basic Monte-Carlo approach for generating posterior sample paths. Finally, on the practical front, it would be of interest to apply our framework to real-world medical applications and use domain-specific knowledge to obtain (via the imposition of additional constraints) tighter bounds on the CQIs.

References

Zhang, J., Tian, J., and Bareinboim, E. Partial counterfactual identification from observational and experimental data, 2021.
A. Proofs
Lemma 1. We have
\[
\mathrm{PN} = 1 - \lim_{B \to \infty} \frac{1}{B} \sum_{b=1}^B \mathbb{P}_{\widetilde M(b)}(\widetilde H_T = 7).
\]
Proof. Observe that
\begin{align*}
\mathrm{PN} &= \mathbb{P}(\widetilde H_T \neq 7) && \text{[by definition]} \\
&= 1 - \mathbb{P}(\widetilde H_T = 7) && [\mathbb{P}(Y \neq y) = 1 - \mathbb{P}(Y = y)] \\
&= 1 - \mathbb{E}[\mathbb{I}\{\widetilde H_T = 7\}] && [\mathbb{P}(Y = y) = \mathbb{E}[\mathbb{I}\{Y = y\}]] \\
&= 1 - \mathbb{E}_{H_{1:T}}\big[\mathbb{E}_{\widetilde M \mid H_{1:T}}[\mathbb{I}\{\widetilde H_T = 7\}]\big] && \text{[law of total expectation]} \\
&= 1 - \frac{1}{B}\sum_{b=1}^B \mathbb{P}_{\widetilde M(b)}(\widetilde H_T = 7) \ \text{ as } B \to \infty. && \text{[law of large numbers]}
\end{align*}
The proof is now complete.
Lemma 2. For $t \in \{T, T-1, \ldots, 2\}$, $\mathbb{P}_{\widetilde M(b)}(\tilde h_t) := \mathbb{P}_{\widetilde M(b)}(\widetilde H_t = \tilde h_t)$ obeys the following recursion (over $t$):
\[
\mathbb{P}_{\widetilde M(b)}(\tilde h_t) = \sum_{\tilde h_{t-1}, \tilde o_{t-1}} \frac{\pi_{\tilde h_{t-1} \tilde o_{t-1},\, h_{t-1}(b) o_{t-1}}(\tilde h_t, h_t(b))}{q_{h_{t-1}(b)\, o_{t-1}\, h_t(b)}} \times \frac{\theta_{\tilde h_{t-1} \tilde x_{t-1},\, h_{t-1}(b) x_{t-1}}(\tilde o_{t-1}, o_{t-1})}{e_{h_{t-1}(b)\, x_{t-1}\, o_{t-1}}} \times \mathbb{P}_{\widetilde M(b)}(\tilde h_{t-1}).
\]
The recursion breaks at $t = 1$:
\[
\mathbb{P}_{\widetilde M(b)}(\tilde h_1) = \begin{cases} 1 & \text{if } \tilde h_1 = h_1(b) \\ 0 & \text{otherwise.} \end{cases}
\]
Proof. For $t \in \{T, T-1, \ldots, 2\}$, observe that
\begin{align*}
\mathbb{P}_{\widetilde M(b)}(\tilde h_t) &= \sum_{\tilde h_{t-1} \in H} \sum_{\tilde o_{t-1} \in O} \mathbb{P}_{\widetilde M(b)}(\tilde h_t, \tilde h_{t-1}, \tilde o_{t-1}) \\
&= \sum_{\tilde h_{t-1}} \sum_{\tilde o_{t-1}} \mathbb{P}_{\widetilde M(b)}(\tilde h_t \mid \tilde h_{t-1}, \tilde o_{t-1})\, \mathbb{P}_{\widetilde M(b)}(\tilde o_{t-1} \mid \tilde h_{t-1})\, \mathbb{P}_{\widetilde M(b)}(\tilde h_{t-1}) \\
&= \sum_{\tilde h_{t-1}} \sum_{\tilde o_{t-1}} \tilde q^{(t-1)}_{\tilde h_{t-1} \tilde o_{t-1} \tilde h_t}(b) \times \tilde e^{(t-1)}_{\tilde h_{t-1} \tilde x_{t-1} \tilde o_{t-1}}(b) \times \mathbb{P}_{\widetilde M(b)}(\tilde h_{t-1}) \\
&= \sum_{\tilde h_{t-1}} \sum_{\tilde o_{t-1}} \frac{\pi_{\tilde h_{t-1} \tilde o_{t-1},\, h_{t-1}(b) o_{t-1}}(\tilde h_t, h_t(b))}{q_{h_{t-1}(b)\, o_{t-1}\, h_t(b)}} \times \frac{\theta_{\tilde h_{t-1} \tilde x_{t-1},\, h_{t-1}(b) x_{t-1}}(\tilde o_{t-1}, o_{t-1})}{e_{h_{t-1}(b)\, x_{t-1}\, o_{t-1}}} \times \mathbb{P}_{\widetilde M(b)}(\tilde h_{t-1}).
\end{align*}
The base case ($t = 1$) holds since the counterfactual hidden state in period 1 equals the posterior sample $h_1(b)$ (recall from §4.2). The proof is now complete.
B. Sampling Hidden Paths from the Posterior Distribution
In this section, we show how one can efficiently perform filtering, smoothing, and sampling for the dynamic latent-state model in Figure 1. As our model is a generalization of an HMM, these algorithms are simple generalizations of the standard variants corresponding to an HMM (Barber, 2012).
Filtering. We first compute $\alpha(h_t) := \mathbb{P}(h_t, o_{1:t}, x_{1:t})$, which will yield the un-normalized filtered posterior distribution. We can then easily normalize it to compute $\mathbb{P}(h_t \mid o_{1:t}, x_{1:t}) \propto \alpha(h_t)$. We begin with $\alpha(h_1) := \mathbb{P}(h_1, o_1, x_1) = \mathbb{P}(o_1 \mid h_1, x_1)\,\mathbb{P}(h_1 \mid x_1)\,\mathbb{P}(x_1) = \mathbb{P}(o_1 \mid h_1, x_1)\,\mathbb{P}(h_1)\,\mathbb{P}(x_1)$. For $t > 1$, note that
\begin{align*}
\alpha(h_t) &= \sum_{h_{t-1}} \mathbb{P}(h_t, h_{t-1}, o_{1:t-1}, o_t, x_{1:t}) \\
&= \sum_{h_{t-1}} \mathbb{P}(o_t \mid h_t, h_{t-1}, o_{1:t-1}, x_{1:t})\, \mathbb{P}(h_t \mid h_{t-1}, o_{1:t-1}, x_{1:t})\, \mathbb{P}(x_t \mid h_{t-1}, o_{1:t-1}, x_{1:t-1})\, \mathbb{P}(h_{t-1}, o_{1:t-1}, x_{1:t-1}) \\
&= \sum_{h_{t-1}} \mathbb{P}(o_t \mid h_t, x_t)\, \mathbb{P}(h_t \mid h_{t-1}, o_{t-1})\, \mathbb{P}(x_t)\, \mathbb{P}(h_{t-1}, o_{1:t-1}, x_{1:t-1}) \\
&= \mathbb{P}(x_t)\, \mathbb{P}(o_t \mid h_t, x_t) \sum_{h_{t-1}} \mathbb{P}(h_t \mid h_{t-1}, o_{t-1})\, \alpha(h_{t-1}).
\end{align*}

Smoothing. We now compute $\beta(h_t) := \mathbb{P}(o_{t+1:T}, x_{t+1:T} \mid h_t, o_t)$ with the understanding that $\beta(h_T) = 1$. For $t < T$, we have
\begin{align*}
\beta(h_t) &= \sum_{h_{t+1}} \mathbb{P}(o_{t+1}, x_{t+1}, o_{t+2:T}, x_{t+2:T}, h_{t+1} \mid h_t, o_t) \\
&= \sum_{h_{t+1}} \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid h_t, o_t, h_{t+1}, o_{t+1}, x_{t+1})\, \mathbb{P}(h_{t+1}, o_{t+1}, x_{t+1} \mid h_t, o_t) \\
&= \sum_{h_{t+1}} \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid h_{t+1}, o_{t+1})\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1}, h_t, o_t)\, \mathbb{P}(h_{t+1}, x_{t+1} \mid h_t, o_t) \\
&= \sum_{h_{t+1}} \beta(h_{t+1})\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1})\, \mathbb{P}(x_{t+1} \mid h_{t+1}, h_t, o_t)\, \mathbb{P}(h_{t+1} \mid h_t, o_t) \\
&= \mathbb{P}(x_{t+1}) \sum_{h_{t+1}} \beta(h_{t+1})\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1})\, \mathbb{P}(h_{t+1} \mid h_t, o_t).
\end{align*}
It follows that $\mathbb{P}(h_t, o_{1:T}, x_{1:T}) = \mathbb{P}(h_t, o_{1:t}, x_{1:t})\, \mathbb{P}(o_{t+1:T}, x_{t+1:T} \mid h_t, o_t) = \alpha(h_t)\beta(h_t)$.
We therefore obtain the hidden state marginal
\[
\mathbb{P}(h_t \mid o_{1:T}, x_{1:T}) = \frac{\alpha(h_t)\beta(h_t)}{\sum_{h_t} \alpha(h_t)\beta(h_t)},
\]
which solves the smoothing problem.
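For completeness, here is a direct numpy transcription of the $\alpha$ and $\beta$ recursions above; it is a minimal sketch that assumes a deterministic policy, so the constant $\mathbb{P}(x_t)$ factors are dropped (they cancel upon normalization in any case).

```python
import numpy as np

def forward_backward(p, E, Q, o, x):
    """alpha/beta recursions for the generalized HMM of Figure 1.
    p: (H,), E: (H, X, O), Q: (H, O, H); o, x: length-T paths (0-indexed).
    The constant P(x_t) factors are omitted: they cancel upon normalization."""
    T, n_h = len(o), len(p)
    alpha = np.zeros((T, n_h))
    alpha[0] = E[:, x[0], o[0]] * p
    for t in range(1, T):
        # alpha_t(h) = P(o_t | h, x_t) * sum_{h'} P(h | h', o_{t-1}) alpha_{t-1}(h')
        alpha[t] = E[:, x[t], o[t]] * (alpha[t - 1] @ Q[:, o[t - 1], :])
    beta = np.ones((T, n_h))
    for t in range(T - 2, -1, -1):
        # beta_t(h) = sum_{h'} beta_{t+1}(h') P(o_{t+1} | h', x_{t+1}) P(h' | h, o_t)
        beta[t] = Q[:, o[t], :] @ (E[:, x[t + 1], o[t + 1]] * beta[t + 1])
    gamma = alpha * beta                       # unnormalized smoothed marginals
    return alpha, beta, gamma / gamma.sum(axis=1, keepdims=True)
```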
Pairwise marginal. We can compute $\mathbb{P}(h_t, h_{t+1} \mid o_{1:T}, x_{1:T})$ by noting that
\begin{align}
\mathbb{P}(h_t, h_{t+1} \mid o_{1:T}, x_{1:T}) &\propto \mathbb{P}(o_{1:t}, o_{t+1}, o_{t+2:T}, x_{1:t}, x_{t+1}, x_{t+2:T}, h_{t+1}, h_t) \nonumber \\
&= \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid o_{1:t}, o_{t+1}, x_{1:t}, x_{t+1}, h_{t+1}, h_t)\, \mathbb{P}(o_{1:t}, o_{t+1}, x_{1:t}, x_{t+1}, h_{t+1}, h_t) \nonumber \\
&= \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid h_{t+1}, o_{t+1})\, \mathbb{P}(o_{t+1} \mid o_{1:t}, h_{t+1}, h_t, x_{1:t}, x_{t+1})\, \mathbb{P}(o_{1:t}, h_{t+1}, h_t, x_{1:t}, x_{t+1}) \nonumber \\
&= \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid h_{t+1}, o_{t+1})\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1})\, \mathbb{P}(h_{t+1}, x_{t+1} \mid o_{1:t}, x_{1:t}, h_t)\, \mathbb{P}(o_{1:t}, x_{1:t}, h_t) \nonumber \\
&= \mathbb{P}(o_{t+2:T}, x_{t+2:T} \mid h_{t+1}, o_{t+1})\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1})\, \mathbb{P}(h_{t+1}, x_{t+1} \mid h_t, o_t)\, \mathbb{P}(o_{1:t}, x_{1:t}, h_t). \tag{16}
\end{align}
We can rearrange (16) to obtain
\[
\mathbb{P}(h_t, h_{t+1} \mid o_{1:T}, x_{1:T}) \propto \alpha(h_t)\, \mathbb{P}(o_{t+1} \mid h_{t+1}, x_{t+1})\, \mathbb{P}(h_{t+1}, x_{t+1} \mid h_t, o_t)\, \beta(h_{t+1}). \tag{17}
\]
Therefore, $\mathbb{P}(h_t, h_{t+1} \mid o_{1:T}, x_{1:T})$ is easy to compute once the forward-backward, i.e., the filtering and smoothing, recursions have been completed.
Sampling. We would like to sample from the posterior $\mathbb{P}(h_{1:T} \mid o_{1:T}, x_{1:T})$. We can do this by first noting that
\[
\mathbb{P}(h_{1:T} \mid o_{1:T}, x_{1:T}) = \mathbb{P}(h_T \mid o_{1:T}, x_{1:T}) \prod_{t=1}^{T-1} \mathbb{P}(h_t \mid h_{t+1}, o_{1:T}, x_{1:T}),
\]
where we have used the fact that $h_t$ is conditionally independent of $h_{t+2:T}$ given $(h_{t+1}, o_{1:T}, x_{1:T})$. We can therefore sample sequentially (backwards in time) via the following two steps:
• First, draw $h_T$ from $\mathbb{P}(h_T \mid o_{1:T}, x_{1:T})$, which we know from the smoothed distribution of $h_T$.

• Second, observe that for any $t < T$, we have
\begin{align*}
\mathbb{P}(h_t \mid h_{t+1}, o_{1:T}, x_{1:T}) &\propto \mathbb{P}(h_t, h_{t+1} \mid o_{1:T}, x_{1:T}) \\
&\propto \alpha(h_t)\, \mathbb{P}(h_{t+1}, x_{t+1} \mid h_t, o_t) && \text{[by (17)]} \\
&= \alpha(h_t)\, \mathbb{P}(h_{t+1} \mid h_t, o_t)\, \mathbb{P}(x_{t+1}),
\end{align*}
which is straightforward to normalize (over $h_t$) and sample from. A code sketch of this backward-sampling routine follows.
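The sketch below reuses `alpha` from the (hypothetical) `forward_backward` helper above; the unnormalized weights follow the display above with the constant $\mathbb{P}(x_{t+1})$ dropped.

```python
import numpy as np

def sample_posterior_path(alpha, Q, o, rng):
    """Backward sampling of h_{1:T} ~ P(h_{1:T} | o_{1:T}, x_{1:T}):
    draw h_T from the smoothed marginal (beta_T = 1, so it is prop. to alpha_T),
    then h_t | h_{t+1} with weights prop. to alpha_t(h_t) * q_{h_t, o_t, h_{t+1}}."""
    T, n_h = alpha.shape
    h = np.empty(T, dtype=int)
    w = alpha[T - 1] / alpha[T - 1].sum()
    h[T - 1] = rng.choice(n_h, p=w)
    for t in range(T - 2, -1, -1):
        w = alpha[t] * Q[:, o[t], h[t + 1]]
        h[t] = rng.choice(n_h, p=w / w.sum())
    return h
```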
C. A Brief Introduction to Copulas and Counterfactual Simulations
Copulas are functions that enable us to separate the marginal distributions from the dependency structure of a given multivariate distribution. They are particularly useful in applications where the marginal distributions are known (either from domain specific knowledge or because there is sufficient marginal data) but a joint distribution with these known marginals is required. In our application in this paper, we know the marginal distribution of each random variable in [O hx ] h,x and [H hi ] h,i , which is dictated by the model primitives (E, Q) as follows: e hxi = P(O hx = i) and q hih ′ = P(H hi = h ′ ). Indeed, these marginal distributions can be estimated from data, but the joint distribution must be specified in order to compute counterfactuals.
In each of these cases, one needs to work with a joint distribution with fixed or pre-specified marginal distributions. Copulas and Sklar's Theorem (see below) can be very helpful in these situations. We only briefly review some of the main results from the theory of copulas here, but Nelsen (2006) provides a comprehensive treatment.

Definition 1. A $d$-dimensional copula $C : [0,1]^d \to [0,1]$ is the joint CDF of a random vector whose marginal distributions are all uniform on $[0,1]$.

We write $C(u) = C(u_1, \ldots, u_d)$ for a generic copula. It follows immediately from Definition 1 that $C(u_1, \ldots, u_d)$ is non-decreasing in each argument and that $C(1, \ldots, 1, u_i, 1, \ldots, 1) = u_i$. It is also easy to confirm that $C(1, u_1, \ldots, u_{d-1})$ is a $(d-1)$-dimensional copula and, more generally, that all $k$-dimensional marginals with $2 \leq k \leq d$ are copulas. The most important result from the theory of copulas is Sklar's Theorem (Sklar, 1959).
Theorem 1 (Sklar 1959). Consider a d-dimensional CDF Π with marginals Π 1 , . . . , Π d . Then, there exists a copula C such that
\[
\Pi(x_1, \ldots, x_d) = C\left(\Pi_1(x_1), \ldots, \Pi_d(x_d)\right) \tag{18}
\]
for all x i ∈ [−∞, ∞] and i = 1, . . . , d.
If Π i is continuous for all i = 1, . . . , d, then C is unique; otherwise C is uniquely determined only on Ran(Π 1 ) × · · · × Ran(Π d ), where Ran(Π i ) denotes the range of the CDF Π i .
Conversely, consider a copula C and univariate CDF's Π 1 , . . . , Π d . Then, Π as defined in (18) is a multivariate CDF with marginals Π 1 , . . . , Π d .
A particularly important aspect of Sklar's Theorem in the context of this paper is that C is only uniquely determined on Ran(Π 1 ) × · · · × Ran(Π d ). Because we are interested in applications with discrete state-spaces, this implies that there will be many copulas that lead to the same joint distribution Π. It is for this reason that we prefer to work directly with the joint distribution of [O hx ] h,x and [H hi ] h,i (recall (4)). That said, we emphasize that specifying copulas for the exogenous vectors U t and V t is equivalent to specifying a particular structural causal model (SCM) in which any CQI can be computed.
The following important result was derived independently by Fréchet and Hoeffding and provides lower and upper bounds on copulas.
Theorem 2 (The Fréchet-Hoeffding Bounds). Consider a copula C(u) = C(u 1 , . . . , u d ). Then,
\[
\max\left\{1 - d + \sum_{i=1}^d u_i,\; 0\right\} \leq C(u) \leq \min\{u_1, \ldots, u_d\}.
\]
Three important copulas are the comonotonic, countermonotonic (only when d = 2) and independence copulas which model extreme positive dependency, extreme negative dependency and (not surprisingly) independence. They are defined as follows.
Comonotonic Copula. The comonotonic copula is given by
\[
C^P(u) := \min\{u_1, \ldots, u_d\}, \tag{19}
\]
which coincides with the Fréchet-Hoeffding upper bound. It corresponds to the case of extreme positive dependence. For example, let $U = (U_1, \ldots, U_d)$ with $U_1 = U_2 = \cdots = U_d \sim \mathrm{Unif}[0,1]$, and let $\Pi$ denote the joint CDF of $U$. Then clearly $\Pi(u_1, \ldots, u_d) = \min\{u_1, \ldots, u_d\}$, but by Sklar's Theorem $\Pi(u_1, \ldots, u_d) = C(u_1, \ldots, u_d)$ (the marginals being uniform), and so $C(u_1, \ldots, u_d) = \min\{u_1, \ldots, u_d\}$.
Countermonotonic Copula. The countermonotonic copula is a 2-dimensional copula given by
\[
C^N(u) := \max\{u_1 + u_2 - 1,\; 0\}, \tag{20}
\]
which coincides with the Fréchet-Hoeffding lower bound when $d = 2$. It corresponds to the case of extreme negative dependence. It is easy to check that (20) is the copula of $(U, 1-U)$ for $U \sim \mathrm{Unif}[0,1]$.

Independence Copula. The independence copula satisfies
\[
C^I(u) := \prod_{i=1}^d u_i,
\]
and it is easy to confirm using Sklar's Theorem that random variables are independent if and only if their copula is the independence copula.
A well known and important result regarding copulas is that they are invariant under monotonic transformations.
Proposition 4 (Invariance Under Monotonic Transformations). Suppose the random variables X 1 , . . . , X d have continuous marginals and copula C X . Let T i : R → R, for i = 1, . . . , d be strictly increasing functions. Then, the dependence structure of the random variables
$Y_1 := T_1(X_1), \ldots, Y_d := T_d(X_d)$
is also given by the copula C X .
This leads immediately to the following result.
Proposition 5. Let X 1 , . . . , X d be random variables with continuous marginals and suppose X i = T i (X 1 ) for i = 2, . . . , d where T 2 , . . . , T d are strictly increasing transformations. Then, X 1 , . . . , X d have the comonotonic copula.
Proof. Apply the invariance under monotonic transformations proposition and observe that the copula of (X 1 , X 1 , . . . , X 1 ) is the comonotonic copula.
Our optimization framework implicitly optimizes over the space of copulas by solving polynomial programs with possibly a large number of variables and constraints. (We saw in §4.3 that the number of variables and constraints is polynomial in |H|, |O| and |X| when calculating the probability of necessity (PN).) It may also be worthwhile, however, working explicitly with copulas. For example, the independence and comonotonic copulas are well understood and using these copulas to define SCMs may provide interesting benchmarks. Indeed, we estimate the PN for these benchmarks in our numerical results of §5. Towards this end, in §C.1 and §C.2, we explain how we can simulate our dynamic latent-state model to estimate the CQI under the independence ( §C.1) and comonotonic ( §C.2) copulas. Specifically, we assume each of the copulas for U t and V t are the independence copulas in §C.1, whereas in §C.2, we assume their copulas are the comonotonic copula.
There is no reason, however, why we couldn't combine them and assume, for example, that the copula for $U_t$ was the independence copula and the copula for $V_t$ was the comonotonic copula. More generally, we could use domain-specific knowledge to identify or narrow down sub-components of the copulas and leave the remaining components to be identified via the optimization problems. Since convex combinations of copulas are copulas, we could also optimize over such combinations. For example, suppose domain-specific knowledge tells us that the copula of $V_t$ is $\lambda C^P + (1-\lambda) C^I$, i.e., a convex combination of the comonotonic and independence copulas, with $\lambda \in [0,1]$ unknown. Then, the optimization over $V_t$ would reduce to a single-variable ($\lambda$) optimization with a linear constraint. Of course, the optimization over the copula of $U_t$ must also be included, but domain-specific knowledge may also help to simplify and constrain that component of the optimization. Properties such as pathwise monotonicity (PM) and counterfactual stability (CS) can also be expressed in copula terms. Indeed, PM can be expressed via the comonotonic copula, as we discuss in §C.2.
C.1. Counterfactual Simulations Under the Independence Copula
For convenience, we copy Figure 2 from the main text, which is now labelled as Figure 4. Furthermore, recall that $(o_{1:T}, x_{1:T})$ is the observed data and $\tilde x_{1:T}$ is the intervention policy that was applied. We start with the $B$ posterior samples $[h_{1:T}(b)]_{b=1}^B$ corresponding to the random path $H_{1:T} \mid (o_{1:T}, x_{1:T})$. These samples can be generated efficiently (cf. §B). For each sample $b$, our goal is to convert the sampled path $h_{1:T}(b)$ into a counterfactual path $\tilde h_{1:T}(b)$. As noted in §4.2, irrespective of the copula choice, the counterfactual hidden state in period 1 equals the posterior sample, i.e., $\tilde h_1(b) = h_1(b)$.
We next need to sample $\tilde h_2(b)$, but that first requires us to sample the counterfactual emission $\tilde o_1(b)$ (cf. Figure 4). With the copula underlying $V_1$ being the independence copula, it follows that
\[
\tilde o_1(b) = \begin{cases} o_1 & \text{if } \tilde x_1 = x_1 \text{ and } \tilde h_1(b) = h_1(b) \\ \text{a sample from the emission distribution } [e_{\tilde h_1(b) \tilde x_1 i}]_i & \text{otherwise.} \end{cases}
\]
The counterfactual emission $\tilde o_1(b)$ allows us to sample the counterfactual state $\tilde h_2(b)$, which again leverages the fact that the copula underlying $U_2$ is the independence copula:
\[
\tilde h_2(b) = \begin{cases} h_2(b) & \text{if } \tilde h_1(b) = h_1(b) \text{ and } o_1 = \tilde o_1(b) \\ \text{a sample from the transition distribution } [q_{\tilde h_1(b) \tilde o_1(b) h'}]_{h'} & \text{otherwise.} \end{cases}
\]
We then generate period 2 counterfactual emission o 2 (b) in a similar manner and the process repeats until we hit the end of horizon. We summarize the procedure in Algorithm 2.
Algorithm 2 Counterfactual simulations under the independence copula
Require: $(E, Q)$, $(o_{1:T}, x_{1:T})$, $[h_{1:T}(b)]_{b=1}^B$, $\tilde x_{1:T}$
1: for $b = 1$ to $B$ do
2:   $\tilde h_1(b) = h_1(b)$
3:   for $t = 1$ to $T-1$ do
4:     if $\tilde x_t = x_t$ and $\tilde h_t(b) = h_t(b)$ then
5:       $\tilde o_t(b) = o_t$
6:     else
7:       $\tilde o_t(b) \sim \mathrm{Categorical}([e_{\tilde h_t(b) \tilde x_t i}]_i)$
8:     end if
9:     if $\tilde h_t(b) = h_t(b)$ and $o_t = \tilde o_t(b)$ then
10:      $\tilde h_{t+1}(b) = h_{t+1}(b)$
11:    else
12:      $\tilde h_{t+1}(b) \sim \mathrm{Categorical}([q_{\tilde h_t(b) \tilde o_t(b) h'}]_{h'})$
13:    end if
14:  end for
15: end for
16: return $[\tilde h_{1:T}(b)]_b$
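Here is a runnable Python transcription of Algorithm 2 for a single posterior sample; it is a sketch whose helper name and array conventions are our own, and the posterior sample `h_b` would come from the backward-sampling routine of §B.

```python
import numpy as np

def counterfactual_independence(E, Q, o, x, h_b, x_cf, rng):
    """Algorithm 2 for one posterior sample h_b = h_{1:T}(b): reuse the factual
    outcome whenever the parents coincide (the relevant noise coordinate is then
    pinned down by abduction); otherwise redraw it independently."""
    T = len(o)
    h_cf = np.empty(T, dtype=int)
    o_cf = np.empty(T, dtype=int)
    h_cf[0] = h_b[0]                                   # h~_1(b) = h_1(b)
    for t in range(T - 1):
        if x_cf[t] == x[t] and h_cf[t] == h_b[t]:
            o_cf[t] = o[t]
        else:
            o_cf[t] = rng.choice(E.shape[2], p=E[h_cf[t], x_cf[t]])
        if h_cf[t] == h_b[t] and o_cf[t] == o[t]:
            h_cf[t + 1] = h_b[t + 1]
        else:
            h_cf[t + 1] = rng.choice(Q.shape[2], p=Q[h_cf[t], o_cf[t]])
    # Final-period emission (not needed for PN but completes the path).
    if x_cf[T - 1] == x[T - 1] and h_cf[T - 1] == h_b[T - 1]:
        o_cf[T - 1] = o[T - 1]
    else:
        o_cf[T - 1] = rng.choice(E.shape[2], p=E[h_cf[T - 1], x_cf[T - 1]])
    return h_cf, o_cf
```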
C.2. Counterfactual Simulations Under the Comonotonic Copula
Before the formal description (which involves non-trivial notation), we provide the intuition (which is relatively straightforward). We do so by revisiting Example 1, where we have the causal graph $X \to Y$ with $X \in \{0, 1\}$ (medical treatment) and $Y \in \{\text{bad}, \text{better}, \text{best}\}$ (patient outcome). The outcome $Y_x := Y \mid (X = x)$ obeys the following distribution:
\[
\mathbb{P}(Y_0 = \text{bad}) = 0.2, \quad \mathbb{P}(Y_0 = \text{better}) = 0.3, \quad \mathbb{P}(Y_0 = \text{best}) = 0.5;
\]
\[
\mathbb{P}(Y_1 = \text{bad}) = 0.2, \quad \mathbb{P}(Y_1 = \text{better}) = 0.2, \quad \mathbb{P}(Y_1 = \text{best}) = 0.6.
\]
We now formalize this intuition to our dynamic latent-state model. As a prerequisite to discussing the notion of PM, one needs to define an ordering of the states (set H) and the emissions (set O), e.g., from "best" to "worst". Denote by r H (h) the rank of state h with respect to this ordering and by r O (i) the rank of emission i. Furthermore, let r −1 H (r) and r −1 O (r) denote the inverse functions corresponding to r H (h) and r O (i), respectively. That is, r −1 H (r) returns the state with rank r and r −1 O (r) returns the emission with rank r. Also, for each (h, i) pair, observe that [q hih ′ ] h ′ denotes the transition distribution (which maps to the random variable H hi ). Corresponding to this distribution, define the rank-ordered CDF as follows:
Q hih ′ := h ′′ :rH (h ′′ )≤rH (h ′ ) q hih ′′ ∀h ′ . (21a)
Similarly, for each (h, x) pair, observe that [e hxi ] i denotes the emission distribution (which maps to the random variable O hx ). Corresponding to this distribution, define the rank-ordered CDF as follows:
E hxi := j:rO(j)≤rO (i) e hxj ∀i.(21b)
Also, define Q hi0 = E hx0 = 0 for all (h, i) and (h, x). We discuss these orderings for the breast cancer application in §E.4.
As in §C.1, we start with the posterior samples $[h_{1:T}(b)]_{b=1}^B$ corresponding to the random path $H_{1:T} \mid (o_{1:T}, x_{1:T})$. For each sample $b$, our goal is to convert the sampled path $h_{1:T}(b)$ into a counterfactual path $\tilde h_{1:T}(b)$. As noted in §4.2, irrespective of the copula choice, the counterfactual hidden state in period 1 equals the posterior sample, i.e., $\tilde h_1(b) = h_1(b)$.
To generate o 1 (b), we revisit the SCM in Figure 6, which now has the noise nodes as scalars (as opposed to vectors). This is a direct implication of the comonotonic copula -see the statement immediately below (19). By the structural equation (3a),
o 1 (b) equals o 1 (b) = f ( h 1 (b), x 1 , V 1 ) = f h1(b) x1 (V 1 ),(22a)
where f hx (·) is the inverse transform function corresponding to the rank-ordered CDF [E hxi ] i (recall (21b)). Hence, all we need to sample o 1 (b) is the posterior distribution of V 1 , where the "posterior" corresponds to conditioning on O 1h1(b)x1 = o 1 (recall the notation O thx from §4). Given the prior V 1 ∼ Unif[0, 1], we can compute the posterior in closed-form. In particular,
V_1 | (O_{1h_1(b)x_1} = o_1) ∼ Unif[E_{h_1(b)x_1o_1^-}, E_{h_1(b)x_1o_1}],   (22b)

where o^- := r_O^{-1}(r_O(o) − 1) is the emission ranked just below o. Hence, we can efficiently sample V_1 from its posterior, and this V_1 sample can be used to generate õ_1(b) (via (22a)). Given we encoded rank orderings in the CDF E_{hxi}, such sampling will naturally enforce pathwise monotonicity.
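Operationally, sampling õ_1(b) is one truncated-uniform draw followed by one inverse-transform lookup. A minimal sketch (ours; it assumes the rank-ordered CDF E is stored as an array with a leading zero entry, i.e., E[h, x, 0] = 0 as in the convention below (21), and it identifies emissions with their ranks 1..n for simplicity):

```python
import numpy as np

def sample_cf_emission(E, h_fact, x_fact, o_fact, h_cf, x_cf, rng):
    """One step of (22a)-(22b). E[h, x] is a rank-ordered CDF row with
    E[h, x, 0] = 0; emissions are identified with their ranks 1..n."""
    # Posterior of V given the factual emission: Unif[E_{hx(o-)}, E_{hxo}] (22b).
    v = rng.uniform(E[h_fact, x_fact, o_fact - 1], E[h_fact, x_fact, o_fact])
    # Inverse transform through the counterfactual CDF (22a):
    # the smallest rank whose CDF value is >= v.
    return int(np.searchsorted(E[h_cf, x_cf, 1:], v) + 1)
```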
We can sample h̃_2(b) similarly. By the structural equation (3b), h̃_2(b) equals

h̃_2(b) = g(h̃_1(b), õ_1, U_2) = g_{h̃_1(b)õ_1}(U_2),   (23a)
where g_{hi}(·) is the inverse transform function corresponding to the rank-ordered CDF [Q_{hih′}]_{h′} (recall (21a)). Hence, all we need to sample h̃_2(b) is the posterior distribution of U_2, where the "posterior" corresponds to conditioning on H_{2h_1(b)o_1} = h_2(b) (recall the notation H_{thi} from §4). Given the prior U_2 ∼ Unif[0, 1], we can compute the posterior in closed form. In particular,
U_2 | (H_{2h_1(b)o_1} = h_2(b)) ∼ Unif[Q_{h_1(b)o_1h_2(b)^-}, Q_{h_1(b)o_1h_2(b)}],   (23b)

where h^- := r_H^{-1}(r_H(h) − 1) is the state ranked just below h. Hence, we can efficiently sample U_2 from its posterior, and this U_2 sample can be used to generate h̃_2(b) (via (23a)). Given we encoded rank orderings in the CDF Q_{hih′}, such sampling will naturally enforce pathwise monotonicity.
We then generate the period-2 counterfactual emission õ_2(b) in a similar manner, and the process repeats until we hit the end of the horizon. We summarize the procedure in Algorithm 3.
Algorithm 3 Counterfactual simulations under the comonotonic copula
Require: (E, Q), (o_{1:T}, x_{1:T}), [h_{1:T}(b)]_{b=1}^B, x̃_{1:T}, r_H(·), r_O(·)
1: for b = 1 to B do
2:   h̃_1(b) = h_1(b)
3:   for t = 1 to T − 1 do
4:     v_t ∼ Unif[E_{h_t(b)x_t o_t^-}, E_{h_t(b)x_t o_t}]   % posterior sample of V_t (see (22b))
5:     õ_t(b) = f_{h̃_t(b)x̃_t}(v_t)   % counterfactual emission (see (22a))
6:     u_{t+1} ∼ Unif[Q_{h_t(b)o_t h_{t+1}(b)^-}, Q_{h_t(b)o_t h_{t+1}(b)}]   % posterior sample of U_{t+1} (see (23b))
7:     h̃_{t+1}(b) = g_{h̃_t(b)õ_t(b)}(u_{t+1})   % counterfactual state (see (23a))
8:   end for
9: end for
10: return [h̃_{1:T}(b)]_b
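For completeness, here is an end-to-end rendering of Algorithm 3 in Python (our own sketch; NumPy assumed). For brevity it identifies states and emissions with their ranks 1..n (so r_H and r_O are identities) and assumes the CDF arrays E and Q are padded so that 1-indexed states and emissions index directly; with general orderings one would additionally map through r_H, r_O, and their inverses as in (21)-(23).

```python
import numpy as np

def inv_transform(cdf_row, v):
    # Smallest rank r (1-indexed) with cdf_row[r] >= v; cdf_row[0] must be 0.
    return int(np.searchsorted(cdf_row[1:], v) + 1)

def comonotonic_counterfactuals(E, Q, o, x, h_samples, x_cf, rng):
    """Algorithm 3. E[h, x, :] and Q[h, i, :] are rank-ordered CDFs with a
    leading 0. o, x, x_cf hold the factual emissions/policy and the intervention
    policy for t = 0..T-1; h_samples is a B x T array of posterior hidden paths."""
    B, T = h_samples.shape
    h_cf = np.empty((B, T), dtype=int)
    for b in range(B):
        h_cf[b, 0] = h_samples[b, 0]                                  # step 2
        for t in range(T - 1):                                        # step 3
            ht, ot, xt = h_samples[b, t], o[t], x[t]
            v = rng.uniform(E[ht, xt, ot - 1], E[ht, xt, ot])         # step 4, (22b)
            o_cf = inv_transform(E[h_cf[b, t], x_cf[t]], v)           # step 5, (22a)
            hn = h_samples[b, t + 1]
            u = rng.uniform(Q[ht, ot, hn - 1], Q[ht, ot, hn])         # step 6, (23b)
            h_cf[b, t + 1] = inv_transform(Q[h_cf[b, t], o_cf], u)    # step 7, (23a)
    return h_cf
```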
D. Enhancing the Scalability of the Polynomial Optimization
In this section, we discuss ways to enhance the scalability of the polynomial optimizations in (8). First, in §D.1, we show how the optimization can be reformulated to avoid the exponential dependence on T (recall the discussion towards the end of §4.3). Second, in §D.2, we discuss an approximate way to optimize our problem that drastically reduces the underlying dimensionality of the problem. Third, in §D.3, we combine our ideas from §D.1 and §D.2 and demonstrate (via numerics) that we can obtain high-quality solutions for T as large as 100 in just a few hours of compute time.
Related to scalability, we mention in passing that in each of our optimization problems, we added the constraint that the objective value (which is a probability) must lie in [0, 1]. This constraint is of course redundant, but we found it helped speed up solver convergence in a few instances, possibly because it shrank the search space: the solver does not know a priori that the objective is a probability.
D.1. Reformulating the Polynomial Optimization to Avoid the Exponential Dependence on T
Recall Lemmas 1 and 2, which characterize the objective function of our polynomial optimization problem. We repeat them here for the sake of convenience. Lemma 1. We have
PN = 1 − lim_{B→∞} (1/B) Σ_{b=1}^{B} P_{M̃(b)}(H̃_T = 7).

Lemma 2. For t ∈ {T, T − 1, . . . , 2}, P_{M̃(b)}(h̃_t) := P_{M̃(b)}(H̃_t = h̃_t)
obeys the following recursion (over t):
P_{M̃(b)}(h̃_t) = Σ_{h̃_{t−1}, õ_{t−1}} [ π_{h̃_{t−1}õ_{t−1}, h_{t−1}(b)o_{t−1}}(h̃_t, h_t(b)) / q_{h_{t−1}(b)o_{t−1}h_t(b)} ] × [ θ_{h̃_{t−1}x̃_{t−1}, h_{t−1}(b)x_{t−1}}(õ_{t−1}, o_{t−1}) / e_{h_{t−1}(b)x_{t−1}o_{t−1}} ] × P_{M̃(b)}(h̃_{t−1}).
The recursion breaks at t = 1:
P_{M̃(b)}(h̃_1) = 1 if h̃_1 = h_1(b), and 0 otherwise.
It is easy to see that a naive expansion of PN (as per Lemmas 1 and 2) results in a number of terms that is exponential in T . This is clearly undesirable since we end up running into memory issues for even a moderate value of T . For example, such issues arise for T > 10 in the breast cancer numerics of §5. It is possible to remove this exponential dependence, however, by a reformulation of the optimization, which we now discuss. (Note that the objective function remains the same irrespective of whether we optimize over the pairwise marginals (as discussed in §D.2) or the joint distribution (as presented in §4.3) and hence, the reformulation here is "universal".)
The reformulation steps are as follows:
1. Define P_{M̃(b)}(h̃_t) from Lemma 2 as a decision variable for all (t, h̃_t, b) ∈ [T] × H × [B].
2. Add the Lemma 2 equations as constraints in the optimization (for each (t, h̃_t, b) ∈ [T] × H × [B]). Note that these are non-linear but polynomial constraints and hence, we remain within the class of polynomial programs. Furthermore, none of the constraints have an exponential number of terms since the P_{M̃(b)}(h̃_t) are decision variables now.
3. The objective now is simply the expression in Lemma 1.
These steps result in the following optimization (see Footnote 11), where we use the decision variable γ^t_{h̃b} to denote the probability term P_{M̃(b)}(h̃_t) in the LHS of Lemma 2 (see Footnote 12) for all (t, h̃_t, b) ∈ [T] × H × [B], with γ := [γ^t_{hb}]_{(t,h,b)}:

max_{(θ,π)∈F, γ}  1 − (1/B) Σ_{b=1}^{B} γ^T_{7b}   (24a)

s.t.  γ^t_{hb} = Σ_{h′∈H} Σ_{o′∈O} [ π_{h′o′, h_{t−1}(b)o_{t−1}}(h, h_t(b)) / q_{h_{t−1}(b)o_{t−1}h_t(b)} ] × [ θ_{h′x̃_{t−1}, h_{t−1}(b)x_{t−1}}(o′, o_{t−1}) / e_{h_{t−1}(b)x_{t−1}o_{t−1}} ] × γ^{t−1}_{h′b}   ∀t > 1, ∀h ∈ H, ∀b ∈ [B]   (24b)

γ^1_{hb} = 1   ∀h = h_1(b), ∀b ∈ [B]   (24c)

γ^1_{hb} = 0   ∀h ≠ h_1(b), ∀b ∈ [B].   (24d)
As before (refer to §4), the feasibility set F over (θ θ θ, π) can correspond to (10), (11), and (12). It can also include additional constraints such as CS and PM, or correspond to the lower-dimensional space over the pairwise marginals (as discussed in §D.2). Clearly, (24) has a linear objective and polynomial constraints, and is therefore also a polynomial program. The number of terms in the objective is no longer exponential in T but this has come at the cost of having to add a total of |H|T B decision variables and (polynomial) constraints to the original formulation in (8). Though the size of our reformulation (number of variables and constraints) scales with both T and B, we found it to scale much more gracefully (with respect to T ) than the original formulation, as we discuss in §D.3 below.
Note that we do not necessarily need to add these |H|T B variables and constraints to the optimization but for that, we need the ability to modify the source code of the optimization solver (BARON in our case). This is because even in the original formulation (8), we can actually evaluate the objective function in polynomial time and space rather than naively expanding it into exponentially many terms. To see this, consider a given sample number b ∈ [B]. We need to evaluate P M(b) ( H T = 7) from Lemma 2. To do so, we start from period 1 and store P M(b) ( h 1 ) for all h 1 ∈ H (see Lemma 2's base case). We then move to period 2 and store P M(b) ( h 2 ) for all h 2 ∈ H (see Lemma 2's recursion). The key here is that when computing P M(b) ( h 2 ), we make use of the stored values of P M(b) ( h 1 ). We then move to period 3 evaluations, where we make use of the stored values of P M(b) ( h 2 ). We repeat this procedure until we hit period T . Clearly, this procedure requires polynomial time and space. Furthermore, we can evaluate the gradient (and the Hessian) of P M(b) ( H T = 7) in a similar manner (if needed by the optimization solver). We can therefore evaluate the objective and its gradient information at a given point in polynomial time and space. These can then be used by the optimization solver. However, we are unable to modify the solver we use (BARON), and BARON by default does not exploit this structure but naively expands the objective into exp(T ) terms. As such, we use the reformulation presented in (24) instead.
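To make the forward pass concrete, here is a sketch (ours) of evaluating P_{M̃(b)}(H̃_T = 7) for a single posterior sample in polynomial time. The callables pi, theta, q, and e stand for the pairwise PMFs and model primitives; their signatures are our own naming for illustration, not the paper's code.

```python
def pn_term(pi, theta, q, e, h, o, x, x_cf, states, emissions, T, target=7):
    """Forward pass over Lemma 2 for one posterior sample b. h, o, x, x_cf are
    1-indexed lists (index 0 unused); pi/theta are the pairwise PMFs and q/e
    the model primitives, all supplied as callables."""
    # Base case (t = 1): all mass on the sampled factual state.
    P = {hc: (1.0 if hc == h[1] else 0.0) for hc in states}
    for t in range(2, T + 1):
        nxt = {hc: 0.0 for hc in states}
        for hp in states:                     # counterfactual state at t-1
            if P[hp] == 0.0:
                continue
            for op in emissions:              # counterfactual emission at t-1
                w = theta(hp, x_cf[t - 1], op, h[t - 1], x[t - 1], o[t - 1]) \
                    / e(h[t - 1], x[t - 1], o[t - 1])
                if w == 0.0:
                    continue
                for hc in states:
                    nxt[hc] += (pi(hp, op, hc, h[t - 1], o[t - 1], h[t])
                                / q(h[t - 1], o[t - 1], h[t])) * w * P[hp]
        P = nxt
    return P[target]   # P_{M~(b)}(H~_T = target)
```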
D.2. Approximating the Joint Optimization by the Pairwise Optimization
The problem (8) discussed in §4.3 optimizes over the joint PMFs ("joint optimization"). The challenge here lies in the dimensionality of the underlying joint distribution. As discussed towards the end of §4.3, the problem size (number of decision variables in particular) can grow exponentially in the primitives (e.g., |H|, |X|, and |O|). This is because the decision variables capture the entire joint distribution. Though we might be able to exploit application-specific sparsity to manage this blow-up (as we in fact do for the breast cancer application), it is worth exploring if there is a more tractable alternative in general (i.e., not specific to any application). We now show that this is possible.
Recall from §4.3 that we are interested in the following optimizations (repeating (8) for convenience):
PN_ub(B) := max_{(θ,π)∈F} PN(θ, π | [h_{1:T}(b)]_b)   (8a)
PN_lb(B) := min_{(θ,π)∈F} PN(θ, π | [h_{1:T}(b)]_b).   (8b)
The key observation here is that the objective function does not depend on the joint PMF of (θ θ θ, π) but only the corresponding pairwise marginals (recall Lemmas 1 and 2). We introduced the joint PMF decision variables to ensure the feasibility set F is such that the pairwise marginals are valid. However, as an alternative, we can choose to not introduce the joint variables in the optimization and instead approximate F by expressing it in terms of the pairwise variables. For example, since the pairwise variables correspond to the 2-dimensional PMFs, they must obey basic probability axioms. In particular, they must be non-negative and agree with their known 1-dimensional marginals so that
Σ_{h′} π_{h̃ĩ,hi}(h̃′, h′) = q_{h̃ĩh̃′}   ∀(h̃, ĩ, h̃′), ∀(h, i)   (25a)
Σ_{h̃′} π_{h̃ĩ,hi}(h̃′, h′) = q_{hih′}   ∀(h̃, ĩ), ∀(h, i, h′)   (25b)
Σ_{i} θ_{h̃x̃,hx}(ĩ, i) = e_{h̃x̃ĩ}   ∀(h̃, x̃, ĩ), ∀(h, x)   (25c)
Σ_{ĩ} θ_{h̃x̃,hx}(ĩ, i) = e_{hxi}   ∀(h̃, x̃), ∀(h, x, i).   (25d)
These constraints are analogous to (10) in §4.3. It is easy to see that if (10) is obeyed, then so is (25). However, the reverse implication does not hold, meaning the feasibility space defined by (25) and non-negativity (say F′) is a superset of the feasibility space F in §4.3. In other words, though the constraints in F′ are necessary, they are not sufficient to ensure the pairwise marginals correspond to a valid joint distribution. Hence, optimizing over F′ ("pairwise optimization"; see Footnote 13) is a relaxation of the problem of optimizing over F. In fact, as we show via a simple example next, this relaxation can be strict.
(We thank an anonymous reviewer for this example.)
Example 2. Consider three random variables X, Y , and Z with the following pairwise marginals:
(X, Y) = (0, 0) w.p. 1/2, (1, 1) w.p. 1/2;
(Y, Z) = (0, 0) w.p. 1/2, (1, 1) w.p. 1/2;
(X, Z) = (1, 0) w.p. 1/2, (0, 1) w.p. 1/2.
It is easy to verify these pairwise marginals obey (25) along with non-negativity. However, they do not correspond to any valid joint distribution over (X, Y, Z). To see this, suppose (X, Y ) realizes a value of (0, 0). Then, the pairwise marginal of (Y, Z) implies (Y, Z) has to be (0, 0), which implies (X, Z) must be (1, 0), resulting in a contradiction. Therefore, the bivariate marginals are not consistent with any valid 3-dimensional joint distribution.
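Example 2 can also be confirmed mechanically by posing the existence of a joint PMF as a small linear feasibility problem over the 8 atoms p(x, y, z). A sketch (ours; it assumes SciPy is available):

```python
import numpy as np
from scipy.optimize import linprog

atoms = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
marg = {("xy", (0, 0)): .5, ("xy", (1, 1)): .5, ("xy", (0, 1)): 0, ("xy", (1, 0)): 0,
        ("yz", (0, 0)): .5, ("yz", (1, 1)): .5, ("yz", (0, 1)): 0, ("yz", (1, 0)): 0,
        ("xz", (1, 0)): .5, ("xz", (0, 1)): .5, ("xz", (0, 0)): 0, ("xz", (1, 1)): 0}
proj = {"xy": lambda a: (a[0], a[1]), "yz": lambda a: (a[1], a[2]),
        "xz": lambda a: (a[0], a[2])}

# One equality row per pairwise-marginal entry: sum of matching atoms = value.
A = [[1.0 if proj[pair](a) == val else 0.0 for a in atoms] for (pair, val) in marg]
b = [marg[k] for k in marg]

res = linprog(c=np.zeros(8), A_eq=A, b_eq=b, bounds=[(0, 1)] * 8)
print(res.status)  # 2, i.e., the system is infeasible: no valid joint exists
```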
Despite the relaxation being strict (see Footnote 14), we found it to produce high-quality solutions and be highly scalable (discussed in §D.3). The high scalability is primarily driven by the lower dimensionality of the decision variables. In particular, the pairwise optimization has at most |H|^4|O|^2 + |H|^2|O|^2|X|^2 decision variables (recall that the joint optimization has an additional |O|^{|H||X|} + |H|^{|H||O|} decision variables). In fact, after exploiting the sparsity in the breast cancer application (along with the variable elimination discussed in Footnote 5), the pairwise optimization has only 1082 decision variables. This is in contrast to the joint optimization, which has 16,124 decision variables. Though the pairwise optimization has more constraints than the joint optimization, the difference is not that stark (2085 vs. 610). (The numbers reported here correspond to the objective formulation presented in §4.3 as opposed to the reformulation in §D.1. The reformulation adds a total of |H|T B decision variables and |H|T B constraints to both the pairwise and the joint optimizations.)
D.3. Computational Performance
We now compute upper and lower bounds on PN by (a) using the reformulation discussed in §D.1 and (b) optimizing over the relaxed constraint set defined by the pairwise marginals as discussed in §D.2 (see Footnote 15). We focus on path 1 from §5 for brevity and note that the results for path 2 are similar. The implementation details remain the same as in §5 (i.e., we code in MATLAB-BARON with CPLEX as the LP/MIP solver, set the absolute termination tolerance at 0.01, generate B = 100 samples for SAA, and average over 20 seeds). To test the scalability of our approach, we now experiment with T ∈ {5, 10, 15, 20, 25, 50, 75, 100}. Note that T = 100 is an order of magnitude larger than the longest horizon we have in §5, i.e., T = 10. We next discuss the results, which are shown in Figure 7.
(Footnote 14) Since the pairwise optimization is a relaxation of the joint optimization, it follows that Proposition 2 still holds for the bounds produced by the pairwise optimization.
(Footnote 15) We also experimented with other variations of these two approaches. If we use neither of them (as is the case in §5), then we run into memory issues for T > 10. In fact, even if we only use the second approach (optimizing over the relaxed constraint set), then we run into memory issues for T > 10 since the objective still scales exponentially in T. Finally, if we only use the first approach, i.e., the reformulation of §D.1, and optimize over the joint, the BARON solver does not converge even for T as small as 5 in 24 hours of compute time. This is because keeping the joint variables while doing the reformulation results in an optimization with a very large number of variables and constraints (even after we exploit sparsity).
In Figure 7(a), we display the compute time as a function of T. Compute time refers to the total time taken to compute LB and UB. Note that we let the solver run until convergence to global optimality (on just one core with at most 16 GB RAM). We are able to solve for T = 100 in 3 hours on average (over 20 seeds), with the minimum time being 1.2 hours and the maximum time being 8.4 hours. This demonstrates the scalability of our approach. (It is worth mentioning that after eliminating redundant variables and constraints, exploiting sparsity, using the reformulation of §D.1, and the pairwise approximation of §D.2, the T = 100 and B = 100 optimization has 71,082 decision variables, 2085 linear constraints, and 70,000 polynomial constraints.)
In Figure 7(b), we display the PN values as a function of T. The "joint" UB and LB curves are the same as the ones in Figure 3(a), and only go as far as T = 10 because of the aforementioned memory issues for T > 10. The "pairwise" UB and LB curves are the focus of this section; our goal here is to evaluate the quality of these "pairwise" bounds (blue curves), and we do so in two ways. First, as we are able to solve the joint optimization for T ≤ 10, we can use the "joint" bounds as benchmarks for the "pairwise" bounds. As may be seen from the figure, the joint and pairwise bounds are very close to each other (for values of T ≤ 10). In particular, the joint and pairwise lower bounds coincide and equal 0.8725 and 0.8884 for T = 5 and T = 10, respectively. The pairwise and joint upper bounds also coincide and equal 0.9885 for T = 5. The only difference between the two is the upper bound for T = 10, with values of 0.9904 and 0.9899, respectively. We therefore conclude that the pairwise bounds provide a very good approximation to the joint bounds, at least when T ≤ 10.
Second, for T > 10, we use the fact that we can simulate the independence and comonotonic copulas, which by definition are feasible solutions to the joint optimization. We therefore know that maximizing (minimizing) over the joint distribution will yield an upper (lower) bound that is no lower (higher) than the independence (comonotonic) curves in the figure. As an example, the gap between the independence and the pairwise UB curves (for T > 10) is never greater than 0.01, which means we lose at most 0.01 by restricting ourselves to the pairwise marginals. Similarly, the maximum gap between the comonotonic and the pairwise LB curves is ∼ 0.02. Thus, the pairwise bounds provide a high quality approximation to the joint bounds even when T > 10.
We can also embed CS and PM constraints in the pairwise optimization (recall from §4.3 that both these constraints are over the pairwise variables) and we show the corresponding bounds in Figure 8. Naturally, the bounds we obtain are tighter than the pairwise bounds in Figure 7(b). In particular, the UB gets much tighter while the LB does not change much.
E. Further Details on the Breast Cancer Case Study
We discuss the breast cancer model primitives and their calibration in §E.1, followed by showing how we exploit sparsity to reduce the number of decision variables ( §E.2). We then provide details on the PM constraints and the comonotonic copula in §E.3 and §E.4, respectively. Finally, in §E.5, we show the results for path 2.
E.1. Model Primitives and Their Calibration
As discussed in §2, the breast cancer application has |H| = 7 states, |O| = 7 emissions, and |X| = 2 actions. To be consistent with the literature (Ayer et al., 2012), we treat each period as corresponding to 6 months. The model comprises three primitives: p, Q, and E. We discuss their (sparse) structure and the calibration to real data in §E.1.1, §E.1.2, and §E.1.3, respectively.

E.1.1. INITIAL STATE DISTRIBUTION p

We have p := (p_1, . . . , p_7) where p_h := P(H_1 = h) for all h. Usually, breast cancer screening starts around the age of 40, and the prevalence among females aged 40-49 is 1.0183% (Table 4.24 of NIH (2020), all races, females):

p_2 + p_3 = 0.010183.

Since in-situ cancer comprises 20% of new breast cancer diagnoses (Sprague & Trentham-Dietz, 2009), we get

p_2 = 0.2 × 0.010183
p_3 = 0.8 × 0.010183.

It is natural to set p_4 = p_5 = p_6 = p_7 = 0 and hence,

p_1 = 1 − p_2 − p_3 = 1 − 0.010183.

E.1.2. TRANSITION DISTRIBUTION Q

We have Q := [q_{hih′}]_{h,i,h′} with q_{hih′} := P(H_{t+1} = h′ | H_t = h, O_t = i).
Before discussing the calibration, we discuss the sparse structure of Q. To do so, we define the transition matrix Q(i) := [q hih ′ ] hh ′ for each emission i (so that each row sums to 1) and observe that we have the following structure:
Q(1): row 1 has nonzero entries (q_11, q_12, q_13) in columns 1-3; row 2 has (q_22, q_23, q_27) in columns 2, 3, 7; row 3 has (q_33, q_37) in columns 3, 7; row 4 has (q_44, q_45, q_46, q_47) in columns 4-7; row 5 has (q_55, q_56, q_57) in columns 5-7; rows 6 and 7 each have a single 1 (in columns 6 and 7, respectively).
Q(2): rows 1-3 are as in Q(1); rows 4-7 are empty.
Q(3): row 1 has (q_11, q_12, q_13); rows 2-7 are empty.
Q(4): row 2 has (q_24, q_25, q_26, q̄_27) in columns 4-7; row 4 has (q_44, q_45, q_46, q_47) in columns 4-7; the remaining rows are empty.
Q(5): row 3 has (q_35, q_36, q̄_37) in columns 5-7; row 5 has (q_55, q_56, q_57) in columns 5-7; the remaining rows are empty.
Q(6): row 6 has a single 1 in column 6; the remaining rows are empty.
Q(7): row 7 has a single 1 in column 7; the remaining rows are empty.
A few comments are in order. First, an empty row means it is an impossible (h, i) combination. For example, if we observe an emission i = 3 (i.e., a negative biopsy), then the underlying patient state has to be healthy, i.e., h ∉ {2, 3, 4, 5, 6, 7}. Thus, rows 2 to 7 are empty in Q(3).
Second, observe that there is a decent amount of overlap across [Q(i)]_i in terms of the underlying parameters. For example, q_11 corresponds to the probability a healthy patient stays healthy, which is independent of the emission being 1 (no test), 2 (negative test), or 3 (positive test but negative biopsy). Hence, q_11 appears in all three matrices Q(1), Q(2), and Q(3). Of course, if the emission is 4, 5, 6, or 7, then the patient can not be healthy and hence, the corresponding entry in matrices Q(4), Q(5), Q(6), and Q(7) is absent (in fact, the entire first row is empty, which means it is an impossible (h, i) combination as discussed above).
Third, some rows have only a partial set of entries, which means that the other entries equal 0. For example, if a patient is healthy (state 1), then her state can not transition to 4 (diagnosed in-situ with treatment started), 5 (diagnosed invasive with treatment started), 6 (recovered), or 7 (death) and hence, q 14 = q 15 = q 16 = q 17 = 0. Hence, we do not show q 14 , q 15 , q 16 , q 17 in Q(1), Q(2), or Q(3).
Fourth, observe that we have a "bar" over q̄_27 (in Q(4)) and q̄_37 (in Q(5)). This is done to recognize them being different from q_27 (in Q(1) and Q(2)) and q_37 (in Q(1) and Q(2)). To see the difference, consider q_37 versus q̄_37. q_37 corresponds to the patient state transitioning from invasive cancer to death when the cancer was not detected (and hence, no treatment). On the other hand, q̄_37 corresponds to the patient state transitioning from invasive cancer to death when the cancer was detected (and hence, treatment was provided). Naturally, we expect q̄_37 ≤ q_37.
Finally, since states 6 (recovery) and 7 (death) are absorbing, we have q 66 = q 77 = 1.
Having discussed the structure of Q, we now calibrate it to real-data. We iterate over each state h ∈ {1, . . . , 7} in a sequential manner.
State 1 (healthy). For state 1, we are interested in (q 11 , q 12 , q 13 ). These probabilities can depend on a woman's age but we ignore that and work with averages. Let's focus on (q 12 , q 13 ) since q 11 = 1 − q 12 − q 13 .
For q 12 , we use the in-situ incidence rates from Table 4.12 of NIH (2020) (all races, females). For q 13 , we use the invasive incidence rates from Table 4.11 of NIH (2020) (all races, females). The reported numbers are per year and we should divide by 2 to convert to a 6-month scale:
q_12 = (1/2) × 33.0/100,000
q_13 = (1/2) × 128.5/100,000.
Note that consistent with the 20-80 split in (p 2 , p 3 ), we have q 13 ≈ 4q 12 .
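These two numbers (and the approximate 4:1 ratio) are a two-line computation (ours):

```python
q12 = 0.5 * 33.0 / 100_000    # in-situ incidence, converted to a 6-month scale
q13 = 0.5 * 128.5 / 100_000   # invasive incidence, converted to a 6-month scale
print(q12, q13, q13 / q12)    # ratio ≈ 3.9, consistent with the 20-80 split
```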
State 2 (undiagnoised in-situ cancer). We are interested in (q 22 , q 23 , q 27 ) (if cancer is not detected) and (q 24 , q 25 , q 26 ,q 27 ) (if cancer is detected). First, consider (q 22 , q 23 , q 27 ). Table 4.13 of NIH (2020) and Page 26 of UWBCS (2013) imply there is no death from in-situ cancer: q 27 = 0. Haugh & Lacedelli (2019) assumed q 23 to equal the invasive incidence rate q 13 and so do we:
q_23 = q_13
q_22 = 1 − q_23 − q_27 = 1 − q_13.
Second, consider (q_24, q_25, q_26, q̄_27). As q_27 = 0 and we expect q̄_27 ≤ q_27 (recall comment #4 above), we set q̄_27 = 0.
As all in-situ cancer patients survive (if treated), no one transitions to invasive (if in-situ detected):
q 25 = 0.
Finally, we have q 24 + q 26 = 1. The split between q 24 and q 26 is irrelevant in terms of the patient dying or not (all will survive as there is no positive probability path from state 4 to death; this will become clear when we discuss state 4 below).
State 3 (undiagnoised invasive cancer). We are interested in (q 33 , q 37 ) (if cancer is not detected) and (q 35 , q 36 ,q 37 ) (if cancer is detected). First, consider (q 33 , q 37 ). q 37 is the probability of dying from invasive breast cancer (under no treatment). According to Johnstone et al. (2000), the 5-year and 10-year survival rates for invasive breast cancer patients (under no treatment) are 18.4% and 3.6%, respectively. On calibrating to 5-year rate, we get (1 − q 37 ) 10 = 0.184, which implies q 37 ≈ 15.6% (note that we use "10" in the exponent since our time periods correspond to 6 months and and hence, 5 years correspond to 10 periods). Similarly, on calibrating to 10-year rate, we get (1 − q 37 ) 20 = 0.036 implies q 37 ≈ 15.3%. The two calibrations are consistent with each other (lending evidence to time-invariance). Minimizing sum of squared errors over the two data points, i.e., min q37∈[0,1] {((1 − q 37 ) 10 − 0.184) 2 + ((1 − q 37 ) 20 − 0.036) 2 }, gives the following estimate:
q 37 = 0.1554.
Naturally, we have
q 33 = 1 − q 37 .
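The one-dimensional least-squares calibration of q_37 is easy to reproduce by a grid search (a sketch of ours; NumPy assumed):

```python
import numpy as np

q = np.linspace(0.0, 1.0, 1_000_001)                  # grid over [0, 1]
sse = ((1 - q)**10 - 0.184)**2 + ((1 - q)**20 - 0.036)**2
print(q[np.argmin(sse)])                               # ≈ 0.1554
```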
Second, consider (q_35, q_36, q̄_37). q_36 and q̄_37 are the probabilities of recovering and dying from invasive breast cancer (under treatment). Table 4.14 of NIH (2020) has various survival rates we can use to calibrate. We calibrate using the 10 data points corresponding to the year 2007 (see Figure 9):

q_36 = 0.0459
q̄_37 = 0.0113.

As a sanity check, note that q̄_37 < q_37. Finally,

q_35 = 1 − q_36 − q̄_37.

Figure 9. Calibration of (q_36, q̄_37). Under our (time-invariant) Markov model, with a starting state of invasive breast cancer (under treatment), the survival rate after x years equals 1 − q̄_37 Σ_{i=0}^{2x−1} (1 − q_36 − q̄_37)^i. Minimizing the sum of squared errors (on the 10 blue data points in the plot) over (q_36, q̄_37) gives us an estimate of (0.0459, 0.0113). The prediction using our fit is shown via the black curve.
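The two-dimensional fit in Figure 9 can be reproduced similarly with the survival formula from the caption. The 10 NIH survival-rate data points are not reproduced in the text, so the data values below are placeholders to be replaced with the points from Table 4.14 of NIH (2020); the sketch is ours.

```python
import numpy as np

def survival(years, q36, qbar37):
    # P(alive after `years`) = 1 - qbar37 * sum_{i=0}^{2*years-1} (1 - q36 - qbar37)^i
    r = 1 - q36 - qbar37
    return 1 - qbar37 * (1 - r**(2 * years)) / (1 - r)

# Placeholder data: (years, observed survival rate); substitute the 10 NIH points.
data = [(1, 0.98), (2, 0.96), (3, 0.94), (4, 0.92), (5, 0.90)]

grid = np.linspace(0.0, 0.2, 401)
best = min(((sum((survival(y, a, b) - s)**2 for y, s in data), a, b)
            for a in grid for b in grid[1:]), key=lambda t: t[0])
print(best[1:])   # fitted (q36, qbar37); the paper reports (0.0459, 0.0113)
```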
State 4 (diagnosed in-situ cancer). We are interested in (q_44, q_45, q_46, q_47). Under our Markov model (which by definition is "memoryless"), it seems reasonable to set (q_44, q_45, q_46, q_47) = (q_24, q_25, q_26, q̄_27).

State 5 (diagnosed invasive cancer). We are interested in (q_55, q_56, q_57). Under our Markov model, it seems reasonable to set (q_55, q_56, q_57) = (q_35, q_36, q̄_37).
States 6 (recovery) and 7 (death). These two states are absorbing and hence, q 66 = q 77 = 1.
E.1.3. EMISSION DISTRIBUTION E
We have E := [e_{hxi}]_{h,x,i} with e_{hxi} := P(O_t = i | H_t = h, X_t = x). Before discussing the calibration, we discuss the sparse structure of E. To do so, we define the matrix E(x) := [e_{hxi}]_{hi} for each action x (so that each row sums to 1) and observe that we have the following structure:

E(0): row 1 has e_12 in column 2 and 1 − e_12 in column 3; row 2 has 1 − e_24 in column 2 and e_24 in column 4; row 3 has 1 − e_35 in column 2 and e_35 in column 5; rows 4-7 each have a single 1 on the diagonal (columns 4-7).
E(1): rows 1-3 each have a single 1 in column 1; rows 4-7 each have a single 1 on the diagonal.

A few comments are in order. First, for x = 1 (no mammogram screening), the emission matrix E(1) is extremely sparse with entries in {0, 1}. For instance, when the hidden state equals 1 (healthy), 2 (undiagnosed in-situ), or 3 (undiagnosed invasive), we observe no signal (emission equals 1) w.p. 1. When the hidden state equals 4, 5, 6, or 7, we naturally observe the same emission w.p. 1.
Second, for x = 0 (screening), the emission matrix E(0) is quite sparse as well. If the patient is healthy (row 1), then the test result is negative (true negative) w.p. e_12 and positive (false positive) w.p. 1 − e_12. When the patient has undiagnosed in-situ cancer (row 2), it is detected (true positive) w.p. e_24 and missed (false negative) w.p. 1 − e_24. The parameter e_35 has the same interpretation as e_24 but for invasive cancer. As for x = 1, when the hidden state equals 4, 5, 6, or 7, we observe the same emission w.p. 1.
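Since E is fully determined by (e_12, e_24, e_35) and the 0-1 pattern just described, it is straightforward to materialize. A sketch (ours; NumPy assumed; rows and columns are 0-indexed in code, 1-indexed in the text):

```python
import numpy as np

def emission_matrices(e12=0.9, e24=0.8, e35=0.8):
    E0 = np.zeros((7, 7))                  # screening (x = 0)
    E0[0, 1], E0[0, 2] = e12, 1 - e12      # healthy: true negative / false positive
    E0[1, 1], E0[1, 3] = 1 - e24, e24      # in-situ: missed / detected
    E0[2, 1], E0[2, 4] = 1 - e35, e35      # invasive: missed / detected
    E1 = np.zeros((7, 7))                  # no screening (x = 1)
    E1[:3, 0] = 1                          # states 1-3 emit "no signal"
    for h in range(3, 7):                  # states 4-7 emit themselves
        E0[h, h] = E1[h, h] = 1
    return E0, E1

E0, E1 = emission_matrices()
assert np.allclose(E0.sum(axis=1), 1) and np.allclose(E1.sum(axis=1), 1)
```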
Having discussed the structure of E, we now calibrate it to real data. There are 3 parameters: e_12, e_24, and e_35. All of them can be age specific but we ignore that. e_12 is the specificity of the mammogram screening (i.e., probability of a true negative) and we calibrate it using Table 3 of Ayer et al. (2012): e_12 = 0.9. e_24 is the in-situ sensitivity (i.e., probability of a true positive) and we calibrate it using Table 3 of Ayer et al. (2012): e_24 = 0.8. Finally, e_35 is the invasive sensitivity and, following Ayer et al. (2012), we set e_35 = e_24.

E.2. Reducing the Number of Joint Decision Variables by Exploiting Sparsity

Recall from §4.3 the following setup, which we repeat for convenience. Let k ≡ (h, x) and m ≡ (h, i) so that

O_k ≡ O_{hx}, e_{ki} ≡ e_{hxi}
H_m ≡ H_{hi}, q_{mh′} ≡ q_{hih′}.

Consider the θ_{1,...,K} decision variables for now. Since θ_{1,...,K} represents the joint PMF of the random variables [O_k]_k where k ≡ (h, x), we first understand which (h, x) pairs are valid (as opposed to naively considering all (h, x) ∈ H × X). The second step trims down the range of each O_{hx}. We document both steps in Table 1. Multiplying all of the 14 cardinalities (last column in Table 1) implies that there are only eight θ_{1,...,K} decision variables that need to be considered. This is in contrast to the upper bound of |O|^{|H||X|} = 7^14.
The same logic applies to the π_{1,...,M} decision variables. In fact, for the π_{1,...,M} decision variables, even the first step proves useful since not all (h, i) pairs are valid. For instance, if h = 1, then i ∉ {4, 5, 6, 7}. In particular, the first step allows us to trim down the number of [H_{hi}]_{h,i} random variables from |H||O| = 49 to 13. The second step trims down the range of each of the 13 random variables. We document this in Table 2 and are able to reduce the number of π_{1,...,M} decision variables from |H|^{|H||O|} = 7^49 to 15,552 (which equals the product of the cardinalities presented in the last column).
E.3. Details on the PM Constraints

For convenience, suppose the patient has in-situ cancer in period t which is not detected, but the patient's state remains at in-situ in period t + 1. Then, in the counterfactual world, if the cancer is detected in period t, PM would require that the cancer can not be worse than in-situ in period t + 1, i.e., P(H_{h̃ĩ} = h̃′ | H_{hi} = h′) = 0 for h = 2, i ∈ {1, 2}, h′ = 2, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {5, 7}. There can be multiple such cases to consider and we can enforce all the PM constraints by setting the corresponding π_{h̃ĩ,hi}(h̃′, h′) variables equal to 0, as π_{h̃ĩ,hi}(h̃′, h′) = P(H_{h̃ĩ} = h̃′, H_{hi} = h′).

Hence, to provide details on which PM constraints we enforce, it suffices to enumerate the (h̃, ĩ, h̃′, h, i, h′) combinations for which we set the π_{h̃ĩ,hi}(h̃′, h′) variables equal to 0. To do so, we iterate over each state h ∈ {1, . . . , 7}. (Note that for PM, there are no (h̃, x̃, ĩ, h, x, i) combinations for which we set the θ_{h̃x̃,hx}(ĩ, i) variables equal to 0.)

State h = 1 (healthy). We enforce PM for the following combinations:
• If (h, i, h′) equals (healthy, whatever emission, healthy), then the counterfactual state h̃′ can not be in-situ, invasive, or death if h̃ is healthy. That is, h = 1, i ∈ O, h′ = 1, h̃ = 1, ĩ ∈ O, and h̃′ ∈ {2, 3, 4, 5, 7}.
• If (h, i, h′) equals (healthy, whatever emission, in-situ), then the counterfactual state h̃′ can not be healthy, invasive, or death if h̃ is healthy. That is, h = 1, i ∈ O, h′ = 2, h̃ = 1, ĩ ∈ O, and h̃′ ∈ {1, 3, 5, 7}.
• If (h, i, h′) equals (healthy, whatever emission, invasive), then the counterfactual state h̃′ can not be healthy, in-situ, or death if h̃ is healthy. That is, h = 1, i ∈ O, h′ = 3, h̃ = 1, ĩ ∈ O, and h̃′ ∈ {1, 2, 4, 7}.
• If (h, i, h′) equals (healthy, whatever emission, death), then the counterfactual state h̃′ can not be healthy, in-situ, or invasive if h̃ is healthy. That is, h = 1, i ∈ O, h′ = 7, h̃ = 1, ĩ ∈ O, and h̃′ ∈ {1, 2, 3, 4, 5}.

State h = 2 (undiagnosed in-situ). We enforce PM for the following combinations:
• If (h, i, h′) equals (in-situ, undetected, in-situ), then the counterfactual state h̃′ can not be invasive or death if h̃ is healthy or in-situ. That is, h = 2, i ∈ {1, 2, 3}, h′ = 2, h̃ ∈ {1, 2, 4}, ĩ ∈ O, and h̃′ ∈ {3, 5, 7}.
• If (h, i, h′) equals (in-situ, detected, in-situ), then the counterfactual state h̃′ can not be invasive or death if h̃ is in-situ and detected. That is, h = 2, i = 4, h′ = 4, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {3, 5, 7}.
• If (h, i, h′) equals (in-situ, undetected, invasive), then the counterfactual state h̃′ can not be death if h̃ is healthy or in-situ. That is, h = 2, i ∈ {1, 2, 3}, h′ = 3, h̃ ∈ {1, 2, 4}, ĩ ∈ O, and h̃′ = 7.
• If (h, i, h′) equals (in-situ, detected, invasive), then the counterfactual state h̃′ can not be death if h̃ is in-situ and detected. That is, h = 2, i = 4, h′ = 5, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ = 7.
• If (h, i, h′) equals (in-situ, detected, recovered), then the counterfactual state h̃′ can not be in-situ, invasive, or death if h̃ is in-situ and detected. That is, h = 2, i = 4, h′ = 6, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {2, 3, 4, 5, 7}.
• If (h, i, h′) equals (in-situ, undetected, death), then the counterfactual state h̃′ can not be in-situ, invasive, or recovered if h̃ is in-situ and undetected. That is, h = 2, i ∈ {1, 2, 3}, h′ = 7, h̃ = 2, ĩ ∈ {1, 2, 3}, and h̃′ ∈ {2, 3, 4, 5, 6}.
• If (h, i, h′) equals (in-situ, detected, death), then the counterfactual state h̃′ can not be in-situ, invasive, or recovered if h̃ is in-situ and detected. That is, h = 2, i = 4, h′ = 7, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {2, 3, 4, 5, 6}.

State h = 3 (undiagnosed invasive). We enforce PM for the following combinations:
• If (h, i, h′) equals (invasive, undetected, invasive), then the counterfactual state h̃′ can not be death if h̃ is healthy, in-situ, or invasive. That is, h = 3, i ∈ {1, 2, 3}, h′ = 3, h̃ ∈ {1, 2, 3, 4, 5}, ĩ ∈ O, and h̃′ = 7.
• If (h, i, h′) equals (invasive, detected, invasive), then the counterfactual state h̃′ can not be death if h̃ is invasive and detected. That is, h = 3, i = 5, h′ = 5, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ = 7.
• If (h, i, h′) equals (invasive, detected, recovered), then the counterfactual state h̃′ can not be invasive or death if h̃ is invasive and detected. That is, h = 3, i = 5, h′ = 6, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ ∈ {3, 5, 7}.
• If (h, i, h′) equals (invasive, undetected, death), then the counterfactual state h̃′ can not be invasive or recovered if h̃ is invasive and undetected. That is, h = 3, i ∈ {1, 2, 3}, h′ = 7, h̃ ∈ {3, 5}, ĩ ∈ {1, 2, 3}, and h̃′ ∈ {3, 5, 6}.
• If (h, i, h′) equals (invasive, detected, death), then the counterfactual state h̃′ can not be invasive or recovered if h̃ is invasive and detected. That is, h = 3, i = 5, h′ = 7, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ ∈ {3, 5, 6}.

State h = 4 (diagnosed in-situ). We enforce PM for the following combinations:
• If (h, i, h′) equals (in-situ, detected, in-situ), then the counterfactual state h̃′ can not be invasive, recovered, or death if h̃ is in-situ and detected. That is, h = 4, i = 4, h′ = 4, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {5, 6, 7}.
• If (h, i, h′) equals (in-situ, detected, invasive), then the counterfactual state h̃′ can not be in-situ, recovered, or death if h̃ is in-situ and detected. That is, h = 4, i = 4, h′ = 5, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {4, 6, 7}.
• If (h, i, h′) equals (in-situ, detected, recovery), then the counterfactual state h̃′ can not be in-situ, invasive, or death if h̃ is in-situ and detected. That is, h = 4, i = 4, h′ = 6, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {4, 5, 7}.
• If (h, i, h′) equals (in-situ, detected, death), then the counterfactual state h̃′ can not be in-situ, invasive, or recovered if h̃ is in-situ and detected. That is, h = 4, i = 4, h′ = 7, h̃ ∈ {2, 4}, ĩ = 4, and h̃′ ∈ {4, 5, 6}.

State h = 5 (diagnosed invasive). We enforce PM for the following combinations:
• If (h, i, h′) equals (invasive, detected, invasive), then the counterfactual state h̃′ can not be recovered or death if h̃ is invasive and detected. That is, h = 5, i = 5, h′ = 5, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ ∈ {6, 7}.
• If (h, i, h′) equals (invasive, detected, recovery), then the counterfactual state h̃′ can not be invasive or death if h̃ is invasive and detected. That is, h = 5, i = 5, h′ = 6, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ ∈ {5, 7}.
• If (h, i, h′) equals (invasive, detected, death), then the counterfactual state h̃′ can not be invasive or recovery if h̃ is invasive and detected. That is, h = 5, i = 5, h′ = 7, h̃ ∈ {3, 5}, ĩ = 5, and h̃′ ∈ {5, 6}.

State h = 6 (recovery). No combination for which we enforce PM.

State h = 7 (death). No combination for which we enforce PM.
E.4. Details on the Comonotonic Copula
We discussed the counterfactual simulation under the comonotonic copula for a general dynamic latent-state model in §C.2.
In this section, we connect that discussion to the breast cancer application. To do so, it suffices to define the rank functions r H (·) (for states) and r O (·) (for emissions). For states, there are two possible orderings that seem "natural" (from "best" to "worst"):
• (1, 6, 4, 2, 5, 3, 7)
• (1, 6, 4, 5, 2, 3, 7).
Recalling the Q(i) notation from §E.1, observe that columns 2 and 5 are never "active" simultaneously in any row of Q(i) (for any i). Hence, the choice of ordering (between the two orderings above) will not matter and we can pick any one. Suppose we pick the first one. Then, this ordering defines the rank function. For example, r_H(6) = 2, i.e., the rank of state 6 equals 2. For the inverse function, r_H^{-1}(2) = 6. It is unclear how to define r_O(·) for the breast cancer application but, as it turns out, it does not matter. To see why, consider the generic path of interest (from (1)).
• If o t = 4, then h t (b) = 2 and h t (b) ∈ {2, 4, 6} (cf. pathwise monotonicity). Finally, for t ≥ τ e + 2, we know h t (b) ≥ 4 and that the corresponding rows in E(0) are 0-1. Hence, the posterior of V t equals the prior and we can sample o t (b) using the categorical distribution over the probability vector [e ht(b) xti ] i . By construction, the comonotonic copula will obey pathwise monotonicity and hence, will ensure that in the counterfactual world, patient does not die before period T , i.e., H T −1 = 7 w.p. 1.
Figure 1. A dynamic latent-state model. States H_{1:T} are hidden (red). Emissions O_{1:T} are observed (blue). X_{1:T} represents the policy (observed). (We use the notation X_{1:T} := (X_t)_{t=1}^T.) The model M comprises three primitives: M ≡ (p, E, Q), where p := [p_h]_h denotes the initial state distribution with p_h := P(H_1 = h) for all h, E := [e_{hxi}]_{h,x,i}, and Q := [q_{hih′}]_{h,i,h′}.

Figure 2. The SCM underlying the dynamic latent-state model. The only difference between the SCM here and Figure 1 is the addition of (grey) exogenous noise nodes [U_t, V_t]_t.
Figure 3. PN results for path 1 as we vary T ∈ {4, . . . , 10}. Observe that the LB, LB(CS), and LB(PM) curves coincide (the lowest curve in each figure).
Figure 4. The SCM underlying the dynamic latent-state model.
Figure 5. SCM for Example 1 with the comonotonic copula; hence, the noise node is a scalar U ∼ Unif[0, 1], as opposed to a vector U. The structural equation is Y = f(X, U), which we denote by f_X(U), the inverse transform function corresponding to the random variable Y_X. That is, f_0(u) = bad, better, and best if u ∈ [0, 0.2], u ∈ [0.2, 0.5], and u ∈ [0.5, 1], respectively. Similarly, f_1(u) = bad, better, and best if u ∈ [0, 0.2], u ∈ [0.2, 0.4], and u ∈ [0.4, 1], respectively.

Figure 6. The SCM with the comonotonic copula. The key change is that the noise nodes are now scalars (U_t, V_t), as opposed to vectors (U_t, V_t).
Figure 7. Evaluating the computational performance of our ideas in §D.1 and §D.2 on path 1 from §5. All results are averaged over the 20 seeds we use. In sub-figure (b), the UB and LB curves for the "joint" optimization (black color) are the same as the ones in Figure 3(a), and only go as far as T = 10 (since we run into memory issues for T > 10). Furthermore, to avoid clutter, we do not show the standard deviation bars in sub-figure (b) and note that the maximum standard deviation value is less than 0.01. To be clear, the compute times in sub-figure (a) correspond to the blue curves in sub-figure (b) ("pairwise"), which are the focus of this section.
Figure 8. PN bounds obtained when we embed CS and PM constraints in the pairwise optimization. All results are computed as the average of bounds obtained from 20 different seeds, with each seed being used to generate B = 100 paths. Furthermore, to avoid clutter, we do not show the standard deviation bars and note that the maximum standard deviation value is less than 0.01.
E.5. Results for Path 2

Figure 10. PN results for path 2 as we vary T ∈ {4, . . . , 10} (analogous to Figure 3 for path 1). As in Figure 3, observe that the LB, LB(CS), and LB(PM) curves coincide (the lowest curve in each figure). Further, the UB(CS) and UB(PM) curves coincide for path 2.
bad" is a feasible counterfactual outcome under CS.Even if CS is appropriate, its current operationalization
has a key limitation. In particular, instead of considering
all possible structural causal models (SCMs) that obey CS,
both Oberst & Sontag (2019) and Tsirtsis et al. (2021) pick
one SCM via the Gumbel-max distribution. Ideally, one
should characterize the space of all SCMs obeying CS, and
map that space into appropriate bounds on the CQI.
We present our optimization-based framework to perform
counterfactual analysis next. Our framework does not rely
on CS. However, if CS is deemed appropriate for one or
more components of the SCM (see §4), our approach al-
lows us to encode CS via linear constraints in the optimiza-
tion and characterize the entire space of solutions that obey
CS. We do this in §5 to negatively answer the open question
of Oberst & Sontag (2019) regarding whether Gumbel-max
obeys CS uniquely. Further, if enforcing the so-called path-
wise monotonicity (PM) is desirable, i.e., ensuring the coun-
terfactual outcome does not worsen under a better interven-
tion (as we assumed in Example 1), then we can embed it
in our optimization via linear constraints as well.
The exogenous node V_t := [V_{thx}]_{h,x} comprises |H||X| noise variables. We model the exogenous node as a vector (as opposed to a scalar) to capture the fact that each O_{thx} := O_t | (H_t = h, X_t = x) defines a distinct random variable for all (h, x). Moreover, these random variables might be independent, or they might display positive or negative dependence. One way to handle this is to associate each O_{thx} with a distinct noise variable V_{thx}. The dependence structure among these noise variables [V_{thx}]_{h,x} is then what determines the dependence structure among [O_{thx}]_{h,x}; the structural equation is (3a).
References

Anjos, M. F. and Lasserre, J. B. Handbook on Semidefinite, Conic and Polynomial Optimization, volume 166. Springer Science & Business Media, 2011.
Ayer, T., Alagoz, O., and Stout, N. K. A POMDP approach to personalize mammography screening decisions. Operations Research, 60(5):1019-1034, 2012.
Balke, A. and Pearl, J. Counterfactual probabilities: Computational methods, bounds and applications. In Uncertainty Proceedings, pp. 46-54, San Francisco (CA), 1994.
Barber, D. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.
Buesing, L., Weber, T., Zwols, Y., Heess, N., Racaniere, S., Guez, A., and Lespiau, J.-B. Woulda, coulda, shoulda: Counterfactually-guided policy search. In International Conference on Learning Representations, 2019.
Cai, Z., Kuroki, M., Pearl, J., and Tian, J. Bounds on direct effects in the presence of confounded intermediate variables. Biometrics, 64(3):695-701, 2008.
Haugh, M. B. and Lacedelli, O. R. Information relaxation bounds for partially observed Markov decision processes. IEEE Transactions on Automatic Control, 65(8):3256-3271, 2019.
IBM. ILOG CPLEX Optimizer, Version 12.8. 2017.
Johnstone, P. A., Norton, M. S., and Riffenburgh, R. H. Survival of patients with untreated breast cancer. Journal of Surgical Oncology, 73(4):273-277, 2000.
Kaufman, S., Kaufman, J., MacLenose, R., Greenland, S., and Poole, C. Improved estimation of controlled direct effects in the presence of unmeasured confounding of intermediate variables. Statistics in Medicine, 25:1683-1702, 2005.
Lorberbom, G., Johnson, D. D., Maddison, C. J., Tarlow, D., and Hazan, T. Learning generalized Gumbel-max causal mechanisms. In Advances in Neural Information Processing Systems, volume 34, pp. 26792-26803, 2021.
MATLAB. Version 9.10.0 (R2021b). The MathWorks Inc., Natick, Massachusetts, 2021.
McNeil, A. J., Frey, R., and Embrechts, P. Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, 2nd edition, 2015.
Mueller, S., Li, A., and Pearl, J. Causes of effects: Learning individual responses from population data. arXiv, 2021.
Nelsen, R. An Introduction to Copulas. Springer, 2nd edition, 2006.
NIH. SEER Cancer Statistics Review (CSR) 1975-2017. 2020. URL https://seer.cancer.gov/archive/csr/1975_2017/resu
Oberst, M. and Sontag, D. Counterfactual off-policy evaluation with Gumbel-max structural causal models. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4881-4890. PMLR, 2019.
Pearl, J. Causal inference in statistics: An overview. Statistics Surveys, 3:96-146, 2009a.
Pearl, J. Causality. Cambridge University Press, 2nd edition, 2009b.
Pearl, J. and Mackenzie, D. The Book of Why. Penguin Books, 2018.
Sahinidis, N. V. BARON 2023.1.5: Global Optimization of Mixed-Integer Nonlinear Programs, User's Manual, 2023.
Shapiro, A., Dentcheva, D., and Ruszczynski, A. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2021.
Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229-231, 1959.
Sprague, B. L. and Trentham-Dietz, A. Prevalence of breast carcinoma in situ in the United States. JAMA: The Journal of the American Medical Association, 302(8):846, 2009.
Tawarmalani, M. and Sahinidis, N. V. A polyhedral branch-and-cut approach to global optimization. Mathematical Programming, 103(2):225-249, 2005.
Tian, J. and Pearl, J. Probabilities of causation: Bounds and identification. Annals of Mathematics and Artificial Intelligence, 28:287-313, 2000.
Tsirtsis, S., De, A., and Rodriguez, M. Counterfactual explanations in sequential decision making under uncertainty. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 30127-30139. Curran Associates, Inc., 2021.
UWBCS. University of Wisconsin Breast Cancer Simulation Model. 2013. URL https://resources.cisnet.cancer.gov/registry/packages/uwbcs-wisconsin/.
Table 1. Range of the 14 random variables [O_{hx}]_{h,x} corresponding to θ_{1,...,K}.

State h | Policy x | Range of O_{hx} | Range cardinality
1 | 0 | {2, 3} | 2
1 | 1 | {1} | 1
2 | 0 | {2, 4} | 2
2 | 1 | {1} | 1
3 | 0 | {2, 5} | 2
3 | 1 | {1} | 1
4 | 0 | {4} | 1
4 | 1 | {4} | 1
5 | 0 | {5} | 1
5 | 1 | {5} | 1
6 | 0 | {6} | 1
6 | 1 | {6} | 1
7 | 0 | {7} | 1
7 | 1 | {7} | 1
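The eight-variable count follows directly from the last column of Table 1 (a one-line check, ours):

```python
import math
cards = [2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # last column of Table 1
print(math.prod(cards))  # 8, vs. the naive upper bound 7**14
```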
Table 2. Range of the 13 random variables [H_{hi}]_{h,i} corresponding to π_{1,...,M}. Only 13 (h, i) pairs are shown as the other 36 are not valid.
Results. The results for path 1 are displayed in Figure 3 (and for path 2 in Figure 10 of §E.5), where we show the PN bounds as we vary T. In addition to the bounds (PN_lb, PN_ub) computed via our baseline optimization (UB and LB), we show the bounds obtained when we encode CS (UB(CS) and LB(CS)) and PM (UB(PM) and LB(PM)) (see Footnote 7).

Footnotes:
• The key feature distinguishing a static model from a dynamic model with T periods, say, is that the single-period structure is repeated T times. As we shall see, our framework takes advantage of this repeated structure in several ways.
• Instead of using the "twin networks" approach (Pearl, 2009b), we perform the counterfactual analysis directly by leveraging the structure in our model.
• O_t | (H_t = h, X_t = x) is time-independent and hence, we use the notation O_{hx} instead of O_{thx}. The same logic holds for H_{hi}.
• In §C, we also discuss specific copulas (e.g., independence and comonotonic copulas) that can be used to provide benchmark values of PN.
• (Footnote 5) In fact, given (5) and (6), we do not need to define all pairwise marginals as decision variables but only for "k < ℓ" and "m < n". This is because if an optimization has two decision variables x and y and the constraint x = y, we can eliminate y and the constraint x = y by replacing y with x everywhere in the optimization.
• (Footnote 7) Details on the PM constraints for breast cancer are in §E.3.
• It may be more likely that we only have domain-specific knowledge over sub-components of the copulas (which are themselves copulas).
• (Footnote 11) Note that we focus on the maximization problem from (8) but the same holds for the minimization counterpart. All we need to do is simply change the "max" to a "min" in the objective function (24a).
• (Footnote 12) To be pedantic, we could have added a "t" super-script in P_{M̃(b)}(h̃_t) and used the notation P^t_{M̃(b)}(h̃_t) instead. However, we did not do so earlier since this dependence on t was implicitly understood to exist, and adding this extra super-script felt unnecessary.
• (Footnote 13) Note that the pairwise optimization is identical to the joint optimization (8) but with the following two differences: (a) F replaced by F′ and (b) joint decision variables not defined.

Acknowledgements. We thank the ICML review team, Madhumitha Shridharan, and Jim Smith for taking the time to read the paper and providing very useful feedback. We also thank Nick Sahinidis for his support with BARON-related issues.
[
"Vector Boson Pair Production in Hadronic Collisions at O(α s ): Lepton Correlations and Anomalous Couplings",
"Vector Boson Pair Production in Hadronic Collisions at O(α s ): Lepton Correlations and Anomalous Couplings"
] | [
"L Dixon \nStanford Linear Accelerator Center\nDepartment of Physics\nStanford University\n94309StanfordCAUSA\n",
"Theoretical Physics, ETHZ Kunszt \nUniversity of Durham\nDH1 3LEDurhamEngland\n",
"Switzerland A Zürich \nUniversity of Durham\nDH1 3LEDurhamEngland\n",
"Signer \nUniversity of Durham\nDH1 3LEDurhamEngland\n"
] | [
"Stanford Linear Accelerator Center\nDepartment of Physics\nStanford University\n94309StanfordCAUSA",
"University of Durham\nDH1 3LEDurhamEngland",
"University of Durham\nDH1 3LEDurhamEngland",
"University of Durham\nDH1 3LEDurhamEngland"
] | [] | We present cross sections for production of electroweak vector boson pairs, W W , W Z and ZZ, in pp̄ and pp collisions, at next-to-leading order in α s . We treat the leptonic decays of the bosons in the narrow-width approximation, but retain all spin information via decay angle correlations. We also include the effects of W W Z and W W γ anomalous couplings. | 10.1103/physrevd.60.114037 | [
"https://export.arxiv.org/pdf/hep-ph/9907305v1.pdf"
] | 15,419,963 | hep-ph/9907305 | 6c6a790808951abd10feeefa54312df0e5dacc76 |
Vector Boson Pair Production in Hadronic Collisions at O(α s ): Lepton Correlations and Anomalous Couplings
July 1999
L Dixon
Stanford Linear Accelerator Center
Department of Physics
Stanford University
94309StanfordCAUSA
Z Kunszt
Theoretical Physics, ETH
Zürich, Switzerland
A Signer
University of Durham
DH1 3LEDurhamEngland
Vector Boson Pair Production in Hadronic Collisions at O(α s ): Lepton Correlations and Anomalous Couplings
July 1999. Submitted to Physical Review D
We present cross sections for production of electroweak vector boson pairs, W W , W Z and ZZ, in pp̄ and pp collisions, at next-to-leading order in α s . We treat the leptonic decays of the bosons in the narrow-width approximation, but retain all spin information via decay angle correlations. We also include the effects of W W Z and W W γ anomalous couplings.
Introduction
At the core of the electroweak Standard Model is its invariance under the nonabelian gauge group SU(2) × U(1). Many aspects of this gauge structure, such as vector boson masses and couplings to fermions, have already been tested with high precision in a variety of experiments. However, the nonabelian self-interactions of vector bosons - in particular, triple gauge-boson couplings - are just beginning to be studied directly, via vector-boson pair production in e + e − annihilation at LEP2 at CERN, and in pp̄ collisions at Run I of the Fermilab Tevatron. Although thousands of W + W − pairs have been collected at LEP2, they have all been produced at relatively modest values of the pair invariant mass, M_WW ≲ 200 GeV. On the other hand, if there are anomalous (non-Standard Model) vector-boson self-couplings, their effects are expected to grow with invariant mass, so it is useful to study vector-boson pair production at the highest possible energies. Vector boson pairs also provide a background for other types of physics. If the Higgs boson is heavy enough it will decay primarily into W + W − and ZZ pairs [1]. Exotic Higgs sectors can have substantial branching ratios for charged Higgs bosons to decay to W Z pairs [2]. Leptonically decaying W Z pairs, W + Z → ℓ + νℓ ′+ ℓ ′− , in which the negatively-charged lepton is lost, form a background to a signal for strong W W scattering associated with the mode W + W + → ℓ + νℓ ′+ ν ′ [3]. Finally, a prime signal for supersymmetry at hadron colliders is the production of three charged leptons and missing transverse momentum [4]; a background for this process is the production of a W plus a (virtual) Z or γ.
In the near future, hadron colliders will be the primary source of vector boson pairs with large invariant mass. Run II of the upgraded Tevatron should yield a data set roughly 20 times larger than that from Run I, including 100-200 leptonically decaying W + W − pairs. The Large Hadron Collider (LHC) at CERN promises to increase the sample by another factor of 50 beyond Run II. With this increase in statistics, refined Standard Model predictions are essential. The leading QCD corrections (O(α s )) are significant (generally of order tens of percent), and hence are required to get a precise estimate of the overall production cross section. Also, experiments do not detect vector bosons, but only those leptonic decay products that fall within the experimental acceptance. 1 Spin correlations between vector bosons are reflected in kinematic distributions of leptonic momenta, which in turn influence the number of events surviving experimental cuts. In order to properly take into account the effects of cuts on the cross section, as well as to study the more detailed (lepton) kinematic distributions permitted by higher statistics, it is important to treat the vector boson decays properly, including all spin correlations.
Hadronic production of vector boson pairs in the Standard Model has already been studied extensively. The Born-level, or leading-order (LO) cross sections for W + W − , W ± Z and ZZ pair production via quark annihilation were computed twenty years ago [6]. These cross sections were evaluated by treating the W and Z as stable particles and summing over their polarization states, using completeness relations to simplify the sum; thus spin and decay correlations were neglected. For the spin-summed production cross section, the next-to-leading order (NLO, or O(α s )) QCD corrections were obtained for W + W − final states in refs. [7,8], for W ± Z in refs. [9,10], and for ZZ in refs. [11,12].
The simplest way to include the effects of vector-boson spin and decay correlations is to compute directly the matrix elements for the production of the four final-state fermions. In the narrow-width approximation, only 'doubly-resonant' Feynman diagrams have to be considered - the same class of diagrams that gives rise to the on-shell spin-summed cross section. Because both the outgoing fermions and the initial-state partons are essentially massless, and because their couplings to vector bosons are chiral, it is very convenient to use a helicity basis for the fermions. The tree-level helicity amplitudes for massive vector-boson pair production and decay into leptons were first computed in ref. [13], which also demonstrated the significance of decay-angle correlations. This same approach was carried out at order α s in ref. [14]. At O(α s ), there are real corrections, consisting of tree graphs with an additional gluon in either the initial or final state, and virtual corrections, consisting of one-loop amplitudes that interfere with the Born amplitude. However, the full one-loop amplitudes including leptonic decays were unavailable until recently, so ref. [14] included decay correlations everywhere except for the finite part of the virtual contribution, for which spin-summed formulae were used. 2
Recently, an update to vector boson pair production has been presented in ref. [17]. The corresponding Monte Carlo program, MCFM, relies on the same amplitudes [15] and, therefore, also includes all spin correlations exactly to next-to-leading order in α s . MCFM is more complete than the program described here, in the sense that the narrow-width approximation is not assumed, and singly-resonant diagrams are also included. These additions are expected to shift the resonance-dominated di-vector boson cross sections by the order of several percent. Their effects are obviously much bigger in the off-resonant regions important for studies of Standard Model backgrounds to new physics.
In the present paper, we first compute the di-vector boson cross sections in the Standard Model, both without and with a realistic set of experimental cuts. For W + W − production, a jet veto is used by experimentalists to suppress backgrounds; we study the effect of this veto on the size of the cross section and its renormalization/factorization scale dependence.
Di-boson amplitudes in the Standard Model have interesting angular dependences. For example, at Born level, there is an exact radiation zero in the partonic process q_1 q̄_2 → W ± γ at cos θ = (Q_1 + Q_2)/(Q_1 − Q_2), where θ is the scattering angle of the W with respect to the direction of the quark q_1, and Q_1,2 are the quark electric charges [6]. Similarly, it has been shown that there is an approximate zero for q_1 q̄_2 → W ± Z at cos θ = (g_1^- + g_2^-)/(g_1^- − g_2^-), where g_1,2^- are the left-handed couplings of the Z boson to the quarks [18]. The exact tree-level zero for W γ is filled in somewhat by QCD radiative corrections, and also by the kinematic ambiguity associated with the undetected neutrino in W + γ → ℓ + νγ. Still it produces a dip in the distribution of a related variable, the rapidity difference y γ − y ℓ + , which should be visible at Run II [19].
Here we study the QCD corrections to the approximate W Z radiation zero. Using on-shell, spin-summed cross sections, ref. [10] computed the distribution in the rapidity difference between the W and Z bosons, ∆y W Z = |y W − y Z |, which is a boost-invariant surrogate for the center-of-mass scattering angle θ. It was found that a dip in the ∆y W Z distribution persists at O(α s ), although the dip is less pronounced than at Born level. Since the rapidity of the W cannot be determined on an event-by-event basis, we study a quantity related to ∆y W Z , but constructed purely out of charged lepton variables, and find that a dip (or at least a shoulder) is still present. However, because of the much lower W Z cross section, this measurement is considerably more challenging than the W γ case, and will probably have to wait for the LHC.
Various types of TeV-scale physics may modify vector-boson self-interactions. Without a precise knowledge of the new physics, one often parameterizes the modifications using anomalous coupling coefficients. We shall consider anomalous contributions to the W + W − Z and W + W − γ triple gauge vertices, and their effect on various distributions in W W and W Z production. Similar studies have already been carried out at order α s and including spin correlations everywhere except in the finite virtual contributions, for both W W production [20] and W Z production [21]. Here we include as well the spin correlation effects from the finite virtual contributions. This requires matrix elements beyond those in ref. [15]; however, the new matrix elements are trivial by comparison.
The remainder of the paper is organized as follows. After outlining the computational techniques in section 2, we present results for the Standard Model production of W W , W Z and ZZ pairs in section 3. We first present total cross sections for all three channels, without and then with a set of realistic kinematic cuts on the leptons. We consider both pp collisions at √ s = 2 TeV (corresponding to Run II of the Tevatron), and pp collisions at √ s = 14 TeV (the LHC). We discuss the dependence of the W W cross section on a common renormalization and factorization scale, with and without a jet veto. For the W Z channel, we study the approximate radiation zero before and after QCD corrections. In section 4 we introduce anomalous W + W − Z and W + W − γ couplings, and compute their effect on the matrix elements in the narrow-width approximation through O(α s ). We then study the effects of these couplings on a double-binned transverse energy distribution for the pair of charged leptons in W + W − production followed by leptonic decays. Finally, in section 5 we present our conclusions.
Computation
It is straightforward to implement the helicity amplitudes presented in ref. [15], and those including anomalous couplings (see section 4), in a Monte Carlo program. The tree-level and one-loop amplitudes are computed as complex numbers and the squaring, as well as the sum over helicity configurations, is done numerically. In order to cancel singularities between the real and virtual parts analytically, we use the general version of the subtraction method [22] as presented in ref. [16]. Our code is flexible enough to compute arbitrary infrared-safe quantities, apply arbitrary cuts, and add any parton distribution easily. 3 In this paper we will present results for the Tevatron Run II and the LHC. The former term refers to pp̄ scattering at √ s = 2 TeV, whereas the latter stands for pp scattering at √ s = 14 TeV. Most of our results will be presented with some 'standard cuts' which are defined as follows: We make a transverse momentum cut of p T > 20 GeV for all charged leptons. The event is required to have a minimum missing transverse momentum p miss T , which is carried off by the neutrino(s). We require p miss T > 25 GeV in the case of W -pair production and p miss T > 20 GeV in the case of W ± Z production. No p miss T cut is applied for Z-pair production. Finally, we apply some collider-dependent rapidity cuts for the charged leptons. For the Tevatron we require |η| < 1.5, whereas for the LHC |η| < 2.5.
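As an illustration of how these acceptance cuts act on generated events, the sketch below applies them to lepton three-momenta. The event layout and function names are our own illustrative choices, not part of the Fortran programs described in the footnotes.

```python
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2); for massless leptons this equals the true rapidity."""
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))

def passes_standard_cuts(charged_leptons, pt_miss, collider="tevatron", pt_miss_min=25.0):
    """charged_leptons: iterable of (px, py, pz) in GeV.
    pt_miss_min: 25 GeV for W-pair, 20 GeV for WZ, 0 (no cut) for Z-pair production."""
    eta_max = 1.5 if collider == "tevatron" else 2.5
    for px, py, pz in charged_leptons:
        if math.hypot(px, py) <= 20.0:            # p_T > 20 GeV for all charged leptons
            return False
        if abs(pseudorapidity(px, py, pz)) >= eta_max:
            return False
    return pt_miss > pt_miss_min

# Example: a W-pair event at the Tevatron with two charged leptons
print(passes_standard_cuts([(35.0, 10.0, 20.0), (-28.0, 15.0, -12.0)], pt_miss=30.0))
```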
In all results presented in this paper, we assume that the vector bosons always decay leptonically, i.e. the proper branching ratios of the vector boson decays into leptons Br(V → f f̄ ′ ) are not included. Obviously, these branching ratios depend on which final-state charged leptons are included in the analysis (electrons, muons, or both). They can easily be added at any stage, using
Br(Z → e+e−) = Br(Z → µ+µ−) = 3.37%,   Σ_{i=e,µ,τ} Br(Z → ν_i ν̄_i) = 20.1%,   (1)
Br(W+ → e+ν_e) = Br(W+ → µ+ν_µ) = 10.8%.
These ratios implicitly incorporate QCD corrections to the hadronic decay widths of the W and Z. We use two different parton distributions, MRST(ft08a) [23] and CTEQ(4M) [24], which we refer to simply as MRST and CTEQ. For both the leading and next-to-leading order results we shall use the same parton distributions (which have been obtained by a fit at next-to-leading order in α s ). The strong coupling constant is evaluated using
α_s(µ) = [α_s(M_Z)/w] [1 − (α_s(M_Z)/π)(β_1/β_0) ln(w)/w],   w = 1 − β_0 (α_s(M_Z)/π) ln(M_Z/µ),   (2)

with β_0 = (1/2)(11/3 C_A − 2/3 N_f), β_1 = (1/4)(17/3 C_A² − (5/3 C_A + C_F) N_f), C_A = N_c, C_F = (N_c² − 1)/(2N_c). The value of α_s(M_Z) is set equal to the value given in the respective parton distribution fit. Thus, we take α_s(M_Z) = 0.1175 for MRST and α_s(M_Z) = 0.116 for CTEQ. In all computations we have set the renormalization and factorization scales equal: µ_R = µ_F ≡ µ.
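Eq. (2) transcribes directly into a short script, useful for checking scale variations numerically; the function below is our own sketch under the stated inputs (N_f = 5 light flavours, the MRST value of α_s(M_Z)) and is not taken from the authors' codes.

```python
import math

def alpha_s(mu, alpha_mz=0.1175, mz=91.187, nf=5, nc=3):
    """Two-loop running coupling of eq. (2)."""
    ca = nc
    cf = (nc * nc - 1) / (2 * nc)
    beta0 = 0.5 * (11.0 / 3.0 * ca - 2.0 / 3.0 * nf)
    beta1 = 0.25 * (17.0 / 3.0 * ca**2 - (5.0 / 3.0 * ca + cf) * nf)
    w = 1.0 - beta0 * alpha_mz / math.pi * math.log(mz / mu)
    return alpha_mz / w * (1.0 - alpha_mz / math.pi * beta1 / beta0 * math.log(w) / w)

print(alpha_s(91.187))   # returns alpha_s(M_Z) = 0.1175 (w = 1, so ln w = 0)
print(alpha_s(80.33))    # the coupling at mu = M_W is slightly larger
print(alpha_s(160.66))   # and smaller at mu = 2 M_W
```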
The masses of the vector bosons have been set to M Z = 91.187 GeV and M W = 80.33 GeV. As for the coupling constants α and sin 2 θ W , we choose them in the spirit of the "improved Born approximation" [25,26] for W pair production at LEP2. We do not explicitly include any QED or electroweak radiative corrections. However, we take into account the top-quark-enhanced corrections to the relation between M Z , M W and sin 2 θ W , where the latter is defined as an effective coupling in a high-energy process, by using the definition [26]
sin²θ_W ≡ πα(M_Z) / (√2 G_F M_W²),   (3)
where G F = 1.16639 × 10 −5 GeV −2 is the Fermi constant and α(µ) the running QED coupling. For our numerical results we use α = α(M Z ) = 1/128 and sin 2 θ W = 0.230. The programs have been set up to allow for arbitrary values in the entries of the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix that do not depend on the top quark. For the numerical results we take |V ud | = |V cs | = 0.975 and |V us | = |V cd | = 0.222.
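The quoted 'improved Born' value of sin²θ_W can be checked directly from eq. (3); the constants below are the ones given in the text.

```python
import math

G_F = 1.16639e-5          # Fermi constant [GeV^-2]
M_W = 80.33               # W mass [GeV]
alpha_mz = 1.0 / 128.0    # running QED coupling alpha(M_Z)

sin2_tw = math.pi * alpha_mz / (math.sqrt(2.0) * G_F * M_W**2)
print(sin2_tw)            # ~0.2306, consistent with sin^2(theta_W) = 0.230 used in the text
```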
Finally we mention that we do not properly include processes where top quarks are involved. The amplitudes presented in ref. [15] assume massless quarks, an approximation that is certainly not justified in the case of the top quark. The fact that we nevertheless include the t-channel exchange of the top quark (with |V td | = |V ts | = 0 and |V tb | = 1) for W -pair production, therefore, results in an error. Fortunately, these processes are suppressed for energy scales that are not too large, either by small CKM matrix elements or by the small b quark distribution function. Indeed, we checked that the contribution of the subprocess bb → W + W − (treating the top as massless) is completely negligible for Run II while it is of the order of 2% for the LHC. Furthermore, in the case of W ± Z production we did not include the process bg → W − Zt. This process is present at next-to-leading order but is strongly suppressed by the large top quark mass, as well as the small b quark distribution function.
Standard Model Results
Total Cross Sections
The total cross sections for NLO vector-boson pair production were computed long ago [7,8,9,10,11,12] and have recently been updated [17]. In Tables 1 and 2 we present the total cross sections for the various processes at the Tevatron and LHC, for the MRST and CTEQ parton distributions. For the purpose of comparison we tabulated the results for σ tot , the cross sections without any cuts applied, but we also give σ cut , the cross sections with the standard cuts defined in section 2. At the Tevatron, the W + Z and W − Z total cross sections are equal by CP invariance. The cross section values are for the scale µ = (M V 1 + M V 2 )/2, where M V i are the masses of the two produced vector bosons. Because the difference between the MRST and CTEQ results is rather small, and given the fact that these distributions will be updated regularly, we restrict ourselves in the remainder of this paper to the MRST distribution. For the same choices of input parameters and parton distributions, we obtain perfect agreement with the total cross- sections tabulated in Tables 1 through 4 of ref. [17]. 4 The significant differences between the total cross sections for the MRST distributions given in our Tables 1 and 2 and those in ref. [17] have their origin in different input parameters, in particular sin 2 θ W . In the case of the CTEQ set an additional difference is due to the fact that in ref. [17] the more recent CTEQ(5) parton distributions have been used.
The one-loop corrections to the total cross sections are of the order of 50% of the leading-order term. However, as we will see below, the corrections can be much larger for large p T or invariant mass of the vector bosons, particularly at the LHC. This is related to the fact that at next-to-leading order the sub-processes qg → V 1 V 2 q have to be taken into account. These sub-processes generally dominate the tail of the p T -distribution (see e.g. ref. [8]). Therefore, a scale choice like
µ² = µ²_st ≡ (1/2) (p_T²(V_1) + p_T²(V_2) + M²_{V_1} + M²_{V_2})   (4)
seems to be more appropriate. The difference between the two different scale choices is very small for the total cross section, since it is dominated by low p T vector bosons. However, for more exclusive quantities the differences can be substantial. It is therefore necessary to investigate the theoretical uncertainty related to the scale dependence in some detail. To start with, in Fig. 1 we consider the scale dependence of the cross section for W-pair production at the Tevatron. We apply our standard cuts and vary the scale around µ = M W . The leading-order scale dependence is entirely due to the decrease of quark distribution functions q(x) with increasing factorization scale for moderate x. The NLO result does indeed have a reduced scale dependence. We also show the NLO result with an additional cut on the transverse hadronic energy, E had T < 40 GeV, implemented at the parton level. In the remainder of this paper we refer to this additional cut as jet veto, even though it does not exactly correspond to a jet veto applied by experimentalists. This additional cut reduces the scale dependence further. These results, together with the modest one-loop corrections we find below for kinematic distributions at the Tevatron, lead to a rather satisfactory description of W -pair production at this collider.
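For the event-wise scale of eq. (4), a one-line helper is enough; the name mu_st below is our own.

```python
import math

def mu_st(pt_v1, pt_v2, m_v1, m_v2):
    """Scale of eq. (4), all quantities in GeV."""
    return math.sqrt(0.5 * (pt_v1**2 + pt_v2**2 + m_v1**2 + m_v2**2))

print(mu_st(0.0, 0.0, 80.33, 80.33))      # at threshold the scale reduces to M_W
print(mu_st(200.0, 200.0, 80.33, 80.33))  # for boosted pairs it tracks the boson p_T
```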
The situation is somewhat more delicate for the LHC. The one-loop corrections can be huge in the tails of the distributions. We therefore investigate the scale dependence in a more detailed way. We again consider the cross section with and without the same jet veto, E had T < 40 GeV. We also consider the cross section with an additional pair of cuts on the transverse momenta of the charged leptons. We require the larger of the two transverse momenta of the charged leptons, p max T , to be bigger than 200 GeV and the smaller, p min T , to be bigger than 100 GeV. As discussed above, it is therefore more appropriate to vary the scale around µ as given in eq. (4) instead of µ = M W .
The purpose of these additional cuts is to investigate the scale dependence of the cross section in the region where larger corrections are expected. Indeed, the one-loop correction to the LHC cross section for µ = µ st increases from 60% to 80% as the additional set of cuts is applied. Before applying the additional cuts, the situation is very similar to the Tevatron. The scale dependence at LO is reduced at NLO, and is reduced even further if the jet veto is applied. For the high p T case this is not quite true. The leading-order result is surprisingly stable under scale variations. This feature is somewhat artificial, however; we have checked that it does not hold if the additional cuts are changed to e.g. p max T (ℓ) > 400 GeV and p min T (ℓ) > 200 GeV. In this case, the leadingorder cross section decreases with increasing scale. With the cuts p max T (ℓ) > 200 GeV and p min T (ℓ) > 100 GeV we just happen to be close to the transition from a rising to a falling leading-order cross section, a transition which is associated with the different behavior under evolution of the quark distribution functions at small x vs. moderate x.
The reduction of the scale dependence of the next-to-leading order results when a jet veto is applied seems to be quite general though. This situation is a bit paradoxical because the cross section with a jet veto is less inclusive and, therefore, expected to be more sensitive to large logarithms created by incomplete cancellation of the infrared singularities. On the other hand, any subprocess appearing at NLO in pp → W W produces additional hadronic energy in the final state. Thus, a cut on E had naturally suppresses the one-loop corrections and tends to stabilize the perturbative expansion. This effect competes against the stronger sensitivity to large logarithms. Apparently, for the jet veto we applied, E had T < 40 GeV, the subprocess-suppression effect still dominates.
W + W −
In this subsection we present some results for kinematic distributions for the processes pp̄ → W − W + → ℓ − ν̄ ℓ ′+ ν ′ and pp → W − W + → ℓ − ν̄ ℓ ′+ ν ′ . Similar studies have been carried out earlier (see e.g. ref. [14]). Throughout, we apply our standard cuts as defined in section 2. As an illustration, we have chosen eight variables, four p_T-like quantities and four angular distributions. They are all defined in terms of observable momenta. The p_T-like distributions are defined as follows:
p_T(ℓ−): transverse momentum of the negatively charged lepton.
M_ℓℓ: invariant mass of the lepton pair.
p_T^miss: missing transverse momentum, √[(p_T(ℓ−) + p_T(ℓ′+) + p_T(jet))²].
p_T^max: maximal transverse momentum of the two charged leptons, max{p_T(ℓ−), p_T(ℓ′+)}.
In the case of the Tevatron, the p_T(ℓ−) and p_T(ℓ′+) distributions are equal. However, this is not true for the LHC. The four angular distributions we considered are defined as follows:
η(ℓ−): rapidity of the negatively charged lepton.
∆η(ℓ): rapidity difference between the leptons, η(ℓ−) − η(ℓ′+).
cos θ_ℓℓ: angle between the leptons, cos(∠[p(ℓ−), p(ℓ′+)]).
cos φ_ℓℓ: transverse angle between the leptons, cos(∠[p_T(ℓ−), p_T(ℓ′+)]).
For massless leptons, the true rapidity y(ℓ) = (1/2) ln[(E(ℓ) + p_L(ℓ))/(E(ℓ) − p_L(ℓ))] is equal to the pseudorapidity η(ℓ) = −ln(tan(θ/2)), so we refer to both as rapidity. For the Tevatron, we have to specify the proton direction: it corresponds to θ = 0.
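The four angular variables above are simple functions of the charged-lepton momenta; the helper below is our own illustration, not taken from the paper's programs.

```python
import math

def angular_observables(p_minus, p_plus):
    """p_minus, p_plus: (px, py, pz) of l- and l'+ in GeV."""
    def eta(p):
        mag = math.sqrt(sum(c * c for c in p))
        return 0.5 * math.log((mag + p[2]) / (mag - p[2]))
    def cos_between(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    return {
        "eta(l-)":      eta(p_minus),
        "delta_eta":    eta(p_minus) - eta(p_plus),
        "cos_theta_ll": cos_between(p_minus, p_plus),         # full 3D opening angle
        "cos_phi_ll":   cos_between(p_minus[:2], p_plus[:2]), # transverse-plane angle
    }

print(angular_observables((30.0, 10.0, 5.0), (-25.0, 12.0, -40.0)))
```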
At the Tevatron, the rapidity distribution for ℓ ′+ can be obtained trivially from that for ℓ − by changing the sign of η. For the LHC, there is no such simple relation between the two distributions. Note also that the ∆η(ℓ) distribution is not symmetric around η = 0 for the Tevatron; it is symmetric for the LHC. The cos θ ℓℓ observable has been investigated in ref. [27] in the context of Higgs boson detection in the intermediate Higgs mass range m H = 155-180 GeV. It is therefore particularly interesting to consider the effect of the QCD corrections to this observable.
In Figs. 3 and 4 we show the distributions for the Tevatron, evaluated at µ = M W and with our standard cuts applied. Recall that the branching ratios for the leptonic decay of the vector bosons are not included. The perturbative expansion for angular-type distributions is generally better behaved than that for steeply falling p T -like distributions. The latter often suffer from large NLO corrections in the tails of the distribution. The insets of Fig. 3 same fraction of the W momentum, i.e. they have cancelling transverse momenta. NLO QCD corrections have a big effect at large p miss T because they allow a recoiling final-state parton to spoil the p T balance of the two neutrinos. The presence of anomalous couplings can also have a big effect in this region, by relaxing the helicity anti-correlation of the two W bosons [20].
In Figs. 5 and 6, the same two sets of distributions are displayed for the LHC, again for µ = M W . The NLO corrections to the angular distributions are again modest. The p T -like distributions, however, can have much larger corrections, particularly in their tails, where the NLO result can easily exceed the leading order result by a factor of 5. The p miss T distribution has the largest corrections of all, for the reason mentioned above, and the factor can be 20 or more. As mentioned before, the huge corrections in the tail of the generic p T -like distributions have to do with the subprocess qg → W W q which dominates in this kinematical region. It may be argued that a scale choice as in eq. (4) is mandatory in this case. However, we have checked that such a scale choice does not lead to a substantial improvement.
The problem is that in an NLO calculation for pp → W W the partonic process q q̄ → W W is included at NLO, but the qg → W W q subprocess is included only at LO. Therefore, in a kinematical region where the latter dominates, the calculation presented in this paper is effectively only a leading order calculation. The only fully satisfactory way to improve the theoretical prediction in such cases is to include the one-loop corrections to the subprocess with a qg in the initial state. These O(α 2 s ) corrections correspond to an NNLO contribution to pp → W W . At the same order in α s , there are two new subprocesses: gg → W W at one loop, and gg → W W q q̄ at tree level. Due to the large gluon density at small x, these contributions are also expected to be important for LHC energies, and would have to be included if a reliable prediction for the tails of the p T -like distributions were required.
As discussed above, a way to suppress the partonic processes with gluons in the initial state is to impose a jet veto. The effect of the cut E had T < 40 GeV on the p T -like distributions is even more dramatic than its effect on the total cross section. This can be seen in Fig. 5, where we also show the NLO curves with the jet veto (dot-dashed lines). Indeed, the one-loop corrections are very small throughout. We conclude that if a reliable theoretical prediction is desired for the tail of a p T -like distribution, and only a next-to-leading order program is available, then a jet veto is unavoidable.
W ± Z
In this subsection we study W Z production, followed by leptonic decay of each boson. In particular, we examine the effects of QCD corrections on the approximate radiation zero at cos θ = (g_1^- + g_2^-)/(g_1^- − g_2^-) [18]. Since the precise flight direction of the W boson is not known, due to the uncertainty in the longitudinal momentum carried by the neutrino, we simply choose to plot a distribution in the (true) rapidity difference between the Z boson and the charged lepton coming from the decay of the W, ∆y_Zℓ ≡ y_Z − y_ℓ. This quantity is similar to the rapidity difference ∆y_WZ ≡ |y_W − y_Z| studied in ref. [10], but uses only the observable charged-lepton variables. It is the direct analog of the variable y_γ − y_ℓ+ considered in ref. [19] for the case of W γ production. (It is possible to determine cos θ in the W γ or W Z rest frame, by solving for the neutrino longitudinal momentum using the W mass as a constraint, up to a two-fold discrete ambiguity for each event [28]. However, ref. [19] found that the ambiguity degrades the radiation zero - at least if each solution is given a weight of 50% - so that the rapidity difference y_γ − y_ℓ+ is more discriminating than cos θ.)
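The position of the approximate zero is easy to evaluate numerically. Below we take the standard left-handed couplings g⁻ = T₃ − Q sin²θ_W for the up and down quarks; any overall normalization of g⁻ cancels in the ratio. This is our own check, not a calculation from the paper.

```python
sin2_tw = 0.230

def g_left(t3, q):
    """Left-handed Z coupling, up to an overall normalization that drops out of the ratio."""
    return t3 - q * sin2_tw

g_u = g_left(+0.5, +2.0 / 3.0)   # up-type quark
g_d = g_left(-0.5, -1.0 / 3.0)   # down-type quark

cos_theta_zero = (g_u + g_d) / (g_u - g_d)
print(cos_theta_zero)            # ~ -0.10: the approximate radiation zero for W+Z
```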
As one can see from Fig. 7, there is a residual dip in the ∆y Zℓ distribution, even at order α s . This dip can easily be enhanced by requiring a minimal energy for the decay lepton from the W . In Fig. 7, we have chosen E(ℓ) > 100 GeV. Note that these curves are scaled up by a factor 5. Unfortunately, only a few tens of W Z → leptons events are expected at Run II of the Tevatron, so the observation of such a dip will be rather difficult prior to the LHC.
Triple Gauge Boson Vertices
Anomalous W + W − Z and W + W − γ Couplings

New physics may modify the self-interactions of vector bosons, in particular the triple gauge boson vertices. If the new physics occurs at an energy scale well above that being probed experimentally, it can be integrated out, and the result expressed as a set of anomalous (non-Standard Model) interaction vertices.
Here we consider anomalous W + W − Z and W + W − γ trilinear couplings, and their effects on the hadronic production of W W and W Z pairs up through order α s . The most general set of Lagrangian terms for W W V , V ∈ {Z, γ}, that conserves C and P separately, is (see e.g. [29,30])
L/g_WWV = i g_1^V (W*_µν W^µ V^ν − W_µν W*^µ V^ν) + i κ_V W*_µ W_ν V^µν + i (λ_V/M_W²) W*_ρµ W^µ_ν V^νρ,   (5)
where X µν ≡ ∂ µ X ν −∂ ν X µ and the overall coupling constants g W W V are given by g W W γ = −e and g W W Z = −e cot θ W respectively, with θ W the weak mixing angle. The Standard Model triple gauge boson vertices are recovered by letting g V 1 → 1, κ V → 1 and λ V → 0. The coupling factors can be written in terms of their deviation from Standard Model values: g V 1 = 1 + ∆g V 1 and κ V = 1 + ∆κ V . Electromagnetic gauge invariance requires g γ 1 = 1, or ∆g γ 1 = 0. Sometimes other constraints are imposed on the couplings. For example, if one requires the existence of an effective Lagrangian with SU(2) × U(1) invariance, and neglects operators with dimension eight or higher, then the number of independent coefficients in eq. (5) is reduced from five to three,
∆g_1^Z = α_Wφ / cos²θ_W,   λ_γ = λ_Z = α_W,   ∆κ_γ = α_Wφ + α_Bφ,   ∆κ_Z = α_Wφ − (sin²θ_W/cos²θ_W) α_Bφ,   (6)
where α_W, α_Wφ and α_Bφ are coefficients of the dimension-six operators in this effective Lagrangian [30]. If one arbitrarily supposes that α_Wφ = α_Bφ, one arrives at the so-called HISZ scenario [31], with only two independent couplings. The momentum-space vertex W−_α(q) W+_β(q̄) V_µ(p) (where p + q + q̄ = 0) corresponding to eq. (5) can be written as

Γ_αβµ(q, q̄, p)/g_WWV = q̄_α g_βµ (g_1^V + κ_V + λ_V q²/M_W²) − q_β g_αµ (g_1^V + κ_V + λ_V q̄²/M_W²) + (q̄_µ − q_µ) [ −g_αβ (g_1^V + (1/2) p² λ_V/M_W²) + (λ_V/M_W²) p_α p_β ].   (7)
Here we have used momentum conservation, and the fact that the terms q_α, q̄_β and p_µ can be neglected, but we have not imposed on-shell conditions on the vector bosons. As it stands, this vertex will eventually lead to a violation of unitarity. To avoid this, the deviations from the Standard Model, ∆g_1^V, ∆κ_V and λ_V, have to be supplemented with form factors. Since the form factors are supposed to be produced by unknown physics, the form they should take is a priori somewhat arbitrary. We choose a conventional dipole form factor, i.e.
∆g_1^V → ∆g_1^V/(1 + ŝ/Λ²)²,   ∆κ_V → ∆κ_V/(1 + ŝ/Λ²)²,   λ_V → λ_V/(1 + ŝ/Λ²)²,   (8)
whereŝ is the invariant mass of the vector boson pair, and Λ is in the TeV range.
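The dipole damping of eq. (8) is a one-liner; the illustration below uses Λ = 2 TeV, the value adopted for the numerical results later in this section.

```python
def dipole(coupling, s_hat, lam=2000.0):
    """Form factor of eq. (8): coupling / (1 + s_hat / Lambda^2)^2, all in GeV units."""
    return coupling / (1.0 + s_hat / lam**2) ** 2

for m_pair in (200.0, 500.0, 1000.0):      # pair invariant mass sqrt(s_hat)
    print(m_pair, dipole(0.1, m_pair**2))  # the anomalous coupling dies off at large mass
```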
Tree, Virtual and Bremsstrahlung Amplitudes for W W
Replacing the Standard Model vector-boson vertex by the more general vertex given above results in some modifications of the primitive amplitudes presented in ref. [15]. We use the same notation in this paper and refer the reader to ref. [15] for more details.
The box-parent primitive amplitudes are not affected by changes in the trilinear vector-boson vertex. The change in the triangle-parent primitive amplitude can be obtained by simply computing the one tree-level diagram with the new vertex. Since the vertex is no longer symmetric in the exchange W ↔ V, we get slightly different results for the different final states. For the W W final state, the new tree amplitude, which replaces A^tree,b in eq. (2.9) of ref. [15], is

A^tree,B = i/(2 s_12 s_34 s_56) [ (g_1^V + κ_V + λ_V) ( ⟨13⟩[24]⟨6|(1+2)|5⟩ + ⟨16⟩[25]⟨3|(5+6)|4⟩ ) + ⟨1|(3+4)|2⟩ ( 2g_1^V ⟨36⟩[45] + (λ_V/M_W²) ⟨3|(1+2)|5⟩⟨6|(1+2)|4⟩ ) ].   (9)
For the limit g_1^V → 1, κ_V → 1, λ_V → 0, we recover the Standard Model result,

A^tree,b = i/(s_12 s_34 s_56) ( −⟨36⟩[45]⟨1|(5+6)|2⟩ + ⟨13⟩[24]⟨6|(1+2)|5⟩ + ⟨16⟩[25]⟨3|(5+6)|4⟩ )
         = i/(s_12 s_34 s_56) ( ⟨13⟩[25]⟨6|(2+5)|4⟩ + [24]⟨16⟩⟨3|(1+6)|5⟩ ).   (10)
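The spinor products ⟨ij⟩ and [ij] entering these amplitudes can be checked numerically. The sketch below uses one common light-cone convention; since conventions differ by phases, it only tests the convention-independent relation |⟨ij⟩|² = 2 k_i·k_j for positive-energy massless momenta. It is an illustration of the notation, not the paper's implementation.

```python
import cmath

def angle_spinor(p):
    """Holomorphic spinor for a massless momentum p = (E, px, py, pz) with E + pz > 0."""
    e, px, py, pz = p
    kp = cmath.sqrt(e + pz)
    return (kp, (px + 1j * py) / kp)

def ang(pi, pj):
    """Spinor product <i j>."""
    li, lj = angle_spinor(pi), angle_spinor(pj)
    return li[0] * lj[1] - li[1] * lj[0]

def two_dot(pi, pj):
    """2 k_i . k_j with metric (+,-,-,-)."""
    return 2 * (pi[0]*pj[0] - pi[1]*pj[1] - pi[2]*pj[2] - pi[3]*pj[3])

k1 = (5.0, 3.0, 0.0, 4.0)     # massless: 5^2 = 3^2 + 0^2 + 4^2
k2 = (13.0, -5.0, 12.0, 0.0)  # massless: 13^2 = 5^2 + 12^2
print(abs(ang(k1, k2))**2)    # 160.0
print(two_dot(k1, k2))        # 160.0, so |<12>|^2 = 2 k1.k2 = s_12
```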
With the above results we also get immediately the one-loop primitive amplitude for anomalous couplings. Assuming the usual decomposition into finite and divergent pieces, the finite pieces are vanishing, as in the Standard Model case, and the divergent pieces are still given by c Γ A tree,B V where
V = −(1/ǫ²)(µ²/(−s_12))^ǫ − (3/2ǫ)(µ²/(−s_12))^ǫ − 7/2.   (11)
The result for the bremsstrahlung diagrams with an additional positive helicity gluon radiated off the quark line, eq. (12), is the corresponding generalization of the amplitude given in eq. (2.22) of ref. [15]. The result for a negative helicity gluon can be obtained by the usual flip operation [15].
We also have to modify the prescription in ref. [15] for dressing the above primitive amplitudes with electroweak couplings. Since g V 1 , κ V and λ V are relative couplings, i.e. the overall coupling g W W V has not been changed, the dressing with electroweak factors is almost identical to the Standard Model case. The only subtlety is that both Z and γ appear as intermediate states in W W production. In the coefficient functions
C_{L,{u,d}} = ±2Q sin²θ_W + s_12 (1 ∓ 2Q sin²θ_W)/(s_12 − M_Z²),   (13)
C_{R,{u,d}} = ±2Q sin²θ_W ∓ 2Q sin²θ_W s_12/(s_12 − M_Z²),
defined in ref. [15], the first term (±2Q sin²θ_W) is from the intermediate γ, while the second term is from the intermediate Z. Correspondingly, we should set V = γ (V = Z) in eqs. (9) and (12), when they are dressed with the first (second) term in C_{L/R},{u/d}. Otherwise, all the prefactors remain the same and only the 'new' primitive amplitudes have to be plugged in.

Tree, Virtual and Bremsstrahlung Amplitudes for W Z

For the W Z final state, the new tree primitive amplitude is

A^tree,B = −i/(2 s_12 s_34 s_56) [ (g_1^Z + κ_Z + λ_Z s_12/M_W²) ⟨36⟩[45]⟨1|(5+6)|2⟩ + (g_1^Z + κ_Z + λ_Z) ⟨16⟩[25]⟨3|(1+2)|4⟩ + ⟨6|(3+4)|5⟩ ( 2g_1^Z ⟨13⟩[24] + (λ_Z/M_W²) ⟨3|(5+6)|2⟩⟨1|(5+6)|4⟩ ) ].   (14)

The new bremsstrahlung primitive amplitude is

A^tree,B_7 = −i/(2 ⟨17⟩⟨72⟩ s_34 s_56 t_127) [ (g_1^Z + κ_Z + λ_Z t_127/M_W²) ⟨36⟩[45]⟨1|(5+6)(2+7)|1⟩ + (g_1^Z + κ_Z + λ_Z) ⟨16⟩⟨1|(2+7)|5⟩⟨3|(5+6)|4⟩ + ⟨6|(3+4)|5⟩ ( −2g_1^Z ⟨13⟩⟨1|(2+7)|4⟩ + (λ_Z/M_W²) ⟨3|(5+6)(2+7)|1⟩⟨1|(5+6)|4⟩ ) ].   (15)

In this case, since there is only an intermediate W, the dressing with electroweak coupling factors is indeed identical to the Standard Model case.
Numerical Results
More systematic studies of the effects of anomalous couplings on hadronic production of vector boson pairs have been carried out elsewhere [32,33,34,20,21]. Here we merely consider one sample distribution.
The effect of anomalous couplings is enhanced for gauge bosons that are produced at large transverse momentum. In order to exploit this feature, the D0 collaboration considered a double-binned E T spectrum for the charged leptons coming from the decay of W pairs [35]. Any deviation from the Standard Model should be more pronounced in the high E T bins.
We have computed a similar double-binned E T spectrum at NLO for Run II of the Tevatron. As in ref. [35] we compute for each event the larger and smaller transverse energies of the two leptons, E max T and E min T . (E T is equivalent to p T (ℓ), of course.) We impose our standard event cuts and then bin each E T into five bins with the following limits in GeV: E T = {20, 38.1, 72.5, 138, 263, 500}.
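The double binning itself is straightforward to reproduce; the helper below (our own, using NumPy) assigns an event's E_T^max and E_T^min to the 5×5 grid with the limits just quoted.

```python
import numpy as np

edges = np.array([20.0, 38.1, 72.5, 138.0, 263.0, 500.0])   # bin limits [GeV]

def double_bin(et_a, et_b):
    """Return (row, col) indices for (E_T^max, E_T^min) of the two charged leptons."""
    et_max, et_min = max(et_a, et_b), min(et_a, et_b)
    return np.digitize(et_max, edges) - 1, np.digitize(et_min, edges) - 1

counts = np.zeros((5, 5))
for e1, e2 in [(45.0, 30.0), (150.0, 90.0), (300.0, 40.0)]:  # toy events
    i, j = double_bin(e1, e2)
    counts[i, j] += 1
print(counts)   # only the lower triangle (E_T^min <= E_T^max) is populated
```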
In order to get a feeling for the theoretical uncertainties, we repeated the Standard Model computation for three scales, µ st /2, µ st and 2µ st , where µ st is given in eq. (4). We computed the same NLO double-binned cross section including the effects of anomalous couplings. As an illustration we have chosen the HISZ scenario with α W = α W φ = α Bφ = 0.1. This corresponds to the following values for the anomalous couplings appearing in the Lagrangian in eq. (5):
∆g_1^γ = 0;   ∆g_1^Z = 0.13;   λ_γ = λ_Z = 0.1;   ∆κ_γ = 0.2;   ∆κ_Z = 0.07.
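These numbers follow from eq. (6) with sin²θ_W = 0.230, as the short check below confirms; the snippet is our own sketch, not part of the Monte Carlo programs.

```python
sin2_tw = 0.230
cos2_tw = 1.0 - sin2_tw

a_w = a_wphi = a_bphi = 0.1                      # HISZ scenario used here

dg1_z = a_wphi / cos2_tw                         # ~0.13
lam_gamma = lam_z = a_w                          # 0.1
dkappa_gamma = a_wphi + a_bphi                   # 0.2
dkappa_z = a_wphi - sin2_tw / cos2_tw * a_bphi   # ~0.07
print(dg1_z, lam_z, dkappa_gamma, dkappa_z)
```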
The form factors have been chosen according to eq. (8) with Λ = 2 TeV. At present these values are still consistent with LEP2 bounds [36]. Table 3 presents the results. As usual, the leptonic branching ratios are not included, and we use the MRST parton distributions. The three numbers in the left column of each box give the Standard Model result for the three choices of µ, with µ increasing from top to bottom. The fourth number in each box is the result with the anomalous couplings included and µ = µ st . Units for the cross sections for each row are given in square brackets. As expected, the results for the Standard Model and anomalous cross section are very similar for the low E T bins. For the high E T bins, however, the differences are large and certainly much bigger than the most conservative estimate of the theoretical error. The same results are also shown in Fig. 8, where we plot the natural logarithm of the total cross section (in units of 10 −1 fb) for each bin for the Standard Model with µ = µ st (left) and the HISZ scenario with µ = µ st (right). Again, the significant differences in the high E T bins become apparent.
Conclusions
We have presented a general purpose Monte Carlo program that is able to compute any infrared-safe quantity in vector boson pair production at hadron colliders at next-to-leading order in the strong coupling constant α s . This program generalizes previous calculations (with the exception of the recent ref. [17]) in that the spin correlations are fully taken into account. The decay of the vector bosons into leptons was included in the narrow-width approximation, whereas MCFM [17] also includes the singly-resonant diagrams needed to go beyond this approximation. For the total cross-section, computed in the narrow-width approximation, our results agree perfectly with those of MCFM for the same choice of input parameters.
As an illustration of the usefulness of the program we presented several distributions for the Run II at the Tevatron and the LHC. However, we refrained from performing a detailed phenomenological analysis; this is probably best done once the data are available.
In addition to Standard Model processes, we considered also the inclusion of anomalous couplings between the vector bosons. We presented the one-loop amplitudes for a generalized trilinear W + W − Z and W + W − γ vertex. The inclusion of these amplitudes into our program allows the calculation of anomalous effects at next-to-leading order, with full spin correlations. These effects are shown to be very prominent for large transverse momentum of the gauge bosons, confirming results of ref. [20]. Such an analysis at Run II of the Tevatron should yield improved bounds on anomalous couplings, although for large improvements one probably has to wait for the LHC [30].
Figure 1: Scale dependence of σ_cut, the cross section for W-pair production at the Tevatron with standard cuts. The scale is given in units of M_W. We show the LO, NLO and NLO with jet veto curves. The inset shows the three curves normalized to 1 at µ = M_W.
Figure 2: Scale dependence of σ_cut, the cross section for W-pair production at the LHC with standard cuts. The scale is given in units of µ_st as defined in eq. (4). We show the LO, NLO and NLO with jet veto curves without additional cuts (left) and with an additional cut p_T^max(ℓ) > 200 GeV and p_T^min(ℓ) > 100 GeV (right). The insets show the curves normalized to 1 at µ = µ_st.
Figure 3: Differential cross sections in pb/GeV for W−W+ production at the Tevatron for the p_T-like variables p_T(ℓ−), M_ℓℓ, p_T^miss and p_T^max defined in the text at LO (dashed curves) and NLO (solid curves), with µ = M_W. Standard cuts have been applied and the branching ratios for the leptonic decay of the vector bosons are not included. The insets show the ratio dσ_NLO/dσ_LO. The units on the horizontal axes are GeV.
Figure 4: Differential cross sections in pb for the Tevatron for the W−W+ angular variables defined in the text at LO (dashed curves) and NLO (solid curves), with µ = M_W. Standard cuts have been applied and the leptonic branching ratios are not included.
Figure 5: Differential cross sections in pb/GeV for W−W+ production at the LHC, for the p_T-like variables defined in the text at LO (dashed curves) and NLO (solid curves), with µ = M_W. Standard cuts have been applied and the leptonic branching ratios are not included. Also shown (as dot-dashed lines) are the NLO curves with a jet veto E_T^had < 40 GeV. The insets show the ratio dσ_NLO/dσ_LO. The units on the horizontal axes are GeV.
Figure 6: Differential cross sections in pb for the LHC for the W−W+ angular variables defined in the text at LO (dashed curves) and NLO (solid curves), with µ = M_W. Standard cuts have been applied and the leptonic branching ratios are not included.
Figure 7: For W Z production at Run II, followed by leptonic decays of both the W and Z bosons, we plot the distribution, in picobarns, in the rapidity difference between the Z and the charged lepton ℓ from the decay of the W, ∆y_Zℓ ≡ y_Z − y_ℓ. Leptonic branching ratios are not included and the scale has been set to µ = (M_W + M_Z)/2. The basic cuts used are p_T(ℓ) > 20 GeV and |η(ℓ)| < 2 for all three charged leptons, and a missing transverse momentum cut of p_T^miss > 20 GeV. We plot the ∆y_Zℓ distribution with these cuts, and also after imposing an additional cut on the W decay lepton, E(ℓ) > 100 GeV; the latter curves have been scaled up by a factor of 5. The dashed curves are Born-level results; the solid curves include the O(α_s) corrections.
Figure 8: Cross section σ [fb/10] for W-pair production at the Tevatron with no anomalous couplings (left) and for the HISZ scenario with α_W = α_Wφ = α_Bφ = 0.1 (right). Standard cuts have been applied and the scale has been set to µ = µ_st as defined in eq. (4).
Table 1:
               ZZ             W+W−           W−Z
               LO     NLO     LO     NLO     LO     NLO
σ_tot (MRST)   1.13   1.44    9.52   12.4    1.37   1.84
σ_tot (CTEQ)   1.16   1.47    9.89   12.8    1.38   1.86
σ_cut (MRST)   0.352  0.446   3.17   4.22    0.377  0.506
σ_cut (CTEQ)   0.362  0.457   3.31   4.40    0.385  0.520
Cross sections in pb for pp̄ collisions at √s = 2 TeV. The statistical errors are ±1 within the last digit.
Table 2: Cross sections in pb for pp collisions at √s = 14 TeV (columns ZZ, W+W−, W−Z and W+Z, each at LO and NLO). The statistical errors are ±1 within the last digit.
E_T^max \ E_T^min |       20−38.1       |      38.1−72.5      |      72.5−138       |       138−263       |     263−500
20−38.1    [pb]      | 1.07 1.07 1.03 1.10 |          -          |          -          |          -          |        -
38.1−72.5  [pb]      | 1.62 1.62 1.54 1.61 | 0.77 0.76 0.74 0.77 |          -          |          -          |        -
72.5−138   [10⁻¹ pb] | 1.67 1.60 1.49 1.76 | 3.30 3.31 3.15 3.56 | 1.30 1.32 1.27 1.50 |          -          |        -
138−263    [10⁻² pb] | 0.63 0.57 0.49 2.43 | 1.27 1.20 1.09 4.18 | 3.07 3.16 2.98 6.90 | 1.08 1.14 1.09 2.32 |        -
263−500    [10⁻⁴ pb] | 0.7 0.6 0.5 22      | 1.2 1.1 0.9 37      | 2.6 2.5 2.2 55      | 9.0 9.8 9.0 63      | 2.3 2.5 2.4 9.5
Table 3: Double-binned E_T cross sections for pp̄ → W+W− → leptons at √s = 2 TeV. The three numbers in the left column in each entry are the Standard Model results for the scales µ = µ_st × {1/2, 1, 2}, with µ_st given in eq. (4). The fourth number in the entry is the cross section with anomalous couplings, as defined in the text.
Modes in which one of the vector bosons decays hadronically have been studied at the Tevatron, but at Standard Model levels these events are hard to separate from the QCD production of a vector boson plus jets[5].
The virtual corrections can be divided into terms with poles in ǫ, the parameter of dimensional regularization, plus residual finite terms. The pole terms have a universal form and cancel against infrared divergences in the real corrections, so if the real corrections include decay correlations, then the virtual pole terms must also, in order to get a finite answer.
We wrote two independent programs, one in Fortran 77 and one in Fortran 90, both of which are available upon request.
Tables 1 and 2 of ref. [17] express their good agreement with refs. [8,10,12] for the older HMRSB parton distributions; Tables 3 and 4 are for the Tevatron Run II and LHC with the MRST and CTEQ(5) sets.
Acknowledgments L.D. and A.S. would like to thank the Theory Group of ETH Zürich for its hospitality while part of this work was carried out. We are grateful to John Campbell and Keith Ellis for assistance in the comparison of our results with those of ref.[17].
[1] E. Eichten, I. Hinchliffe, K. Lane and C. Quigg, Rev. Mod. Phys. 56 (1984) 579.
[2] A.A. Iogansen, N.G. Ural'tsev and V.A. Khoze, Sov. J. Nucl. Phys. 36 (1983) 717; J.F. Gunion, H.E. Haber, G. Kane and S. Dawson, The Higgs Hunter's Guide (Addison-Wesley, 1990).
[3] M.S. Chanowitz and W.B. Kilgore, Phys. Lett. B347 (1995) 387 [hep-ph/9412275].
[4] K.T. Matchev and D.M. Pierce, preprint hep-ph/9904282; H. Baer, et al., preprint hep-ph/9906233.
[5] F. Abe, et al. (CDF Collaboration), Phys. Rev. Lett. 75 (1995) 1017; S. Abachi, et al. (D0 Collaboration), Phys. Rev. Lett. 77 (1996) 3303; S. Abachi, et al. (D0 Collaboration), Phys. Rev. Lett. 79 (1997) 1441.
[6] R.W. Brown and K.O. Mikaelian, Phys. Rev. D19 (1979) 922; R.W. Brown, K.O. Mikaelian and D. Sahdev, Phys. Rev. D20 (1979) 1164; K.O. Mikaelian, M.A. Samuel and D. Sahdev, Phys. Rev. Lett. 43 (1979) 746.
[7] J. Ohnemus, Phys. Rev. D44 (1991) 1403.
[8] S. Frixione, Nucl. Phys. B410 (1993) 280.
[9] J. Ohnemus, Phys. Rev. D44 (1991) 3477.
[10] S. Frixione, P. Nason and G. Ridolfi, Nucl. Phys. B383 (1992) 3.
[11] J. Ohnemus and J.F. Owens, Phys. Rev. D43 (1991) 3626.
[12] B. Mele, P. Nason and G. Ridolfi, Nucl. Phys. B357 (1991) 409.
[13] J.F. Gunion and Z. Kunszt, Phys. Rev. D33 (1986) 665.
[14] J. Ohnemus, Phys. Rev. D50 (1994) 1931 [hep-ph/9403331].
[15] L. Dixon, Z. Kunszt and A. Signer, Nucl. Phys. B531 (1998) 3 [hep-ph/9803250].
[16] S. Frixione, Z. Kunszt and A. Signer, Nucl. Phys. B467 (1996) 399 [hep-ph/9512328].
[17] J.M. Campbell and R.K. Ellis, preprint hep-ph/9905386.
[18] U. Baur, T. Han and J. Ohnemus, Phys. Rev. Lett. 72 (1994) 3941 [hep-ph/9403248].
[19] U. Baur, S. Errede and G. Landsberg, Phys. Rev. D50 (1994) 1917 [hep-ph/9402282].
[20] U. Baur, T. Han and J. Ohnemus, Phys. Rev. D53 (1996) 1098 [hep-ph/9507336].
[21] U. Baur, T. Han and J. Ohnemus, Phys. Rev. D51 (1995) 3381 [hep-ph/9410266].
[22] R.K. Ellis, D.A. Ross and A.E. Terrano, Nucl. Phys. B178 (1981) 421.
[23] A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, Eur. Phys. J. C4 (1998) 463 [hep-ph/9803445].
[24] H.L. Lai et al. (CTEQ collaboration), Phys. Rev. D55 (1997) 1280 [hep-ph/9606399].
[25] S. Dittmaier, M. Bohm and A. Denner, Nucl. Phys. B376 (1992) 29, err. B391 (1993) 483.
[26] W. Beenakker et al., in Physics at LEP2, eds. G. Altarelli, T. Sjöstrand and F. Zwirner (Geneva, 1996).
[27] M. Dittmar and H. Dreiner, Phys. Rev. D55 (1997) 167 [hep-ph/9608317]; hep-ph/9703401, in Tegernsee 1996, The Higgs Puzzle.
[28] J. Gunion, Z. Kunszt and M. Soldate, Phys. Lett. 163B (1985) 389; J. Gunion and M. Soldate, Phys. Rev. D34 (1986) 826; W.J. Stirling et al., Phys. Lett. 163B (1985) 261.
[29] K. Hagiwara, R.D. Peccei, D. Zeppenfeld and K. Hikasa, Nucl. Phys. B282 (1987) 253.
[30] J. Ellison and J. Wudka, preprint hep-ph/9804322, to appear in Ann. Rev. Nucl. Part. Sci.
[31] K. Hagiwara, S. Ishihara, R. Szalapski and D. Zeppenfeld, Phys. Rev. D48 (1993) 2182.
[32] U. Baur and D. Zeppenfeld, Nucl. Phys. B308 (1988) 127.
[33] U. Baur, T. Han and J. Ohnemus, Phys. Rev. D48 (1993) 5140 [hep-ph/9305314].
[34] U. Baur, T. Han and J. Ohnemus, Phys. Rev. D57 (1998) 2823 [hep-ph/9710416].
[35] B. Abbott et al. (D0 collaboration), Phys. Rev. D58 (1998) 051101 [hep-ex/9803004].
[36] LEP Collaborations, LEP Electroweak Working Group, and SLD Heavy Flavour and Electroweak Groups, preprint CERN-EP/99-15 (February, 1999).
| [] |
[
"Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification",
"Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification"
] | [
"Marco Forgione \nIDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI\n2023Lugano, YokohamaSwitzerland, Japan\n",
"Dario Piga \nIDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI\n2023Lugano, YokohamaSwitzerland, Japan\n"
] | [
"IDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI\n2023Lugano, YokohamaSwitzerland, Japan",
"IDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI\n2023Lugano, YokohamaSwitzerland, Japan"
] | [] | Effective quantification of uncertainty is an essential and still missing step towards a greater adoption of deep-learning approaches in different applications, including mission-critical ones. In particular, investigations on the predictive uncertainty of deep-learning models describing non-linear dynamical systems are very limited to date. This paper is aimed at filling this gap and presents preliminary results on uncertainty quantification for system identification with neural state-space models. We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs through approximate inference techniques. Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime, where predictions cannot be trusted. | 10.48550/arxiv.2304.06349 | [
"https://export.arxiv.org/pdf/2304.06349v1.pdf"
] | 258,108,153 | 2304.06349 | f8a1cf1896bbd2624a3ce9cc21bd8ace1a387c2b |
Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification
April 14, 2023
Marco Forgione
IDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI
Lugano, Switzerland
Dario Piga
IDSIA Dalle Molle Institute for Artificial Intelligence USI-SUPSI
Lugano, Switzerland
Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification
April 14, 2023Please cite this version of the paper: M. Forgione and D. Piga. Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification. In Proc. of the 22nd IFAC World Congress,
Effective quantification of uncertainty is an essential and still missing step towards a greater adoption of deep-learning approaches in different applications, including mission-critical ones. In particular, investigations on the predictive uncertainty of deep-learning models describing non-linear dynamical systems are very limited to date. This paper is aimed at filling this gap and presents preliminary results on uncertainty quantification for system identification with neural state-space models. We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs through approximate inference techniques. Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime, where predictions cannot be trusted.
Introduction
In recent years, the system identification community has shown renewed interest in deep-learning tools and techniques for data-driven modeling of non-linear dynamical systems (Ljung et al., 2020). To cite a few examples, system identification approaches based on 1-D convolutional neural networks are presented in Andersson et al. (2019); Wu and Jahanshahi (2019). Training of neural NARX architectures with a regularization term promoting decay of the model's linearized impulse response is introduced in Peeters et al. (2022). Neural network architectures and fitting criteria for continuous-time dynamical model identification are presented in Mavkov et al. (2020). Finally, algorithms for efficient training of tailor-made neural state-space models are discussed in Forgione and Piga (2020) and Beintema et al. (2021).
A common (and justified) criticism of the above-mentioned deep system identification approaches is the general lack of uncertainty description and analysis. Indeed, the methods presented in those contributions only produce nominal point predictions, with no explicit measure of their reliability. While the models are shown to deliver high performance in the considered benchmarks, results may dramatically deteriorate when they are used in an out-of-distribution regime, i.e., on a test set whose characteristics (in terms of input amplitude, frequency, power, etc.) differ significantly from those of the training data. Even worse, no mechanism is in place to detect this failure mode, and models may quietly produce off-target, possibly dangerous predictions.
In current machine learning research, uncertainty quantification is recognized as paramount to increase reliability and acceptance of black-box models like neural networks, and it is thus seen as a fundamental step towards their adoption in mission-critical applications (Loquercio et al., 2020). Different approaches, both deterministic and probabilistic, have been proposed, see Gawlikowski et al. (2021) for a recent survey. The probabilistic perspective is arguably more general and theoretically appealing. Certain methodologies like ensemble learning (Lakshminarayanan et al., 2017) and dropout (Srivastava et al., 2014), first introduced in a deterministic setting, are now better understood as approximate inference algorithms in a Bayesian probabilistic framework.
Most of the contributions on uncertainty quantification presented in the deep-learning literature involve static regression problems (typically from the UCI datasets) with feed-forward neural architectures and/or image classification problems (typically from the CIFAR dataset and variants thereof) with convolutional ones, see Maddox et al. (2019); Wilson and Izmailov (2020); Izmailov et al. (2021). To date, little attention has been devoted to sequential learning problems and in particular to non-linear dynamical systems modeling.
A notable exception is the recent contribution (Zhou et al., 2022), where learning of dynamical systems in neural input/output form is formulated in a Bayesian probabilistic framework. Compared to (Zhou et al., 2022), our work is focused on neural state-space models, which are arguably more suitable for downstream control applications (e.g. for model predictive control) and for analysis with standard system theoretic tools. Furthermore, the main objectives in (Zhou et al., 2022) are to select the relevant input regressors and to induce sparsity in the network, while our work is focused on uncertainty description and recognition of the out-of-distribution regime.
We obtain uncertainty bounds by framing the neural state-space identification problem in a Bayesian probabilistic setting and by deriving (approximate) posterior distributions for the neural network parameters and for its output predictions. From a technical perspective, we use the Laplace approximation (Bishop and Nasrabadi, 2006) to describe the parameter posterior distribution and exploit our recent results in Forgione et al. (2022) to speed up the required Hessian matrix computations. We show that the obtained uncertainty bounds, while not always calibrated, widen significantly when neural state-space models are used in an out-of-distribution regime.
Based on the obtained uncertainty description, we then introduce a new metric, called the surprise index, that, for a given trained model and a new input sequence, detects whether the model is suitable to predict the corresponding output, before collecting any new data. Thus, the surprise index may be used to assess beforehand whether the predictions generated by a model fed by a specific input signal can be trusted.
We demonstrate the effectiveness of our methodology on a variation of the Wiener-Hammerstein identification benchmark (Schoukens et al., 2009), conveniently modified to generate data from different regimes, and release the codes required to reproduce our results in the GitHub repository https://github.com/forgi86/sysid-neural-unc.
Methodology
Dataset and objective
We are given a dataset $\mathcal{D} = (u, y)$ with $N$ input samples $u_k \in \mathbb{R}^{n_u}$ and (possibly noisy) output samples $y_k \in \mathbb{R}$, collected from a dynamical data-generating system $S$.
Our goal is to estimate from $\mathcal{D}$ a neural state-space model $M$ of the unknown dynamics of $S$, which, for a new input sequence $u^*$, generates a prediction of the corresponding output $y^*$, plus an indicator of the predictions' reliability.
Given a suitable probabilistic neural model structure with a prior distribution defined over its parameters, the problem may be tackled through (approximate) statistical inference of the posterior predictive distribution (ppd) $p(y^* \mid u^*, \mathcal{D})$, which in turn may be used to generate output predictions with credible intervals.
In this paper, we indeed follow a probabilistic approach, bearing in mind that, due to the assumptions and approximations introduced to carry out the inference step efficiently, the obtained ppd and bounds may be somewhat inaccurate. Still, we aim at exploiting probabilistic reasoning and tools to obtain useful indicators of model predictions' reliability. These indicators should (at least) be able to detect when the model is operating in an extrapolation regime, and thus its predictions cannot be fully trusted.
The case of multi-input single-output systems is discussed to simplify exposition. However, the results can be extended straightforwardly to multi-input multi-output systems.
Model structure
We consider the following neural state-space model structure $M$:

$$x_{k+1} = F(x_k, u_k; \theta) \quad (1a)$$
$$\hat{y}_k = G(x_k; \theta) \quad (1b)$$
$$y_k = \hat{y}_k + e_k, \quad e_k \sim \mathcal{N}(0, 1/\beta) \quad (1c)$$
$$\theta \sim \mathcal{N}(0, 1/\tau), \quad (1d)$$

where $F$ and $G$ are feed-forward neural networks having compatible dimensions, $x_k \in \mathbb{R}^{n_x}$ is the state at time $k$, and $\theta \in \mathbb{R}^{n_\theta}$ is a vector of parameters to be estimated from data. The measured output $y_k \in \mathbb{R}$ is assumed to be corrupted by a zero-mean white Gaussian noise $e$ with precision $\beta$. The prior on the model parameters $\theta$ is also Gaussian, with precision $\tau$.
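As an illustration, the following is a minimal PyTorch sketch of such a state-space model structure. The class names, layer sizes, and the `simulate` helper are illustrative assumptions, and the direct linear input/output terms used in the paper's actual architecture are omitted for brevity:

```python
import torch
import torch.nn as nn

class StateUpdate(nn.Module):
    """F(x_k, u_k; theta): maps state and input to the next state."""
    def __init__(self, n_x, n_u, hidden=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_x + n_u, hidden), nn.Tanh(), nn.Linear(hidden, n_x))

    def forward(self, x, u):
        return self.net(torch.cat([x, u], dim=-1))

class Output(nn.Module):
    """G(x_k; theta): maps the state to the (noise-free) output."""
    def __init__(self, n_x, n_y=1, hidden=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_x, hidden), nn.Tanh(), nn.Linear(hidden, n_y))

    def forward(self, x):
        return self.net(x)

def simulate(F, G, x0, u):
    """Unroll the model over an input sequence u of shape (N, n_u)."""
    x, y_hat = x0, []
    for u_k in u:
        y_hat.append(G(x))
        x = F(x, u_k)
    return torch.stack(y_hat)
```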
Probabilistic derivations
Posterior parameter distribution
The posterior distribution $p(\theta \mid \mathcal{D})$ of $\theta$ conditioned on the observations $\mathcal{D}$ is given by the Bayes rule:

$$p(\theta \mid \mathcal{D}) = \frac{p(\theta)\, p(\mathcal{D} \mid \theta)}{p(\mathcal{D})}. \quad (2)$$
The functional form of the Gaussian prior distribution $p(\theta)$ on the model parameters $\theta$ is:

$$p(\theta) = \sqrt{\frac{\tau^{n_\theta}}{(2\pi)^{n_\theta}}} \exp\left(-\frac{\tau}{2} \sum_{i=0}^{n_\theta - 1} \theta_i^2\right), \quad (3)$$

while the likelihood $p(\mathcal{D} \mid \theta)$ is:

$$p(\mathcal{D} \mid \theta) = \sqrt{\frac{\beta^{N}}{(2\pi)^{N}}} \exp\left(-\frac{\beta}{2} \sum_{k=0}^{N-1} \big(y_k - \hat{y}_k(\theta)\big)^2\right). \quad (4)$$
Posterior predictive distribution
The posterior predictive distribution $p(y^* \mid u^*, \mathcal{D})$ given a new input sequence $u^*$ is:

$$p(y^* \mid u^*, \mathcal{D}) = \int_\theta p(y^* \mid u^*, \theta)\, p(\theta \mid \mathcal{D})\, d\theta. \quad (5)$$
Even when the approximate $p(\theta \mid \mathcal{D})$ has a simple structure, exact solution of the integral above is intractable and further approximations/simplifications are required to evaluate the ppd.
Approximate inference
We present in this section the approximate inference approaches used to obtain the parameter posterior $p(\theta \mid \mathcal{D})$ and the predictive posterior $p(y^* \mid u^*, \mathcal{D})$.
Laplace approximation of the parameter posterior
The parameter posterior $p(\theta \mid \mathcal{D})$ is approximated using Laplace's method (Bishop and Nasrabadi, 2006) centered around the maximum a posteriori (MAP) point estimate $\theta_{MAP}$. All the derivations and required computations are specified in this section.
MAP point estimate
To obtain the MAP estimate, we consider the negative logarithm of the posterior distribution $L(\theta) = -\log p(\theta \mid \mathcal{D})$:

$$L(\theta) = \underbrace{\frac{\beta}{2} \sum_{k=0}^{N-1} \big(y_k - \hat{y}_k(\theta)\big)^2}_{=E_{lik}(\theta)} + \underbrace{\frac{\tau}{2} \sum_{i=0}^{n_\theta - 1} \theta_i^2}_{=E_{prio}(\theta)} + \text{cnst}, \quad (6)$$
where cnst is a term that does not depend on $\theta$.
The MAP estimate $\theta_{MAP}$ is:

$$\theta_{MAP} = \arg\min_\theta L(\theta). \quad (7)$$
Computation of $\theta_{MAP}$ corresponds to a non-linear (regularized) least-squares problem, which for neural state-space models is usually tackled with stochastic gradient descent algorithms or variants thereof.
Laplace approximation
The Laplace approximation of the parameter posterior distribution centered around the MAP estimate is defined as:

$$p(\theta \mid \mathcal{D}) = \mathcal{N}(\theta_{MAP}, P_{\theta_{MAP}}), \quad (8)$$
where $P_{\theta_{MAP}}$ is the inverse of the Hessian of the negative log-posterior $L(\theta)$ evaluated at $\theta_{MAP}$:

$$P_{\theta_{MAP}}^{-1} = \left.\frac{\partial^2 L(\theta)}{\partial \theta^2}\right|_{\theta = \theta_{MAP}}. \quad (9)$$
The Hessian of $E_{prio}(\theta)$ has the simple functional form:

$$\frac{\partial^2 E_{prio}(\theta)}{\partial \theta^2} = \tau I, \quad (10)$$
while the Hessian of $E_{lik}(\theta)$ is:

$$\frac{\partial^2 E_{lik}(\theta)}{\partial \theta^2} = \beta \sum_{k=0}^{N-1} \frac{\partial \hat{y}_k}{\partial \theta}^{\!\top} \frac{\partial \hat{y}_k}{\partial \theta} + \beta \sum_{k=0}^{N-1} (\hat{y}_k - y_k) \frac{\partial^2 \hat{y}_k}{\partial \theta^2}. \quad (11)$$
According to the Gauss-Newton (GN) Hessian approximation (Wright et al., 1999), the expression above is dominated by the first term $\beta \sum_{k=0}^{N-1} \frac{\partial \hat{y}_k}{\partial \theta}^{\!\top} \frac{\partial \hat{y}_k}{\partial \theta}$, and the second contribution $\beta \sum_{k=0}^{N-1} (\hat{y}_k - y_k) \frac{\partial^2 \hat{y}_k}{\partial \theta^2}$ may be neglected. The GN approximation is accurate, for instance, when $\frac{\partial^2 \hat{y}_k}{\partial \theta^2}$ is small (i.e., model predictions are nearly affine), when $\hat{y}_k - y_k$ is small (which is typically the case for the optimal $\theta$ if the variance of $e_k$ is also small) and, more in general, when $\frac{\partial^2 \hat{y}_k}{\partial \theta^2}$ and $\hat{y}_k - y_k$ are uncorrelated (which is also expected for the optimized value of $\theta$, as the residual $\hat{y}_k - y_k$ is then close to the white measurement noise $e_k$).
Overall, the covariance matrix $P_{\theta_{MAP}}$ with the GN Hessian approximation is:

$$P_{\theta_{MAP}}^{-1} \approx \tau I + \beta \sum_{k=0}^{N-1} \frac{\partial \hat{y}_k}{\partial \theta}^{\!\top} \frac{\partial \hat{y}_k}{\partial \theta}. \quad (12)$$
Remark 1. The term $\beta \sum_{k=0}^{N-1} \frac{\partial \hat{y}_k}{\partial \theta}^{\!\top} \frac{\partial \hat{y}_k}{\partial \theta}$ in (12) is also a finite-sample approximation of the Fisher Information Matrix, which corresponds in frequentist statistics to the (asymptotic) precision of the maximum likelihood estimator (Van den Bos, 2007). In this sense, the methodologies of this paper are also applicable to the derivation of confidence intervals in a frequentist setting.
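As a minimal numerical sketch of (12), assuming the per-step gradients $\frac{\partial \hat{y}_k}{\partial \theta}$ have already been stacked into an $(N \times n_\theta)$ array (the function and argument names are illustrative):

```python
import numpy as np

def laplace_covariance(dy_dtheta, tau, beta):
    """Gauss-Newton Laplace covariance of eq. (12).

    dy_dtheta: (N, n_theta) array of per-step output gradients.
    tau, beta: prior and noise precisions.
    """
    n_theta = dy_dtheta.shape[1]
    hessian = tau * np.eye(n_theta) + beta * dy_dtheta.T @ dy_dtheta
    return np.linalg.inv(hessian)  # P_thetaMAP
```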
Computational aspects
A straightforward approach to obtain the gradients $\frac{\partial \hat{y}_k}{\partial \theta}$, $k = 0, \dots, N-1$, needed in (11) is to invoke $N$ independent back-propagation operations through the neural state-space model's unrolled computational graph, one at each time step. Overall, this naive approach requires a number of operations $O(N^2)$.
The computational cost can actually be lowered to $O(N)$ using the recursive methodology based on sensitivity equations first introduced by the authors in Forgione et al. (2022) and reported hereafter for completeness.
Let us introduce the state sensitivities $s_k = \frac{\partial x_k}{\partial \theta} \in \mathbb{R}^{n_x \times n_\theta}$. By taking the derivatives of the left- and right-hand sides of (1a) w.r.t. the model parameters $\theta$, we obtain a recursive equation describing the evolution of $s_k$:

$$s_{k+1} = J_{fx_k} s_k + J_{f\theta_k}, \quad (13)$$

where $J_{fx_k} \in \mathbb{R}^{n_x \times n_x}$ and $J_{f\theta_k} \in \mathbb{R}^{n_x \times n_\theta}$ are the Jacobians of $F(x_k, u_k; \theta)$ w.r.t. $x_k$ and $\theta$, respectively.
Let us now take the derivative of (1b) w.r.t. $\theta$:

$$\frac{\partial \hat{y}_k}{\partial \theta} = J_{gx_k} s_k + J_{g\theta_k}, \quad (14)$$

where $J_{gx_k} \in \mathbb{R}^{n_y \times n_x}$ and $J_{g\theta_k} \in \mathbb{R}^{n_y \times n_\theta}$ are the Jacobians of $G(x_k; \theta)$ w.r.t. $x_k$ and $\theta$, respectively, and $n_y$ is the number of outputs ($n_y = 1$ in this paper).
The Jacobians $J_{fx_k}$ and $J_{f\theta_k}$ can be obtained through $n_x$ back-propagation operations through $F$, thus at cost $O(n_x n_\theta)$. Similarly, $J_{gx_k}$ and $J_{g\theta_k}$ can be obtained through $n_y$ back-propagation operations through $G$ at cost $O(n_y n_\theta)$. Thus, the computational effort required to obtain $\frac{\partial \hat{y}_k}{\partial \theta}$ in (14) (given the previous sensitivity $s_{k-1}$) is $O((n_x + n_y) n_\theta)$. Overall, all the derivatives of interest $\frac{\partial \hat{y}_k}{\partial \theta}$, $k = 0, \dots, N-1$, are then computed at a total cost $O(N (n_x + n_y) n_\theta)$.
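The recursion (13)-(14) can be sketched in a few lines with `torch.func` (available in PyTorch >= 2.0). This is a hedged illustration using the hypothetical `StateUpdate`/`Output` modules sketched earlier, not the authors' implementation:

```python
import torch
from torch.func import functional_call, jacrev

def output_sensitivities(F, G, x0, u):
    """Per-step output gradients w.r.t. the parameters of F and G,
    propagated with the O(N) sensitivity recursion (13)-(14)."""
    pF, pG = dict(F.named_parameters()), dict(G.named_parameters())
    f = lambda p, x, u_k: functional_call(F, p, (x, u_k))
    g = lambda p, x: functional_call(G, p, (x,))

    # s[name]: dx_k/dtheta for each parameter tensor of F; zero at k = 0.
    s = {n: torch.zeros(x0.numel(), *p.shape) for n, p in pF.items()}
    x, grads = x0, []
    for u_k in u:
        J_gx = jacrev(g, argnums=1)(pG, x)        # (n_y, n_x)
        J_gth = jacrev(g, argnums=0)(pG, x)       # dict: direct dy/dtheta_G
        dy_F = {n: torch.tensordot(J_gx, s[n], dims=1) for n in s}
        grads.append((dy_F, J_gth))               # eq. (14), split by module
        J_fx = jacrev(f, argnums=1)(pF, x, u_k)   # (n_x, n_x)
        J_fth = jacrev(f, argnums=0)(pF, x, u_k)  # dict: dF/dtheta_F
        s = {n: torch.tensordot(J_fx, s[n], dims=1) + J_fth[n] for n in s}
        x = f(pF, x, u_k)                         # eq. (13) + state update
    return grads
```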
Linearization-based approximation of the ppd
Our approximation of the posterior predictive distribution is based on a linearization of the neural network model with respect to its parameters about the MAP estimate:

$$\hat{y}^*(\theta) \approx \hat{y}^*(\theta_{MAP}) + J^* (\theta - \theta_{MAP}), \quad (15)$$
where $J^*$ is the Jacobian of $\hat{y}^*$ with respect to the parameters $\theta$, computed for $\theta = \theta_{MAP}$. According to the approximation above, we obtain:

$$y^* \sim \mathcal{N}\Big(\hat{y}^*(\theta_{MAP}),\; \underbrace{J^* P_{\theta_{MAP}} J^{*\top} + \tfrac{1}{\beta} I}_{=\Sigma_{y^*}}\Big). \quad (16)$$
Note that the ppd's covariance matrix is the sum of a term $J^* P_{\theta_{MAP}} J^{*\top}$ related to the approximate knowledge of the true system parameters (our epistemic uncertainty), plus a term $\frac{1}{\beta} I$ related to measurement noise (the intrinsic aleatoric uncertainty). We are interested in particular in the diagonal entries of $\Sigma_{y^*}$, which correspond to the variance of the output predictions at the different time steps and thus represent their uncertainty. Specifically, we construct and visualize 99.7% credible intervals centered around the nominal prediction and having width $\pm 3$ times the square root of these diagonal entries.
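A minimal numpy sketch of the linearized ppd (16) and of the resulting 99.7% credible intervals, assuming the MAP prediction `y_map` and the stacked Jacobian `J_star` are available (names are illustrative):

```python
import numpy as np

def predictive_intervals(y_map, J_star, P_map, beta):
    """99.7% credible intervals from the linearized ppd of eq. (16).

    y_map: (N,) nominal prediction; J_star: (N, n_theta) Jacobian;
    P_map: (n_theta, n_theta) Laplace covariance; beta: noise precision.
    """
    # Diagonal of J* P J*^T (epistemic) plus 1/beta (aleatoric).
    var = np.sum((J_star @ P_map) * J_star, axis=1) + 1.0 / beta
    half_width = 3.0 * np.sqrt(var)
    return y_map - half_width, y_map + half_width
```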
Surprise index
The diagonal entries of $J^* P_{\theta_{MAP}} J^{*\top}$ are also of interest, as they correspond to the variance of the noise-free output predictions. In particular, a relatively large ratio between (the square root of) the $k$-th diagonal entry of the matrix and $\hat{y}^*_k$ indicates an unreliable prediction at time instant $k$, whose uncertainty is large compared to the predicted value itself. For a full sequence $u^*$ of length $N$, we introduce in this paper an aggregate surprise index $s(u^*)$ defined by:

$$s(u^*) = 100 \times \frac{\sum_{k=0}^{N-1} \sqrt{\big[J^* P_{\theta_{MAP}} J^{*\top}\big]_{kk}}}{\sum_{k=0}^{N-1} \big|\hat{y}^*_k(\theta_{MAP})\big|} \;(\%), \quad (17)$$

which measures the relative size of the uncertainty throughout the sequence $u^*$.
In (17), the subscript kk denotes the diagonal element of a matrix in the k-th row and column.
It is important to note that computation of $s(u^*)$ does not require the actual output $y^*$ and thus it can be carried out even without running an actual experiment on the real system. Therefore, $s(u^*)$ may be used to assess beforehand whether the model is expected to give reliable predictions when fed with the sequence $u^*$.
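A corresponding sketch of (17), reusing the epistemic variances from the snippet above (again, the function and argument names are assumptions):

```python
import numpy as np

def surprise_index(y_map, J_star, P_map):
    """Aggregate surprise index s(u*) of eq. (17), in percent."""
    epistemic_var = np.sum((J_star @ P_map) * J_star, axis=1)
    return 100.0 * np.sqrt(epistemic_var).sum() / np.abs(y_map).sum()
```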
Numerical example
In this section, we test the methodologies presented in the paper on a non-linear system identification problem. The developed software is based on the PyTorch deep-learning library and it is available in the GitHub repository https://github.com/forgi86/sysid-neural-unc. Computations are performed on a PC equipped with an AMD Ryzen 5 1600x processor, 32 GB of RAM, and an NVIDIA 1060 GPU.
We consider as true system a discrete-time Wiener-Hammerstein system with sampling frequency $f_s = 51200$ Hz, consisting of the series interconnection of a transfer function $G_1(z)$, a static non-linearity $f(\cdot)$, and a transfer function $G_2(z)$:

$$G_1(z) = \frac{0.010252 + 0.030757 z^{-1} + 0.030757 z^{-2} + 0.010252 z^{-3}}{1 - 2.151941 z^{-1} + 1.744729 z^{-2} - 0.510767 z^{-3}}$$
$$G_2(z) = \frac{0.008706 - 0.004596 z^{-1} - 0.004596 z^{-2} + 0.008706 z^{-3}}{1 - 2.574867 z^{-1} + 2.235716 z^{-2} - 0.652629 z^{-3}}$$
$$f(x) = \mathrm{elu}\left(-\tfrac{10}{11} x\right),$$

where $\mathrm{elu}(x) = e^x - 1$ for $x \le 0$ and $0$ otherwise. The Bode plots of $G_1$, $G_2$ and the static non-linearity $f(\cdot)$ are shown in Figure 1 and Figure 2, respectively. This Wiener-Hammerstein system is closely inspired by the dynamics of the benchmark of Schoukens et al. (2009), involving a real electronic circuit. In this paper, we prefer this synthetic Wiener-Hammerstein system to the original benchmark in order to be able to generate data from different dynamical regimes with ease. For analogy with Schoukens et al. (2009), inputs and outputs of our numerical example are assumed to be in Volts (V) hereafter.
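A minimal scipy-based sketch of this synthetic data generator, using the coefficients above; the function names, the noise seed, and the interpretation of the elu branches follow the text as written and are otherwise assumptions:

```python
import numpy as np
from scipy.signal import lfilter

# Numerator/denominator coefficients of G1(z) and G2(z) from the text.
b1 = [0.010252, 0.030757, 0.030757, 0.010252]
a1 = [1.0, -2.151941, 1.744729, -0.510767]
b2 = [0.008706, -0.004596, -0.004596, 0.008706]
a2 = [1.0, -2.574867, 2.235716, -0.652629]

def f_static(x):
    # f(x) = elu(-10/11 x), with elu(z) = exp(z) - 1 for z <= 0, 0 otherwise.
    z = -10.0 / 11.0 * np.asarray(x)
    return np.where(z <= 0, np.exp(np.minimum(z, 0.0)) - 1.0, 0.0)

def wh_system(u, sigma_e=5e-3, rng=np.random.default_rng(0)):
    """Simulate the Wiener-Hammerstein system and add measurement noise."""
    x = lfilter(b1, a1, u)   # linear block G1
    z = f_static(x)          # static non-linearity
    y0 = lfilter(b2, a2, z)  # linear block G2
    return y0 + sigma_e * rng.standard_normal(len(u))
```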
As for the neural state-space model (1), in line with previously published results on the benchmark (Beintema et al., 2021), $F$ and $G$ have a single hidden layer with 15 nodes and tanh static non-linearity, plus a direct linear input/output term. In total, the model has $n_\theta = 385$ parameters. We use a training dataset where the input is a 10000-sample multisine signal with flat spectrum in the frequency range [0 2] kHz and standard deviation 0.4 V, and the output is corrupted by a white Gaussian noise with standard deviation $\sigma_e = \sqrt{1/\beta} = 5 \cdot 10^{-3}$ V. Note that the input spectrum does not cover the transmission zero of the transfer function $G_2(z)$ located at approximately 5.5 kHz.
To compute the MAP estimate $\theta_{MAP}$ efficiently, the cost (6) is minimized over batches of sub-sequences extracted from the training data in random order, see Forgione and Piga (2020) for details. The batch size and sub-sequence length are both set to 256. Neural network parameter optimization is performed over 120 epochs¹ of the Adam algorithm followed by 4 epochs of L-BFGS, using the standard implementation and default settings of PyTorch. Overall, the optimization procedure takes 873 s.
Once $\theta_{MAP}$ is available, the posterior covariance $P_{\theta_{MAP}}$ is obtained according to the Laplace approximation (12). The time required to obtain $P_{\theta_{MAP}}$, which is largely dominated by the computation of the gradients $\frac{\partial \hat{y}_k}{\partial \theta}$, is 44 s using the recursive gradient computation method of Forgione et al. (2022), while it increases to 465 s with the naive implementation.
For model testing, we consider four scenarios where the input signal $u^*$ is: 1) a multisine with standard deviation 0.4 V and bandwidth [0 2] kHz (same as the training input); 2) a multisine with standard deviation 0.4 V and bandwidth [1 2] kHz; 3) a multisine with standard deviation 0.8 V and bandwidth [0 2] kHz; 4) a multisine with standard deviation 0.4 V and bandwidth [0 10] kHz. For each test set, we compute the nominal prediction $\hat{y}^*$ by simulating the state-space model (1) with $\theta = \theta_{MAP}$ and $e = 0$, and the approximate ppd according to (16). The approximate ppd is then used to obtain 99.7% credible intervals (having width $\pm 3$ times the square root of the diagonal entries of the approximate ppd's covariance matrix $\Sigma_{y^*}$) and the surprise index $s(u^*)$ according to (17).
We evaluate the performance of the nominal predictions in terms of the FIT index:
$$\mathrm{FIT} = 100 \times \left(1 - \sqrt{\frac{\sum_{k=0}^{N-1} (y^*_k - \hat{y}^*_k)^2}{\sum_{k=0}^{N-1} (y^*_k - \bar{y}^*)^2}}\right) (\%), \quad (18)$$

where $\bar{y}^*$ is the sample mean of the sequence $y^*$.
To evaluate the goodness of the credible intervals, we report their empirical coverage, namely the percentage of time steps where the actual output $y^*$ lies inside the intervals. A value close to 99.7% indicates well-calibrated intervals.
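For completeness, a minimal sketch of both metrics (function names are assumptions):

```python
import numpy as np

def fit_index(y_true, y_hat):
    """FIT index of eq. (18), in percent."""
    num = np.sum((y_true - y_hat) ** 2)
    den = np.sum((y_true - y_true.mean()) ** 2)
    return 100.0 * (1.0 - np.sqrt(num / den))

def coverage(y_true, lo, hi):
    """Empirical coverage: % of samples inside the credible interval."""
    return 100.0 * np.mean((y_true >= lo) & (y_true <= hi))
```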
Note that Signals 3 and 4 drive the system in dynamical ranges unseen during training and thus force the model to operate in an extrapolation regime. For these signals, we expect the FIT index to decrease and the uncertainty intervals to get wider. Wider uncertainty bounds result in a larger surprise index $s(u^*)$, which in turn should allow us to detect the FIT decrease without knowledge of the actual output $y^*$. The FIT index, surprise index, and coverage of the four test signals are reported in Table 1. We observe that the FIT and surprise indexes are indeed negatively correlated, as expected.
In Figures 3, 4, 5, 6, we show relevant time traces for the four test signals. In the top panel, we show the actual output $y^*$ (black line) together with the posterior mean $\hat{y}^*$ (blue line) and 99.7% credible intervals (shaded blue area). In the middle panel, we show the error signal $e = y^* - \hat{y}^*$ (red line), together with 99.7% credible intervals (shaded red area). We also show $\pm 3\sigma_e$ horizontal bands, corresponding to the aleatoric component of the output uncertainty (black lines). Finally, the bottom panel represents the input signal.
For Signals 1 and 2, the prediction quality is very high (black and blue lines overlapping in the top panel). Uncertainty bounds are not visible (as too narrow) in the top panel and can only be appreciated in the (magnified) middle one. In the middle panel, we also note that the uncertainty bounds are largely dominated by the aleatoric component (red shaded area mostly comprised within the $\pm 3\sigma_e$ bands). Furthermore, these bounds are well calibrated (the red line lies within the bounds for most of the time steps), as also indicated by the coverage indexes, which are close to the target value of 99.7%.
For Signal 3, the prediction quality decreases significantly in the time instants where the input/output samples are large (a condition not seen during training). The uncertainty bounds also expand in these regions and, remarkably, appear to be still rather well calibrated (i.e., the error $e$ lies indeed within the uncertainty region in most of the time steps, with a coverage index of 96.1%). Furthermore, the surprise index of 2.10% is significantly larger than the ones computed over Signals 1 and 2 (0.33% and 0.43%, respectively). Thus, the out-of-distribution regime is effectively detectable using the proposed methodology. For Signal 4, uncertainty bounds are also enlarged, even though they are clearly not well calibrated. Indeed, the error signal $e$ is too often out of the red shaded area, thus the latter is not a well-calibrated 99.7% credible interval, as also indicated by the low coverage of 80.6%. Nonetheless, the high surprise index $s(u^*) = 4.03\%$ alerts the user that the model is working in an extrapolation regime and consequently its performance will be low.
Conclusion
We have presented a viable approach for uncertainty quantification with neural state-space models. Based on the obtained uncertainty description, we have defined a surprise index that indicates whether the model predictions generated from a given input are expected to be reliable, i.e., close to the response of the true system. This preliminary work may be extended in different directions. First, other inference approximation techniques may be adopted to obtain a richer and more accurate characterization of the uncertainty. In this sense, efficient sampling techniques such as Hamiltonian Monte Carlo may be considered to overcome the limiting assumptions (e.g., uni-modality) of the currently used Laplace approximation. A challenge here is to devise scalable algorithms applicable to large neural-network models.
Furthermore, tools like the surprise index may be used to choose informative input signals to be used for model training/refinement. This could pave the way for experiment design and active learning in the context of system identification with neural state-space models.
Finally, to foster further research in uncertainty quantification and out-of-distribution recognition, specific benchmarks and performance metrics should be devised and shared with the system identification community.
Figure 1: WH system: Bode diagrams of $G_1(z)$ and $G_2(z)$.
Figure 2: WH system: static non-linearity $f(\cdot)$.
Figure 3: WH system: results on multisine signal 1.
Figure 4: WH system: results on multisine signal 2.
Figure 6: WH system: results on multisine signal 4.
Table 1: FIT index, uncertainty intervals coverage, and surprise index on the test datasets.

Signal        FIT (%)   coverage (%)   surprise (%)
multisine 1    98.1        99.2           0.33
multisine 2    97.7        98.6           0.43
multisine 3    93.9        96.1           2.10
multisine 4    87.8        80.6           4.03
¹ An epoch corresponds to the processing of all the contiguous sub-sequences of length 256 in the training dataset in random order.
Acknowledgement

This work was partially supported by the European H2020-CS2 project ADMITTED, Grant agreement no. GA832003.
References

Andersson, C., Ribeiro, A.H., Tiels, K., Wahlström, N., and Schön, T.B. (2019). Deep Convolutional Networks in System Identification. In 2019 IEEE 58th Conference on Decision and Control (CDC), 3670-3676. doi:10.1109/CDC40024.2019.9030219.

Beintema, G., Tóth, R., and Schoukens, M. (2021). Nonlinear state-space identification using deep encoder networks. In Learning for Dynamics and Control, 241-250. PMLR.

Bishop, C.M. and Nasrabadi, N.M. (2006). Pattern recognition and machine learning, volume 4. Springer.

Forgione, M., Muni, A., Piga, D., and Gallieri, M. (2022). On the adaptation of recurrent neural networks for system identification. arXiv preprint arXiv:2201.08660.

Forgione, M. and Piga, D. (2020). Model structures and fitting criteria for system identification with neural networks. In 2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT), 1-6. doi:10.1109/AICT50176.2020.9368834.

Gawlikowski, J., Tassi, C.R.N., Ali, M., Lee, J., Humt, M., Feng, J., Kruspe, A., Triebel, R., Jung, P., Roscher, R., et al. (2021). A Survey of Uncertainty in Deep Neural Networks. arXiv preprint arXiv:2107.03342.

Izmailov, P., Vikram, S., Hoffman, M.D., and Wilson, A.G.G. (2021). What are Bayesian Neural Network Posteriors Really Like? In International Conference on Machine Learning, 4629-4640. PMLR.

Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2017). Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Advances in Neural Information Processing Systems, 30.

Ljung, L., Andersson, C., Tiels, K., and Schön, T.B. (2020). Deep Learning and System Identification. IFAC-PapersOnLine, 53(2), 1175-1181.

Loquercio, A., Segu, M., and Scaramuzza, D. (2020). A General Framework for Uncertainty Estimation in Deep Learning. IEEE Robotics and Automation Letters, 5(2), 3153-3160.

Maddox, W.J., Izmailov, P., Garipov, T., Vetrov, D.P., and Wilson, A.G. (2019). A Simple Baseline for Bayesian Uncertainty in Deep Learning. Advances in Neural Information Processing Systems, 32.

Mavkov, B., Forgione, M., and Piga, D. (2020). Integrated neural networks for nonlinear continuous-time system identification. IEEE Control Systems Letters, 4(4), 851-856.

Peeters, L., Beintema, G.I., Forgione, M., and Schoukens, M. (2022). NARX identification using Derivative-Based Regularized Neural Networks. arXiv preprint arXiv:2204.05892.

Schoukens, J., Suykens, J., and Ljung, L. (2009). Wiener-Hammerstein benchmark. In Proc. of the 15th IFAC Symposium on System Identification (SYSID 2009).

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.

Van den Bos, A. (2007). Parameter estimation for scientists and engineers. John Wiley & Sons.

Wilson, A.G. and Izmailov, P. (2020). Bayesian Deep Learning and a Probabilistic Perspective of Generalization. Advances in Neural Information Processing Systems, 33, 4697-4708.

Wright, S., Nocedal, J., et al. (1999). Numerical optimization. Springer Science, 35(67-68), 7.

Wu, R.T. and Jahanshahi, M.R. (2019). Deep Convolutional Neural Network for Structural Dynamic Response Estimation and System Identification. Journal of Engineering Mechanics, 145(1), 04018125.

Zhou, H., Ibrahim, C., Zheng, W.X., and Pan, W. (2022). Sparse Bayesian Deep Learning for Dynamic System Identification. Automatica, 144, 110489.
| [] |
[
"Analysing Fairness of Privacy-Utility Mobility Models",
"Analysing Fairness of Privacy-Utility Mobility Models"
] | [
"Yuting Zhan ",
"Hamed Haddadi ",
"\nAFRA MASHHADI\nImperial College London\nUK\n",
"\nUniversity of Washington\nUSA\n"
] | [
"AFRA MASHHADI\nImperial College London\nUK",
"University of Washington\nUSA"
] | [] | Preserving the individuals' privacy in sharing spatial-temporal datasets is critical to prevent re-identification attacks based on unique trajectories. Existing privacy techniques tend to propose ideal privacy-utility tradeoffs, however, largely ignore the fairness implications of mobility models and whether such techniques perform equally for different groups of users. The quantification between fairness and privacy-aware models is still unclear and there barely exists any defined sets of metrics for measuring fairness in the spatial-temporal context. In this work, we define a set of fairness metrics designed explicitly for human mobility, based on structural similarity and entropy of the trajectories. Under these definitions, we examine the fairness of two state-of-the-art privacy-preserving models that rely on GAN and representation learning to reduce the re-identification rate of users for data sharing. Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gain. We conclude that the tension between the re-identification task and individual fairness needs to be considered for future spatial-temporal data analysis and modelling to achieve a privacy-preserving fairness-aware setting. | 10.48550/arxiv.2304.06469 | [
"https://export.arxiv.org/pdf/2304.06469v1.pdf"
] | 258,108,289 | 2304.06469 | 501d7f7cc453714ca03560c176ebbbeed588c543 |
Analysing Fairness of Privacy-Utility Mobility Models
Yuting Zhan
Hamed Haddadi
Afra Mashhadi
Imperial College London, UK
University of Washington, USA
Preserving the individuals' privacy in sharing spatial-temporal datasets is critical to prevent re-identification attacks based on unique trajectories. Existing privacy techniques tend to propose ideal privacy-utility tradeoffs, however, largely ignore the fairness implications of mobility models and whether such techniques perform equally for different groups of users. The quantification between fairness and privacy-aware models is still unclear and there barely exists any defined sets of metrics for measuring fairness in the spatial-temporal context. In this work, we define a set of fairness metrics designed explicitly for human mobility, based on structural similarity and entropy of the trajectories. Under these definitions, we examine the fairness of two state-of-the-art privacy-preserving models that rely on GAN and representation learning to reduce the re-identification rate of users for data sharing. Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gain. We conclude that the tension between the re-identification task and individual fairness needs to be considered for future spatial-temporal data analysis and modelling to achieve a privacy-preserving fairness-aware setting.
shown to offer higher service quality to neighbourhoods with more white people [5]. However, in such contexts, only a handful of recent studies exist that examine the fairness of location-based systems [16,42,43], with little consensus on how fairness should be defined and measured for spatial-temporal applications.
In this work, we aim to measure and evaluate the fairness of the location privacy-preserving algorithms applied to mobility traces. We seek to answer the research question as to whether the outcome of the PUT models satisfies fairness.
Extending the notion of fairness from the broader machine learning literature, fairness in location privacy-preserving mechanisms can also be grouped into two categories: individual fairness and group fairness. Individual fairness ensures that similar users receive similar outcomes with respect to the specific privacy-aware inference tasks [4]. That is, whether these privacy-aware models preserve the privacy and service quality of similar users equally. In order to do so, we first posit a set of similarity metrics to mathematically denote a notion of user similarity grounded in the human mobility literature [15], in terms of both the structural similarity of their heatmap images and the entropy of their trajectories. On the other hand, group fairness ensures the independence between the model outcome and a sensitive attribute (e.g., gender, age, ethnicity) of interest. That is, it ensures equal privacy gain and utility loss over multiple groups.
We examine two machine learning-based privacy-preserving approaches (i.e., TrajGAN [33] and Mo-PAE [44]), compared to the original inference tasks that optimize only for privacy or for utility. We evaluate their fairness on two real-life mobility datasets: Geolife [46] and MDC [25]. Our results indicate that both TrajGAN and Mo-PAE do not guarantee individual fairness; users with similar trajectories might receive different privacy gain outcomes, so that the individual fairness criteria are violated in these location privacy-aware settings. More specifically, we observe that for users with similar traces, even when the outcome of the prediction task is identical, the privacy gains amongst those users are highly different, leading to some users not benefiting from obfuscation as others do. Different to the highlights of individual fairness, the results on group fairness of privacy-aware models show that there is no demographic disparity in the privacy and prediction outcomes. However, as we discuss, this observation highly reflects the socio-cultural settings where these traces have been collected and is less of a by-product of the privacy-preserving models. The contributions of our paper are as follows:
• We theoretically denote the notion of similarity for tackling the measurement of individual fairness in spatial-temporal datasets.
• We offer a set of individual fairness metrics specifically defined based on mobility characteristics that can help the broader research community measure fairness for spatial-temporal applications.
• We examine the privacy-preserving algorithms in terms of both individual fairness and group fairness on two representative mobility datasets, and show their deficiencies in accounting for fairness can lead to undesired consequences.
• We systematically discuss why individual fairness and group fairness are competing in the privacy-aware setting.
RELATED WORK
Fairness in Machine Learning
Literature on fairness in machine learning (Fair-ML) tends to focus on the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics [30]. The majority of fairness research strives to avoid decisions made by automated systems being skewed toward advantaged groups or individuals. The authors of [15] proposed a framework for understanding different definitions of fairness through two views of the world: i) we are all equal (WAE, mostly ensuring group fairness), and ii) what you see is what you get (WYSIWYG, mostly ensuring individual fairness). The framework shows that the fairness definitions and their implementations correspond to different axiomatic beliefs about the world, described as two fundamentally incompatible worldviews. A single algorithm cannot satisfy either definition of fairness under both worldviews [15].
The most adopted metrics for fairness in machine learning are widely based on the WAE assumption and denoted as group fairness, which is also known as statistical parity and demographic parity [9]. These metrics aim to ensure that there is independence between the predicted outcome of a model and sensitive attributes such as age, gender, and race. Where variations of statistical parity exist, Fair-ML concentrates on relaxations of this measure, ensuring that groups defined by sensitive and non-sensitive attributes meet the same misclassification rate (false negative rate, also known as equalized odds [17]) or an equal true positive rate (also known as equal opportunity [17]).
In the context of mobility data and its applications, such as equitable transportation, research attention has also mainly been devoted to group fairness. Transportation equity heavily employs statistical tests for equity analysis, which are appropriate for discovering unfairness [43]. Such a metric is often defined based on census tract information, which offers an aggregate demographic characteristic of the residing population. The authors of [42] defined fairness in terms of the region-based fairness gap and assess the gap between mean per-capita ride-sharing demand across groups over time. The two metrics differ from each other: one is based on a binary label associated with the majority of the sub-population (e.g., white), and the other is based on a continuous distribution of the demographic attributes. Similarly, the authors of [18] proposed a graph-based approach for integrating group-based (census) information into e-scooter demand prediction. Through the integration of an optimization regularizer, they showed that it is possible for their model to jointly learn the flow patterns and socio-economic factors, and return socially-equitable flow predictions. Hosford et al. [20] investigated the equity of access to bike sharing in multiple cities in Canada. Ge et al. [16] studied racial and gender discrimination in the expanding transportation network companies. These handfuls of recent works all focus on group-based fairness metrics and collective methods (e.g., demand or flow prediction).
On the other hand, individual fairness claims that similar individuals should be treated similarly with respect to the target task [9]. For example, in making hiring decisions, the algorithm has to possess perfect knowledge for comparing the "qualification" of two individuals. In most cases, the difficulty with individual fairness lies in the notion of measuring similarity. For example, the authors of [43] used the population and employment density of each city area for achieving individual fairness in bike-sharing demand prediction. The difficulty, again, lies in the fact that there is often a lack of perfect knowledge to determine the similarity in demand between two areas. In the broader context of spatial-temporal data and applications, definitions of mobility similarity are almost non-existent, as are definitions of individual fairness for spatial-temporal data. Although previous work in the fairness literature [6] has examined the boundary of fairness and privacy, it has been applied to low-dimensional datasets (e.g., COMPAS) that differ greatly from complex human mobility data.
In this work, we offer a new perspective on how to measure individual fairness metrics defined based on the literature on mobility and examine its application in assessing the fairness of the privacy-preserving algorithms applied to mobility traces.
Privacy Methods for Spatial-Temporal Data
Large-scale human mobility data contain crucial insights for understanding human behaviour but are hard to share in non-aggregated form due to their highly sensitive nature. Decades of research on privacy have examined various anonymization mechanisms on human trajectories [1,35,41]. A mobility privacy study conducted by De Montjoye et al. [8] illustrates that four spatial-temporal points are enough to identify 95% of the individuals at a certain granularity, demonstrating the necessity of anonymization mechanisms against re-identification attacks. Previous work, ranging from k-anonymity [1] and differential privacy [35,41] to information-theoretic metrics [32,45], explores scientific guarantees that the subjects of the data cannot be re-identified while the data remain practically useful. More recently, PUT models based on machine learning, which simultaneously aim to optimize for data privacy protection and utility, are emerging.
In these lines of work, researchers have focused on the objective of training neural network models that optimize for reducing privacy leakage risk of individual trajectories while at the same time minimizing the depreciation in the mobility utility. These models have been shown to be superior to differential privacy techniques. In this paper, we selected two machine learning-based PUT models based on two different strategies of GAN and Representation
Learning, but both with promising high performance in terms of both utility and privacy. These two PUT models mainly focus on temporal correlations in time-series data and aim to reduce the user re-identification risk (i.e., privacy) while minimizing the downgrade in the accuracy of mobility prediction task (i.e., utility). We describe the details of these two privacy-aware spatial-temporal models:
TrajGAN [33]: it is an end-to-end deep learning model to generate synthetic data that preserves the real trajectory data's essential spatial, temporal, and thematic characteristics. Compared with other standard geo-masking methods, TrajGAN can better prevent users from being re-identified. TrajGAN claims to preserve essential spatial and temporal characteristics of the original data, verified through statistical analysis of the generated synthetic data distributions, which is in a line with the data utility assessment based on the mobility prediction task in our work. Hence, we train a
TrajGAN-based PUT model to evaluate the mobility predictability and privacy protection of synthetic data generated by TrajGAN.

Mo-PAE [44]: it is a privacy-preserving adversarial feature encoder. In contrast to TrajGAN, which aims to generate synthetic data, Mo-PAE trains an encoder that forces the extracted representations f to convey maximal information about data utility while minimizing private information about user identities via adversarial learning. It consists of a multi-task adversarial network to learn an LSTM-based encoder, which can generate optimized feature representations f that lower the privacy disclosure risk of user identification information (i.e., privacy) while improving the mobility prediction accuracy (i.e., utility) concurrently.
FAIRNESS DEFINITION AND METRICS
In this section, we first define the mathematical representation of fairness in spatial-temporal applications before we incorporate it into our analysis.
Formulation of the Problem
In this work, we aim to measure and evaluate the fairness of privacy-preserving algorithms applied to mobility traces. We seek to determine whether these models equally preserve the privacy and inference accuracy of similar users, and whether fairness and privacy can benefit from a privacy-preserving model simultaneously, laying a theoretical foundation for further research on privacy-preserving fairness-aware mechanisms for human mobility.
Both individual- and group-based fairness are discussed.
We first introduce some basic notations and abbreviations utilized in this work: individuals are labelled as $u$; if individuals $u_i$ and $u_j$ are similar, we write $u_i \backsim u_j$; sensitive or protected attributes are denoted as $S$; raw data without sensitive attributes is denoted as $X$; $Y$ is the ground-truth label for a specific inference task and $Y'$ is the predicted one, which is the variant that depends on $X$ and $S$. The true positive rate (i.e., TPR, recall, or sensitivity) is utilized to judge the performance of the multi-categorical classifiers, and refers to the proportion of samples that should be predicted as positive and indeed receive a positive result. TPR is also utilized to report the inference tasks' quality of the examined models and is denoted as task accuracy.
Individual Fairness
Individual fairness [9] states that individuals who are similar with respect to a specific task should be treated similarly (i.e., $Y'_i \backsim Y'_j$ when $u_i \backsim u_j$) [34]:

$$P(Y' \mid X_i, S_i, Y_i) = P(Y' \mid X_j, S_j, Y_j) \quad (1)$$
As we have mentioned in Section 2.1, the difficulty with individual fairness lies in the notion of measuring similarity.
To measure individual fairness in the context of spatial-temporal data, we need two sets of definitions, corresponding to i) the similarity between users' trajectories; and ii) the similarity of the outcome of the PUT models, as well as their generalizability for different mobility datasets and PUT models. We define each next:
Similarity of Trajectories.
Grounded in the mobility literature [13,29,38], we mathematically denote the notion of trajectory similarity based on i) the structural similarity index of mobility heatmap images; and ii) the entropy of the trajectories.
Structural Similarity Index Measure (SSIM): SSIM was initially designed to quantify image quality degradation caused by processing, such as data compression or losses in data transmission, by leveraging the differences between a reference image and the processed image [40]. To apply SSIM metrics in this work, we construct heatmap images from the raw geo-located data with the methodology proposed by [13]. Figure 1 shows some sample heatmap images with spatial granularity coarsening from 50 meters to 900 meters from left to right. These heatmap images structurally represent mobility features extracted from mobility traces, using pixel intensity to encode the frequency of visits to a given area; hence, the brighter pixels denote the more frequently visited locations of the user. SSIM has been shown to be a well-suited metric for computing image similarity specifically when applied to mobility heatmap images [13,29]. Unlike the Mean Square Error, the SSIM metric has been shown not to be significantly impacted by changes in luminosity and contrast.
In this work, we formulate the SSIM measure as the perceptual difference between two similar users' heatmap images (see the Appendix for full definitions). We then leverage the integrated heatmap image, which combines all user trajectories, to calculate an effective SSIM index that indicates the overall trajectory similarity of users. The local SSIM values between an individual heatmap and the integrated one are computed pixel-wise; a mask is utilized to lower the impact of the unreached area, that is, only the swept area in the integrated heatmap image is selected for further analysis. Hence, the average SSIM value of the selected points is what we define as the effective SSIM index. Additionally, as this metric relies on heatmap images, it is highly influenced by spatial granularity, where each pixel in the image corresponds to the spatial boundary of the data. Intuitively, in Figure 1, as the granularity coarsens, the trajectories become blurry and, thus, more similar. The impact of the spatial granularity on the SSIM index is discussed in Section 5.1.1.
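A minimal scikit-image sketch of this effective SSIM computation, assuming the heatmaps have already been rasterised and normalised to [0, 1] (the function name and the binary sweep mask are illustrative assumptions):

```python
import numpy as np
from skimage.metrics import structural_similarity

def effective_ssim(hm_user, hm_all):
    """Mean local SSIM between a user heatmap and the integrated heatmap,
    restricted to the swept area of the integrated image."""
    _, ssim_map = structural_similarity(hm_user, hm_all,
                                        full=True, data_range=1.0)
    mask = hm_all > 0  # keep only pixels visited by at least one user
    return ssim_map[mask].mean()
```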
Entropy of Trajectories (EOTs):
Mobility literature defines the highest potential accuracy of predictability of any individual, termed "maximum predictability" [27]. Maximum predictability is determined by the entropy of a person's trajectory information (e.g., frequency, sequence of location visits, etc.). Hence, similar characteristics of users' spatial-temporal patterns can be captured by leveraging the entropy of their trajectories. In this paper, we conclude and define four types of entropy to measure trajectory similarity for spatial-temporal applications, denoted SE, LE, HE, and AE; the first two follow standard definitions (see the Appendix), while the latter two are described below. iii) Heatmap Entropy (HE): the entropy of the users' heatmap images. In contrast to the aforementioned entropy models, we define a two-dimensional entropy to quantify the irregularity (i.e., unpredictable dynamics) of the user's heatmap image. The entropy of trajectory heatmap images is calculated using the two-dimensional sample entropy method [37]. In a trajectory heatmap image, the features are extracted by accounting for the spatial distribution of pixels in fixed-length square windows. iv) Actual Entropy (AE): the entropy capturing the entire spatial-temporal order present in a user's mobility pattern. To capture AE, the authors of [38] proposed an actual entropy model using the Lempel-Ziv algorithm. Different to other types of entropy, AE depends not only on the frequency of visited locations but also on the order in which the nodes were visited and the time spent at each location [38]. In this work, the given area is segmented using structured grids, where each grid is initialized as 0. Then the visited locations and whether the person reached the cell previously are tracked.
If the person visits an unreached cell, the location is marked as 1, generating time-series binary data to characterize the trajectory.
See the Appendix for full definitions and related equations of these four different entropies.
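As an illustration of the actual entropy (AE), the following is a minimal sketch of the Lempel-Ziv-based estimator commonly used for this purpose [38], applied to the binary visitation sequence described above; the base-2 logarithm (entropy in bits) is an assumption:

```python
import numpy as np

def actual_entropy(seq):
    """Lempel-Ziv estimate of the actual entropy of a symbol sequence."""
    s = ''.join(str(c) for c in seq)  # e.g. binary visitation string
    n = len(s)
    lambdas = []
    for i in range(n):
        # Length of the shortest substring starting at i not seen before i.
        k = 1
        while i + k <= n and s[i:i + k] in s[:i]:
            k += 1
        lambdas.append(k)
    return n * np.log2(n) / sum(lambdas)
```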
Similarity of Users.
With the aforementioned definition of trajectory similarity, we mathematically define users with similar trajectories as similar users via two techniques: i) $\tau$-thresholding: setting a threshold $\tau$ to filter similar users based on their trajectories' similarity. To be specific, if the trajectory similarity of $u_i$ and $u_j$ is greater than a threshold $\tau$, this pair of users is selected as similar users, that is $u_i \backsim u_j$.
ii) clustering: grouping similar users together via clustering techniques. We use k-means clustering to cluster users based on their SSIM and EOTs features. We apply the Elbow and Silhouette methods [26] to determine the number of clusters (k values), so that the resulting clusters group highly similar users together (a minimal sketch of this procedure is given below).
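A hedged scikit-learn sketch of this clustering step; the feature-matrix layout (one row per user, one column per SSIM/EOTs metric) and the candidate range of k are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_users(features, k_range=range(2, 8)):
    """Cluster users on standardized SSIM/EOTs features; pick k by silhouette."""
    X = StandardScaler().fit_transform(features)
    scores = {k: silhouette_score(
                  X, KMeans(n_clusters=k, n_init=10,
                            random_state=0).fit_predict(X))
              for k in k_range}
    best_k = max(scores, key=scores.get)
    return KMeans(n_clusters=best_k, n_init=10,
                  random_state=0).fit_predict(X)
```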
Similarity of Outcome.
To understand whether users with similar trajectories receive similar outcomes from the models, we first need to define what it means to receive a similar outcome mathematically. As the objective of the PUT models is to optimize privacy gain and minimize utility loss, we consider privacy gain and utility gain as positive outcomes. After selecting the similar users, we then measure their relative difference in a given outcome (privacy gain or utility) as

$$\Delta_{outcome}(u_i, u_j) = 1 - \frac{\min\big(outcome(u_i), outcome(u_j)\big)}{\max\big(outcome(u_i), outcome(u_j)\big)}.$$

To measure the fairness of systems as a whole for each model and outcome, we report the percentage of user pairs for whom fairness was violated (i.e., violation% or V%). As we will show, in our experiments, we set $\tau = 0.8$ to correspond to users with at least 80% similarity of trajectory, which imposes the model's outcome to be within 20% difference between the similar users. The choice of $\tau = 0.8$ is based on various works in the fairness literature [2,12]. We discuss the impact of this threshold on policy making in the Discussion section of this article.
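A minimal sketch of this pairwise violation-rate computation, assuming a precomputed user-by-user similarity matrix and strictly positive per-user outcome scores (both assumptions of this illustration):

```python
import itertools
import numpy as np

def violation_rate(sim, outcome, tau=0.8, delta_max=0.2):
    """% of similar user pairs whose outcome difference exceeds delta_max.

    sim: (n, n) trajectory-similarity matrix; outcome: (n,) positive scores.
    """
    viol, total = 0, 0
    for i, j in itertools.combinations(range(len(outcome)), 2):
        if sim[i, j] > tau:
            total += 1
            lo, hi = sorted((outcome[i], outcome[j]))
            viol += (1.0 - lo / hi) > delta_max
    return 100.0 * viol / max(total, 1)
```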
Group Fairness
Different from individual fairness, which leans heavily on the similarity definition, group fairness has been vastly discussed and shares a systematic analysis approach in the broader Fair-ML literature. In this work, we bridge the gap between the standard group fairness metrics and the specific privacy-preserving mechanisms of spatial-temporal data.
Group fairness, also referred to as Demographic Parity [15], states that demographic groups should receive similar decisions, inspired by civil rights laws in different countries [3]. To be specific, group fairness argues that a disadvantaged group (in terms of the sensitive attributes) should receive similar treatment to the advantaged group, that is:

$$P(Y' = 1 \mid S = 0, Y = 1) = P(Y' = 1 \mid S = 1, Y = 1) \quad (2)$$
It is worth noting that PUT spatial-temporal models are by definition group-unaware, meaning that a sensitive attribute (e.g., race or gender) is not an explicit input feature to these models. However, specific demographic groups of users may exhibit certain properties in their mobility behaviour (e.g., students) that could still impact the outcome of the PUT models. For instance, age and employment status can highly influence people's day-to-day trajectories. A user whose trajectory data is limited to their home and office locations could be highly predictable by the PUT model, but also highly re-identifiable (with low privacy gain). This means the notion of group fairness in the context of this study is highly dependent on the examined dataset. We elaborate more on this discussion in Section 6.
In order to quantify group fairness in a more statistical approach, the group fairness score (GFS) for spatial-temporal data is calculated as the disparate impact for disadvantaged groups:

$$\mathrm{GFS} = \frac{P(Y' = 1 \mid S = s_{dis}, Y = 1)}{P(Y' = 1 \mid S = s_{adv}, Y = 1)}, \quad (3)$$

where $s_{dis}$ and $s_{adv}$ denote the disadvantaged and advantaged subgroups, respectively.
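A minimal sketch of (3) computed from per-user labels; the array layout and the percentage scaling (matching the GFS values reported later) are assumptions:

```python
import numpy as np

def group_fairness_score(y_true, y_pred, groups, disadvantaged, advantaged):
    """Disparate-impact ratio of eq. (3), expressed in percent."""
    def tpr(g):
        # True positive rate restricted to subgroup g.
        mask = (groups == g) & (y_true == 1)
        return np.mean(y_pred[mask] == 1)
    return 100.0 * tpr(disadvantaged) / tpr(advantaged)
```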
EXPERIMENT SETUP
In this section, we describe the datasets we used to evaluate the fairness of PUT models and the steps we took to set up the PUT models for examination.
Datasets
In order to evaluate the fairness of the examined models, we use two datasets that the original papers used to assess the privacy level of their models. As mentioned in Section 3.2.1 and illustrated in Figure 1, as the granularity coarsens, the trajectories become blurry and thus more similar to each other. Figure 2 confirms this observation by illustrating the SSIM- and EOTs-based similarity of all the users for varying spatial granularity in both datasets. As the spatial granularity coarsens, we observe an increase in the SSIM values, with users becoming more similar to each other. Furthermore, as the different types of entropy consider different features of the spatial-temporal data, Figure 2 presents the expected similarity of users for the various EOTs-based measures. In addition to the distribution of the entropy values presented in Figure 2 for each dataset, we observe that across both datasets, SSIM along with SE and AE correspond to the most relaxed measures of similarity, while LE and HE correspond to stricter measures. The corresponding percentage of user pairs that meet each similarity criterion is described in Table 1.
Original Properties of the Trajectory
Before describing the privacy and utility trade-off for mobility trajectories under the PUT models, we first give brief definitions of two popular inference tasks (i.e., user re-identification and mobility prediction), which are also applied to assess the privacy gain and utility decline in the PUT models we discussed. These two popular inference tasks are named the original tasks in this paper, where "original" denotes the nature of the data before being processed by any privacy-aware model. These original tasks are leveraged to assess the native data characteristics in terms of user re-identification (UR) and mobility predictability (MP), respectively. See the Appendix for full definitions.
FAIRNESS ANALYSIS
In this section, we present our analysis of whether the PUT models can be considered fair. To do so, we analyze these models in terms of individual fairness and group fairness. The similarity applied for individual fairness is defined by SSIM and EOTs, while group fairness is assessed by grouping users based on demographic attributes such as gender, age, and employment status.
Individual Fairness
The metrics of trajectories' similarity are crucial for quantifying individual fairness. As defined in Section 3.2, trajectory similarity can be quantified by SSIM and EOTs. In this section, we discuss individual fairness with two different similarity quantification approaches. First, similar users are discriminated based on $\tau$-thresholding of the SSIM and EOTs metrics directly.
Second, the k-means clustering approach, based on the characteristics of SSIM and EOTs aforementioned, is leveraged to classify similar users. The user pair is defined to achieve individual fairness when the outcome difference (Δ or Δ ) between and is within 20%. Table 1 shows the percentage of user pairs that commit fairness violation (i.e., V% = % of (Δ >0.2)). For instance, in Table 1 Overall, individual fairness is not achieved in the two selected PUT models, especially for the unfairness of the privacy gain, which is generally higher than the utility decline. When comparing two different privacy models in a row, TrajGAN achieves less fairness violation rate than Mo-PAE in both privacy gain and utility decline outcomes. For instance, in the MDC dataset, when 45.26% and 29.42% of user pairs commit fairness violations in privacy gain and utility decline, respectively, the Mo-PAE reports twice as many fairness violations for both outcomes. While both the Geolife and MDC data exhibit individual unfairness, the Geolife is worse in both the PUT models and the accuracy of the original tasks. In both original tasks, Geolife's unfairness rate is as high as 60%, and this inequity is exacerbated when with PUT models. In contrast to Geolife, the performance of the MDC in the original tasks conforms to the definition of individual fairness, that is, the performance difference of task accuracy in MDC is within 20% in both user re-identification tasks and mobility prediction tasks.
Similarity Based on ε-Thresholding.
Impact of Spatial Granularity on Similarity: After the overall comparison of threshold metrics, we discuss the model discrepancy when trajectory similarity is based on the SSIM index under varying granularity. As a crucial metric for distinguishing trajectory similarity, the SSIM index can be affected by different parameters, which results in subtle performance disparities in the quantification of individual fairness. The spatial granularity of the trajectory is the most important among these parameters. These disparities can be intuitively observed in the heatmaps (Figure 1). In contrast to the SSIM, the spatial granularity has less impact on the different types of entropy; hence, they are not discussed here.
Table 2. K-means-clustering-based individual fairness among diverse models and datasets (columns: Metrics, Cluster Size, and V% of (DIFF>0.2) for the Original, Mo-PAE, and TrajGAN models; the per-cluster rows are not reproduced here). The numbers present the percentage of users for whom individual fairness was violated based on their difference in the outcome being greater than 0.2. The fair instances are highlighted in italic font. The maximum/minimum instances of each column are highlighted in bold font.
Similarity Based on K-means Clustering.
As an alternative to the results based on similarity thresholding, Table 2 demonstrates the results of individual fairness based on the clustering technique described in Section 3.2.3.
Applying the Elbow and Silhouette methods, we set the number of clusters (k) to 4 and 5 for MDC and Geolife, respectively; a sketch of this selection follows below. For each cluster, the table reports the percentage of users whose individual fairness was violated for a given outcome and under various models. More precisely, the results presented here indicate that the original model that targets a single task (prediction or privacy) is able to meet the individual fairness criteria for the MDC dataset.
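The cluster selection just described could be sketched as follows with scikit-learn; the feature layout (one row of entropy/SSIM statistics per user) and the function name are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k_and_cluster(features, k_range=range(2, 9), seed=0):
    """Pick k by the silhouette criterion and return the cluster labels.

    features : (n_users, n_features) array, e.g. per-user [SE, LE, HE, AE].
    """
    best = (None, -1.0, None)                       # (k, silhouette, labels)
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
        score = silhouette_score(features, labels)  # higher is better
        if score > best[1]:
            best = (k, score, labels)
    return best[0], best[2]
```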
We can observe that in the case of the Mo-PAE model, the privacy gain exhibits high variation across users in the same clusters. Even in the cases where the model satisfies individual fairness by performing similarly in terms of utility decline (clusters 2 and 4 of MDC and all clusters in Geolife), the privacy gains of those users are very different from each other.
Group Fairness
Group fairness states that groups across different sensitive attributes should receive similar outcomes. To be specific, group fairness argues that a disadvantaged group should receive similar treatment to the advantaged group. Figure 4 presents the discrepancy of the privacy gain from the two PUT models for different demographic groups, and Figure 5 presents the utility decline. We observe that both Mo-PAE and TrajGAN perform equally for the different gender attributes, as shown in Figure 4a, where the orange boxes (labelled as All Groups) for both models are very small. That is, while the privacy gain varies across individuals within the same gender, the model achieves group fairness when grouping individuals by gender. The same observations can be made for age and employment status, where we see bigger differences across the classes than for gender, but they still achieve group fairness as Δ < 20%. Similarly, in Figure 5, we can observe that both models equally meet the group fairness criteria on the utility decline.
In order to quantify the group fairness of the disadvantaged groups in a more statistical way, the results of the group fairness score (GFS) are shown in Table 3. For instance, among the age groups, the subgroup with ages between 22 and 27 (i.e., "22-27") is regarded as the advantaged group, as it has the dominant user number among all age groups. The other age groups' GFSs are calculated based on the disparate impact between them and the advantaged group. All GFSs are then compared against the fairness threshold of 0.8, as defined in Section 3.2.3; that is, GFS ≥ 80% indicates fair treatment of the disadvantaged group, and GFS < 80% indicates unfair treatment. For example, the result of the "28-33" group (i.e., GFS = 98.65%) indicates that the model satisfies group fairness, as 98.65% > 80%.
Table 3. Group fairness scores (GFS) of three models with different demographic attributes. GFS ≥ 80% indicates fair treatment of the minority subgroup; GFS < 80% indicates unfair treatment.
In conclusion, except for two subgroups with age attributes (i.e., "<21" and ">39") violating the four-fifths rule, the other subgroups satisfy group fairness. Finally, it is worth noting that the results presented here are highly dependent on the studied dataset, as we discuss in the next section. A minimal sketch of the disparate-impact-style GFS computation is given below.
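Assuming the GFS is the disparate-impact ratio of group means (the exact formula is defined in the paper's Section 3.2.3, which we do not reproduce here), a minimal sketch could look like this; names and data layout are illustrative.

```python
import numpy as np

def group_fairness_scores(outcome, group, advantaged):
    """GFS per group: mean outcome of the group divided by that of the
    advantaged group; GFS >= 0.8 passes the four-fifths rule."""
    outcome, group = np.asarray(outcome, float), np.asarray(group)
    adv_mean = outcome[group == advantaged].mean()
    return {g: outcome[group == g].mean() / adv_mean for g in np.unique(group)}

# Example: GFS of each age group relative to the dominant "22-27" group.
scores = group_fairness_scores(
    outcome=[0.62, 0.60, 0.58, 0.61],
    group=["22-27", "28-33", "<21", "22-27"],
    advantaged="22-27")
```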
DISCUSSION
In this section, we describe the limitations and implications of our work and discuss possible future directions.
Limitation
Despite our efforts, the presented work also has its limitations. Firstly, collected mobility datasets are often biased, as they only represent the subset of the population who took part in the data collection. In many cases, the users are limited to students or those affiliated with the research team that collected the dataset. This limitation means the examined trajectories are not representative of everyone's mobility behaviour. Furthermore, the demographics of the participants are also limited in terms of age and socio-economic diversity.
Secondly, in our paper, we reported that we did not observe any violation of group fairness across gender, age, and employment level for the examined PUT models. However, we acknowledge that the results presented regarding group fairness are highly influenced by the city and societal structures in which the data was collected. In the case of MDC, users' traces correspond to a level of socio-economic and cultural freedom associated with life in Switzerland. Such observations will indeed differ if we examine other cultures, such as those in the United States or Asian countries, where there is a broader socio-economic and gender inequality gap. We also believe the availability of datasets with rich demographic information could enable future work to examine the intersection of individual fairness within demographic groups. Finally, it is worth noting that, unlike online datasets, offline mobility datasets come in limited size, due to the great burden the data collection imposes on participants, and are few in number. Although this limitation could impact the generalization of our results (e.g., we cannot claim that Mo-PAE is always fairer than TrajGAN), the methods proposed in this study are generalizable and applicable to other PUT models and across mobility datasets.
Indeed, we believe future work would focus on creating a toolkit for computing spatial-temporal fairness of datasets and models. We expand on the implications of our work next.
Implication
Our paper has multiple important implications: first, our work offers a novel methodology for defining fairness in the context of spatial-temporal datasets. We believe works such as ours will help shape the future roadmap of Fair-ML studies by offering possibilities to measure equity within different systems such as mobility-based ones (e.g., transportation). The choice of which of the proposed similarity metrics to select for evaluating individual fairness is another critical dimension that could be highly context and application dependent. For example, for applications where there is a need for strict fairness measurement, corresponding to the WYSIWYG worldview [15], a strict similarity measure such as combined entropy (EOTs) could be chosen. In contrast, for applications where the groups are not necessarily equal, but for the purposes of the decision-making process we would prefer to treat them as if they were, a less sensitive similarity measure such as coarse-grain SSIM could be used.
Although our focus in this work was on fairness analysis of the PUT models, we believe our study can be the first step towards implementing fairness interventions embedded in these models. For example, in-processing approaches rely on adjusting the model during training to enforce fairness goals to be met and optimized in the same manner as accuracy. This goal is often achieved through adversarial networks or fair representation learning approaches such as [21], model induction, model selection, and regularization [43]. Of course, designing such mitigation strategies requires access to the underlying architecture of the PUT models, which is most of the time not possible and is in contrast to treating these models as black boxes, as we did in this study.
Regarding the relationship between privacy and fairness, location privacy-preserving mechanisms generally prevent information leakage about protected attributes, and these attributes are also essential to fairness analysis, where they are used to ensure little discrimination against protected population subgroups. This dimension also explains why the PUT models achieve group fairness but not individual fairness, as the sensitive attributes considered by group fairness are under protection. The competing trend between individual- and group-fairness also implies another interesting trade-off in Fair-ML. From the individual perspective, the re-identification risk and individual fairness are in tension. We believe designing privacy-preserving models to become fairness-aware is a research direction that will receive significant attention in the future.
CONCLUSION
Intuitively, fairness has a close relationship to privacy, whether for structured or unstructured data in machine learning, but the quantitative relationship between them is still unclear. In this paper, we proposed different metrics for measuring individual fairness in the context of spatial-temporal mobility data. We compared different location privacy-protection mechanisms (PUT models) on the defined individual- and group-based metrics. Our results on two real trajectory datasets show that the privacy-aware models achieve fairness at the group level but violate individual fairness. Our findings raise questions regarding the equity of the privacy-preserving models when individuals with similar trajectories receive very different levels of privacy gain. We leverage the empirical results of our work to make valuable suggestions for the further integration of fairness objectives into the PUT models. Especially from the individual perspective, the tension between the user re-identification task and individual fairness needs to be considered for future spatial-temporal data analysis and modelling to achieve a privacy-preserving, fairness-aware setting.
APPENDIX
Inference tasks
Here we list some basic definitions of inference tasks in mobility literature.
User Re-identification Task (UR). The accuracy of the user re-identification task is leveraged to assess the trajectory uniqueness of the mobility trajectory. With more and more intelligent devices and sensors being utilized to collect information about human activities, trajectories also expose increasingly intimate details about users' lives, from their social life to their preferences. A mobility privacy study conducted by De Montjoye et al. [8] illustrates that four spatial-temporal points are enough to identify 95% of the individuals at a certain granularity. As human mobility traces are highly unique, a mechanism capable of reducing the user re-identification risk can offer enhanced privacy protection in mobility data sharing. The enhanced privacy protection is referred to as privacy gain (PG) in the PUT models.
Mobility Prediction Task (MP). The accuracy of the mobility prediction task is leveraged to assess the predictability of the mobility trajectory. Mobility datasets are of great value for understanding human behaviour patterns, smart transportation, urban planning, public health issues, pandemic management, etc. Many of these applications rely on the next-location forecasting of individuals, which in the broader context can provide an accurate portrayal of citizens' mobility over time. For the mobility prediction task in this work, the raw geolocated data or other mobility data commonly contain three elements: a user identifier u, a timestamp t, and a location identifier l. Hence, each location record r can be denoted as r = [u, t, l], while each location sequence S is a set of ordered location records S = {r_1, r_2, r_3, ..., r_n}, namely a mobility trajectory. Therefore, given the past mobility trajectory S = {r_1, r_2, r_3, ..., r_n}, the mobility prediction task is to infer the most likely location l_{n+1} at the next timestamp t_{n+1}. The results of the two PUT models indicate that a bit of mobility prediction accuracy is sacrificed in exchange for higher privacy protection. The sacrificed prediction accuracy is referred to as utility decline in the PUT models.
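The papers under study use neural predictors for this task; purely to illustrate the task's interface, a first-order Markov baseline that predicts the most frequent successor location could be sketched as follows (function names are illustrative).

```python
from collections import Counter, defaultdict

def fit_markov(trajectories):
    """First-order Markov next-location model: count transitions l_t -> l_{t+1}."""
    counts = defaultdict(Counter)
    for seq in trajectories:                    # seq = [l_1, l_2, ..., l_n]
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most likely next location; None if the current location was never seen."""
    return counts[current].most_common(1)[0][0] if counts[current] else None

model = fit_markov([["home", "work", "gym", "home"], ["home", "work", "home"]])
print(predict_next(model, "work"))              # ties broken by insertion order
```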
Performance of the Privacy-Utility Trade-off Models
Before examining fairness, we first offer an analysis and comparison of the two described PUT models in terms of privacy and utility. Figure 6 presents the privacy-utility trade-off of Mo-PAE and TrajGAN over the two described datasets. The y-axis presents the privacy gain brought to the raw dataset by applying these models, whereas the x-axis presents the decline in prediction accuracy due to this privacy gain. The data fed into the Mo-PAE [44] are a list of trajectories with a specific sequence length SL, that is, S = {r_1, r_2, r_3, ..., r_SL}. For instance, if the sequence length is 10, each trajectory contains 10 history location records r, S_10 = {r_1, r_2, r_3, ..., r_10}, and SL = 10.
Related Equations
i. SSIM. In this work, we use the known SSIM measure as the perceptual difference of two similar users' heatmap images, x and y:

SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}, \quad c_1 = (k_1 L)^2, \quad c_2 = (k_2 L)^2 \qquad (4)

where \mu_x and \mu_y are the averages, \sigma_x^2 and \sigma_y^2 are the variances, and \sigma_{xy} is the covariance of x and y; L is the dynamic range of the pixel values, and k_1 = 0.01 and k_2 = 0.03 by default.
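In practice this is readily computed with scikit-image; a minimal sketch (the heatmap arrays are assumed inputs):

```python
import numpy as np
from skimage.metrics import structural_similarity

def heatmap_ssim(h_i: np.ndarray, h_j: np.ndarray) -> float:
    """SSIM (Eq. 4) between two users' 2-D mobility heatmaps."""
    rng = max(h_i.max(), h_j.max()) - min(h_i.min(), h_j.min())  # dynamic range L
    return structural_similarity(h_i, h_j, data_range=rng)
```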
ii. Shannon Entropy (SE). SE is the entropy of the probabilities of the visited location distribution. To be specific, this entropy is defined by following the notion in [27, 39] and measured as:

E_{Shannon} = -\sum_{i=1}^{N} p(i) \log_2 [p(i)] \qquad (5)

where N is the length of the probability vector and p(i) is the probability of location i.
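Equation (5) amounts to a few lines of NumPy over a user's visit counts:

```python
import numpy as np

def shannon_entropy(visit_counts) -> float:
    """SE (Eq. 5) from a user's visit counts per location."""
    p = np.asarray(visit_counts, dtype=float)
    p = p[p > 0] / p.sum()                 # probabilities of visited locations
    return float(-np.sum(p * np.log2(p)))
```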
iii. LonLat Entropy (LE). LE is the entropy of the geo-located locations in a time-series format. This entropy reflects the probability of a new sub-string and quantifies the irregularity or complexity of the time-series data. The fuzzy entropies of the visited longitudes and latitudes are integrated as the LE:

E_{FuzzyEn} = \ln \Phi^m(n, r) - \ln \Phi^{m+1}(n, r) \qquad (6)

where the details and default values of the threshold r and the definition of the function \Phi^m(n, r) can be found in [7, 14].
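A self-contained sketch of equation (6) in the spirit of Chen et al. [7] follows; the exact embedding and normalization details may differ from the toolkit of [14], so this is illustrative only.

```python
import numpy as np

def fuzzy_entropy(u, m=2, r=0.2, n=2):
    """FuzzyEn = ln Phi^m - ln Phi^{m+1} (Eq. 6); r is the similarity
    threshold (often 0.2 * std of the series), n the fuzzy power."""
    u = np.asarray(u, dtype=float)

    def phi(mm):
        N = len(u)
        # zero-mean embedding vectors of length mm
        X = np.array([u[i:i + mm] - u[i:i + mm].mean() for i in range(N - mm)])
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)  # Chebyshev
        D = np.exp(-(d ** n) / r)          # fuzzy membership degrees
        np.fill_diagonal(D, 0.0)
        return D.sum() / ((N - mm) * (N - mm - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```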
iv. Heatmap Entropy (HE). HE is the entropy of the users' heatmap images. The entropy of trajectory heatmap images was calculated using the two-dimensional sample entropy method (SampEn_{2D}) [37]. In a trajectory heatmap image, the image features are extracted by accounting for the spatial distribution of pixels in different m-length square windows with origin at (i, j), and the entropy takes the sample-entropy form

SampEn_{2D}(m, r) = -\ln \frac{U^{m+1}(r)}{U^m(r)} \qquad (7)

where r is the similarity threshold, U^m(r) is the average, over the total number of square windows, of the probability that the pixel sets at (i, j) satisfy the similarity condition, and the condition is evaluated with a distance function d that calculates the difference of the corresponding points.
v. Actual Entropy (AE). AE is the entropy capturing the entire spatial-temporal order present in a user's mobility pattern. In this work, the given area is segmented using structured grids, where each grid is initialized as 0. Then the visited locations, and whether the person has reached the cell previously, are tracked. If the person visits an unreached cell, the location is marked as 1, generating time-series binary data that characterize the trajectory. The actual entropy is calculated using:

E_{actual} = \left( \frac{1}{n} \sum_{i} \Lambda_i \right)^{-1} \ln(n) \qquad (8)

where \Lambda_i is the length of the shortest sub-string starting at position i which does not previously appear from position 1 to i - 1, and n is the length of the binary trajectory data.
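A direct (quadratic-time) sketch of the estimator in equation (8), with names of our choosing:

```python
import numpy as np

def actual_entropy(seq) -> float:
    """Lempel-Ziv-style actual entropy (Eq. 8) of a symbol sequence."""
    n = len(seq)
    lam_sum = 0
    for i in range(n):
        k = 1
        # shortest substring starting at i that never appears in seq[:i]
        while i + k <= n and _contains(seq[:i], seq[i:i + k]):
            k += 1
        lam_sum += k
    return (n / lam_sum) * np.log(n)

def _contains(history, sub) -> bool:
    m = len(sub)
    return any(history[j:j + m] == sub for j in range(len(history) - m + 1))
```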
Fig. 1. Sample mobility heatmap images with various spatial granularities of MDC and Geolife. Three different trajectories are shown with different granularities (50 m, 100 m, 300 m, 500 m, 700 m, and 900 m).
Fig. 2. Overview of the SSIM and entropy distributions of trajectories of the MDC and Geolife datasets. Different granularities of SSIM are compared in a row, where the granularity ranges from 100 meters to 900 meters.
The integrated entropy of the four entropy types, namely Shannon Entropy (SE), LonLat Entropy (LE), Heatmap Entropy (HE), and Actual Entropy (AE), is denoted as EOTs. The details are as follows; see the Appendix for full definitions. i) Shannon Entropy (SE): the entropy of the probabilities of the visited location distribution. Leveraging the common definition of Shannon entropy (E_Shannon), a classic notion of data uncertainty, we first calculate E_Shannon of the trajectory to characterize the visited location distribution and its probabilities. A larger E_Shannon indicates greater disorder and consequently reduces the predictability of an individual's movements. ii) LonLat Entropy (LE): the entropy of the geo-located locations in a time-series format. Considering the spatial-temporal pattern of the mobility data, the entropy of the visited locations in terms of longitudes and latitudes is separately estimated using the fuzzy entropy. This entropy reflects the probability of a new sub-string and quantifies the irregularity or complexity of the time-series data.
and the utility outcome difference, Δ_UD = 1 - UD(u_i)/UD(u_j). Both Δ_PG and Δ_UD contribute to the evaluation of individual fairness. With the clustering approach, the average pairwise differences of Δ_PG and Δ_UD for all the members of each cluster are assessed. Regardless of the grouping technique for similar users, we argue that Δ_PG or Δ_UD satisfies fairness if it is within 1 - ε; otherwise, the PUT model is considered to be violating individual fairness for the user pair u_i and u_j. The thresholds over different combinations of SSIM and EOTs are utilized to distinguish similar users and map all users into a list of pairs.
4.1.1 MDC. This dataset was recorded from 2009 to 2011 and contains a large amount of continuous mobility data for 184 volunteers with smartphones running a data collection software, in the Lausanne/Geneva area. Each record of the gps-wlan dataset represents a phone call or an observation of a WLAN access point collected during the campaign [25]. In addition to the trajectory data, MDC includes individual user demographic information: categorical age groups, gender, and employment status. To the best of our knowledge, MDC is the only dataset that has published users' demographic information along with their trajectories.
4.1.2 Geolife. This dataset was collected by Microsoft Research Asia from 182 users over the four-and-a-half-year period from April 2007 to October 2011 and contains 17,621 trajectories [46]. As the Geolife dataset does not include demographic attributes of individuals, we are unable to measure group fairness for this dataset, and our analysis suffices for the individual fairness dimension.
Figure 3 then shows the impact of varying spatial granularity on the model discrepancy. A model that achieves individual fairness should show less discrepancy at higher SSIM. The accuracy of the original tasks and the two PUT models are compared at granularities of 100 meters, 300 meters, 500 meters, and 900 meters. In conclusion, different models have diverse sensitivities to varying granularities. Both original tasks (UR and MP) in the two datasets have an increasing difference with a higher SSIM index, which means they violate individual fairness. For Mo-PAE, individual fairness is met on the MDC data but not on Geolife. Mo-PAE is also the model most sensitive to varying granularities. For instance, when the granularity changes from 100 meters (Figure 3a) to 900 meters (Figure 3d), Mo-PAE has the most obvious change in its line trend on the UR (i.e., privacy gain), and the decreasing trend at 100-meter granularity is lost at 900-meter. Overall, the selection of the SSIM granularity has a significant impact on the judgement of the individual fairness of a model. However, these impacts become subtle when the SSIM is applied to the trajectory similarity distinction, as the user pairs table reduces the granularity impact to some extent. For the remainder of the analysis, the granularity of the SSIM is chosen as 100 meters.
Fig. 3. The model performance discrepancy when trajectory similarity is based on the SSIM in different granularities. Figures (a) to (d) are the results of the MDC dataset; Figures (e) to (h) are of Geolife. The performance discrepancy (i.e., Performance DIFF) of each model in different granularities is compared in each sub-figure.
Fig. 4. The privacy protection outcome of PUT models across different demographic groups for the MDC dataset.
Fig. 5. The prediction accuracy outcome of PUT models across different demographic groups for the MDC dataset.
Fig. 6. Pareto frontier trade-off of utility and privacy on the two datasets. The hollow squares and diamonds present the results of the Mo-PAE models. The solid points present the results of TrajGAN. Blue presents sequence length SL = 5. Black presents SL = 10.
As Mo-PAE is highly dependent on the sequence length and on the Lagrange multipliers that indicate to what extent privacy or utility must be optimized, each point on the corresponding plots presents experiments with one set of hyper-parameters. These results show that when Mo-PAE achieves maximum privacy protection, it comes at the cost of degraded prediction accuracy. Similarly, TrajGAN achieves an 80% privacy gain when applied to the Geolife dataset, but it highly degrades the utility. For the Lagrange multiplier setting of Mo-PAE in this work, we choose λ_1 = -0.1, λ_2 = 0.8, λ_3 = -0.1, as this combination exerts the most promising privacy-utility trade-off in the Mo-PAE model.
Pairs = [(u_i, u_j) | d[T(u_i), T(u_j)] ≤ ε, (u_i, u_j) ≠ (u_j, u_i)]
where d[·,·] denotes the dissimilarity between the two users' trajectories T(u_i) and T(u_j), ε is the similarity threshold, and each unordered pair is counted once.
Table 1. Individual fairness among diverse models and datasets with SSIM and EOTs. "% of pairs" represents the ratio of the pairs that meet the thresholding requirements. The maximum/minimum instances of each column are highlighted in bold font. For each model, the paired columns report V% of (DIFF>0.2): Trajectory Uniqueness and Mobility Predictability for Original, and Privacy Gain and Utility Decline for Mo-PAE and TrajGAN.

Dataset | Metrics | % of pairs | Original: Traj. Uniqueness | Original: Mob. Predictability | Mo-PAE: Privacy Gain | Mo-PAE: Utility Decline | TrajGAN: Privacy Gain | TrajGAN: Utility Decline
MDC | SE | 36.17% | 10.50% | 11.11% | 87.69% | 39.75% | 41.65% | 27.32%
MDC | LE | 12.85% | 8.31% | 7.90% | 88.81% | 36.95% | 41.32% | 25.10%
MDC | HE | 14.11% | 12.89% | 9.60% | 86.88% | 41.30% | 38.23% | 27.14%
MDC | AE | 33.05% | 12.64% | 10.28% | 87.10% | 35.95% | 45.26% | 29.42%
MDC | SSIM | 65.06% | 14.57% | 13.17% | 88.98% | 42.02% | 39.50% | 27.50%
MDC | EOTs | 1.73% | 6.10% | 1.22% | 84.76% | 30.49% | 44.51% | 27.44%
MDC | EOTs+SSIM | 1.64% | 6.45% | 1.29% | 83.87% | 31.61% | 43.23% | 28.39%
Geolife | SE | 33.16% | 57.91% | 61.09% | 94.14% | 71.84% | 67.85% | 58.58%
Geolife | LE | 9.11% | 57.41% | 61.20% | 94.32% | 71.50% | 65.09% | 56.05%
Geolife | HE | 7.29% | 61.37% | 63.47% | 94.09% | 70.43% | 69.78% | 58.87%
Geolife | AE | 27.03% | 57.44% | 59.36% | 93.23% | 72.96% | 71.96% | 58.54%
Geolife | SSIM | 63.52% | 59.88% | 61.49% | 94.13% | 74.77% | 63.53% | 53.05%
Geolife | EOTs | 0.62% | 61.54% | 58.46% | 89.23% | 66.15% | 78.46% | 72.31%
Geolife | EOTs+SSIM | 0.61% | 62.50% | 57.81% | 89.06% | 65.63% | 78.13% | 71.88%

Table 1 presents the individual fairness of the different models by the thresholding metrics based on SSIM and EOTs. The thresholds over different combinations of SSIM and EOTs are utilized to distinguish similar users (u_i ∽ u_j) and map all users into a list of pairs with trajectory similarity and performance discrepancy. Based on the fairness thresholding criteria defined in Section 3.2.3, similar users (i.e., user pairs) imply at least 80% pairwise similarity of their trajectories. "% of pairs" in the table represents the percentage of the user pairs that meet the corresponding metric threshold requirements. For instance, with the MDC dataset, 36.17% of user pairs have more than 80% similarity under the SE metric. That is, under the SE metric, 36.17% of user pairs are qualified for further analysis of outcome similarity.
For instance, with the MDC dataset under the SE metric, only 10.50% and 11.11% of the qualified user pairs violate the fairness criteria in the two original tasks, which implies that individual fairness is achieved, as both V% are within 20%. Different from the original tasks, the two PUT models have V% values that are all higher than 20%; hence, they violate individual fairness. A higher V% indicates that the model causes more disparities in performance. The values in italic format present the cases where the outcome meets individual fairness (i.e., V% ≤ 20%) in Table 1.
REFERENCES
[1] Aristos Aristodimou, Athos Antoniades, and Constantinos S Pattichis. Privacy preserving data publishing of categorical data through k-anonymity and feature selection. Healthcare Technology Letters, 3(1):16-21, 2016.
[2] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. NIPS tutorial, 1:2017, 2017.
[3] Solon Barocas and Andrew D Selbst. Big data's disparate impact. Calif. L. Rev., 104:671, 2016.
[4] Reuben Binns. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 514-524, 2020.
[5] Anne Elizabeth Brown. Ridehail revolution: Ridehail travel and equity in Los Angeles. University of California, Los Angeles, 2018.
[6] Hongyan Chang and Reza Shokri. On the privacy risks of algorithmic fairness. In 2021 IEEE European Symposium on Security and Privacy (EuroS&P), pages 292-303. IEEE, 2021.
[7] Weiting Chen, Zhizhong Wang, Hongbo Xie, and Wangxin Yu. Characterization of surface emg signal based on fuzzy entropy. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(2):266-272, 2007.
[8] Yves-Alexandre De Montjoye, César A Hidalgo, Michel Verleysen, and Vincent D Blondel. Unique in the crowd: The privacy bounds of human mobility. Scientific Reports, 3(1):1-5, 2013.
[9] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214-226, 2012.
[10] Ecenaz Erdemir, Pier Luigi Dragotti, and Deniz Gündüz. Privacy-aware location sharing with deep reinforcement learning. In 2019 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6. IEEE, 2019.
[11] Ecenaz Erdemir, Pier Luigi Dragotti, and Deniz Gündüz. Privacy-aware time-series data sharing with deep reinforcement learning. IEEE Transactions on Information Forensics and Security, 16:389-401, 2020.
[12] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD, pages 259-268, 2015.
[13] Danielle L Ferreira, Bruno AA Nunes, Carlos Alberto V Campos, and Katia Obraczka. A deep learning approach for identifying user communities based on geographical preferences and its applications to urban and environmental planning. ACM Transactions on Spatial Algorithms and Systems, 6(3):1-24, 2020.
[14] Matthew W Flood and Bernd Grimm. EntropyHub: An open-source toolkit for entropic time series analysis. PLOS ONE, 16(11):e0259448, 2021.
[15] Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. Communications of the ACM, 64(4):136-143, 2021.
[16] Yanbo Ge, Christopher R Knittel, Don MacKenzie, and Stephen Zoepf. Racial and gender discrimination in transportation network companies. Technical report, National Bureau of Economic Research, 2016.
[17] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. arXiv preprint arXiv:1610.02413, 2016.
[18] Suining He and Kang G Shin. Socially-equitable interactive graph information fusion-based prediction for urban dockless e-scooter sharing. In Proceedings of the ACM Web Conference 2022, pages 3269-3279, 2022.
[19] Hoda Heidari, Michele Loi, Krishna P Gummadi, and Andreas Krause. A moral framework for understanding fair ml through economic models of equality of opportunity. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 181-190, 2019.
[20] Kate Hosford and Meghan Winters. Who are public bicycle share programs serving? An evaluation of the equity of spatial access to bicycle share service areas in canadian cities. Transportation Research Record, 2672(36):42-50, 2018.
[21] Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael Ying Yang, Eirini Ntoutsi, and Bodo Rosenhahn. FairNN: conjoint learning of fair representations for fair decisions. In International Conference on Discovery Science, pages 581-595. Springer, 2020.
[22] Dou Huang, Xuan Song, Zipei Fan, Renhe Jiang, Ryosuke Shibasaki, Yu Zhang, Haizhong Wang, and Yugo Kato. A variational autoencoder based generative model of urban human mobility. In 2019 IEEE MIPR, pages 425-430. IEEE, 2019.
[23] Maria Kamargianni, M Matyas, W Li, and A Schäfer. Feasibility study for "mobility as a service" concept in london. UCL Energy Institute, Dept. Transp, pages 1-82, 2015.
[24] Maximilian Kasy and Rediet Abebe. Fairness, equality, and power in algorithmic decision-making. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 576-586, 2021.
[25] Juha K Laurila, Daniel Gatica-Perez, Imad Aad, Olivier Bornet, Trinh-Minh-Tri Do, Olivier Dousse, Julien Eberle, Markus Miettinen, et al. The mobile data challenge: Big data for mobile computing research. Technical report, 2012.
[26] Rosa Lletí, M Cruz Ortiz, Luis A Sarabia, and M Sagrario Sánchez. Selecting variables for k-means cluster analysis by using a genetic algorithm that optimises the silhouettes. Analytica Chimica Acta, 515(1):87-100, 2004.
[27] Xin Lu, E. Wetter, N. Bharti, A. J. Tatem, and L. Bengtsson. Approaching the limit of predictability in human mobility. Scientific Reports, 3(1):1-9, 2013.
[28] Massimiliano Luca, Gianni Barlacchi, Bruno Lepri, and Luca Pappalardo. A survey on deep learning for human mobility, 2021.
[29] Afra Mashhadi, Joshua Sterner, and Jeffery Murray. Deep embedded clustering of urban communities using federated learning. 2021.
[30] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1-35, 2021.
[31] Kun O., Reza S., David S. R., and Wenzhuo Y. A non-parametric generative model for human trajectories. In IJCAI-18, pages 3812-3817. IJCAI, 7 2018.
[32] Krishna PN Puttaswamy, Shiyuan Wang, Troy Steinbauer, Divyakant Agrawal, Amr El Abbadi, Christopher Kruegel, and Ben Y Zhao. Preserving location privacy in geosocial applications. IEEE Transactions on Mobile Computing, 13(1):159-173, 2012.
[33] Jinmeng Rao, Song Gao, Yuhao Kang, and Qunying Huang. LSTM-TrajGAN: A deep learning approach to trajectory privacy protection. arXiv preprint arXiv:2006.10521, 2020.
[34] John E Roemer. Equality of opportunity: A progress report. Social Choice and Welfare, 19(2):455-471, 2002.
[35] Nazir Saleheen, Supriyo Chakraborty, Nasir Ali, Md Mahbubur Rahman, Syed Monowar Hossain, Rummana Bari, Eugene Buder, Mani Srivastava, and Santosh Kumar. mSieve: differential behavioral privacy in time series of mobile sensor data. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pages 706-717, 2016.
[36] Prasanna Sattigeri, Samuel C Hoffman, Vijil Chenthamarakshan, and Kush R Varshney. Fairness GAN: Generating datasets with fairness properties using a generative adversarial network. IBM Journal of Research and Development, 63(4/5):3-1, 2019.
[37] Luiz Eduardo Virgili Silva, ACS Senra Filho, VPS Fazan, JC Felipe, and LO Murta Junior. Two-dimensional sample entropy: Assessing image texture through irregularity. Biomedical Physics & Engineering Express, 2(4):045002, 2016.
[38] Chaoming Song, Zehui Qu, Nicholas Blumm, and Albert-László Barabási. Limits of predictability in human mobility. Science, 327(5968):1018-1021, 2010.
[39] Yan Wang, Ali Yalcin, and Carla VandeWeerd. An entropy-based approach to the study of human mobility and behavior in private homes. PLOS ONE, 15(12):e0243503, 2020.
[40] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
[41] Yonghui Xiao and Li Xiong. Protecting locations with differential privacy under temporal correlations. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1298-1309, 2015.
[42] An Yan and Bill Howe. FairST: Equitable spatial and temporal demand prediction for new mobility systems. In Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 552-555, 2019.
[43] An Yan and Bill Howe. Fairness in practice: a survey on equity in urban mobility. A Quarterly Bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering, 42(3), 2020.
[44] Yuting Zhan, Hamed Haddadi, and Afra Mashhadi. Privacy-aware adversarial network in human mobility prediction. Proceedings on Privacy Enhancing Technologies, 1:556-570, 2023.
[45] Wenjing Zhang, Ming Li, Ravi Tandon, and Hui Li. Online location trace privacy: An information theoretic approach. IEEE Transactions on Information Forensics and Security, 14(1):235-250, 2018.
[46] Yu Zheng, Xing Xie, and Wei-Ying Ma. GeoLife: A collaborative social networking service among user, location and trajectory. IEEE Data Eng. Bull., 33:32-39, June 2010.
Mechanism of Information Transmission from a Spot Rate Market to Crypto-asset Markets
Takeshi Yoshihra
Trust Operations Department, State Street Trust and Banking Co., Ltd, 812-0036 Fukuoka, Japan
Taisei Kaizoji
Graduate School of Arts and Sciences, International Christian University, 181-8585 Mitaka, Japan
(Received Month dd, year)
Keywords: the causal order, the crypto-asset exchange rates, the SVAR-LiNGAM
Abstract: We applied the SVAR-LiNGAM to illustrate the causal relationships among the spot exchange rate and three crypto-asset exchange rates: Bitcoin, Ethereum, and Ripple. Notably, the causal order EUR/USD spot rate → Bitcoin → Ethereum → Ripple was obtained by this approach. All the instantaneous effects were strongly positive. Moreover, Bitcoin can influence the EUR/USD spot rate positively with a one-day time lag. (arXiv:2211.16176)
Introduction
Crypto-asset prices and their volatility have drawn increasing attention in the last few years. Several analyses of Bitcoin and its price can be found in recent econometric studies. On the one hand, Cheah and Fry (2015) found a speculative bubble in the time-series price data of Bitcoin. Corbet, Lucey, and Yarovaya (2018) found that there were periods of bubble behavior in the historical price data of Bitcoin and Ethereum. On the other hand, Urquhart (2016) suggested that the Bitcoin markets were not efficient, but that they were becoming more efficient. Further, Nadarajah and Chu (2017) indicated that a power transformation model of Bitcoin returns seemed to be weakly efficient. Tiwari, Jana, Das, and Roubaud (2018) analyzed Bitcoin price data by using long-range dependence estimators and reported that the market was efficient. Subsequently, Nan and Kaizoji (2019) compared Bitcoin with EUR/USD spot, future, and forward rates, and concluded that weak and semi-strong efficiency of Bitcoin could hold in the long term.
Following these empirical investigations, we analyze the efficiency and causal relationships of Bitcoin, Ethereum, and Ripple by applying Nan and Kaizoji's approach, which compares exchange rates in the real markets to indirect crypto-asset exchange rates (S_t). Its equation is given by

S_t = \frac{P_{EUR,t}}{P_{USD,t}}

where P_{EUR,t} and P_{USD,t} are the crypto-asset prices in Euros and U.S. Dollars, respectively. The aim of this method is not only to eliminate the influence of the exponential growth of crypto-asset markets, but also to compare those markets with FX markets (see Figures 1 and 2). In the following chapters, we consider four time series variables: S_t^{FX} (the spot rate of EUR/USD), S_t^{BTC} (the crypto-asset exchange rate by Bitcoin), S_t^{ETH} (that by Ethereum), and S_t^{XRP} (that by Ripple). The rest of this paper is structured as follows. We present the theoretical backgrounds of this research in the next section. Subsequently, we show the results of the empirical study and describe some implications of our findings. Conclusions are provided in the last section.
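To make the construction concrete, a minimal pandas sketch of the indirect rate computation (column names and alignment choices are ours, not the authors'):

```python
import pandas as pd

def implied_rate(p_eur: pd.Series, p_usd: pd.Series) -> pd.Series:
    """Indirect exchange rate S_t = P_EUR,t / P_USD,t, computed on dates
    where both closing prices exist."""
    df = pd.concat({"eur": p_eur, "usd": p_usd}, axis=1).dropna()
    return df["eur"] / df["usd"]
```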
Data and methodology
Data and statistical software
Crypto-asset data, namely P_{BTC,EUR}, P_{BTC,USD}, P_{ETH,EUR}, P_{ETH,USD}, P_{XRP,EUR}, and P_{XRP,USD}, in this paper were closing prices [1] from Yahoo! Finance. The EUR/USD spot rate data were closing prices from the Bank of England, and missing values on weekends and annual holidays were complemented by the closing prices on Fridays. To eliminate the influence of arbitrage opportunities, we use crypto-asset and FX data in the same time zone. The original and complemented data of the spot rate showed close numbers in the summary statistics, which is indicative of the unbiasedness of this complementation (see Table 1). The following tests were run on the complemented data. The sample size of S_t^{FX} was 735, and that of S_t^{BTC}, S_t^{ETH}, and S_t^{XRP} was 1062. The sample period of the four processes was from December 04, 2016 to October 31, 2019. All statistical analyses proceeded with R (R Core Team, 2019). The original programming code for the VAR-LiNGAM estimation was given in Moneta, Entner, Hoyer, and Coad (2013), and on Entner's website.
Causal discovery and structural modeling
2.2.1 Limitations of VAR and VECM without instantaneous effects
Generally, VAR and VECM approaches have been used to estimate the relationships among variables. They are very convenient and widely used to predict future prices in economics and finance. However, such conventional approaches might result in incorrect estimation or analysis due to neglecting instantaneous effects among variables (Hyvärinen, Zhang, Shimizu, & Hoyer, 2010). Consider the VAR(p) model of equation (1),

y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t;   (1)

this is called the "reduced form" of the VAR. Alternatively, we can also think of a VAR model which includes instantaneous effects by adding a coefficient matrix B_0 for y_t,

y_t = c + B_0 y_t + B_1 y_{t-1} + B_2 y_{t-2} + \cdots + B_p y_{t-p} + e_t,   (2)

where B_h is the matrix of coefficients for {y_t, y_{t-1}, y_{t-2}, \cdots, y_{t-p}}. Equation (2) is known as a "structural form" of the VAR, or SVAR. Hyvärinen et al. (2010) noted that there are estimation errors if we do not consider the instantaneous effect B_0. For example, lagged effects estimated in the conventional VAR may actually be instantaneous. Consider a bivariate SVAR(1) model,
y_t = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} y_t + \begin{pmatrix} 0.9 & 0 \\ 0 & 0.9 \end{pmatrix} y_{t-1} + e_t.   (3)
This equation indicates that y_2 has an instantaneous effect on y_1, but does not have such a lagged effect. On the other hand, if we model the same data by a VAR(1), we obtain

y_t = \begin{pmatrix} 0.9 & 0.9 \\ 0 & 0.9 \end{pmatrix} y_{t-1} + u_t.   (4)

As this VAR model neglects the instantaneous effect, the estimated coefficients are all lagged. This could lead to a crucial misunderstanding when we make an inference from the results obtained.
Consider next a trivariate SVAR(1) model,

y_t = \begin{pmatrix} 0 & 0 & 0 \\ b_{21} & 0 & 0 \\ 0 & b_{32} & 0 \end{pmatrix} y_t + \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix} y_{t-1} + e_t.   (5)

This equation represents that y_1 and y_2 have instantaneous effects on y_2 and y_3, respectively, and that no causal relation, except for the autocorrelating coefficients y_{i,t-1} \to y_{i,t} of each variable, exists in the lagged coefficient matrix. Here, if we construct a VAR(1) model for the same data, the equation is given by

y_t = \begin{pmatrix} a_{11} & 0 & 0 \\ b_{21} a_{11} & a_{22} & 0 \\ b_{21} b_{32} a_{11} & b_{32} a_{22} & a_{33} \end{pmatrix} y_{t-1} + u_t.   (6)

If making an inference on the coefficients of this VAR, we would conclude that y_1 has a direct lagged effect on y_3 (the non-zero (3,1) entry in equation (6)). However, this is spurious, because no such direct causality exists in the original SVAR model. Here, the conventional VAR (and VECM) approach sometimes leads to incorrect analyses.
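The misattribution can be checked numerically; a short sketch (simulating equation (3) and fitting a conventional VAR(1) with statsmodels; the setup below is our own illustration, not the paper's R code):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
B0 = np.array([[0.0, 1.0], [0.0, 0.0]])   # instantaneous effect y2 -> y1
B1 = np.array([[0.9, 0.0], [0.0, 0.9]])   # purely autoregressive lags
M = np.linalg.inv(np.eye(2) - B0)         # reduced-form mapping (I - B0)^{-1}
T = 20000
y, e = np.zeros((T, 2)), rng.laplace(size=(T, 2))  # non-Gaussian shocks
for t in range(1, T):
    y[t] = M @ (B1 @ y[t - 1] + e[t])     # y_t = (I - B0)^{-1} (B1 y_{t-1} + e_t)

res = VAR(y).fit(1)
print(np.round(res.coefs[0], 2))          # approx. [[0.9, 0.9], [0.0, 0.9]]
```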
2.2.2 SVAR model
The structural forms present one crucial obstacle when it comes to their estimation. When estimating the simultaneous equations by OLS, we obtain a biased result, since the dependent variables and error terms are correlated. Therefore, the reduced-form VAR is estimated first. Considering the SVAR model of equation (2), multiplying both sides of equation (2) by (I - B_0)^{-1} gives

y_t = \mu + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t,   (7)

where \mu = (I - B_0)^{-1} c, A_h = (I - B_0)^{-1} B_h, and u_t = (I - B_0)^{-1} e_t. It should now be noted that equation (7) is identical to equation (1), which is the conventional reduced-form VAR model. Equation (7) can be estimated by OLS without bias. Second, we should consider converting from the reduced form to the structural form. However, it is not necessarily possible to identify the structural form from the reduced form, because the structural form has n(n - 1)/2 more parameters than the reduced form. This is known as the identification problem of the structural form. The identification problem is generally solved by assuming a causal order of the variables, or by restricting parameters based on economic or finance theory. The former model is called a recursive SVAR, and the latter is known as a non-recursive SVAR. For the recursive model, one of the typical ways to iron out the identification problem is to orthogonalize [3] the structural-form error term (Kilian & Lütkepohl, 2017). In other words, (I - B_0) is assumed to be a lower triangular matrix whose diagonal components are equal to 1. For the non-recursive one, there are several types of restrictions that can identify the model. For instance, Blanchard and Quah (1989) imposed long-term restrictions on the structural forms. Identification by long-term restrictions allows us to identify whether the processes have structural shocks; however, there are a number of limitations to this methodology, such as robustness for low-frequency data, and data transformation sensitivity of whether the variables are computed in levels or differences (Kilian & Lütkepohl). The recursive model and the non-recursive model with long-term restrictions are practical ways to analyze. However, they need to assume a causal order among the variables, or else they face inevitable constraints imposed by the volume of data processing. As the main purpose of this paper is to clarify causal relationships without a priori assumptions, a more data-driven method is needed to facilitate the analysis of causality.
On the other hand, Bernanke (1986) and Sims (1986) applied short-term restrictions to the structural forms. However, Kilian and Lütkepohl (2017) noted that identification by short-term restrictions is challenging, because it is sometimes difficult to impose restrictions which completely identify the SVAR. Swanson and Granger (1997) proposed one way to clear this up by combining structural analysis with a graph that represents causal relationships using a network structure. Subsequently, Moneta, Entner, Hoyer, and Coad (2013) proposed a graph-theoretic structural analysis using a data-driven method called "independent component analysis (ICA)." In the following sections, the ICA and LiNGAM methods are introduced in order to give a theoretical explanation of our application of the SVAR-LiNGAM approach.
2.2.3 Independent component analysis and non-Gaussianity
According to Hyvärinen, Karhunen, and Oja (2001), ICA can be defined as a methodology to uncover unknown elements from multivariate continuous-value data. One of the unique points of this analytical method is that it focuses on elements that are non-Gaussian and statistically independent from the other elements. Consider an example of a structural equation model provided by Shimizu (2017),

x_1 = a_{11} s_1 + a_{12} s_2
x_2 = a_{21} s_1 + a_{22} s_2

where s_j denotes an unobserved independent component, and a_{ij} denotes an unobserved coefficient which mixes s_j. This structural model can be written in matrix representation as

\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}.   (8)

Figure 3 shows a causal graph of this structural model. For general cases, the p-variable ICA model for x_i can be defined as

x_i = \sum_{j=1}^{p} a_{ij} s_j \quad (i = 1, 2, \cdots, p),   (9)

where a_{ij} is the mixing coefficient, and the s_j are defined as statistically mutually independent. ICA estimates both the coefficients a_{ij} and the independent components s_j when x_i is observed. Hyvärinen et al. (2001) imposed two restrictions to make estimation possible. The first was the assumption that the s_j are statistically independent. In other words, this assumption can be represented as

q(s_1, s_2, \cdots, s_p) = q_1(s_1) q_2(s_2) \cdots q_p(s_p),

where q(\cdot) denotes a probability density function. This assumption is not too strict because the effect of dependent elements is considered in the coefficient term a_{ij}. The second was that the s_j follow non-Gaussian distributions. Hyvärinen et al. noted that Gaussian distributions do not carry much essential information for estimating the model. Consider a probabilistic variable x that follows a Gaussian distribution; its probability density function f(x) can be obtained as

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right).

Subsequently, the moment generating function M_x(\theta) can be represented by M_x(\theta) = \exp(\mu\theta + \sigma^2\theta^2/2); a Gaussian variable is thus completely characterized by its mean and variance, so its higher-order statistics carry no additional information for the estimation. In matrix form, equation (9) can be written as x = As, where A is a p \times p matrix, and x and s are p \times 1 vectors. Comon (1994) and Eriksson and Koivunen (2004) found that A is identifiable except for the scale and order of its columns. In other words, although A cannot be estimated perfectly, it can be estimated up to

\hat{A} = A D^{-1} P^{-1},   (10)

where D is a diagonal matrix and P is a permutation matrix. Now, in order to estimate A and s, consider a vector y such that y = Wx. W is called a de-mixing matrix, and the estimate of s is obtained when W equals A^{-1}. According to Hyvärinen et al. (2001), the way to obtain the estimates is to hunt for a de-mixing matrix W that maximizes the independence of each component of y. In other words, the maximization of the non-Gaussianity of y is the key to estimating ICA models. As Shimizu (2017) noted, one of the indicators that measures the independence is given by

I(y) = \left\{ \sum_{i=1}^{p} H(y_i) \right\} - H(y),

where H(\cdot) denotes the entropy of a stochastic variable. The entropy of y can be written as H(y) = E[-\log q(y)]. I(y) is generally termed mutual information, and its estimator is determined by

\hat{I}(y) = \left[ \sum_{i=1}^{p} \frac{1}{T} \sum_{t=1}^{T} \{ -\log \hat{q}_i(w_i' x(t)) \} \right] - \frac{1}{T} \sum_{t=1}^{T} \{ -\log \hat{q}(x(t)) \},

where w_i' is a row vector from the i-th row of W. Moreover, W is identifiable except for the components of the permutation matrix P and the diagonal matrix D. Because W is an inverse matrix of A, we can obtain from equation (10),

\hat{W} = PDW.   (11)

Although the matrices P, D, and W are all unknown to us, we can estimate the product of these three by means of ICA. Moreover, Shimizu, Hoyer, Hyvärinen, and Kerminen (2006) proposed to utilize ICA together with a part of graph theory for causal discovery among multiple variables.

2.2.4 Linear non-Gaussian acyclic model

Shimizu et al. (2006) demonstrated the way to reveal causal relationships using ICA. They constructed a semi-parametric approach which is termed the "linear non-Gaussian acyclic model (LiNGAM)." A semi-parametric approach is a method which contains characteristics of both parametric and non-parametric approaches. According to Shimizu (2017), it assumes that the functional type is linear (the parametric part) and does not assume the distribution of the independent components (the non-parametric part). It is defined by

x_i = \sum_{k(j) < k(i)} b_{ij} x_j + e_i,   (12)

where i = 1, \cdots, p, j = 1, \cdots, p, and i \ne j. Each x_i is a linear combination of its error term e_i and the other variables x_j. The coefficient b_{ij} represents the existence or scale of the direct causal relationship x_j \to x_i, and it is known as a connection strength. It is assumed that the error term e_i is given exogenously, that its distribution is non-Gaussian, and that all the error terms are independent of each other.
In equation (12), $k(\cdot)$ denotes a causal order among the variables. Shimizu (2014; 2017) defined it as an order such that no latter variable affects former variables. For instance, when a causality $x_1 \to x_2 \to x_3$ exists, the causal order can be represented as $k(x_1) = 1$, $k(x_2) = 2$, and $k(x_3) = 3$. Equation (12) can also be written in matrix form as $x = Bx + e.$
(13) It should be noted that $B$ can be permuted into a lower triangular matrix whose diagonal components are zero. Let us consider an example of a LiNGAM with three structural equations, $x_1 = -0.08\, x_3 + e_1$, $x_2 = 0.85\, x_1 + e_2$, $x_3 = e_3$. The set of these equations can be written in the form of equation (13) as $\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & -0.08 \\ 0.85 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}.$
Here, we can draw a causal graph of this LiNGAM. For this example, the causal order is determined as $k(x_3) = 1$, $k(x_1) = 2$, and $k(x_2) = 3$, and its causal graph is provided in Figure 4. In the next section, we show the way to identify an accurate LiNGAM using ICA.
Identification of LiNGAM using ICA
Shimizu et al. (2006) and Shimizu (2017) demonstrated how to identify $W$ in equation (11) by means of ICA. For simplicity, consider a LiNGAM of two structural equations, $x_1 = e_1$ and $x_2 = b_{21}\, x_1 + e_2$, whose matrix form gives $W = I - B = \begin{pmatrix} 1 & 0 \\ -b_{21} & 1 \end{pmatrix}$. Multiplying $W$ by a diagonal matrix whose diagonal components are non-zero, we obtain the product $DW = \begin{pmatrix} d_{11} & 0 \\ -d_{22}\, b_{21} & d_{22} \end{pmatrix}$. This form is correctly permutated, and it is known that one or more of the diagonal components become zero for the matrices of other forms (Shimizu et al., 2006). Here, we seek such a matrix by means of ICA. Hyvärinen et al. (2010) and Shimizu (2014) introduced the log likelihood of LiNGAM, which is represented by
$\log L(B) = \sum_{t=1}^{T} \sum_{i=1}^{p} \log p_i\!\left(\frac{x_i(t) - \sum_{j \neq i} b_{ij}\, x_j(t)}{\sigma_i}\right) - T \sum_{i=1}^{p} \log \sigma_i,$ (14)
where $p_i(\cdot)$ is the probability density function of the standardized error term $e_i/\sigma_i$. Maximizing the likelihood of equation (14) over all possible causal orders seems the most clear-cut way to estimate the matrix $B$. Unfortunately, Hyvärinen et al. did not recommend such an approach because it requires too much computation. Shimizu et al. (2006) alternatively proposed to apply ICA for this estimation, and we follow this procedure. The first step is the application of ICA; we employ the algorithm known as FastICA, proposed by Hyvärinen (1999). Second, the estimated matrices are permuted so as to become lower triangular. Only these two steps are needed in the estimation of LiNGAM by ICA. Further details of the ICA algorithm for LiNGAM can be found in Hyvärinen et al. (2010) and Shimizu (2014).
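To make the two-step procedure concrete, here is a minimal Python sketch; the function name and implementation details are ours, and it assumes NumPy, SciPy, and scikit-learn rather than the original LiNGAM code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import FastICA

def estimate_lingam(X, random_state=0):
    """Two-step LiNGAM estimate described above: run FastICA, then
    permute and rescale the demixing matrix. X has shape (T, p)."""
    T, p = X.shape
    ica = FastICA(n_components=p, random_state=random_state, max_iter=5000)
    ica.fit(X)
    W = ica.components_                    # estimates P^{-1} D^{-1} (I - B)
    # Permute rows so that every diagonal entry is far from zero
    # (an assignment problem on the cost 1/|w_ij|).
    row_ind, col_ind = linear_sum_assignment(1.0 / np.abs(W))
    W_perm = np.zeros_like(W)
    W_perm[col_ind] = W[row_ind]           # undo the ICA row permutation
    W_scaled = W_perm / np.diag(W_perm)[:, None]   # divide out D
    return np.eye(p) - W_scaled            # estimated connection strengths B
```

The assignment step mirrors the permutation search described above: rows of the demixing matrix are reordered so that no diagonal entry is close to zero, after which rescaling to a unit diagonal exposes $\hat{B} = I - W'$.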
SVAR-LiNGAM approach
In order to analyze time series data with the LiNGAM approach, Hyvärinen et al. (2010) and Moneta et al. (2013) integrated the approach with an SVAR model. They termed it a VAR-LiNGAM approach; however, in this paper we call it "SVAR-LiNGAM" hereafter to emphasize the difference from the conventional VAR. The SVAR-LiNGAM can be represented as
$x(t) = \sum_{h=0}^{k} B_h\, x(t-h) + e(t),$ (15)
where $x(t)$ is observed and $e(t)$ is an error term which is given exogenously. $B_h$ denotes the matrices of connection-strength coefficients with a time lag $h$. As $h$ can be zero, the $B_h$ include both instantaneous and lagged effects. As with equation (13), the instantaneous matrix $B_0$ can be permuted into a lower triangular matrix whose diagonal components are 0; this is because LiNGAM assumes a directed acyclic graph (DAG) structure.
General econometric methods, such as the conventional VAR and VECM approaches, assume that the error terms follow a Gaussian distribution. Although such an assumption is not satisfied in many cases, it is often considered to be robust, or ignorable. This can be considered a problem of the VAR and VECM. Especially for crypto-asset data, it is difficult to construct a multivariate time-series model, since prices are often volatile and prone to sudden surges. Here, the SVAR-LiNGAM approach appears to be a better way to obtain appropriate estimates, as the model assumes non-Gaussianity and exploits that information.
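A rough two-stage implementation of equation (15), in the spirit of Hyvärinen et al. (2010), can reuse the LiNGAM sketch above; again a sketch under our own naming, with statsmodels assumed for the VAR step:

```python
import numpy as np
from statsmodels.tsa.api import VAR

def svar_lingam(X, lags=3):
    """Two-stage SVAR-LiNGAM sketch: (1) fit a reduced-form VAR by
    least squares, (2) run the LiNGAM step on its residuals to get
    the instantaneous matrix B0, then map the reduced-form lag
    matrices M_h to structural ones via B_h = (I - B0) M_h."""
    fit = VAR(X).fit(lags)
    B0 = estimate_lingam(np.asarray(fit.resid))   # sketch from above
    I = np.eye(X.shape[1])
    B = [(I - B0) @ M for M in fit.coefs]         # fit.coefs: (lags, p, p)
    return B0, B
```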
Empirical results

Data and methodology
Results of unit root and cointegration tests
Unit root test
The time-series data of the four exchange rates were examined to determine whether each process has a unit root. First, we tested the original time series by the ADF test. Appropriate lag lengths for the four processes were selected by the Schwarz Information Criterion (Schwarz, 1978). Table 2 shows the results of the ADF tests for the four series. According to the results obtained, $H_0$ could not be rejected by any of the test statistics, suggesting that all four series have a unit root. Second, the ADF tests were also run on the first-difference series; Table 2 reports these results as well. Here, $H_0$ was rejected by all the test statistics at the 1% level of significance, suggesting that none of the four first-difference series has a unit root. Together, these two hypothesis tests indicate that each of the four original series (the EUR/USD spot rate and the Bitcoin, Ethereum, and Ripple rates) has a unit root.
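The ADF tests reported in Table 2 can be reproduced along the following lines (a sketch; statsmodels' `adfuller` with `autolag="BIC"` corresponds to the SIC-based lag selection described above):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    """ADF test with a constant and BIC lag selection, applied to the
    level series and its first difference."""
    for label, data in (("level", series), ("first difference", np.diff(series))):
        stat, pval, usedlag, nobs, crit, _ = adfuller(
            data, regression="c", autolag="BIC")
        print(f"{name} [{label}]: ADF = {stat:.3f}, p = {pval:.3f}, "
              f"lags = {usedlag}, 1% cv = {crit['1%']:.3f}")
```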
Cointegration test
Second, we calculated the four eigenvalues $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{\lambda}_3$, and $\hat{\lambda}_4$. Using the selected lag lengths, we examined the time series by the two cointegration tests, obtaining the trace and maximum-eigenvalue statistics for each of the four null hypotheses. Table 3 shows the results of the hypothesis testing of cointegration for the four-variable model.
The hypotheses $r = 0$, $r \leq 1$, and $r \leq 2$ were rejected at the 1% significance level, while the hypothesis $r \leq 3$ could not be rejected in either test. This means that the four variables appear to have three cointegrating relationships.
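For the Johansen tests, statsmodels provides `coint_johansen`; the following sketch prints the trace and maximum-eigenvalue statistics for the four null hypotheses (the lag choice `k_ar_diff=2` is illustrative, not necessarily the one used here; `X` is the (T, 4) array of level series):

```python
from statsmodels.tsa.vector_ar.vecm import coint_johansen

res = coint_johansen(X, det_order=0, k_ar_diff=2)   # det_order=0: constant
for r, (tr, cv) in enumerate(zip(res.lr1, res.cvt)):
    print(f"H0: rank <= {r}: trace = {tr:.2f}, 99% cv = {cv[2]:.2f}")
print("max-eigenvalue statistics:", res.lr2)
```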
Results of the SVAR-LiNGAM and its network
Results of Gaussianity and serial-correlation tests
We first examined the non-Gaussianity of the error terms of the reduced form of the VECM. Figure 3 shows histograms and Q-Q plots of the error terms. We conducted several Gaussianity tests, namely the Shapiro-Wilk, Shapiro-Francia, and Jarque-Bera tests. The results indicated that all four error terms were non-Gaussian at the 1% significance level (see Table 4). Hence, the four processes were applicable for the SVAR-LiNGAM analysis because they met the necessary and sufficient condition: non-Gaussianity of the error terms. In addition to the original application of SVAR-LiNGAM by Moneta et al. (2013), we added serial-correlation tests for the residuals.
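These Gaussianity checks can be scripted as follows (a sketch; the Shapiro-Francia test is not available in SciPy and is omitted):

```python
from scipy import stats

def gaussianity_tests(resid, names):
    """Shapiro-Wilk and Jarque-Bera tests on each residual series;
    resid has shape (T, 4)."""
    for i, name in enumerate(names):
        e = resid[:, i]
        sw_stat, sw_p = stats.shapiro(e)
        jb_stat, jb_p = stats.jarque_bera(e)
        print(f"{name}: Shapiro-Wilk W = {sw_stat:.3f} (p = {sw_p:.2e}), "
              f"Jarque-Bera = {jb_stat:.1f} (p = {jb_p:.2e})")
```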
Estimated instantaneous causal effect matrix and lagged causal effect matrix
We applied the SVAR-LiNGAM analysis to estimate the causal relationships among the four time series variables. We estimated the instantaneous and lagged effects by conducting 1,000 iterations of bootstrapping.
The lag for the four-variable VAR model was selected as three by SIC. Writing $x_t = (S_t, BTC_t, ETH_t, XRP_t)'$ for the vector collecting the EUR/USD spot rate and the Bitcoin, Ethereum, and Ripple exchange rates, and based on equations (2) and (15), the four-variable VAR(3) and SVAR-LiNGAM(3) can be written as
$x_t = c + M_1\, x_{t-1} + M_2\, x_{t-2} + M_3\, x_{t-3} + u_t,$ (16)
$x_t = c + B_0\, x_t + B_1\, x_{t-1} + B_2\, x_{t-2} + B_3\, x_{t-3} + e_t,$ (17)
respectively. Table 5 shows the estimates of the lagged effects $\hat{M}_h$ from the conventional VAR(3) model, and Table 6 shows those of the instantaneous and lagged effects from the SVAR-LiNGAM. Figure 4 illustrates the causal graph of equation (17). As the instantaneous effects $\hat{B}_0$ can be considered as the causal structure, the results indicated that the order was spot rate → Bitcoin → Ethereum → Ripple; this is because an SVAR-LiNGAM with no lagged effects is identical to the original LiNGAM. In terms of the lagged effects from $t-1$, $S_{t-1}$ had a strong and positive impact on $S_t$ and negative impacts on the crypto-asset prices, although one of these direct causal relationships was much weaker than those among the other variables. In addition, $BTC_{t-1}$ seemed to affect $S_t$ positively, matching indications in the former VECM analysis. At the lags $t-2$ and $t-3$ the estimated effects were smaller and of mixed sign: for the most part, each variable positively affected its own current value, with only a handful of weaker cross effects, and at $t-3$, apart from a single negative cross effect, the remaining variables only affected their respective current values (Table 6 reports the full set of coefficients). These results implied that the EUR/USD spot rate at $t-1$ affected the prices of the crypto assets negatively, although the spot rate at $t$ affected them positively, in accordance with the causal order: the spot rate, Bitcoin, Ethereum, and Ripple.
Comparing the estimates of the SVAR-LiNGAM with those of the VAR model of equation (16), we found differences between their estimated coefficients. For example, several coefficients in $\hat{M}_1$ that were positive in the VAR model had negative counterparts in $\hat{B}_1$ of the SVAR-LiNGAM. Such differences are caused by a systematic dilemma of the conventional VAR model, which misattributes instantaneous effects to the lagged coefficients: the SVAR approach revealed that the lagged effects of $S_{t-1}$ were negative, although they were thought to be positive in the VAR. In addition, some of the $\hat{M}_1$ coefficients seemed to be spurious, because their counterparts in the SVAR were much smaller and close to 0. As Hyvärinen et al. (2010) noted, the conventional VAR model is not a sufficient approach for analyzing causal relationships among financial variables, especially crypto-asset data; Hyvärinen et al. (2010) provided another example of this problematic interpretation using a three-variable SVAR(1).

Impulse response function of the SVAR-LiNGAM

Figure 5 shows the results of the IRF of the SVAR-LiNGAM(3) with 99-percent confidence bands; the results show tendencies similar to those of the VECM. Subsequently, we split the whole period into two (2016/12/04 to 2018/01/31 and 2018/02/01 to 2019/10/31) to allow for the possibility of a trend break. Figure 9 shows the results of the IRF of the SVAR-LiNGAM(3) for the whole period in comparison with the two subperiods. As noted in Section 3.1.4, a spot-rate shock affected all four variables immediately; additionally, the results obtained from the structural model indicated that it affected them less in the latter period. For the Bitcoin shock, the other variables were affected within a day and the effect remained permanent; although the effect of Bitcoin on the spot rate was limited, its scale slightly increased in the second subperiod. For the Ethereum and Ripple shocks, the responses were slight and diminished within four days. These results imply that the spot rate has been the most influential variable, but that the scale of its effect has been falling during these two years; Bitcoin, on the other hand, has been getting more influential, enough to affect the EUR/USD spot rate.
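Given the estimates $\hat{B}_0, \hat{B}_1, \ldots$ from the sketch above, the structural impulse responses can be computed by mapping back to the reduced form and iterating the VAR recursion; confidence bands then follow from repeating the whole fit on bootstrap resamples of the residuals. A minimal sketch:

```python
import numpy as np

def structural_irf(B0, B, horizon=10):
    """Responses to unit structural shocks e: convert the structural
    form back to reduced form, build the MA coefficients by the VAR
    recursion, then apply the instantaneous mixing (I - B0)^{-1}."""
    p = B0.shape[0]
    inv = np.linalg.inv(np.eye(p) - B0)
    M = [inv @ Bh for Bh in B]                 # reduced-form lag matrices
    Phi = [np.eye(p)]                          # MA(infinity) coefficients
    for h in range(1, horizon + 1):
        Phi.append(sum(M[j] @ Phi[h - 1 - j]
                       for j in range(min(h, len(M)))))
    return np.array([Ph @ inv for Ph in Phi])  # shape (horizon+1, p, p)
```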
Conclusions
Apart from the conventional VAR and VECM approaches, we also applied the SVAR-LiNGAM to illustrate the causal relationships among the four variables. It was notable that the causal order, spot rate → Bitcoin → Ethereum → Ripple, was obtained by this approach. All the instantaneous effects were strongly positive; however, the three lagged effects from $t-1$ were negative. Here, we found a clear difference between the results obtained from the conventional VECM and from the SVAR-LiNGAM: as the former approach does not take instantaneous effects into account in its system, it could not capture the appropriate causal graph among the variables. It was remarkable that spurious and wrongly lagged effects existed in the conventional VAR, and that such problems could be removed by the application of the SVAR-LiNGAM.
Moreover, it was notable that $BTC_{t-1}$ positively affected $S_t$. This implies that Bitcoin can influence the EUR/USD spot rate positively with a one-day time lag. It was also remarkable that the instantaneous effects were all positive, whereas the spot rate at $t-1$ affected the prices of the three crypto assets negatively. Further research and analysis should follow to clarify why and how such causal relationships exist in the crypto-asset economy.

Figures

Fig. 1. Historical price data of Bitcoin, Ethereum, and Ripple in USD.
Fig. 2. Historical price data of the three crypto-asset exchange rates and the FX spot rate of EUR/USD.

Fig. 3. Histograms and Q-Q plots of the error terms of the four-variable VECM(3).

Fig. 4. Graphical representation of the four variables of the SVAR-LiNGAM.

Fig. 5. Plots of the impulse response function of the SVAR-LiNGAM.

Tables

Table 1. Some descriptive statistics. Note: "Original" is the raw data without filling the weekend missing values; "Weekend-filled" complements the missing values with the closing price on the preceding Friday. The original and complemented data show close numbers in the summary statistics, which is indicative of the unbiasedness of this complementation. The following tests involving the spot rate are run on the weekend-filled series.

Table 2. Results of the ADF tests on the original and first-difference series. Note: *** denotes significance at the 1% level. We considered the constant model specification because the estimates of a trend term were small and insignificant when we examined the four processes under the trend model in the ADF tests.

Table 3. Results of the Johansen cointegration test for $(\Delta S_t, \Delta BTC_t, \Delta ETH_t, \Delta XRP_t)$. Note: *** denotes significance at the 1% level.

Table 4. Results of normality tests.

Table 5. Coefficient matrices $\hat{M}_h$ of lagged effects from the conventional VAR(3) model. Note: *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively.

Table 6. Coefficient matrices $\hat{B}_h$ of instantaneous and lagged effects from the SVAR-LiNGAM(3). Note: ** and *** denote significance at the 5% and 1% levels, respectively.
Acknowledgment
T.K. wishes to express his gratitude to the late Dr. Lukáš Pichl for helpful suggestions and discussions. This work was supported by JSPS KAKENHI Grant Numbers 17K01270 and 20K01752, and the Nomura Foundation.
References

Bernanke, B. S. (1986). Alternative explanations of the money-income correlation. Carnegie-Rochester Conference Series on Public Policy, 25, 49-99.
Blanchard, O. J., & Quah, D. (1989). The dynamic effects of aggregate demand and aggregate supply. The American Economic Review, 79(4), 655-673.
Cheah, E., & Fry, J. (2015). Speculative bubbles in Bitcoin markets? An empirical investigation into the fundamental value of Bitcoin. Economics Letters, 130, 32-36.
Comon, P. (1994). Independent component analysis, a new concept? Signal Processing, 36(3), 287-314.
Corbet, S., Lucey, B., & Yarovaya, L. (2018). Datestamping the Bitcoin and Ethereum bubbles. Finance Research Letters, 26, 81-88.
Dickey, D., & Fuller, W. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366), 427-431.
Eriksson, J., & Koivunen, V. (2004). Identifiability, separability, and uniqueness of linear ICA models. IEEE Signal Processing Letters, 11(7), 601-604.
Hamilton, J. D. (1994). Time Series Analysis. Princeton University Press.
Hyvärinen, A. (1999). Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3).
Hyvärinen, A., Karhunen, J., & Oja, E. (2001). Independent Component Analysis. John Wiley & Sons.
Hyvärinen, A., Zhang, K., Shimizu, S., & Hoyer, P. O. (2010). Estimation of a structural vector autoregression model using non-Gaussianity. Journal of Machine Learning Research, 11, 1709-1731.
Johansen, S. (1988). Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control, 12(2), 231-254.
Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica, 59(6), 1551-1580.
Johansen, S., & Juselius, K. (1990). Maximum likelihood estimation and inference on cointegration--with applications to the demand for money. Oxford Bulletin of Economics and Statistics, 52(2), 169-210.
Kilian, L., & Lütkepohl, H. (2017). Structural Vector Autoregressive Analysis. Cambridge: Cambridge University Press.
Lütkepohl, H., Saikkonen, P., & Trenkler, C. (2001). Maximum eigenvalue versus trace tests for the cointegrating rank of a VAR process. The Econometrics Journal, 4(2), 287-310.
Moneta, A., Entner, D., Hoyer, P. O., & Coad, A. (2013). Causal inference by independent component analysis: Theory and applications. Oxford Bulletin of Economics and Statistics, 75(5), 705-730.
Nadarajah, S., & Chu, J. (2017). On the inefficiency of Bitcoin. Economics Letters, 150, 6-9.
Nan, Z., & Kaizoji, T. (2019). Market efficiency of the bitcoin exchange rate: weak and semi-strong form tests with the spot, futures and forward foreign exchange rates. International Review of Financial Analysis, 64, 273-281.
R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461-464.
Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. University of Illinois Press.
Shimizu, S. (2014). LiNGAM: Non-Gaussian methods for estimating causal structures. Behaviormetrika, 41(1), 65-98.
Shimizu, S. (2017). Toukeiteki Inga Tansaku [Causal Discovery]. Koudansha.
Shimizu, S., Hoyer, P. O., Hyvärinen, A., & Kerminen, A. (2006). A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7, 2003-2030.
Sims, C. A. (1986). Are forecasting models usable for policy analysis? Quarterly Review, Federal Reserve Bank of Minneapolis, Winter, 2-16.
Tiwari, A. K., Jana, R. K., Das, D., & Roubaud, D. (2018). Informational efficiency of Bitcoin - An extension. Economics Letters, 163, 106-109.
Urquhart, A. (2016). The inefficiency of Bitcoin. Economics Letters, 148, 80-82.
| [] |
[
"Anomalous cooling and overcooling of active systems",
"Anomalous cooling and overcooling of active systems"
] | [
"Fabian Jan Schwarzendahl \nInstitut für Theoretische Physik II: Weiche Materie\nHeinrich-Heine-Universität Düsseldorf\n40225DüsseldorfGermany\n",
"Hartmut Löwen \nInstitut für Theoretische Physik II: Weiche Materie\nHeinrich-Heine-Universität Düsseldorf\n40225DüsseldorfGermany\n"
] | [
"Institut für Theoretische Physik II: Weiche Materie\nHeinrich-Heine-Universität Düsseldorf\n40225DüsseldorfGermany",
"Institut für Theoretische Physik II: Weiche Materie\nHeinrich-Heine-Universität Düsseldorf\n40225DüsseldorfGermany"
] | [] | The phenomenon that a system at a hot temperature cools faster than at a warm temperature, referred to as the Mpemba effect, has been recently realized for trapped colloids. Here, we investigate the cooling and heating process of a self-propelling active colloid using numerical simulations and theoretical calculations with a model that can directly be tested in experiments. Upon cooling the particles' active motion induces a Mpemba effect. Transiently the system can even exhibit smaller temperatures than its final temperature, a surprising phenomenon which we refer to as activity-induced overcooling. | 10.1103/physrevlett.129.138002 | [
"https://export.arxiv.org/pdf/2111.06109v1.pdf"
] | 243,985,617 | 2111.06109 | ab33a2f40dc722177cd7f49adc0381fa223cc05d |
Anomalous cooling and overcooling of active systems
Fabian Jan Schwarzendahl
Institut für Theoretische Physik II: Weiche Materie
Heinrich-Heine-Universität Düsseldorf
40225DüsseldorfGermany
Hartmut Löwen
Institut für Theoretische Physik II: Weiche Materie
Heinrich-Heine-Universität Düsseldorf
40225DüsseldorfGermany
Anomalous cooling and overcooling of active systems
(Dated: January 17, 2022)
The phenomenon that a system at a hot temperature cools faster than at a warm temperature, referred to as the Mpemba effect, has recently been realized for trapped colloids. Here, we investigate the cooling and heating process of a self-propelling active colloid using numerical simulations and theoretical calculations with a model that can be directly tested in experiments. Upon cooling, the particles' active motion induces a Mpemba effect. Transiently, the system can even exhibit smaller temperatures than its final temperature, a surprising phenomenon which we refer to as activity-induced overcooling.
When water is cooled down to be frozen, it seems intuitive that the cooler the water the faster it will freeze. Contrary to that, about 2300 years ago, Aristotle already noticed that "to cool hot water quickly, begin by putting it in the sun" [1], observing that hot water can be cooled and also frozen faster than warm water. The first systematic study to investigate this effect was conducted in the 1960s by Mpemba [2], after whom it was named. Thereafter, numerous experimental studies followed, but no consensus on the cause of the Mpemba effect for water has been found so far [3][4][5][6][7][8]. Recently, the Mpemba effect was discovered for colloidal particles that are subjected to a thermal quench [9], where the colloids were confined to a double well potential mimicking the liquid and frozen state of water. The experimental findings match theoretical predictions giving a clear explanation of the underlying effect [10,11] and provide an experimental road to recent theoretical advances in understanding the Mpemba effect [12][13][14][15].
Cooling of a liquid or the thermal quench of a colloidal particle is a nonequilibrium process, since decreasing the temperature effectively removes energy from the system. Nonetheless, the constituents involved in the cooling process, i.e. the liquid or colloids, are passive and do not themselves add or remove energy from the system. In contrast, active colloids, which are self-propelled, constantly pump energy into the system and are therefore inherently out of equilibrium [16,17]. Active colloidal particles have been realized in various experimental systems [16] and can show fascinating effects such as wall accumulation [18][19][20], activity induced ratchet motion [21,22], emergent nonequilibrium fluxes [23], motility induced phase separation [24][25][26], or vortices [27]. Given their nonequilibrium nature, we ask here if and under which conditions active colloids can exhibit a Mpemba effect and if activity can change it. Surprisingly, we find that activity induces an overcooling of the system, where it transiently reaches a temperature that is lower than its final steady state temperature (Fig. 1(d)).
To illustrate the Mpemba effect, we imagine two systems, one of which is at an initial warm temperature and the second at an initial hot temperature, and ask the systems to cool down to an imposed cold temperature (Fig. 1(a)). Normally, the warm system cools faster than the hot system. A Mpemba effect occurs if the hot system cools faster than the warm system. Analogously, a cold system can heat up faster than the warm system (Fig. 1(a)), which is the inverse Mpemba effect [12,28].
* [email protected]
[Fig. 1(b) residue: double-well potential U(x) plotted against x/l, with walls at x_min and x_max.]
To identify such scenarios, we investigate the cooling and heating process of active colloids in a confining asymmetric potential (Fig. 1(b)). We model the colloid as an active Brownian particle and show that it exhibits anomalous cooling, which is induced by its active motion. These results are supported by theoretical calculations based on Master equations, which give further insight into the relaxation process described by a series of exponential decays. We consider an active Brownian particle (ABP) modeling a colloid that is suspended in a bath at temperature T. We assume an overdamped motion of the particle in one spatial dimension that is described by the following equation
$\frac{dx}{dt} = v_0\, n - \partial_x U(x) + \eta,$ (1)
where $v_0$ is the particle's self-propulsion speed and $n$ is the direction of propulsion. The particle's propulsion direction is inverted after a time $t_p$, which follows an exponential distribution $p(t_p) = (1/\tau_p)\, e^{-t_p/\tau_p}$ with a persistence time $\tau_p$. Furthermore, $\eta$ is a Gaussian white noise with zero mean and variance
$\langle \eta(t)\, \eta(t') \rangle = 2 D_T\, \delta(t - t'),$
where $D_T$ is the particle's translational diffusion constant. The latter is controlled by the temperature $T$ of the bath, that is $D_T = k_B T / \gamma$, where $k_B$ is the Boltzmann constant and $\gamma$ is the particle's friction coefficient. The particle is exposed to an external double well potential $U(x)$ whose explicit form is
$U(x) = \begin{cases} -F_0\, x, & \text{if } x < x_{\min}, \\ F_B \left[ (1 - x^2)^2 - \tfrac{1}{2} x \right], & \text{if } x_{\min} < x < x_{\max}, \\ F_0\, x, & \text{if } x > x_{\max}. \end{cases}$ (2)
The potential Eq. (2) is displayed in Fig. 1(b): the terms in Eq. (2) proportional to $F_0$ model repulsive walls at positions $x_{\min}$ and $x_{\max}$, and the term proportional to $F_B$ models an asymmetric potential with two minima of different height. The size of the box to which our particle is confined, $l = |x_{\max} - x_{\min}|$, gives a natural unit of length; together with the translational diffusion constant, we obtain a natural unit of time $\tau_D = l^2/D_T$ and velocity $v_D = l/\tau_D$. Experimentally, the system described by Eq. (1) can for example be realized using a Janus colloid [16,29], yielding a self-propulsion, together with optical [9,30] or acoustic [31] traps that establish an external potential. Similar models for active Brownian particles in a double well potential have been considered in [32][33][34][35][36][37][38][39].
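For readers who want to reproduce the stochastic dynamics, the following is a minimal Euler-Maruyama sketch of Eqs. (1)-(2) in Python; the numerical parameter values are illustrative placeholders, not the ones used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x, F0=10.0, FB=1.0, xmin=-1.0, xmax=1.0):
    """-dU/dx for the double-well potential of Eq. (2);
    F0, FB, xmin, xmax are placeholder values."""
    if x < xmin:
        return F0          # left wall pushes the particle to the right
    if x > xmax:
        return -F0         # right wall pushes it back to the left
    return FB * (4.0 * x * (1.0 - x**2) + 0.5)

def simulate(T, v0, tau_p, dt=1e-4, n_steps=200_000, kB=1.0, gamma=1.0):
    """Euler-Maruyama integration of Eq. (1) with telegraphic
    reversals of the propulsion direction n (rate 1/tau_p)."""
    DT = kB * T / gamma
    x, n = 0.0, 1.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        if rng.random() < dt / tau_p:      # exponential waiting times
            n = -n
        x += (v0 * n + force(x)) * dt
        x += np.sqrt(2.0 * DT * dt) * rng.standard_normal()
        traj[i] = x
    return traj
```

Histogramming many such trajectories at fixed times yields the distributions P(x, t) discussed below; a thermal quench simply amounts to changing T (and hence D_T) at t = 0.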
[Fig. 3 residue: panels (a)-(d) show t_h/max(t_h) for passive and active particles and phase diagrams in the self-propulsion v_0/v_D versus persistence τ_p/τ_D plane, with regions labeled active Mpemba, normal cooling, normal heating, and inverse Mpemba.]
In the following we simulate many realizations of Eq. (1), which yield the probability distribution $P(x,t)$ to find the particle at position $x$ and time $t$. Typical steady state distributions, denoted by $\pi(x)$, for a passive ($v_0 = 0$) and an active particle ($v_0 \neq 0$) are shown in Fig. 1(c), where the particles are localised around the two minima of the external potential. Additionally, the active particle's probability distribution is enhanced towards the wall regions due to the well known wall accumulation effect that arises from the particle's persistence and active motion [40][41][42][43][44][45][46][47][48][49][50].
Let us now consider two active Brownian particle systems which are in their nonequilibrium steady state at a hot and a warm temperature (Fig. 2(a)). The respective temperatures are $T_{\mathrm{hot}}$ and $T_{\mathrm{warm}}$ with $T_{\mathrm{hot}} > T_{\mathrm{warm}}$. We then apply a thermal quench to each particle, that is, we instantaneously change each particle's temperature to $T_{\mathrm{cold}}$. Figures 2(a)-(c) show how the distributions of the warm and hot particle relax and finally reach their new steady state distributions $\pi_{\mathrm{cold}}(x)$ at temperature $T_{\mathrm{cold}}$ (Fig. 2(c)). Intriguingly, we find the hot particle relaxes faster (Fig. 2(b)) than the warm particle (Fig. 2(c)). Hence, we observe that the hot particle cools faster than the warm particle, which is an anomalous cooling, i.e. a Mpemba effect. Initially, the warm particle's distribution has a stronger localization in the minima of the external potential Eq. (2) than the hot particle (Fig. 2(a)). This means that the effective barrier that a warm particle has to overcome to relax to a cold temperature is higher than the barrier of the hot particle. Therefore, the hot particle can cool faster than the warm particle.
To get further insight into the relaxation process, we study how far apart the cooled steady state distribution $\pi_{\mathrm{cold}}(x)$ and the probability distribution $P(x,t)$ of a particle are from each other during a cooling process. Explicitly, we discretize the spatial components of both $\pi_{\mathrm{cold}}(x)$ and $P(x,t)$ into $N$ grid points, giving $\pi_{i,\mathrm{cold}}$ and $P_i(t)$ respectively, and compute the distance measure
$D(t) = \sum_{i=0}^{N} |P_i(t) - \pi_{i,\mathrm{cold}}|.$ (3)
Figure 2(d) shows the distance measure Eq. (3), which quantifies the cooling process of the hot and warm particle, again showing that the hot particle relaxes, and therefore cools, faster than the warm particle. From this measure we can extract a cooling time $t_c$, defined as the time at which $D(t)$ has decayed to zero, or here to the noise level. Using the cooling time $t_c$ we explore a range of initial temperatures $T$, each of which we cool down to $T_{\mathrm{cold}}$, yielding cooling curves as shown in Fig. 3(a). For passive particles ($v_0 = 0$) our system shows normal cooling (Fig. 3(a)), that is, the cooling time increases monotonously with the initial temperature. Contrary to that, if we turn on the active motion ($v_0 \neq 0$), the cooling curve becomes nonmonotonous, i.e. a hot temperature cools faster than a colder one, which is a Mpemba effect that is induced by activity. Here, activity changes the probability distributions of the particles by inducing a wall accumulation (Fig. 1(c)), which in turn enables the Mpemba effect. To scrutinize the dependence of the active Mpemba effect on the particle's self-propulsion and persistence, we build a phase diagram, shown in Fig. 3(b) (see SI for details). As expected, for low persistence and self-propulsion we have a situation that is similar to a passive particle and no Mpemba effect is found. Increasing both the self-propulsion and persistence then leads to an active Mpemba effect.
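In a post-processing script, the distance measure Eq. (3) and the cooling time t_c can be computed along the following lines (a sketch; the noise threshold is an assumed placeholder):

```python
import numpy as np

def distance(P, pi_cold):
    """Eq. (3): L1-type distance between a histogrammed distribution
    P(t) and the cold steady state, both given on N bins."""
    return np.sum(np.abs(P - pi_cold))

def cooling_time(P_t, pi_cold, times, noise_level=1e-2):
    """First time at which D(t) has decayed to the noise floor;
    P_t has shape (n_times, N)."""
    D = np.array([distance(P, pi_cold) for P in P_t])
    below = np.flatnonzero(D < noise_level)
    return times[below[0]] if below.size else np.inf
```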
Analogously to cooling, we study the heating process of a particle towards the temperature $T_{\mathrm{hot}}$. Here, our passive particle has a nonmonotonous heating curve (Fig. 3(c)), an inverse Mpemba effect, which is eliminated by the active motion. A similar picture arises in the phase diagram (Fig. 3(d)): in the limit of low self-propulsion and persistence we find an inverse Mpemba effect, which vanishes for increasing self-propulsion and persistence.
In order to get a deeper insight into the relaxation process of our active Brownian particle we move to a statistically equivalent version of Eq. (1) in terms of a probability P (x, t) to find the particle at position x and time t and its polarization P * (x, t). Explicitly we have
$\partial_t P = v_0\, \partial_x P^* - \partial_x[(\partial_x U(x))P] + D_T\, \partial_x^2 P,$ (4)
$\partial_t P^* = v_0\, \partial_x P - \partial_x[(\partial_x U(x))P^*] + D_T\, \partial_x^2 P^* - \frac{2}{\tau_p} P^*,$ (5)
where the first terms on the right hand side of both Eqs. (4)-(5) stem from the self-propulsion of the active particle, the second terms stem from the external potential (Eq. (2)), and the third terms account for translational diffusion, which in turn is controlled by the temperature, as described above. The last term in Eq. (5) comes from the persistent motion of the particle, whose direction is reversed after a persistence time $\tau_p$ (for a derivation of Eqs. (4)-(5) see SI). For our cooling process with a thermal quench, we have to solve Eq. (4) at temperature $T_{\mathrm{cold}}$ with an initial distribution $P(x,0) = \pi_{\mathrm{ini}}(x)$ that corresponds to a hot/warm initial temperature [12]. This problem can be tackled using an eigenfunction expansion [51], giving the formal solution
$P(x,t) = \pi_{\mathrm{cold}}(x) + \sum_{n=2}^{\infty} a_{n,\mathrm{ini}}\, e^{-\lambda_n t}\, v_n(x),$ (6)
where $v_n(x)$ are the eigenfunctions of Eq. (4), $\lambda_n$ are the eigenvalues, and $a_{n,\mathrm{ini}}$ are the overlap values with the initial condition (for details see SI). Here, the eigenvalues are sorted as $\lambda_2 < \lambda_3 < \ldots$, such that the dominant decay at long time scales in Eq. (6) is governed by the eigenvalue $\lambda_2$. Further, the overlap values $a_{n,\mathrm{ini}}$ are the only pieces in Eq. (6) that depend on the initial temperature; therefore $a_{2,\mathrm{ini}}$, which relates to the slowest eigenvalue $\lambda_2$, is the relevant quantity when we compare different initial temperatures. The dominant overlap value $a_{2,\mathrm{ini}}$ is associated with the distance measure Eq. (3): it is proportional to the ordinate intercept of the long-time exponential decay in Eq. (3) (see SI), that is, $\Delta D \sim a_{2,\mathrm{ini}}$ (Fig. 4(b), inset). Furthermore, the distance measure and cooling time are related by $t_c \sim \ln(\Delta D)$ (see SI). Using $\Delta D$ we can now compare the theoretical approach Eqs. (4)-(6) to our numerical simulations of Eq. (1) (Fig. 4(a)), showing good agreement. Here, both approaches show a nonmonotonic behavior of $\Delta D$ and therefore also of the cooling time, which implies a Mpemba effect.
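The spectrum entering Eq. (6) can be obtained numerically by discretising the generator of Eqs. (4)-(5). The sketch below uses central finite differences on a uniform grid with deliberately crude boundary handling, so it illustrates the structure rather than the actual scheme used here:

```python
import numpy as np

def generator(xs, U_prime, DT, v0, tau_p):
    """Finite-difference matrix for Eqs. (4)-(5) acting on the stacked
    vector (P, P*); boundary rows are simple truncations."""
    N, h = len(xs), xs[1] - xs[0]
    D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
    D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
          + np.diag(np.ones(N - 1), -1)) / h**2
    drift = D1 @ np.diag(U_prime(xs))        # discretises  ∂x[(∂xU) · ]
    A = -drift + DT * D2
    return np.block([[A, v0 * D1],
                     [v0 * D1, A - (2.0 / tau_p) * np.eye(N)]])

# The decay rates of Eq. (6) are minus the eigenvalues of the generator;
# sorting them gives 0 ≈ λ1 < λ2 < λ3 < ...
# lam = np.sort(-np.real(np.linalg.eigvals(generator(xs, Up, DT, v0, tp))))
```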
We now investigate the relevance of activity in our initial distribution $\pi_{\mathrm{ini}}(x)$ and start from an initial distribution without activity ($v_0 = 0$), which is a Boltzmann distribution. Then, at the thermal quench, we lower the temperature of the system and instantaneously turn on the activity of the particle ($v_0 \neq 0$). The resulting cooling process (Fig. 4(b)) surprisingly takes longest for $T = T_{\mathrm{cold}}$. At higher temperatures we find two minima at which $\Delta D \approx 0$, where the probability distribution Eq. (6) therefore decays faster, with a rate set by the next eigenvalue $\lambda_3$. In the classification of [13] this is referred to as the strong Mpemba effect.
Inspired by the idea that a cooling system successively runs through temperatures until it arrives at its steady state, we define a temperature based on the difference measure Eq. (3) that can be traced in time. At a given time $t$ we compare the probability distribution $P(x,t)$ to all possible steady states $\pi_{T_{\mathrm{eff}}}(x)$ with effective temperatures $T_{\mathrm{eff}}$; explicitly,
$D_{T_{\mathrm{eff}}}(t) = \sum_i |P_i(t) - \pi_{i,T_{\mathrm{eff}}}|,$ (7)
which is discretized as before. The temperature $T_{\mathrm{eff}}$ for which $D_{T_{\mathrm{eff}}}(t)$ is minimal is then defined as the effective temperature of the system. We now study the evolution of the effective temperature during the cooling process of our model colloid to temperature $T_{\mathrm{cold}}$. For a passive system the effective temperature decays monotonically until it reaches its steady state (Fig. 5(a)). Turning on activity, we find that the system first goes to a lower effective temperature than its final steady state temperature (Fig. 5(a)). This surprising effect is an activity-induced overcooling of the system. The probability distributions (Fig. 5(c)) show how the distribution $P(x,t)$ is closer to an effective distribution $\pi_{T_{\mathrm{eff}}}(x)$ than to the distribution of the cold state $\pi_{\mathrm{cold}}$, meaning that the system has an effective temperature $T_{\mathrm{eff}} < T_{\mathrm{cold}}$. Our theoretical approach using Eqs. (4)-(5) also shows an overcooling (Fig. 5(a), inset); here, we note that only the long time limit is captured, due to the truncation of the expansion Eq. (6) at the second order. To investigate the dependence of the overcooling effect on the particle's active motion, we compute a phase diagram by varying the self-propulsion and persistence time (Fig. 5(d)). At low self-propulsion and persistence we recover the passive case with normal cooling, while increasing both leads to an overcooling. As an alternative measure for the overcooling of our system we compute the lag diffusion defined as $D(t, \Delta t) = \langle [x(t + \Delta t) - x(t)]^2 \rangle / \Delta t$, where $t$ is the time at which we start measuring the diffusion and $\Delta t$ is the lag time. The lag diffusion can be seen as an alternative, dynamical definition of an effective temperature [52]. By computing the reduced lag diffusion $D(t, \Delta t) - D_\infty(\Delta t)$, where $D_\infty(\Delta t)$ is the diffusion at the end of our simulations, we observe that the diffusion shows lower values than its final steady state value (Fig. 5(e)). This effect is purely induced by activity; it is not present for a passive system (see SI).
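Both observables are straightforward to evaluate from simulation data; a minimal sketch (the array shapes are our own conventions):

```python
import numpy as np

def effective_temperature(P, steady_states, temps):
    """Eq. (7): return the T_eff whose steady state is closest to P;
    steady_states has shape (n_temps, N), one row per candidate T."""
    D = np.sum(np.abs(steady_states - P), axis=1)
    return temps[np.argmin(D)]

def lag_diffusion(x, dt, lag):
    """D(t, Δt) = <[x(t+Δt) - x(t)]^2>/Δt over an ensemble of
    trajectories; x has shape (n_traj, n_steps), dt is the step."""
    k = int(round(lag / dt))
    disp = x[:, k:] - x[:, :-k]
    return (disp**2).mean(axis=0) / lag
```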
In summary we have shown that a confined colloid induces a Mpemba effect and even a transient overcooling by its activity. We supported our findings using theoretical calculations revealing a relaxation process that depends on a series of exponential decays. In principle our model allows for a direct experimental verification with active colloids.
We have investigated the interplay of two nonequilibrium phenomena, the cooling or heating process of a colloid and its active motion, showing that the active motion of a particle can fundamentally change the relaxation process. In the future it will be interesting to see how this translates to higher dimensions and many-particle systems. Overcooling enables a fridge to transiently cool a system to lower temperatures than the prescribed temperature, which raises the question of how the effect optimises the function of coupled heat engines [53]. While we focused on active colloids in this letter, similar effects might arise for biological microswimmers, such as bacteria or microalgae, which can change their motility pattern, and therefore their effective temperature, in response to external stimuli [54].
FIG. 2. Probability distributions of cooling active colloids. We show a hot (red dotted line, T_hot = 2500 T_cold), a warm (blue dashed line, T_warm = 50 T_cold) and a cold (black solid line) colloid. (a) Initial steady state distributions at time t = 0; a temperature quench is applied at this time. (b) Distributions at time t = 3 × 10^-3 τ_D; the hot colloid has relaxed to T_cold. (c) Distributions at time t = 50 × 10^-3 τ_D; the warm colloid has relaxed to T_cold. (d) Distance D(t) between probability distributions for a hot (red line), warm (blue line), and cold (black line) colloid, compared to the steady state of a cold colloid.

FIG. 3. (a) Cooling time as a function of initial temperature T to final temperature T_cold for passive (gray circles) and active (orange diamonds) particles (v_0 = 3.9 × 10^3 v_D, τ_p = 5 × 10^-4 τ_D). (b) Cooling phase diagram as a function of self-propulsion and persistence showing normal cooling and the active Mpemba effect. (c) Heating time as a function of initial temperature T to final temperature T_hot for active (green triangles, v_0 = 9 v_D, τ_p = 1.2 τ_D) and passive (purple pentagons) particles. (d) Heating phase diagram for varying self-propulsion and persistence showing the inverse Mpemba effect and its elimination (normal heating).

FIG. 4. Cooling process of temperatures T to T_cold, quantified by ΔD. The solid blue line shows the theoretical calculation and the orange data points are extracted from our numerical approach (inset of (b)). In (a) the initial state is active, whereas in (b) the initial state is passive. (b, inset) Example cooling process (black solid line) on a half-logarithmic scale, to which we fit two exponential functions (orange dashed lines) to obtain ΔD (see ordinate).

FIG. 5. (a) Effective temperature T_eff as a function of time for an active (orange solid line) and a passive (grey dashed line) particle. (Inset) Effective temperature T_eff extracted from our theoretical approach. (b)-(c) Probability distributions of the cooling process (orange dash-dotted line), effective distribution π_T_eff (solid blue line) and final steady state (green dotted line); (b) shows a typical state at the beginning of the cooling process and (c) a state where the system is overcooled. (d) Phase diagram showing overcooling (orange diamonds) and normal cooling (grey circles) for varying self-propulsion and persistence. (e) Reduced lag diffusion as a function of time for different lag times (color code).
ACKNOWLEDGMENTS
We thank Lorenzo Caprini for insightful discussions. H.L. was supported within the SPP 2065 (project LO 418/25-1).
[1] Aristotle, Meteorology, Book 1, Part 12 (translated by E. W. Webster).
[2] E. B. Mpemba and D. G. Osborne, Cool?, Physics Education 4, 172 (1969).
[3] M. Jeng, The Mpemba effect: When can hot water freeze faster than cold?, American Journal of Physics 74, 514 (2006).
[4] B. Wojciechowski, I. Owczarek, and G. Bednarz, Freezing of aqueous solutions containing gases, Crystal Research and Technology 23, 843 (1988).
[5] D. Auerbach, Supercooling and the Mpemba effect: When hot water freezes quicker than cold, American Journal of Physics 63, 882 (1995).
[6] M. Vynnycky and N. Maeno, Axisymmetric natural convection-driven evaporation of hot water and the Mpemba effect, International Journal of Heat and Mass Transfer 55, 7297 (2012).
[7] M. Vynnycky and S. Kimura, Can natural convection alone explain the Mpemba effect?, International Journal of Heat and Mass Transfer 80, 243 (2015).
[8] H. C. Burridge and P. F. Linden, Questioning the Mpemba effect: Hot water does not cool more quickly than cold, Scientific Reports 6, 1 (2016).
[9] A. Kumar and J. Bechhoefer, Exponentially faster cooling in a colloidal system, Nature 584, 64 (2020).
[10] J. Bechhoefer, A. Kumar, and R. Chétrite, A fresh understanding of the Mpemba effect, Nature Reviews Physics 3, 534-535 (2021).
[11] R. Chétrite, A. Kumar, and J. Bechhoefer, The metastable Mpemba effect corresponds to a non-monotonic temperature dependence of extractable work, Frontiers in Physics 9, 141 (2021).
[12] Z. Lu and O. Raz, Nonequilibrium thermodynamics of the markovian Mpemba effect and its inverse, Proceedings of the National Academy of Sciences 114, 5083 (2017).
[13] I. Klich, O. Raz, O. Hirschberg, and M. Vucelja, Mpemba index and anomalous relaxation, Physical Review X 9, 021060 (2019).
[14] A. Gal and O. Raz, Precooling strategy allows exponentially faster heating, Physical Review Letters 124, 060602 (2020).
[15] F. Carollo, A. Lasanta, and I. Lesanovsky, Exponentially accelerated approach to stationarity in markovian open quantum systems through the Mpemba effect, arXiv preprint arXiv:2103.05020 (2021).
[16] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Reviews of Modern Physics 88, 045006 (2016).
[17] G. Gompper, R. G. Winkler, T. Speck, A. Solon, C. Nardini, F. Peruani, H. Löwen, R. Golestanian, U. B. Kaupp, L. Alvarez, et al., The 2020 motile active matter roadmap, Journal of Physics: Condensed Matter 32, 193001 (2020).
[18] N. Narinder, J. R. Gomez-Solano, and C. Bechinger, Active particles in geometrically confined viscoelastic fluids, New Journal of Physics 21, 093058 (2019).
[19] G. Volpe, I. Buttinoni, D. Vogt, H.-J. Kümmerer, and C. Bechinger, Microswimmers in patterned environments, Soft Matter 7, 8810 (2011).
[20] C. Maggi, J. Simmchen, F. Saglimbeni, J. Katuri, M. Dipalo, F. De Angelis, S. Sanchez, and R. Di Leonardo, Self-assembly of micromachining systems powered by Janus micromotors, Small 12, 446 (2016).
[21] R. Di Leonardo, L. Angelani, D. Dell'Arciprete, G. Ruocco, V. Iebba, S. Schippa, M. P. Conte, F. Mecarini, F. De Angelis, and E. Di Fabrizio, Bacterial ratchet motors, Proceedings of the National Academy of Sciences 107, 9541 (2010).
[22] J. Rodenburg, S. Paliwal, M. de Jager, P. G. Bolhuis, M. Dijkstra, and R. van Roij, Ratchet-induced variations in bulk states of an active ideal gas, The Journal of Chemical Physics 149, 174910 (2018).
[23] J. Cammann, F. J. Schwarzendahl, T. Ostapenko, D. Lavrentovich, O. Bäumchen, and M. G. Mazza, Emergent probability fluxes in confined microbial navigation, Proceedings of the National Academy of Sciences 118 (2021).
[24] I. Buttinoni, J. Bialké, F. Kümmel, H. Löwen, C. Bechinger, and T. Speck, Dynamical clustering and phase separation in suspensions of self-propelled colloidal particles, Physical Review Letters 110, 238301 (2013).
[25] J. Palacci, S. Sacanna, A. P. Steinberg, D. J. Pine, and P. M. Chaikin, Living crystals of light-activated colloidal surfers, Science 339, 936 (2013).
[26] J. Bialké, T. Speck, and H. Löwen, Crystallization in a dense suspension of self-propelled particles, Physical Review Letters 108, 168301 (2012).
[27] A. Bricard, J.-B. Caussin, D. Das, C. Savoie, V. Chikkadi, K. Shitara, O. Chepizhko, F. Peruani, D. Saintillan, and D. Bartolo, Emergent vortices in populations of colloidal rollers, Nature Communications 6, 1 (2015).
[28] A. Kumar, R. Chetrite, and J. Bechhoefer, Anomalous heating in a colloidal system, arXiv preprint arXiv:2104.12899 (2021).
[29] J. R. Gomez-Solano, S. Samin, C. Lozano, P. Ruedas-Batuecas, R. van Roij, and C. Bechinger, Tuning the motility and directionality of self-propelled colloids, Scientific Reports 7, 1 (2017).
[30] C. Shen and H. D. Ou-Yang, The far-from-equilibrium fluctuation of an active brownian particle in an optical trap, in Optical Trapping and Optical Micromanipulation XVI, Vol. 11083, edited by K. Dholakia and G. C. Spalding, International Society for Optics and Photonics (SPIE, 2019) pp. 84-91.
[31] S. C. Takatori, R. De Dier, J. Vermant, and J. F. Brady, Acoustic trapping of active matter, Nature Communications 7, 1 (2016).
[32] L. Caprini, F. Cecconi, and U. M. B. Marconi, Correlated escape of active particles across a potential barrier, arXiv preprint arXiv:2110.03042 (2021).
[33] L. Caprini, U. Marini Bettolo Marconi, A. Puglisi, and A. Vulpiani, Active escape dynamics: The effect of persistence on barrier crossing, The Journal of Chemical Physics 150, 024902 (2019).
[34] A. Sharma, R. Wittmann, and J. M. Brader, Escape rate of active particles in the effective equilibrium approach, Physical Review E 95, 012115 (2017).
[35] E. Woillez, Y. Kafri, and V. Lecomte, Nonlocal stationary probability distributions and escape rates for an active Ornstein-Uhlenbeck particle, Journal of Statistical Mechanics: Theory and Experiment 2020, 063204 (2020).
[36] Y. Fily, Self-propelled particle in a nonconvex external potential: Persistent limit in one dimension, The Journal of Chemical Physics 150, 174906 (2019).
[37] E. Woillez, Y. Zhao, Y. Kafri, V. Lecomte, and J. Tailleur, Activated escape of a self-propelled particle from a metastable state, Physical Review Letters 122, 258001 (2019).
[38] L. Zanovello, P. Faccioli, T. Franosch, and M. Caraglio, Optimal navigation strategy of active Brownian particles in target-search problems, The Journal of Chemical Physics 155, 084901 (2021).
[39] A. Scacchi and A. Sharma, Mean first passage time of active Brownian particle in one dimension, Molecular Physics 116, 460 (2018).
[40] J. Elgeti, R. G. Winkler, and G. Gompper, Physics of microswimmers - single particle motion and collective behavior: a review, Reports on Progress in Physics 78, 056601 (2015).
[41] J. Elgeti and G. Gompper, Self-propelled rods near surfaces, EPL (Europhysics Letters) 85, 38002 (2009).
[42] J. Elgeti and G. Gompper, Wall accumulation of self-propelled spheres, EPL (Europhysics Letters) 101, 48003 (2013).
[43] A. Kaiser, H. Wensink, and H. Löwen, How to capture active particles, Physical Review Letters 108, 268307 (2012).
[44] T. Ostapenko, F. J. Schwarzendahl, T. J. Böddeker, C. T. Kreis, J. Cammann, M. G. Mazza, and O. Bäumchen, Curvature-guided motility of microalgae in geometric confinement, Physical Review Letters 120, 068002 (2018).
[45] L. Angelani, Confined run-and-tumble swimmers in one dimension, Journal of Physics A: Mathematical and Theoretical 50, 325601 (2017).
[46] J. Elgeti and G. Gompper, Run-and-tumble dynamics of self-propelled particles in confinement, EPL (Europhysics Letters) 109, 58003 (2015).
[47] J. Tailleur and M. Cates, Sedimentation, trapping, and rectification of dilute bacteria, EPL (Europhysics Letters) 86, 60002 (2009).
[48] R. Wittmann and J. M. Brader, Active Brownian particles at interfaces: An effective equilibrium approach, EPL (Europhysics Letters) 114, 68004 (2016).
[49] K. Schaar, A. Zöttl, and H. Stark, Detention times of microswimmers close to surfaces: Influence of hydrodynamic interactions and noise, Physical Review Letters 115, 038101 (2015).
[50] P. Malgaretti and H. Stark, Model microswimmers in channels with varying cross section, The Journal of Chemical Physics 146, 174901 (2017).
[51] H. Risken, The Fokker-Planck Equation (Springer, 1996).
[52] G. Szamel, Self-propelled particle in an external potential: Existence of an effective temperature, Physical Review E 90, 012111 (2014).
[53] I. N. Mamede, P. E. Harunari, B. A. N. Akasaki, K. Proesmans, and C. E. Fiore, Obtaining efficient thermal engines from interacting Brownian particles under time-dependent periodic drivings (2021), arXiv:2110.09235 [cond-mat.stat-mech].
[54] H. C. Berg, Random Walks in Biology (Princeton University Press, 2018).
Understanding Goal-Oriented Active Learning via Influence Functions
Minjie Xu
London, New YorkUSUK
Bloomberg L P
London, New YorkUSUK
Gary Kazantsev [email protected]
London, New YorkUSUK
Bloomberg L P
London, New YorkUSUK
Understanding Goal-Oriented Active Learning via Influence Functions
Active learning (AL) concerns itself with learning a model from as few labelled data as possible through actively and iteratively querying an oracle with selected unlabelled samples. In this paper, we focus on analyzing a popular type of AL in which the utility of a sample is measured by a specified goal achieved by the retrained model after accounting for the sample's marginal influence. Such AL strategies attract a lot of attention thanks to their intuitive motivations, yet they also suffer from impractically high computational costs due to their need for many iterations of model retraining. With the help of influence functions, we present an effective approximation that bypasses model retraining altogether, and propose a general efficient implementation that makes such AL strategies applicable in practice, both in the serial and the more challenging batch-mode setting. Additionally, we present both theoretical and empirical findings which call into question a few common practices and beliefs about such AL strategies.
Introduction
Active learning (AL) [1] allows a model to actively query an oracle for labels with its chosen unlabelled samples, effectively assembling a growing labelled dataset on the fly along with model training. AL has been studied extensively and a large suite of AL strategies have been proposed to date. However, their success in practice has not been entirely consistent [2]. In fact, even research investigating this problem sometimes produces seemingly contradictory findings, e.g. with [3] claiming AL works better under model "mis-match" while [4] claiming otherwise. In quest of a better common understanding of several popular AL strategies that can all be abstracted as selecting samples to boost an explicitly specified goal on the model, this work leverages influence functions [5] to closely analyse such AL strategies and offers several interesting insights.
We summarize our main contributions as follows, a) Formalizing a general goal-oriented AL framework which generalizes many existing AL strategies; b) Being the first to apply influence functions to the AL setting, significantly reducing its computational cost (especially for the batch-mode setting); c) Showing both analytically and empirically that using the current model prediction to resolve the unknown label in such AL strategies (which is a common practice) may not be a sensible choice; d) Demonstrating the difficulties of finding an effective and sensible goal in goal-oriented AL.
Goal-Oriented Active Learning (GORAL)

We focus on pool-based active learning [6] for classification problems in this paper. An unlabelled data sample is denoted by x ∈ X and, when it is labelled, z = (x, y) where y ∈ {1, . . . , K}. We then assume there is a large pool of unlabelled samples U pool = {x} from which an AL strategy will pick samples to be labelled by an oracle and then to be added into a growing labelled dataset L train = {z} for model training. We restrict ourselves to discriminative probabilistic models P_θ(y|x) and further assume access to an initial labelled dataset L init from which we will train an initial model to kick off the active learning process, as well as a (labelled) test set L test which we use to measure model performance and accordingly data efficiency (i.e. the minimum |L train| needed to reach a certain level of model performance), a key metric for comparing various AL strategies. These three datasets (U pool, L init, L test) jointly define the data dependency of an AL instance in our study.
Many AL strategies (e.g. [6, 7, 8]) are dictated by a utility function π(x; θ̂) which, based on the current model θ̂, assigns a utility score to unlabelled samples. In this paper, we focus on a specific type of utility that depends on θ̂ only through an explicitly defined goal function τ(θ) used for evaluating the models (the higher the better, e.g. model accuracy). And we define this goal-oriented utility to be the difference in goals before and after accounting for an additional sample x, i.e.

π_goal(x; θ̂) ≜ ⊙_y[τ(θ̂_(x,y))] − τ(θ̂),    (1)
where θ̂_(x,y) represents the model obtained from L train ∪ {(x, y)}, that is the current training set augmented with one additional sample x (hypothetically) labelled as y, and ⊙_y[·] represents an operator that resolves its operand's dependency on y (e.g. min_y, E_y, etc.). Going forward we call all such AL strategies Goal-Oriented Active Learning (GORAL).
It is worth noting that, unlike the utility function π(x; θ̂) which depends on a changing model θ̂, the explicit goal function τ(θ) in GORAL serves as a more comparable target across AL iterations. As a result it might be easier to measure progress as well as perform analyses in GORAL, as we will demonstrate later. However, one should also note that π_goal can be much more expensive to compute due to its dependency on model retraining (potentially K times per sample x due to y being unknown), and carrying out such evaluations over a large pool of unlabelled samples only further adds to the problem. Fortunately, we show in Sec. 3 that, with the help of influence functions recently leveraged by [5] for interpreting black-box model predictions, this high cost of model retraining can be drastically reduced, making GORAL much more practical for a wide class of models.
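To make the cost concrete, here is a minimal sketch of the exact utility of Eq. (1) computed by literal retraining. This is our own illustration, not the paper's code; the function name, the scikit-learn model choice, and the callable tau (mapping a fitted model to its goal value) are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def exact_goal_utility(x, X_train, y_train, tau, K, op=np.mean):
    """Exact pi_goal(x; theta_hat) of Eq. (1): one retraining per hypothetical label."""
    base = LogisticRegression().fit(X_train, y_train)
    goals = []
    for y in range(K):                      # resolve the unknown label y
        X_aug = np.vstack([X_train, x[None, :]])
        y_aug = np.append(y_train, y)
        goals.append(tau(LogisticRegression().fit(X_aug, y_aug)))
    # 'op' plays the role of the operator over y (np.mean ~ uniform expectation,
    # np.min / np.max ~ the pessimistic / optimistic choices of Sec. 2.1)
    return op(goals) - tau(base)
```

Scanning the whole pool this way costs K × |U pool| retrainings per AL iteration, which is exactly the bottleneck addressed in Sec. 3.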
Choices of ⊙_y and τ(θ) in GORAL
The different choices of ⊙_y and τ(θ) have led to various AL strategies being proposed.
For ⊙_y, we consider: a) Expectation (E_y): this is probably the most obvious choice, and one can take P(y) directly from P_θ̂(y|x) [7, 8], indirectly from a separately estimated "oracle" model [9], or simply set it to a uniform distribution; b) Min-/Max-imization (min_y / max_y): this is being most pessimistic [10] / optimistic [11], as it only looks at the extreme goal that could be achieved amongst all possible labels of x; c) Oracle ("set y to ground-truth"): this is the ideal (albeit unrealistic) case, which should provide a performance "upper bound" for the other choices to compare against. Another possibility could be to borrow from the various acquisition functions developed for Bayesian Optimization [12], although we keep to the above ones for the scope of this paper.
For τ(θ), we consider: a) Negative dev-set loss (τ dev): the negative cross-entropy loss on a held-out development set, i.e. τ_dev(θ; L_dev) ≜ Σ_{(x,y)∈L_dev} log P_θ(y|x), serves as a good proxy for model accuracy. However, it is prone to over-fitting and incurs extra labelling costs; b) Negative prediction entropy (τ ent): this can be thought of as the negative model uncertainty on an unlabelled dataset U, i.e. τ_ent(θ; U) ≜ −Σ_{x∈U} H(P_θ(y|x)), where H stands for entropy. The rationale behind this goal is that one should favour models that are more certain about their predictions on unseen data [7, 13, 14]; c) Negative Fisher information (τ fir): the "Fisher information ratio" [15] captures unlabelled samples' impact on the asymptotic efficiency of parameter estimation. More recently, [16] further shows that it serves as an asymptotic upper bound of the expected variance of the log-likelihood ratio. Inspired by these findings, we propose τ_fir(θ; U) ≜ −tr(I_u(θ)), where I_u(θ) ≜ (1/|U|) Σ_{x∈U} I(θ|x) represents the empirical conditional Fisher information matrix (see Sec. 7 for details).
Approximating GORAL with influence functions
As explained above, accurately evaluating the goal-oriented utility of Eq. (1) over a large pool of unlabelled samples appears prohibitively expensive in general. In this section we show how, for a wide class of models, it can be efficiently approximated using influence functions [17, 5].
Given a training set L train = {z_i}_{i=1}^n, we now assume the model θ̂ is obtained via empirical risk minimization, i.e. θ̂ ≜ argmin_θ (1/n) Σ_{i=1}^n R(z_i, θ), where R is the per-sample loss function (with any regularization terms folded in). We then define θ̂_{ε,z} ≜ argmin_θ (1/n) Σ_{i=1}^n R(z_i, θ) + ε R(z, θ) to be the new model trained with an additional ε-weighted training sample z. Following [5], under certain regularity conditions (e.g. R being twice-differentiable and strictly convex), influence functions provide a closed-form estimate for the difference in model parameters θ̂_{ε,z} − θ̂ (when ε is small) via

∂θ̂_{ε,z}/∂ε |_{ε=0} = −H_{θ̂}^{-1} ∇_θ R(z, θ̂),

where H_{θ̂} ≜ (1/n) Σ_{i=1}^n ∇²_θ R(z_i, θ̂) denotes the Hessian.
Following the chain rule and assuming τ(θ) is differentiable, we can now measure the "influence" of introducing an infinitesimally ε-weighted sample z (on the goal) as

I(z; θ̂) ≜ ∂τ(θ̂_{ε,z})/∂ε |_{ε=0} = ∂τ(θ)/∂θ |_{θ=θ̂} · ∂θ̂_{ε,z}/∂ε |_{ε=0} = −∇_θ τ(θ̂)ᵀ H_{θ̂}^{-1} ∇_θ R(z, θ̂),    (2)
which we then leverage to form the approximation (using a 1st-order Taylor approximation of τ(θ̂_{ε,z})):

π_goal(x; θ̂) = ⊙_y[τ(θ̂_{ε,z})] |_{ε=1/n} − τ(θ̂) ≈ ⊙_y[τ(θ̂) + (1/n) I(z; θ̂)] − τ(θ̂) = (1/n) ⊙_y[I(z; θ̂)],    (3)
and henceforth define π̃_goal(x; θ̂) ≜ (1/n) ⊙_y[I(z; θ̂)] to be the approximate goal-oriented utility.¹ Note that when ⊙_y is a linear operator (e.g. Expectation or Oracle), it can switch order with ∇_ε, i.e. ∇_ε ⊙_y[τ(θ̂_{ε,z})] = ⊙_y[∇_ε τ(θ̂_{ε,z})], and as a result, ⊙_y[τ(θ̂_{ε,z})] − τ(θ̂) = ε · ⊙_y[I(z; θ̂)] + o(ε), making π̃_goal(x; θ̂) itself a direct 1st-order Taylor approximation to π_goal(x; θ̂). Otherwise (e.g. for max_y), Eq. (3) may offer a looser approximation, although we find it still works well in practice.
Also note it is obvious from the r.h.s. of Eq. (2) that π̃_goal(x; θ̂) can be further broken down into two terms, namely π̃_goal(x; θ̂) = ⊙_y[v_θ̂ᵀ ∇_θ R(z, θ̂)], where v_θ̂ ≜ −(1/n) H_{θ̂}^{-1} ∇_θ τ(θ̂)
is independent of z and hence, once computed, can be reused across all samples. Therefore, computation-wise, even though evaluating π_goal(x; θ̂) over U pool requires K × |U pool| iterations of model retraining, π̃_goal(x; θ̂) only requires the same number of gradient computations, i.e. of ∇_θ R(z, θ̂).
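The following sketch (ours; the array layouts are assumptions) shows how v_θ̂ is computed once via a single linear solve and then reused over the whole pool, per Eqs. (2)-(3):

```python
import numpy as np

def approx_utilities(grad_tau, H, pool_grads, n, op=np.mean):
    """pi_tilde_goal(x; theta_hat) for every pool sample.

    grad_tau  : (p,)       gradient of the goal tau at theta_hat
    H         : (p, p)     empirical Hessian H_theta_hat
    pool_grads: (N, K, p)  loss gradients grad_theta R(z, theta_hat), per sample and label
    """
    v = -np.linalg.solve(H, grad_tau) / n   # v_theta_hat: one solve, reused everywhere
    # v^T grad R(z, theta_hat) equals (1/n) I(z; theta_hat), one value per label:
    influences = pool_grads @ v             # shape (N, K)
    return op(influences, axis=1)           # the operator over the unknown label y
```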
Extra caution required for the Expectation operator
Nice and intuitive as it sounds, below we present a perhaps surprising result for the Expectation operator E_y, which is that the popular choice of taking the expectation under the current model prediction for resolving the unknown label y may actually render the resulting utility vacuous.

Remark 1. With a (regularized) maximum-likelihood estimate (or cross-entropy loss), i.e. whenever R(z, θ) = Ω(θ) − log P_θ(y|x), the approximate expected utility π̃_exp(x; θ̂) ≜ (1/n) E_y[I(z; θ̂)] under the current model prediction (i.e. y ∼ P_θ̂(y|x)) becomes a constant regardless of the sample x.
To see why, just note that

π̃_exp(x; θ̂) = −(1/n) ∇_θ τ(θ̂)ᵀ H_{θ̂}^{-1} · E_y[∇_θ R(z, θ̂)]
            = −(1/n) ∇_θ τ(θ̂)ᵀ H_{θ̂}^{-1} · (E_y[∇_θ Ω(θ̂)] − E_y[∇_θ log P_θ̂(y|x)])
            = −(1/n) ∇_θ τ(θ̂)ᵀ H_{θ̂}^{-1} · ∇_θ Ω(θ̂) ≡ const.,
where the last step naturally follows from the well-known result that the score function has zero mean, i.e. E_y[∇_θ log P_θ(y|x)] = 0.
It is also worth noting that the above remark holds regardless of the choice of the goal function τ(θ). Therefore, for model classes fitting the above assumptions, e.g. logistic regression, this seems a rather poor choice: the actual utility is both expensive and susceptible to noise, while its 1st-order Taylor-approximate utility still turns out to be unusable.
GORAL in batch-mode
In batch-mode AL, instead of just selecting the top one sample, multiple samples are to be selected, labelled, and then added to the training set in one go at every iteration. Doing so facilitates less greedy AL strategies as it allows one to evaluate and process a batch as a whole (e.g. to take diversity into consideration) rather than sticking to successive locally-optimal individual selections.
However, this is not exempt from the "no free lunch" principle. Denote a batch by X ≜ {x}. Due to the combinatorial nature of subset selection, optimizing a holistic batch utility π(X; θ̂) typically results in a much higher computational cost (e.g. one which scales exponentially with the batch size). As a result, many batch-mode AL strategies in practice choose to simply compose their batch utility from the sum of individual utilities, i.e. π(X; θ̂) = Σ_{x∈X} π(x; θ̂). However, doing so essentially reduces it back to the greedy setting and seems like a heuristic at best. For batch-mode GORAL though, we can actually enjoy the best of both worlds: the definition of the goal-oriented utility (1) naturally lends itself to a holistic batch-mode version, yet, as we show below, it still benefits from cheap computation through principled approximations.
Thanks to the explicit goal function τ(θ), we naturally extend the definition of the utility (Eq. (1)) to the batch-mode setting by following the same principle, i.e.

π_goal(X; θ̂) ≜ ⊙_Y[τ(θ̂_Z)] − τ(θ̂),
where Z denotes the batch augmented with hypothetical labels Y, θ̂_Z the model trained with this additional (labelled) batch, and ⊙_Y the operator that resolves the unknown Y (similar to Sec. 2.1).
Similarly, we use θ̂_{ε,Z} ≜ argmin_θ (1/n) Σ_{i=1}^n R(z_i, θ) + (ε/b) Σ_{z∈Z} R(z, θ) (b being the batch size |Z|) to study the "influence" of introducing a batch of samples Z, and as it turns out,

I(Z; θ̂) ≜ ∂τ(θ̂_{ε,Z})/∂ε |_{ε=0} = −∇_θ τ(θ̂)ᵀ H_{θ̂}^{-1} · (1/b) Σ_{z∈Z} ∇_θ R(z, θ̂) = (1/b) Σ_{z∈Z} I(z; θ̂),
which means the collective influence I(Z; θ̂) is simply the average of the individual I(z; θ̂)'s.
Then applying the same approximation idea as above (Eq. (3)) and assuming b ≪ n, we have

π_goal(X; θ̂) = ⊙_Y[τ(θ̂_{ε,Z})] |_{ε=b/n} − τ(θ̂) ≈ ⊙_Y[τ(θ̂) + (b/n) I(Z; θ̂)] − τ(θ̂) = (b/n) ⊙_Y[I(Z; θ̂)],
and thus denote π̃_goal(X; θ̂) ≜ (b/n) ⊙_Y[I(Z; θ̂)] to be the approximate batch utility. Remark 2. The approximate batch utility is the same as the sum of the approximate individual utilities, i.e. π̃_goal(X; θ̂) = Σ_{x∈X} π̃_goal(x; θ̂). This is straightforward as (b/n) ⊙_Y[I(Z; θ̂)] = ⊙_Y[(1/n) Σ_{z∈Z} I(z; θ̂)] = Σ_{x∈X} (1/n) ⊙_y[I(z; θ̂)].² Remark 2 implies that using greedy selection for batch-mode GORAL is actually well-justified.
Computation-wise, accurately selecting the best batch (of size b) from the pool requires (|U pool| choose b) evaluations of π_goal(X; θ̂), each of which in turn requires K^b iterations of model retraining, both scaling exponentially with the batch size b. Under the approximation, this cost gets drastically reduced to K × |U pool| gradient computations, plus a one-time top-b item selection from the pool.
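Given Remark 2, batch selection reduces to a top-b scan over the individual approximate utilities; a sketch (our code, not the paper's):

```python
import numpy as np

def select_batch(utilities, b):
    """Return pool indices of the best batch under the additive approximate utility."""
    top = np.argpartition(utilities, -b)[-b:]        # O(N) partial selection of the top b
    return top[np.argsort(utilities[top])[::-1]]     # order the chosen b by utility
```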
Discussion
In Sec. 6 we present empirical studies that confirm the effectiveness of the proposed approximation (6.1), showcase GORAL's robustness against an adversarial setting (6.2), and highlight worrying problems with all three goal functions considered in Sec. 2.1, with honest benchmark results on two representative, non-cherry-picked datasets (6.3). In particular, we demonstrate the close relationship between τ ent and τ fir (also analytically in Sec. 8), and that both are poor goals for AL, as achieving the goal is actually at odds with achieving good data efficiency. In this regard, τ dev performs much better, but it needs to address its own issues of potential over-fitting and extra labelling costs.
Further to that, the analytical insights presented in Sec. 3.1 challenge the status quo and prompt us to seek better alternatives when choosing to take expectation over the labels for utility estimation.
Discussion (continued)
Since its introduction to machine learning, influence functions have also been successfully applied to the setting of "optimal subsampling" [18], which bears some resemblance to active learning in that both are trying to select a subset from the data. However, the differences between these two settings are also stark and clear. In particular, for active learning, both its unique dependency on the unknown labels and the discrepancy between the proxy goal and the training objective call for more careful treatment, as have been demonstrated in this paper.
When we were discussing the computational cost of GORAL and its approximation in Sec. 3, our primary focus was on the terms that scale with the size of the pool |U pool| or the batch size b, since those are the dominant ones (especially in batch-mode). Notably, the one-time cost of computing v_θ̂ is also not to be neglected, as it involves a Hessian (O(|L train| d²)) and an inverse-Hessian-vector product. However, we note that, in the AL setting, it is expected that |U pool| ≫ |L train|, and therefore the reduction in computational cost is still significant.
In regards to the issue of a vacuous utility resulting from directly using the current model prediction in the expectation (Sec. 3.1), one possible remedy might be to soften the prediction distribution [19] by annealing it with a temperature T ∈ R_+, i.e. setting P(y) ∝ exp(log P_θ̂(y|x) / T). Note that this reduces to a uniform distribution when T → ∞, the original distribution when T = 1, and a singleton distribution at argmax_y P_θ̂(y|x) when T → 0. Hence one can start with a relatively high temperature at the early stage of an AL process when the model is less well trained, and then progressively tune the temperature down as the model is trained with more data and gets more accurate over time. We leave this to future work.
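A minimal sketch of this tempered label distribution (our own code):

```python
import numpy as np

def tempered_label_dist(log_probs, T):
    """P(y) proportional to exp(log P(y|x) / T): uniform as T -> inf, argmax as T -> 0."""
    z = log_probs / T
    z -= z.max()                 # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```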
Experiments
We now carry out empirical studies to validate the efficacy, as well as showcase some problems, of several GORAL strategies. For this we use three datasets: synth2 [20, 21], a binary classification dataset crafted to highlight issues with those AL strategies that focus on exploiting "informative" samples only (e.g. uncertainty sampling); rt-polarity [22], a binary sentence classification dataset; and letter [23], a multi-class image classification dataset. For rt-polarity, we encode every sentence by taking its "[CLS]" embedding from BERT [24]. The full dataset statistics are summarized in Table 1.

We focus on Multinomial Logistic Regression (MLR) for all the experiments. There has been a lot of research specifically concentrated on AL for logistic regression [8, 21]. Furthermore, with the recent advent of powerful pre-trained models [24], it is becoming ever more promising that, by simply stacking an additional final layer (typically MLR for classification) on top of those pretrained networks and fine-tuning that layer's parameters to the given task, one can readily obtain well-performing models with little work. We include intercepts in the model and select the hyperparameter λ with cross validation.³
[Table 1: Datasets. Columns: Dataset, K, d, |U pool|, |L init|, |L test|; rows: synth2, rt-polarity, letter (numeric entries not preserved).]
Approximation quality
We first examine how well the approximate utility proposed in Sec. 3 actually reflects the true utility. Here we present results on the goal function τ ent only, since the other goals all result in similar observations. We use the rt-polarity dataset and first train a model θ̂ from 50 random samples. We then compute both the actual utilities π_goal(x; θ̂) (by performing actual model retraining) and the approximate utilities π̃_goal(x; θ̂) (per Eq. (3)) over a pool of another 500 random samples. From the scatter plots in Fig. 2, we see that overall the approximation works fairly well across all the various ⊙_y's considered in this paper. Another observation is that the various ⊙_y's do result in very different rankings among the samples, as is exemplified by the 5 samples marked with crosses.
In Fig. 3 we examine the same approximation quality for the batch-mode setting (with batch size b = 10). For practical reasons, we do not examine all the possible (500 choose 10) ≈ 2.5 × 10^20 batches, but instead just pick 491 batches from the same pool using a sliding window. Compared to the serial setting, we observe a slight degradation of the approximation quality when ⊙_y is linear (i.e. Expectation or Oracle), and a larger degradation when ⊙_y is max_y or min_y, echoing our analysis in Sec. 3. Nonetheless, in all cases the approximate utilities still exhibit a strong correlation with the actual ones (even when b/n is as high as 10/50 = 0.2), which is an inspiring result.
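The sliding-window construction can be reproduced in two lines (our sketch); it indeed yields 491 batches for a pool of 500:

```python
pool_indices, b = list(range(500)), 10
batches = [pool_indices[i:i + b] for i in range(len(pool_indices) - b + 1)]
assert len(batches) == 491
```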
Concurrently, [25] studies the accuracy of influence-function approximations for measuring group effects and makes similar observations that the approximation quality is generally very high, although it can consistently under-estimate (over-estimate in our case). For the special case of E_{y∼P_θ̂(y|x)}, i.e. expectation under the current model prediction, we have known from Sec. 3.1 that the approximate utilities will be constant. In Fig. 4 we show the histogram, along with the kernel density estimate, of the actual utilities under this operator (legend "ExPred"), and contrast it with utilities from some other operators, across various batch sizes. From this we see that utilities under E_{y∼P_θ̂(y|x)} are indeed highly concentrated within a fairly small region, making them highly susceptible to noise (e.g. due to training or numerical instabilities) and therefore less meaningful as a reliable criterion for sample selection in GORAL. In the following subsections, we benchmark several representative GORAL strategies on a series of datasets along with baselines such as random sampling and uncertainty sampling, which were recently shown to be the more consistently effective AL strategies in [21]. When we mention GORAL below, we always refer to its practical version using influence-function approximations, run in batch-mode with b = 10.
synth2: an adversarial setting
From Fig. 1 we see that L init is deliberately crafted to mislead the initial model, as we would like to inspect how robust an AL strategy is by checking how quickly it can recover from that. In particular, the synth2 dataset is composed of three groups of clusters, which we name as "central", "distracting", and "definitive" respectively. Within each group there are always two clusters, one for the positive label and one for the negative. The two central clusters are where L init is drawn, and are poised to mislead the initial model into a nearly horizontal divide between the two; The two distracting clusters lie on the upper-left and lower-right corners, and are composed of samples with the largest distances to the ground-truth decision boundary; While the two definitive clusters lie on the upper-right and lower-left corners and, along with the two central clusters, define the optimal decision boundary.
For GORAL we look at τ dev , where the dev-set L dev is composed of a random 10% subset of U pool (i.e. 53 samples) associated with their ground-truth labels. From the snapshots in Fig. 5, we observe the following querying patterns of the different AL strategies on synth2:
• Uncertainty sampling: getting stuck with exhausting the two central clusters initially
• GORAL (Oracle): selecting the "optimal" samples right away
• GORAL (Average): approximating the Oracle querying pattern quite well, leaving the two distracting clusters to the end • GORAL (Minimum): starting with the two distracting clusters Also note that even after we take into account the additional 53 dev-labels exploited by GORAL, it still enjoys a much higher data efficiency than uncertainty sampling, which reaches the similar level of test accuracy (0.966) only after making 230 queries, 170 of which are spent (or rather wasted) on the two misleading central clusters.
What makes a good goal?
Below we benchmark GORAL on two real-world datasets. In Fig. 6 and 8, we show the usual learning curve on the upper half, in which we inspect data efficiency, i.e. how quickly the various strategies help the model reach a certain level of performance (the "end goal" in general AL). On the lower half we show the "goal curve", in which we look at whether the proposed approximate utility actually helps the model achieving the designated goal (the computable "proxy goal" in GORAL). An ideal GORAL strategy should find success in both cases.
For τ dev , we see from Fig. 6a and 8a that there is a clear correlation between the two curves, which should not be surprising given the close relationship between dev-set loss and test accuracy. We can also see the over-fitting effect from Fig. 8a when after about 600 queries the test accuracy starts to gradually drop despite the still increasing goal (under "inforc", i.e. GORAL-Oracle). Yet Oracle aside, the other practical simple GORAL strategies do not consistently outperform the baselines.
For τ ent and τ fir , we have seen in Sec. 8 how they are closely related analytically, and this can also be easily seen from the empirical results below. The most telling message from Fig. 6b and 8b is probably the obvious contrast between the two curves (under "inforc", i.e. GORAL-Oracle), where successfully boosting the goal actually leads to the worst AL performance. And this signifies why τ ent should not be trusted as a sensible goal in GORAL. Similar results hold for τ fir as well.
A novel goal based on Fisher information
The value of unlabelled data for classification problems in the context of active learning has been studied in [15], where Fisher information is used to measure the asymptotic efficiency of parameter estimation, and query selection is then aimed at increasing this efficiency. Various notable developments including [26, 27] have been made since then, but it was not until fairly recently that [16] first presented a rigorous theoretical investigation into the connection between the popular criterion of "Fisher information ratio" used in practice (as well as the various approximations and relaxations therein) and the asymptotic upper bound of the expected variance of the log-likelihood ratio, and thus closed the long-standing gap between theory and practice for works along this line. Below we first briefly recapitulate the main ideas, and then present a novel interpretation of the result which allows us to develop a new goal for the above GORAL framework.

Fisher information. Given a parametric probabilistic model p_θ(x, y), Fisher information is defined as the covariance matrix of the score function, i.e. I(θ) ≜ E_{x,y}[∇_θ log p_θ(x, y) ∇_θ log p_θ(x, y)ᵀ]. Fisher information can be used to estimate the variance of unbiased parameter estimators (e.g. the maximum-likelihood estimator) due to Cov[θ̂_n] ⪰ I(θ*)^{-1} (known as the Cramér-Rao lower bound) and Cov[θ̂_∞] = I(θ*)^{-1}, where θ* stands for the ground-truth parameters and θ̂_n the parameter estimate from n samples (drawn from p_θ*(x, y)).
For the discriminative models considered in this paper, parameters θ only affect the conditional P(y|x), i.e. p_θ(x, y) = p(x) P_θ(y|x). We therefore additionally define the conditional Fisher information as I(θ|x) ≜ E_{y|x}[∇_θ log P_θ(y|x) ∇_θ log P_θ(y|x)ᵀ]. It then naturally follows that I(θ) = E_x[I(θ|x)].
Fisher Information Ratio (FIR). Intuitively, one would like to reduce the variance Cov[θ̂_n] during learning, yet having a lower bound on it is not very helpful. [16] show that a different criterion named the Fisher information ratio actually serves as an asymptotic upper bound of the expected variance of the log-likelihood ratio (namely log P_θ̂_n(y|x) − log P_θ*(y|x)), i.e.

E_{x,y}[ Var_q( lim_{n→∞} √n · (log P_θ̂_n(y|x) − log P_θ*(y|x)) ) ] ≤ tr( I_q(θ*)^{-1} I(θ*) ),    (4)
where q denotes the training distribution q(x, y) = q(x) P_θ*(y|x), from which training samples (e.g. denoted by L_n ≜ {(x_i, y_i)}_{i=1}^n) are drawn that give rise to the estimate θ̂_n, and I_q(θ*) ≜ E_{q(x)}[I(θ*|x)]. Note that the variance Var_q(·) results from the stochasticity of L_n. Under this criterion, active learning is motivated by selecting queries that form a training distribution q minimizing FIR, i.e. the r.h.s. of Eq. (4), in the hope of quickly reaching an estimate θ̂_n that has a smaller variance of the log-likelihood ratio. However, solving for the optimal q under FIR is by itself a difficult discrete optimization problem, let alone the fact that computing FIR requires the ground-truth data distribution p(x) (for I(θ*)) as well as the true parameters θ*, neither of which is accessible, and hence calls for further approximation.
Adapting FIR for GORAL. First of all, given that I_q(θ*) and I(θ*) are both positive semi-definite, we have tr(I_q(θ*)^{-1} I(θ*)) ≤ tr(I_q(θ*)^{-1}) · tr(I(θ*)). Furthermore, in the context of the above GORAL framework, at every step q effectively represents a discrete distribution supported by the training samples. Hence I_q(θ*) = (n/(n+1)) · (1/n) Σ_{x_i∈L_n} I(θ*|x_i) + (1/(n+1)) I(θ*|x′) (across the various next chosen queries x′), and we therefore approximately treat it as a constant matrix that is independent of x′, which leaves us to concentrate on tr(I(θ*)) alone. As has been shown in [28, 16], under certain regularity conditions, I(θ̂_n) provides a fairly good approximation to I(θ*) (with high probability).
We therefore formulate our FIR-inspired goal, negative Fisher information (τ fir), as follows:

τ_fir(θ; U) ≜ −tr(I_u(θ)), where I_u(θ) ≜ E_{x∈U}[I(θ|x)] = (1/|U|) Σ_{x∈U} I(θ|x),    (5)
and we effectively use a large pool of unlabelled samples U to approximate p(x).
Below we show that, for multinomial logistic regression, τ fir turns out to be fairly similar to τ ent in nature.
The case for Multinomial Logistic Regression
Take X = R^d and one-hot encoding for the labels, i.e. y ∈ Δ^{K−1} (the (K−1)-simplex) with only one entry (for y = k) equal to 1 and 0 elsewhere. Multinomial logistic regression is parametrized by Θ ≜ (θ_1, . . . , θ_K) ∈ R^{d×K} (or θ = vec(Θ) ∈ R^{dK})⁴ and encodes P_θ(y|x) through the probability vector p_θ(x) = σ(Θᵀx) ∈ Δ^{K−1}, where σ(·) represents the softmax function.
The per-sample loss function, as well as its gradient and Hessian matrix, then look as follows:

R(z, θ) = (λ/2) θᵀθ − yᵀ log p_θ(x),    ∇_θ R(z, θ) = λθ − vec(x · (y − p_θ(x))ᵀ),    (6)

H(θ; z) ≜ ∇²_θ R(z, θ) = λ I + Λ_θ(x) ⊗ xxᵀ,    (7)

where λ ∈ R_+ is the hyper-parameter that controls the strength of the ℓ2 regularization, Λ_θ(x) ≜ diag(p_θ(x)) − p_θ(x) p_θ(x)ᵀ ∈ R^{K×K}, and ⊗ represents the Kronecker product between two matrices. We note that Λ_θ(x) is symmetric, diagonally dominant and positive semi-definite, and that the per-sample Hessian H(θ; z) shown above as well as the full-batch Hessian H_θ are both symmetric and positive definite; hence the loss function is convex and H_θ^{-1} does exist.
Expected utility. As per Sec. 3, the key term in the utility computation is the expected gradient, and

E_y[∇_θ R(z, θ̂)] = ∇_θ R((x, E_y[y]), θ̂)    (due to ∇_θ R(z, θ) being linear in y, per Eq. (6))
                = vec(x · (p_θ̂(x) − p_y)ᵀ) + λθ̂,    (per Eq. (6) and E_y[y] = I · p_y = p_y)

where p_y ∈ Δ^{K−1} represents the probability vector of P(y). From this we also see that when one sets p_y = p_θ̂(x), π̃_exp(x; θ̂) = v_θ̂ᵀ E_y[∇_θ R(z, θ̂)] = λ v_θ̂ᵀ θ̂ and becomes independent of x.
Gradient of τ ent. We derive the gradient for the goal of negative prediction entropy as follows:

∇_θ H(p_θ(x)) = −vec(x · (1 + log p_θ(x))ᵀ Λ_θ(x)) = −vec(x · (p ∘ log p + H p)ᵀ),

where we abbreviate p_θ(x) and H(p_θ(x)) with p and H respectively in the last step for brevity, and ∘ denotes the element-wise product.
Computing τ fir. Below we derive a closed-form expression for the negative Fisher information (Eq. (5)). We first simplify it by making use of the well-known result that Fisher information is equal to the expected Hessian of the negative log-likelihood, i.e. I(θ|x) = E_y[H(θ; z)] = H(θ; x), where the 2nd equation follows from the fact that the Hessian is independent of y, per Eq. (7). Now we can rewrite the goal as τ_fir(θ; U) = −E_{x∈U}[tr(H(θ; x))], where

tr(H(θ; x)) = tr(λI) + tr(Λ_θ(x)) tr(xxᵀ) = λK + (1 − p_θ(x)ᵀ p_θ(x)) · xᵀx,

∇_θ tr(H(θ; x)) = −2 xᵀx · vec(x · p_θ(x)ᵀ Λ_θ(x)) = 2 xᵀx · vec(x · ((ν1 − p) ∘ p)ᵀ),    (8)

where we abbreviate p_θ(x) with p and set ν ≜ pᵀp in the last step. From Eq. (8), we can see that τ fir, like τ ent, also favours models that yield minimum-entropy predictions. The close relationship between these two goals is also verified in our empirical studies.
Setting the hyperparameter λ. We use the Scikit-learn implementation of MLR in our experiments, which uses a slightly different formulation that involves a regularization constant C with λ = 1/(nC) (n being the number of training points). In active learning, n keeps increasing as more labelled samples are added into the training set. To maintain this mapping, we choose to first select C using cross validation⁵ and then update λ accordingly during AL iterations.
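That is, something as simple as (our sketch):

```python
def update_lambda(n_labelled, C):
    """Scikit-learn's C maps to lambda = 1/(n*C); call after each labelling round."""
    return 1.0 / (n_labelled * C)
```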
Figure 1: The synth2 dataset.
Figure 2: Approximation quality of π̃_goal(x; θ̂) in the serial GORAL setting under various ⊙_y's. Each dot represents one sample x in the pool and is coloured consistently across plots (indexing into a color-map using its actual Oracle utility). The grey line represents the line y = x for reference. The crosses mark samples we single out for closer inspection.
Figure 3: Approximation quality of π̃_goal(X; θ̂) in batch-mode GORAL (b = 10) under various ⊙_Y's. All elements are similar to those in Fig. 2, except that each dot (or cross) now represents a batch X.
Figure 4: Utility distributions under various goals and batch sizes. (a) τ ent with batch size 1 (left), 5 (middle) and 10 (right); (b) τ fir with batch size 10.
Figure 5: AL snapshots on synth2. (b) GORAL under τ dev (Oracle); (c) GORAL under τ dev (Average); (d) GORAL under τ dev (Minimum).
Figure 6: Learning curves and goal curves of various GORAL strategies on letter.
Figure 7: Utility-distribution evolution of various GORAL strategies on letter.
Figure 8: Learning curves and goal curves of various GORAL strategies on rt-polarity.
Figure 9: Utility-distribution evolution of various GORAL strategies on rt-polarity.
¹ The last equation in Eq. (3) holds for all the ⊙_y operators considered in this paper (see Sec. 2.1).
² The 2nd equation holds for all the ⊙_y operators considered in Sec. 2.1, in that their extensions ⊙_Y are all decomposable, i.e. ⊙_Y[Σ_{y∈Y} f(y)] = Σ_{y∈Y} ⊙_y[f(y)].
³ We provide a proper exposition along with all the derivations and details in Sec. 8.
⁴ We over-parametrize the model by leaving θ_K as free parameters rather than fixing it to 0_d, as is normally done in the statistics literature, to make it closer to settings where it is used as the last layer of a neural network.
⁵ As a result, we set C = 0.1 for synth2 and rt-polarity and C = 1 for letter.
References

[1] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2009.
[2] David Lowell, Zachary C. Lipton, and Byron C. Wallace. Practical obstacles to deploying active learning. arXiv preprint arXiv:1807.04801, 2019.
[3] Lewis P. Evans, Niall M. Adams, and Christoforos Anagnostopoulos. When does active learning work? In Proceedings of the 12th International Symposium on Advances in Intelligent Data Analysis XII (IDA 2013), pages 174-185. Springer-Verlag, 2013.
[4] Stephen Mussmann and Percy Liang. On the relationship between data efficiency and error for uncertainty sampling. In Proceedings of the 35th International Conference on Machine Learning (ICML'18), pages 3671-3679, 2018.
[5] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning (ICML'17), pages 1885-1894, 2017.
[6] David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference (SIGIR'94), pages 3-12, 1994.
[7] Nicholas Roy and Andrew McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proceedings of the 18th International Conference on Machine Learning (ICML'01), pages 441-448, 2001.
[8] Andrew I. Schein and Lyle H. Ungar. Active learning for logistic regression: An evaluation. Machine Learning, 68(3):235-265, 2007.
[9] Lewis P. G. Evans, Niall M. Adams, and Christoforos Anagnostopoulos. Estimating optimal active learning via model retraining improvement. arXiv preprint arXiv:1502.01664, 2015.
[10] Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Semisupervised SVM batch mode active learning with applications to image retrieval. ACM Transactions on Information Systems (TOIS), 27(3):16:1-16:29, 2009.
[11] Yuhong Guo and Russ Greiner. Optimistic active learning using mutual information. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI'07), pages 823-829, 2007.
[12] Jonas Mockus. Bayesian Approach to Global Optimization: Theory and Applications, volume 37. Springer Science & Business Media, 2012.
[13] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems (NeurIPS'04), pages 529-536, 2004.
[14] Yuhong Guo and Dale Schuurmans. Discriminative batch mode active learning. In Advances in Neural Information Processing Systems (NeurIPS'07), pages 593-600, 2007.
[15] Tong Zhang and Frank J. Oles. The value of unlabeled data for classification problems. In Proceedings of the 17th International Conference on Machine Learning (ICML'00), pages 1191-1198, 2000.
[16] Jamshid Sourati, Murat Akcakaya, Todd K. Leen, Deniz Erdogmus, and Jennifer G. Dy. Asymptotic analysis of objectives based on Fisher information in active learning. Journal of Machine Learning Research, 18(34):1-41, 2017.
[17] R. Dennis Cook and Sanford Weisberg. Residuals and Influence in Regression. New York: Chapman and Hall, 1982.
[18] Daniel Ting and Eric Brochu. Optimal subsampling with influence functions. In Advances in Neural Information Processing Systems (NeurIPS'18), pages 3654-3663, 2018.
[19] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[20] Sheng-Jun Huang, Rong Jin, and Zhi-Hua Zhou. Active learning by querying informative and representative examples. In Advances in Neural Information Processing Systems (NeurIPS'10), pages 892-900, 2010.
[21] Yazhou Yang and Marco Loog. A benchmark and comparison of active learning for logistic regression. Pattern Recognition, 83:401-415, 2018.
[22] Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115-124, 2005.
[23] Peter W. Frey and David J. Slate. Letter recognition using Holland-style adaptive classifiers. Machine Learning, 6(2):161-182, 1991.
[24] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL'19, pages 4171-4186, 2019.
[25] Pang Wei Koh, Kai-Siang Ang, Hubert H. K. Teo, and Percy Liang. On the accuracy of influence functions for measuring group effects. arXiv preprint arXiv:1905.13289, 2019.
[26] Steven C. H. Hoi, Rong Jin, Jianke Zhu, and Michael R. Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd International Conference on Machine Learning (ICML'06), pages 417-424, 2006.
[27] Burr Settles and Mark Craven. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'08), pages 1070-1079, 2008.
[28] Kamalika Chaudhuri, Sham M. Kakade, Praneeth Netrapalli, and Sujay Sanghavi. Convergence rates of active learning for maximum likelihood estimation. In Advances in Neural Information Processing Systems (NeurIPS'15), pages 1090-1098, 2015.
LEO-PNT With Starlink: Development of a Burst Detection Algorithm Based on Signal Measurements

Winfried Stock, Christian A. Hofmann, and Andreas Knopp
Institute of Information Technology, University of the Bundeswehr Munich, Neubiberg, Germany

Abstract-Due to the strong dependency of our societies on Global Navigation Satellite Systems and their vulnerability to outages, there is an urgent need for additional navigation systems. A possible approach for such an additional system uses the communication signals of the emerging LEO satellite megaconstellations as signals of opportunity. The Doppler shift of those signals is leveraged to calculate positioning, navigation and timing information. Therefore the signals have to be detected and the frequency has to be estimated. In this paper, we present the results of Starlink signal measurements. The results are used to develop a novel correlation-based detection algorithm for Starlink burst signals. The carrier frequency of the detected bursts is measured and the attainable positioning accuracy is estimated. It is shown that the presented algorithms are applicable for a navigation solution in an operationally relevant setup using an omnidirectional antenna.

Index Terms-LEO, Starlink, navigation, signals of opportunity

(The authors are with the SpaceCom Labs of the Space Systems Research Center, Bundeswehr University Munich, 85579 Neubiberg, Germany; email: [email protected]. This research is partly funded by dtec.bw - Digitalization and Technology Research Center of the Bundeswehr. dtec.bw is funded by the European Union - NextGenerationEU.)
I. INTRODUCTION
At least since the widespread use of smartphones, Positioning, Navigation and Timing (PNT) functionalities have become a matter of course in the everyday lives of very many citizens. Additionally, in the context of Industry 4.0 the importance of PNT has been increasing drastically for industry, agriculture, etc. in the last few years. Most devices that offer PNT functionality rely on Global Navigation Satellite Systems (GNSS). Due to some deficiencies of GNSS, e.g., when used in cities or forests and in jamming scenarios, there is an urgent need for additional systems. Plans for including PNT functionalities in the future 6G standard for cellular networks underline this need.
A promising approach for an additional navigation system uses Signals of Opportunity (SoO). Such systems use signals that are not designed or transmitted for the purpose of navigation but make secondary use of already available signals. The advantage of an SoO approach is that no dedicated transmitters have to be operated. On the downside, the structure of the utilized signals is not optimized for PNT and is often unknown.
In the previous decades, especially terrestrial signal sources, such as cellular base stations, were considered as SoO. With the New Space movement and the exponentially growing number of LEO satellites in operation, the signals of LEO satellites have become more and more interesting as a source for an SoO navigation approach (LEO-PNT). Several characteristics of LEO satellites make them a promising source: LEO satellite signals are available all the time on the whole globe, and the orbits of the satellites are known through the TLE files published by the North American Aerospace Defense Command (NORAD). Additionally, LEO satellites move with high velocity (relative to, e.g., a terrestrial receiver), which (over time) causes a fast-changing geometry. Furthermore, a considerable Doppler shift is caused, which can be leveraged for PNT. In such an approach, PNT information is derived from frequency measurements of the received signal (similar to the measurement of the time of arrival of GNSS signals and the calculation of PNT information by triangulation).
This paper focuses on using Starlink signals as SoO for a LEO-PNT system. Since 2018, the Starlink system has been continuously growing and is now the constellation with the largest number of operational satellites in LEO [2]. Hence, Starlink is a suitable candidate to provide SoO for LEO-PNT.
However, until recently, very little was known about the structure of Starlink signals. For this reason, so far, the published opportunistic LEO-PNT implementations have leveraged rather superficial signal properties, like the bandwidth, peaks in the frequency domain, and an assumed periodicity of the signal [3]- [6]. Due to the recent publication of an in-depth analysis of the Starlink user downlink signal in [7], dedicated Starlink signal detection and frequency estimation algorithms can be developed that utilize the exact signal structure and might offer a significantly higher estimation accuracy.
In this work, the basic working principles of Doppler shift based LEO-PNT are described. The Starlink user uplink signal is measured and analyzed; to the knowledge of the authors, this work is the first to do so. Similarities between the uplink and downlink signal structures are identified. Algorithms for burst detection and frequency estimation that utilize the Starlink synchronization sequence are proposed and analyzed. Finally, the impact of the frequency estimation errors of those algorithms on the positioning accuracy of Doppler shift based LEO-PNT is investigated by calculating its lower bound.
II. OPPORTUNISTIC LEO-PNT
The basic working principle of an opportunistic LEO-PNT receiver leveraging the Doppler shift of LEO communication signals is depicted in figure 1. After an initial filtering and downconversion, the LEO burst signal is detected, and a carrier frequency estimation is conducted. Subsequently, PNT information is calculated from the received carrier frequency $f_r$ and the location and velocity of the satellite. This calculation is possible because the high relative velocity between the transmitting satellite and the receiver causes a significant Doppler shift $f_D$, which depends on the relative position between the satellite and the receiver. The velocity and location of the satellite are usually obtained from the ephemerides made public by NORAD in TLE files [8]. Strategies to identify the individual satellite (to match the transmitting satellite and the TLE file) have to be applied. As the signal properties of individual satellites are usually not known publicly, those strategies are, for example, based on prior knowledge of the approximate receiver position. In the following, this section provides a better understanding of PNT calculation from frequency measurements.
During the overflight of a satellite, the changing relative velocity between transmitter and receiver causes a characteristic evolution of the Doppler shift over time. The precise shape of this Doppler shift curve depends on the receiver position relative to the satellite trajectory. Figure 2 shows some examples for different locations in cross-track direction $x_c$ of a static receiver on the earth surface. Different locations in along-track direction $x_a$ produce the same Doppler shift curves, but time-shifted. ($x_a$ refers to the direction parallel to the ground track of the satellite; $x_c$ is perpendicular to $x_a$ and parallel to the earth surface.) Mathematical models to calculate the Doppler shift can be found, e.g., in [10].
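To make the geometry dependence concrete, the following minimal sketch computes the received frequency during an overflight for several cross-track offsets. It is not taken from the paper: the straight-line, flat-earth trajectory and all numeric values (carrier, orbit height, orbital speed) are illustrative assumptions standing in for the models of [10].

```python
import numpy as np

C = 299_792_458.0      # speed of light [m/s]
F_C = 11.7e9           # assumed downlink carrier [Hz]
H_SAT = 550e3          # assumed orbit height [m]
V_SAT = 7.6e3          # assumed orbital speed [m/s]

def received_frequency(t, x_c, x_a=0.0):
    """Received carrier frequency for a static receiver offset by x_c
    (cross-track) and x_a (along-track) from the sub-satellite point at t = 0."""
    sat = np.stack([V_SAT * t, np.zeros_like(t), np.full_like(t, H_SAT)], axis=-1)
    los = sat - np.array([x_a, x_c, 0.0])          # line-of-sight vector
    rng = np.linalg.norm(los, axis=-1)
    range_rate = los[..., 0] * V_SAT / rng         # only the satellite moves (x-axis)
    return F_C * (1.0 - range_rate / C)            # first-order Doppler

t = np.linspace(-300.0, 300.0, 601)                # +/- 5 min around the pass
for x_c in (0.0, 200e3, 700e3):
    f_r = received_frequency(t, x_c)
    print(f"x_c = {x_c/1e3:5.0f} km: Doppler span = {(f_r.max() - f_r.min())/1e3:.1f} kHz")
```

Consistent with figure 2, the Doppler excursion shrinks and the curve flattens as the cross-track distance grows.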
When the signal of a single satellite is tracked over the timespan $t_a$, commonly, an opportunistic LEO-PNT receiver estimates (at least some of) the following parameters: the receiver's 3D position, the receiver's 3D velocity, the receiver time, and the transmitted carrier frequency. The last parameter can also include the (possibly time-varying) clock drifts of the satellite and the receiver. The number of conducted frequency measurements $N$ must at least match the number of parameters that are estimated by the receiver. However, the higher the number of measurements, the more accurate the PNT estimation is. The same is true for $t_a$: a longer tracking duration $t_a$ entails measurements with a greater variety of geometry between satellite and receiver, which improves accuracy. For the same reason, the PNT estimation accuracy can be significantly improved by conducting measurements from several satellites with different orbits.

Several sources introduce errors to Doppler shift based PNT estimation. Among the most significant sources are the following three. First, the orbit of the satellite is not known exactly: the TLE files published by NORAD entail satellite position errors of up to a few kilometers [11]. Second, errors are introduced by the clocks of the receiver and of the satellite; the latter cannot be assumed to have an accuracy comparable to those of GNSS. (This, e.g., results in time-varying carrier frequency offsets or sampling frequency offsets.) Last, the frequency estimation introduces errors. While the first-mentioned sources are not in the scope of this work, Section V will focus on the impact of the last-mentioned error source on the positioning accuracy.
III. STARLINK SIGNAL MEASUREMENTS
This section provides an overview of the Starlink signals. At first, a signal model is presented that fits the user uplink as well as the user downlink signal. After that, the uplink signal is described based on signal measurements and analysis conducted in the course of this work. Finally, the downlink signal is described briefly, based on the in-depth analysis provided very recently by [7].
A. Signal model
Due to similarities in their structure, the same signal model can be applied for Starlink uplink and downlink signals:
$$s_i[n] = \alpha\left(c_i[n] + d_i[n - L_c]\right)e^{j2\pi(f_{ci}+f_{Di})T_s n} + w[n] \qquad (1)$$

where $s_i[n]$ is the i-th received signal burst at the n-th time instant. The data signal in baseband is denoted by $d_i[n]$, the noise term by $w[n]$, and $T_s = 1/f_s$ denotes the sampling period. The carrier frequency $f_{ci}$ depends on the used subchannel and, therefore, can vary from burst to burst. The received signal bursts are shifted in frequency by $f_{Di}$, which includes the Doppler shift as well as (at least for uplink signals) the Doppler shift pre-compensation applied by the transmitter. Within each burst, $f_{Di}$ is assumed to be constant. The complex channel gain $\alpha$ describes all channel effects except the Doppler shift. The baseband synchronization sequence $c_i[n]$ with length $L_c$ and $n \in \{0, ..., L_c - 1\}$ can be described by

$$c_i[n] = \dot{c}_i^{\,p}[n] + \sum_{k=0}^{7}\dot{c}_i^{\,k}\!\left[n - (k+\gamma)\dot{L}_c\right] \qquad (2)$$

where $\dot{c}_i^{\,k}$ denotes the k-th subsequence with $k \in \{0, 1, ..., 7\}$. Each subsequence has a length of $\dot{L}_c$ samples. The relationship between those elements can be described with $-\dot{c}_i^{\,0} = \dot{c}_i^{\,1} = \dot{c}_i^{\,2} = ... = \dot{c}_i^{\,7}$. The prefix $\dot{c}_i^{\,p}$ is a cyclic prefix of $\dot{c}_i^{\,0}$ and consists of $\gamma\dot{L}_c$ samples. As $c_i$ is assumed to be the same for each burst, the index $i$ can be omitted.
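As a sanity check of the structure in Eq. (2), the sketch below assembles a synchronization sequence from a cyclic prefix and eight subsequences with the stated sign relation. The subsequence itself is a random placeholder (the true Starlink sequence is not reproduced here), and the lengths are the uplink values reported later in Section IV.

```python
import numpy as np

L_SUB = 1200     # subsequence length (uplink value, Sec. IV)
N_CP = 220       # cyclic-prefix length (uplink value, Sec. IV)

def make_sync_sequence(sub):
    """c[n] = [prefix | c0 | c1 | ... | c7] with -c0 = c1 = ... = c7 and the
    prefix a cyclic prefix (last N_CP samples) of c0, cf. Eq. (2)."""
    c0 = -sub
    return np.concatenate([c0[-N_CP:], c0] + [sub] * 7)

rng = np.random.default_rng(0)
sub = np.exp(1j * 2 * np.pi * rng.integers(0, 4, L_SUB) / 4)  # placeholder PSK symbols
c = make_sync_sequence(sub)
print(len(c))    # 220 + 8 * 1200 = 9820 samples
```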
B. Uplink signal properties
Conducted measurements: In the course of this work, Starlink user uplink signal measurements were conducted with the setup depicted in figure 3. A horn antenna was placed next to an active Starlink user terminal. The received signal was amplified and downconverted in a low noise block downconverter (LNB). A R&S® FSW spectrum analyzer and a R&S® IQW wideband I/Q data recorder were used to downconvert the signal to (quasi) baseband and store it. The equipment was synchronized by a 10 MHz rubidium oscillator. The stored signal was analyzed in MATLAB®. The investigated signal has a duration of 80 s and a sampling rate of $f_s = 562.5$ MHz.
The signal analysis shows that the vast majority of the bursts (8519 of 8776 revealed bursts) have a bandwidth of around $B_{us} = 62.5$ MHz, which corresponds to one of eight subchannels within the uplink bandwidth $B_u$. It is noticeable that large blocks of consecutive bursts use the same subchannel. The rare but regular changes in the used subchannel could be explained by a handover between different satellites that use different subchannels. The burst duration seems to be highly variable: while some durations, like, e.g., 0.84 ms, are frequently used, the duration seems to be adaptable in timesteps of 17.87 µs. The rough estimation of the received carrier frequency $\Delta f_r$ (relative to the frequency of the first burst in the same subchannel) shows that the user terminal applies Doppler shift pre-compensation to the uplink signal. For the Burst Repetition Interval (BRI), defined as the time between the beginnings of consecutive bursts, of a large majority of the bursts it seems to apply BRI ∈ {6.67 ms, 8.00 ms, 9.33 ms, 10.67 ms, 16.00 ms, 18.67 ms}. The subchannels of the bursts with bandwidth $B_{us}$ are shown in figure 5, as well as the BRI of those bursts in subchannel 1 (14.0 GHz - 14.0625 GHz). (Very few bursts with higher BRIs are not shown.)
Correlation based analysis: To further investigate the signal, the following correlation algorithm is established: for two complex signals $y_1[n]$ with $n \in \{0, ..., L_{y_1}\}$ and $y_2[n]$ with $n \in \{0, ..., L_{y_2}\}$ and $L_{y_1} > L_{y_2}$, the correlation $r_{y_1,y_2}$ can be calculated with

$$r_{y_1,y_2}[l] = \frac{1}{A}\sum_{n=0}^{L_{y_1}} y_1[n]\, y_2^{*}[n-l] \qquad (3)$$

where $(\cdot)^*$ represents the complex conjugate of a complex value. The normalization factor $A$ is given by

$$A = \sqrt{r_{\hat{y}_1,\hat{y}_1}[0]\; r_{y_2,y_2}[0]} \qquad (4)$$

where $\hat{y}_1$ is the part of $y_1$ with length $L_{y_2}$ that starts at index $\hat{l} = \arg\max_l\{r_{y_1,y_2}[l]\}$. The structure of the uplink signal was determined by explorational calculations of (3) with different parts of $s_i$.
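A possible numpy rendering of Eqs. (3)-(4) is sketched below; np.correlate with complex inputs conjugates its second argument, which matches the definition. Locating the segment $\hat{y}_1$ from the unnormalized peak is an implementation choice the text leaves open.

```python
import numpy as np

def normalized_correlation(y1, y2):
    """Eqs. (3)-(4): r[l] = (1/A) * sum_n y1[n] * conj(y2[n-l]).
    The segment y1_hat used in A is located with the unnormalized peak."""
    r_raw = np.correlate(y1, y2, mode="full")
    l_hat = int(np.argmax(np.abs(r_raw))) - (len(y2) - 1)   # best lag
    start = max(l_hat, 0)
    y1_hat = y1[start: start + len(y2)]
    A = np.sqrt(np.vdot(y1_hat, y1_hat).real * np.vdot(y2, y2).real)
    return r_raw / A

sig = np.exp(1j * np.linspace(0.0, 20.0, 2000))    # toy unit-modulus signal
tmpl = sig[500:700]
r = normalized_correlation(sig, tmpl)
print(f"max |r| = {np.abs(r).max():.3f}")           # ~1.000 at the true lag
```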
With the given sampling rate $f_s$, the parameters of the structure can be specified as $\dot{L}_c = 1200$ and $\gamma\dot{L}_c = 220$. The correlations in figure 6 between a burst signal $s_i$ and the first element of the synchronisation sequence of the same burst, $\dot{c}_i^{\,1}$, were calculated with those values and validate the findings. Further analysis shows that a small number of different correlation sequences $c$ seem to be used in the uplink. Which specific sequence is used appears to be mainly connected to the burst's BRI and the subchannel, and therefore presumably the satellite. However, more than 70% of the bursts in subchannel 1 seem to use one of three different correlation sequences. Additionally, the maximum correlation coefficients between different bursts do not significantly exceed 0.8 in most cases. This indicates that even what is considered above to be the same correlation sequence $c$ is in fact not exactly the same sequence. Furthermore, in some bursts, the relationship between the phases of the transmitted elements $\dot{c}^{\,k}$ is not as described in the section above and seems unpredictable.
C. Downlink signal properties
This section sums up the analysis from [7] as far as it is relevant for this work. For Starlink user downlink signals, 2 GHz of bandwidth are allocated. Each Starlink beam uses one of 8 subchannels with bandwidth $B_d = 240$ MHz. In the time domain, the downlink signal is composed of consecutive bursts $s_i$, which are thus called frames, with length $T_f = 1.33$ ms. Every frame starts with a synchronisation sequence $c$, which has the exact structure described in the signal model above. Each of the eight subsequences $\dot{c}^{\,k}$ is identical for every burst and satellite, except that the first subsequence and the cyclic prefix are sign-inverted. The subsequences are $T_{\dot{c}^k} = 4.27$ µs in length, use the entire bandwidth of a subchannel $B_d$, and are made up of 127 (known) DPSK-modulated symbols. The relative length of the prefix $\dot{c}^{\,p}$ is $\gamma = 1/32$. The data signal $d_i$ includes 302 OFDM-like symbols. Aside from the synchronisation sequence $c$, the first, last, and (in parts) the second-to-last of the 302 OFDM-like symbols are constant and appear to be used for synchronisation as well.
IV. BURST DETECTION AND FREQUENCY ESTIMATION
In this section, an algorithm for Starlink burst detection and a two-step algorithm for frequency estimation are presented. Those algorithms are correlation-based and utilize the synchronization sequence c. The properties of the presented algorithms are discussed using the Starlink uplink signal.
A. Burst detection algorithm
The presented burst detection algorithm adds up the magnitudes of eight partial correlations between the received signal $s$ and a representative $\check{c}$. The latter is a measured or reproduced copy of $c_i$, consisting of subsequences $\check{c}^{\,k}$ and a prefix $\check{c}^{\,p}$:

$$d_{s,\check{c}}[l] = \frac{1}{D}\sum_{k=0}^{7}\left|d^{\,k}_{s,\check{c}}[l]\right| \qquad (5)$$

$$d^{\,k}_{s,\check{c}}[l] = \sum_{n} s[n]\,\check{c}^{\,k}\!\left[n - (\gamma + k)\dot{L}_c - l\right]^{*} \qquad (6)$$

The normalization factor $D$ is defined analogously to $A$ in (4). A burst is detected at sample $l_j$ if $d_{s,\check{c}}[l_j]$ exceeds a certain threshold and is the maximum value within a certain signal duration $T > L_c T_s$.
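The sketch below is one way to implement the detection metric of Eqs. (5)-(6) in numpy. The representative subsequences, the noise level, and the normalization $D$ are placeholders, and the prefix/subsequence lengths are the uplink values quoted earlier.

```python
import numpy as np

L_SUB, N_CP = 1200, 220                       # uplink values from Sec. IV

def detection_metric(s, rep_subs, d_norm):
    """d[l] = (1/D) * sum_k |sum_n s[n] * conj(rep_subs[k][n - off_k - l])|
    with off_k = N_CP + k*L_SUB (Eqs. (5)-(6))."""
    n_valid = len(s) - (N_CP + 8 * L_SUB) + 1
    d = np.zeros(n_valid)
    for k, sub in enumerate(rep_subs):        # eight partial correlations
        off = N_CP + k * L_SUB
        x = s[off: off + n_valid + L_SUB - 1]
        d += np.abs(np.correlate(x, sub, mode="valid"))
    return d / d_norm

rng = np.random.default_rng(1)
sub = np.exp(1j * 2 * np.pi * rng.integers(0, 4, L_SUB) / 4)
reps = [-sub] + [sub] * 7                     # -c0 = c1 = ... = c7
burst = np.concatenate([(-sub)[-N_CP:]] + list(reps))
noise = (rng.normal(size=3000) + 1j * rng.normal(size=3000)) * 0.1
s = np.concatenate([noise, burst, noise])
d = detection_metric(s, reps, d_norm=8 * L_SUB)
print(int(np.argmax(d)))                      # ~3000: start of the burst
```

Because each partial correlation spans only one subsequence, a residual frequency offset mostly rotates the phase of each $d^{\,k}$ rather than destroying its magnitude, which is the robustness property discussed in Section IV-C.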
B. Frequency estimation algorithm
The carrier frequency estimation of a burst detected at sample $l_j$ is conducted with a two-step algorithm. In a first step, a raw frequency estimation is calculated by determining the frequency shift that maximizes $r_{s,\check{c}\,\epsilon_{\Delta f}}[l_j]$ when different frequency shifts $\Delta f$ are applied to $\check{c}$ [13]:

$$\hat{f}_j = \underset{\Delta f}{\arg\max}\; r_{s,\check{c}\,\epsilon_{\Delta f}}[l_j] \qquad (7)$$

$$\epsilon_{\Delta f}[n] = e^{j2\pi\Delta f T_s n} \qquad (8)$$

In a second step, a fine carrier frequency estimation is performed using results from (6):

$$\tilde{f}_j = \frac{1}{2\pi \dot{L}_c T_s}\,\arg\!\left(\sum_{k=1}^{7} d^{\,k-1}_{s,\check{c}}[l_j]\cdot \left(d^{\,k}_{s,\check{c}}[l_j]\right)^{*}\right) + \frac{g_j}{\dot{L}_c T_s} \qquad (9)$$

The correction factor $g_j \in \mathbb{Z}$ accounts for the $1/(\dot{L}_c T_s)$ ambiguity of this algorithm. Results from (7) can be used to calculate $g_j$. The estimator (9) is based on an estimator from [13], which meets the Cramér-Rao bound (CRB).
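A compact sketch of the two-step estimator is given below. The coarse step scans trial shifts as in Eqs. (7)-(8); the fine step measures the average phase increment between consecutive partial correlations as in Eq. (9). The sign/conjugation convention depends on how the partial correlations are defined, so this is a sketch rather than a literal transcription, and all numeric values are placeholders.

```python
import numpy as np

def coarse_frequency(s_burst, c_rep, f_s, trial_freqs):
    """Eqs. (7)-(8): trial shift maximizing the correlation magnitude."""
    n = np.arange(len(c_rep))
    def corr_mag(df):
        shifted = c_rep * np.exp(2j * np.pi * df * n / f_s)   # Eq. (8)
        return np.abs(np.vdot(shifted, s_burst[:len(c_rep)]))
    return max(trial_freqs, key=corr_mag)

def fine_frequency(d_partial, L_sub, T_s, g=0):
    """Eq. (9): average phase increment between the eight partial correlations;
    g in Z resolves the 1/(L_sub*T_s) ambiguity using the coarse estimate."""
    acc = np.sum(np.conj(d_partial[:-1]) * d_partial[1:])
    return np.angle(acc) / (2 * np.pi * L_sub * T_s) + g / (L_sub * T_s)

# toy check: partial correlations carrying a pure phase ramp from nu = 20 kHz
f_s, L_sub = 240e6, 1200
nu = 20e3
d = np.exp(2j * np.pi * nu * L_sub / f_s * np.arange(8))
print(fine_frequency(d, L_sub, 1 / f_s))   # ~20000.0 Hz
```

Note that the fine estimate is unambiguous only within $\pm 1/(2\dot{L}_c T_s)$, which is why the coarse search of Eq. (7) is needed to pick $g_j$.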
C. Algorithm analysis with Starlink uplink signals
The presented algorithms are applied to the measured Starlink uplink signal, which contains different synchronization sequences. Therefore, the calculations are conducted with three different representatives $\check{c}_1$, $\check{c}_2$, and $\check{c}_3$.
When comparing the presented detection algorithm to a simple correlation-based approach, some properties are noticeable. First, the results $d_{s,\check{c}}$ are significantly less susceptible to an unknown carrier frequency offset or Doppler shift in the received signal than the results of a simple correlation $r_{s,\check{c}}$. This is observed from figure 7, where the algorithms are calculated for consecutive and non-consecutive bursts. When the burst used as a representative and the burst under investigation are transmitted within a short time period, (nearly) the same Doppler shift pre-compensation is applied to both. Otherwise, significantly different pre-compensations are applied, resulting in a significant decrease in the magnitude of $r_{s,\check{c}}$. The same conclusion can also be derived from figure 8, which shows $r_{s,\check{c}}$ and $d_{s,\check{c}}$ at the samples $l_j$ where bursts are detected. For improved clarity, only the results with the best fitting representative for each burst are presented. Furthermore, the results in figure 7 show that $r_{s,\check{c}}$ suppresses the (added Gaussian-distributed) noise better due to a higher correlation gain. However, $r_{s,\check{c}}$ is significantly more computationally expensive than $d_{s,\check{c}}$.

Fig. 8: Detections with $r_{s,\check{c}}$ and $d_{s,\check{c}}$ and frequency estimations with $\hat{f}_j$ and $\tilde{f}_j$ for all bursts in subchannel 1.
Frequency estimation results are presented in figure 8 as well. Again, only the frequency estimations with the representative with the best detection properties ($d_{s,\check{c}}[l_j]$) are considered there. Also, as the representatives are not in baseband, the results contain a frequency offset.
V. POSITIONING ACCURACY ESTIMATION
In the following section, the achievable positioning accuracy is calculated. As a first step, the available SNR at the receiver antenna output is estimated. Thereafter, a lower bound of the frequency estimation error is presented. Finally, the resulting error for Doppler shift based positioning is derived. It is important to mention that additional error sources, like, e.g., ephemeris errors, are not considered here.
Assumptions about the SNR: With simple transformations of equations from [14], the $\mathrm{SNR}_r$ at the receiver antenna output can be calculated with

$$\mathrm{SNR}_r = \frac{\Phi_t\, \lambda_c^2\, G_r}{4\pi k_B T_N} \qquad (10)$$

where $\Phi_t$ is the spectral flux density of the transmitted signal, describing the power per surface area and per wavelength, $\lambda_c$ is the carrier wavelength of the downlink signal, and $G_r$ is the receiver antenna gain. The Boltzmann constant is denoted by $k_B$, the noise temperature by $T_N$.

Carrier frequency offset estimation: When estimating the carrier frequency offset $\nu$ of the received signal, the estimation accuracy is lower bounded by the modified Cramér-Rao bound (MCRB) from [13]:

$$\mathrm{MCRB}(\nu) = \frac{3}{2\pi^2 T_b^2 L_0^3}\cdot\frac{1}{\mathrm{SNR}_r} \qquad (11)$$

with the symbol duration $T_b$ and the observation duration $L_0 T_s$.
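For a quick numeric feel, the sketch below evaluates Eqs. (10)-(11) with the scenario values quoted later in this section; the noise temperature $T_N = 290$ K is an assumption, and the MCRB is written in the standard form of [13].

```python
import numpy as np

k_B = 1.380649e-23                  # Boltzmann constant [J/K]
c0 = 299_792_458.0
f_c = 11.7e9                        # downlink carrier [Hz]
phi_t = 10 ** (-122 / 10) / 1e6     # -122 dB(W/m^2/MHz) -> W/m^2/Hz
G_r = 10 ** (8 / 10)                # 8 dB receive antenna gain
T_N = 290.0                         # assumed noise temperature [K]

lam = c0 / f_c
snr_r = phi_t * lam**2 * G_r / (4 * np.pi * k_B * T_N)      # Eq. (10)

T_b = 4.17e-9                       # symbol duration [s]
L_0 = 8 * 127                       # observed known symbols
mcrb = 3 / (2 * np.pi**2 * T_b**2 * L_0**3 * snr_r)         # Eq. (11)
print(f"SNR_r = {10*np.log10(snr_r):.1f} dB, sigma_f >= {np.sqrt(mcrb)/1e3:.1f} kHz")
```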
Positioning estimation: As a last step, PNT information is calculated from the conducted frequency measurements. Therefore, $N$ measurements with zero-mean, independent, Gaussian distributed measurement errors with variance $\sigma^2$ are assumed. Additionally, a static receiver with known altitude $x_h$ and unknown longitude $x_l$ and latitude $x_b$ (in geodetic coordinates) is assumed. In accordance with [15], for this scenario the CRB can be specified as

$$\mathrm{CRB}_{x_l,x_b} = \sigma^2\, \mathrm{tr}\!\left[(H^T H)^{-1}\right] \qquad (12)$$

where $\mathrm{tr}(\cdot)$ represents the trace of a matrix. The matrix $H$ is defined as

$$H = \begin{pmatrix} \frac{\partial f_1(x_l,x_b)}{\partial x_l} & \cdots & \frac{\partial f_N(x_l,x_b)}{\partial x_l} \\[4pt] \frac{\partial f_1(x_l,x_b)}{\partial x_b} & \cdots & \frac{\partial f_N(x_l,x_b)}{\partial x_b} \end{pmatrix}^{T} \qquad (13)$$

with $f_1(x_l,x_b), ..., f_N(x_l,x_b)$ being the received carrier frequencies (including Doppler shift) at the $N$ time instances at which the measurements were conducted.
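A possible numerical evaluation of Eqs. (12)-(13) is sketched below; the Jacobian is formed by finite differences, and the linear frequency model is a placeholder for the spherical-earth model of [16].

```python
import numpy as np

def crb_position(f_model, x_l, x_b, sigma2, eps=1e-7):
    """Eqs. (12)-(13): CRB = sigma^2 * tr[(H^T H)^-1], with H the N x 2
    Jacobian of the received frequencies w.r.t. longitude and latitude."""
    f0 = f_model(x_l, x_b)
    H = np.column_stack([(f_model(x_l + eps, x_b) - f0) / eps,
                         (f_model(x_l, x_b + eps) - f0) / eps])
    return sigma2 * np.trace(np.linalg.inv(H.T @ H))

# toy frequency model, linear in position (for demonstration only)
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 2)) * 1e3          # Hz per coordinate unit
f_model = lambda xl, xb: A @ np.array([xl, xb])
sigma_f = 50.0                              # assumed frequency std [Hz]
print(np.sqrt(crb_position(f_model, 0.1, 0.2, sigma_f**2)))
```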
Results for tracking a single Starlink satellite: Equations (10)-(13) are now used to calculate the lower bound of the positioning error. The $\mathrm{MCRB}(\nu)$ from (11) is used as the variance $\sigma^2$ in (12). The received frequencies $f_1(x_l, x_b), ..., f_N(x_l, x_b)$ are calculated using the spherical-earth model from [16], omitting the earth rotation.
The following scenario is assumed: a receiver tracks a single Starlink satellite for the timespan $t_a$ and estimates the frequencies $f_n \in \{f_1, ..., f_N\}$ of the synchronization sequences $c$ at time instances $t_n = qT_f$ with $q \in \{-\frac{N-1}{2}, ..., \frac{N-1}{2}\}$. $T_f = \frac{1}{750}$ s is the repetition time at which Starlink transmits the synchronization sequence. Assuming that the full synchronization sequence of the Starlink user downlink signal is used, the symbol duration is $T_b = 4.17 \times 10^{-9}$ s and the number of observed symbols is $L_0 = 8 \cdot 127$ [7]. The satellite passes the zenith at $t = 0$ s on an orbit with height $x_{hs} = 550$ km. The transmitted carrier frequency is $f_c = 11.7$ GHz, and the spectral flux density is assumed to be $\Phi_t = -122$ dB/m²/MHz, which is the maximum value at the ground according to Starlink's FCC filing [17]. Figure 9 shows the lower bound of the positioning accuracy for the tracking timespan $t_a = 4$ min for different receiver antenna gains. For a simple patch antenna with, e.g., $G_r = 8$ dB, the results show a positioning error of less than 1 km for a distance of 200-700 km between the receiver and the ground track of the satellite. For smaller distances, the accuracy deteriorates rapidly due to inaccuracies in cross-track direction. For larger distances, the estimation in along-track direction is the dominant error source. Figure 10 shows the impact of the tracking timespan $t_a$ on the positioning accuracy. Depending on Starlink's beam-steering protocol, the maximum timespan to receive and track the main-lobe signal of a satellite might be limited.
VI. CONCLUSION
In this work, measurement results of the Starlink user uplink signal are analyzed. Each burst's synchronization sequence is found to consist of 8 repetitions of the same subsequence. Thereby, the identified uplink structure showed significant similarities to the Starlink user downlink signal. Algorithms that utilize the Starlink synchronization sequence for burst detection and frequency estimation are proposed and analyzed by applying them to the Starlink uplink signal. It is shown that the presented detection algorithm is very robust against an unknown carrier frequency offset or Doppler shift in the received signal. The presented frequency estimation is computationally efficient and promises an estimation variance near the lower bound. Finally, the impact of frequency estimation errors on the positioning accuracy of Doppler shift based LEO-PNT is investigated by calculating its lower bound. When the Starlink synchronization sequence is used for frequency estimation of a single satellite overflight, the induced positioning errors for most measurement scenarios are in the order of kilometers. Strategies to improve the accuracy include conducting measurements from different satellites with different orbits, using highly directional antennas, and utilizing more parts of the Starlink burst for frequency estimation.
Fig. 2: Received center frequency $f_r$ for different receiver positions in cross-track direction during a satellite overflight.
Fig. 4: Exemplary Starlink uplink burst in time and frequency domain in baseband.
Fig. 5: Subchannel, relative carrier frequency, and BRI over the time of detection of each burst.
Fig. 6: Correlation $r_{s_i,\dot{c}_i^1}$ for two exemplary bursts.
Fig. 7: Correlation results $r_{s,\check{c}}$ and $d_{s,\check{c}}$ for consecutive and non-consecutive bursts for SNR = -20 dB.
Fig. 9: Lower bound for the standard deviation of the positioning error for different values of $G_r$ when $t_a = 4$ min.
Fig. 10: Lower bound for the standard deviation of the positioning error for different values of $t_a$ when $G_r = 8$ dB.
Fig. 1: Generic receiver architecture for Doppler shift based LEO-PNT (block diagram: LEO satellite signals -> filtering and downconversion -> I/Q data -> burst detection -> frequency estimation -> PNT processing, with ephemerides as input -> PNT).
REFERENCES

[1] Z. M. Kassas, J. Khalife, A. Abdallah, and C. Lee, "I am not afraid of the jammer: Navigating with signals of opportunity in GPS-denied environments," in Proc. 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2020), Oct. 2020, pp. 1566-1585.
[2] M. Wall, "Watch SpaceX launch 51 Starlink internet satellites on Jan. 15 after delays." [Online]. Available: https://www.space.com/spacex-launch-starlink-group-2-4
[3] M. Neinavaie, J. Khalife, and Z. M. Kassas, "Exploiting Starlink signals for navigation: First results," in Proc. 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), Oct. 2021, pp. 2766-2773.
[4] M. Neinavaie, J. Khalife, and Z. M. Kassas, "Acquisition, Doppler tracking, and positioning with Starlink LEO satellites: First results," IEEE Transactions on Aerospace and Electronic Systems, vol. 58, pp. 2606-2610, Jun. 2022.
[5] M. Neinavaie, Z. Shadram, S. Kozhaya, and Z. M. Kassas, "First results of differential Doppler positioning with unknown Starlink satellite signals," in 2022 IEEE Aerospace Conference (AERO), Mar. 2022, pp. 1-14.
[6] J. Khalife, M. Neinavaie, and Z. M. Kassas, "The first carrier phase tracking and positioning results with Starlink LEO satellite signals," IEEE Transactions on Aerospace and Electronic Systems, vol. 58, pp. 1487-1491, Apr. 2022.
[7] T. E. Humphreys, P. A. Iannucci, Z. Komodromos, and A. M. Graff, "Signal structure of the Starlink Ku-band downlink," arXiv, Oct. 2022.
[8] North American Aerospace Defense Command (NORAD), "Two-line element sets." [Online]. Available: http://celestrak.org/NORAD/elements/
[9] R. W. Middlestead, Digital Communications with Emphasis on Data Modems. Hoboken, NJ, USA: John Wiley & Sons, Inc., Mar. 2017.
[10] M. L. Psiaki, "Navigation using carrier Doppler shift from a LEO constellation: Transit on steroids," NAVIGATION, vol. 68, pp. 621-641, Sep. 2021.
[11] Z. Kassas, M. Neinavaie, and J. Khalife, "Enter LEO on the GNSS stage: Navigation with Starlink satellites," Inside GNSS, pp. 42-51, Nov. 2021.
[12] C. A. Hofmann and A. Knopp, "Ultranarrowband waveform for IoT direct random multiple access to GEO satellites," IEEE Internet of Things Journal, vol. 6, no. 6, pp. 10134-10149, 2019.
[13] U. Mengali and A. N. D'Andrea, Synchronization Techniques for Digital Receivers. Boston, MA: Springer US, 1997.
[14] A. F. Molisch, Wireless Communications, 2nd ed. Wiley Publishing, 2011.
[15] F. Guo, Y. Fan, Y. Zhou, C. Xhou, and Q. Li, Space Electronic Reconnaissance: Localization Theories and Methods. Wiley, Jun. 2014.
[16] X. Chen, M. Wang, and L. Zhang, "Analysis on the performance bound of Doppler positioning using one LEO satellite," in 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), May 2016, pp. 1-5.
[17] FCC filing, "SpaceX non-geostationary satellite system, attachment A, technical information to supplement Schedule S," 2018.
Spin-orbit readout using thin films of topological insulator Sb2Te3 deposited by industrial magnetron sputtering

S. Teresi¹, N. Sebe¹, T. Frottier¹, J. Patterson², A. Kandazoglou¹, P. Noël³, P. Sgarro¹, D. Térébénec², N. Bernier², F. Hippert⁴, J.-P. Attané¹, L. Vila¹, P. Noé², and M. Cosset-Chéneau¹ ([email protected])

¹ Université Grenoble Alpes, CEA, CNRS, INP-G, Spintec, F-38054 Grenoble, France
² Université Grenoble Alpes, CEA, LETI, F-38000 Grenoble, France
³ Department of Materials, ETH Zurich, CH-8093 Zurich, Switzerland
⁴ Université Grenoble Alpes, CNRS, Grenoble INP, LMGP, F-38000 Grenoble, France
Driving a spin-logic circuit requires the production of a large output signal by spin-charge interconversion in spin-orbit readout devices. This should be possible by using topological insulators, which are known for their high spin-charge interconversion efficiency. However, high-quality topological insulators have so far only been obtained on a small scale, or with large-scale deposition techniques that are not compatible with conventional industrial deposition processes. The nanopatterning of these materials and electrical spin injection into them have also proven difficult, due to their fragile structure and low spin conductance. We present the fabrication of a spin-orbit readout device from the topological insulator Sb2Te3 deposited by large-scale industrial magnetron sputtering on SiO2. Despite a modification of the structural properties of the Sb2Te3 layer during the device nanofabrication, we measured a sizeable output voltage that can be unambiguously ascribed to a spin-charge interconversion process.
Introduction
The spin-charge interconversion induced by the spin-orbit interaction opens the way to the creation of spin-logic architectures for low-power computing [1]. In these architectures, information is stored in a ferroelectric polarization state coupled to the magnetization direction of a ferromagnetic electrode. The magnetic state of the electrode is then read electrically using the spin-to-charge interconversion in either heavy metals [2], Rashba interfaces [3], or topological insulators [4]. In these mechanisms, the spin current produced by the ferromagnetic electrode is converted into a transverse charge current thanks to the spin-orbit coupling. In order to perform spin-logic operations, the spin-orbit readout device [5] must be able to manipulate the ferroelectric polarization state of a neighboring device using its output voltage, such that the devices can be cascaded [6]. Although significant progress has been made in minimizing the ferroelectric switching field [7,8], the required voltages are still much higher than those obtained in current spin-orbit readout devices [9,10,11]. Their optimization therefore becomes a major challenge for the realization of spin-logic circuits such as the Magnetoelectric Spin-Orbit (MESO) device [6].
Several approaches have been explored to optimize the output signal of the spin-orbit readout block. It has been observed that downscaling the device leads to a large increase of the output signal [10], while interface engineering is also required to improve the spin-injection efficiency [9,12] and to decrease the shunting of the produced charge current [12,13,14]. To date, most studies have focused on heavy metals, with relatively low spin-charge interconversion efficiencies and resistivities [15]. Therefore, the next natural strategy to optimize the signal of the spin-orbit readout device is to look for materials with higher resistivities [10] and higher spin-charge interconversion efficiencies.
Topological insulators are promising materials for the realization of spin-orbit readout devices with high output voltage. These high-resistivity materials [16] are indeed known to exhibit high spin-charge interconversion efficiencies due to the Edelstein effect in their topological surface states [17,18,19], and have already demonstrated their interest for spintronics applications in the context of the spin-orbit torque [20]. The use of these materials for spin-logic circuits is, however, limited by their small-scale fabrication by MBE [17,18,21] or by mechanical exfoliation [22,23]. In addition, patterning these materials into nanoscale devices is complicated by their sensitivity to conventional nanofabrication processes [24], so most spin-charge interconversion measurements in topological insulators have been made on microscale devices. Finally, electrical spin injection into these materials is notoriously difficult due to intermixing effects at the interface with metals [16,25,26] and their low spin conductance caused by their semiconducting nature [27].
Sb2Te3 is one of the first topological insulators discovered [28], and is known to harbor a high spin-charge interconversion efficiency [29]. This system is also used in industry as part of phase-change memories [30,31], which has led to the development of large-scale deposition techniques for this material [32,33], some of which are industrially compatible, such as magnetron sputtering [34]. However, Sb2Te3 obtained by large-scale deposition techniques has never been studied for the realization of a spin-orbit readout device.
In this paper, we demonstrate the fabrication of a spin-orbit readout device based on Sb2Te3 deposited using industry-compatible processes on SiO2 substrates. We show that a sizeable spin-charge interconversion signal can be obtained in this device by engineering the interface between the Sb2Te3 and the ferromagnetic spin-injection electrode. Finally, we discuss the effect of our nanofabrication processes on the quality of the Sb2Te3 film. We show that a ferromagnetic spin-injection layer deposited on Sb2Te3 creates a stress on this material, which leads to the appearance, under the ferromagnetic electrode, of disorder due to stress-induced disorientation of the initially c-axis oriented Sb2Te3 crystallites.
Material characterization
A 15 nm thick Sb2Te3 film was deposited by magnetron sputtering in an industrial cluster tool at ≈ 250 °C on a 300 mm diameter (100) Si wafer, covered with a 100 nm thick thermal SiO2 layer as a bottom insulating layer. Co-sputtering of a stoichiometric Sb2Te3 target and a Te target was used. As shown previously [31,34], this deposition method compensates for Te desorption and allows to deposit well-oriented Sb2Te3 films, with Sb and Te planes parallel to the film surface (Figure 1a), due to the formation of a Te atomic layer on top of the 100 nm SiO2 layer. Using X-Ray Reflectivity (XRR), the film thickness is measured to be about 15 nm, while its RMS roughness has been estimated to be 1.4 nm using atomic force microscopy (Figure S1c, supporting information). The Sb2Te3 film was left uncapped for the purpose of being further integrated in devices, which leads to the formation of a thin oxide layer on the film surface [35].
The structural quality of the film was controlled by X-Ray Diffraction (XRD) patterns acquired in the θ−2θ geometry (Figure 1b). Only 00l diffraction peaks are detected (hexagonal indexation of the rhombohedral structure of Sb2Te3). An in-plane diffraction pattern (Figure S1a, supporting information) shows that no preferred in-plane orientation of the crystallites is present. The XRD results thus show that the Sb2Te3 film is polycrystalline with a fiber texture.
The measured hexagonal lattice parameters (c = 3.0537(30) nm and a = 0.42655(10) nm) are consistent with literature values [36] . The structure of Sb2Te3 can be described as a stacking of quintuple layers (QL) consisting of Te-Sb-Te-Sb-Te planes perpendicular to the [001]
direction (hexagonal indexation) and separated by van der Waals-like gaps, as visible in Figure 1a. The degree of out-of-plane orientation of the Sb2Te3 crystallites in the film was determined by analyzing a rocking curve shown in Figure S1b of the supporting information. This curve can be described as the combination of a narrow peak with a Full Width at Half Maximum (FWHM) of 0.8°, superimposed on a slightly broader one (FWHM of 3.8°). This indicates the co-existence of well-oriented Sb2Te3 crystallites, with the c axis perpendicular to the film surface, and slightly disoriented crystallites. The detection of Laue oscillations (Figure S2 of the supporting information) indicates the high structural quality of the film. These results are comparable to the ones obtained for MBE-deposited films on amorphous layers [37] or Ge (001) substrates [38], demonstrating the industrial potential of magnetron sputtering for the fabrication of high structural quality topological insulators on a large scale.
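As an illustration of this two-component decomposition, the sketch below fits a rocking curve with the sum of a narrow and a broad Gaussian; the data are synthetic, and the amplitudes, noise level, and initial guesses are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM2SIG = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma

def two_gaussians(omega, a1, w1, a2, w2, omega0):
    """Sum of two Gaussians with FWHMs w1 (narrow) and w2 (broad)."""
    s1, s2 = w1 * FWHM2SIG, w2 * FWHM2SIG
    return (a1 * np.exp(-0.5 * ((omega - omega0) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((omega - omega0) / s2) ** 2))

omega = np.linspace(-6, 6, 301)                        # incidence angle [deg]
data = two_gaussians(omega, 1.0, 0.8, 0.25, 3.8, 0.0)  # synthetic rocking curve
data += np.random.default_rng(0).normal(0, 0.005, omega.size)
popt, _ = curve_fit(two_gaussians, omega, data, p0=(1, 1, 0.2, 4, 0))
print(f"narrow FWHM = {popt[1]:.2f} deg, broad FWHM = {popt[3]:.2f} deg")
```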
We then characterized the transport properties of our Sb2Te3 films using electrical measurements on as-deposited 15 nm thick films with gold contacts at the corners, using the conventional van der Pauw method. The sheet resistance displays a metallic behavior with decreasing temperature (Figure 1c), with a resistivity of 6000 Ω·nm at low temperature. This indicates that the Sb2Te3 bulk is conductive. A predominantly temperature-independent carrier density of $p = 10^{20}$ cm⁻³ with a single hole character was extracted from Hall measurements (Figure 1d), allowing us to extract a mobility of $\mu = 10$ cm²·V⁻¹·s⁻¹ at room temperature.
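The single-band relations behind these numbers can be sketched as follows; the Hall slope and sheet resistance values are placeholders chosen only to land near the quoted orders of magnitude, not measured values.

```python
import numpy as np

E = 1.602176634e-19            # elementary charge [C]
T_FILM = 15e-9                 # film thickness [m]

def hall_analysis(dRxy_dB, R_sheet):
    """Single-band analysis: dRxy_dB is the Hall slope [Ohm/T] and R_sheet
    the sheet resistance [Ohm/sq]; returns (volume density, mobility)."""
    n_sheet = 1.0 / (E * dRxy_dB)          # from R_xy = B / (n_s * e)
    mu = 1.0 / (E * n_sheet * R_sheet)     # from sigma_sheet = n_s * e * mu
    return n_sheet / T_FILM, mu

# placeholder inputs (hypothetical Hall slope and room-temperature R_sheet)
p, mu = hall_analysis(dRxy_dB=4.2, R_sheet=4000.0)
print(f"p = {p / 1e6:.1e} cm^-3, mu = {mu * 1e4:.0f} cm^2/(V s)")
```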
The metallic behavior of Sb2Te3 has also been observed in MBE-deposited films [39] and bulk crystals [40]. It is attributed to the presence of thermodynamically favored Sb-Te antisite defects, which push the Fermi level into the valence bands of Sb2Te3 [28], thus making its bulk conductive. The carrier density is one order of magnitude higher than that measured in MBE-deposited films of similar thicknesses [29,41], indicating that our magnetron sputtering deposited films present a relatively large density of defects [42]. However, this density is closer to that of MBE films than what has been achieved in widely studied topological insulators such as Bi2Se3, in which a difference of two orders of magnitude in carrier density between sputtered [43,44] and MBE films [21] was observed. The mobility of our film is low compared to MBE-deposited films, which can be understood by estimating the Ioffe-Regel parameter $k_F\ell$, found to be close to unity, thus indicating an intermediate level of disorder that decreases the mobility [42]. We finally performed weak-antilocalization measurements (Figure 1e). While our low-field measurements do not allow us to extract the number of conduction channels, it is still possible to obtain the coherence length [45], which was found to be 50 nm at 10 K by fitting the low-field signal using the Hikami-Larkin-Nagaoka formula [46]. This length goes to zero at around 30 K (Figure 2e), as evidenced by the quadratic field dependence of the conductivity variation above this temperature. This dependence of the coherence length on temperature is consistent with previous weak-antilocalization measurements in Sb2Te3 [47].
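A minimal version of such a Hikami-Larkin-Nagaoka fit is sketched below; the magnetoconductance data are synthetic, the prefactor alpha is left free as in standard weak-antilocalization analyses, and the conductance is expressed in units of $e^2/(2\pi^2\hbar)$.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import digamma

E, HBAR = 1.602176634e-19, 1.054571817e-34

def hln(B, alpha, l_phi_nm):
    """HLN magnetoconductance in units of e^2/(2*pi^2*hbar); B in T."""
    l_phi = l_phi_nm * 1e-9
    B_phi = HBAR / (4 * E * l_phi**2)      # dephasing field
    x = B_phi / np.abs(B)
    return alpha * (digamma(0.5 + x) - np.log(x))

B = np.linspace(0.01, 1.0, 50)             # low-field range [T]
data = hln(B, -0.5, 50.0)                  # synthetic stand-in for Fig. 1e data
popt, _ = curve_fit(hln, B, data, p0=(-0.4, 30.0))
print(f"l_phi = {popt[1]:.1f} nm")         # ~50 nm
```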
Spin-orbit readout device fabrication
We then used the Sb2Te3 film to fabricate the spin-orbit readout device shown in Figure 2a. In this device, a ferromagnetic electrode placed on top of a T-shaped spin-orbit coupling material injects a spin current into the spin-orbit coupling material upon application of an electrical bias current across their interface (Figure 2a). This spin current is then converted into a transverse charge current in the spin-orbit coupling material and detected as a transverse voltage $V_T$ (Figure 2a and Figure 2b). Here, we used Sb2Te3 as the spin-orbit coupling material, on top of which a 20 nm thick CoFe ferromagnetic electrode is deposited. A 1 nm TiOx barrier is inserted between the Sb2Te3 and CoFe layers, as it is known to promote spin injection into materials with mismatched spin conductance [48]. The T-shaped Sb2Te3 portion is first patterned by conventional electron beam lithography (EBL) and ion beam etching (IBE) steps; the resulting device is shown in Figure 2b.
For spin-charge interconversion measurements, we used a standard lock-in amplifier (123 Hz, $I_{\mathrm{bias}} = 100$ µA) with the connecting scheme shown in Figure 2b to measure the transverse resistance signal $R_T = V_T/I_{\mathrm{bias}}$. The measurements are carried out by applying a magnetic field along the CoFe electrode (Figure 2a), thus allowing to reverse its magnetization direction.
A typical transverse signal obtained at 10 K while scanning the magnetic field is shown in Figure 2c. It is constant at high field, with a difference of $2\Delta R_T \approx 45$ mΩ between positive and negative field values. The peaks at small field values can be attributed to the planar Hall effect [11], while the baseline is possibly due to a small misalignment of the CoFe electrode with the inner leg of the Sb2Te3 structure [10]. It is important to note that no high-field transverse signal difference was observed in the absence of a TiOx barrier, evidencing the importance of the barrier to avoid intermixing and/or spin conductivity mismatch. We further observed that $\Delta R_T$ decreases as the temperature increases and vanishes at about 30 K, while the low-field signal distortions are still present (Figure 2d). $\Delta R_T$ thus follows the temperature dependence of the coherence length measured in unpatterned Sb2Te3 (Figure 2e).
At this point, the attribution of the observed $\Delta R_T$ signal to spin-charge interconversion effects deserves a comment. Indeed, the anomalous Hall effect produced by the charge current flowing vertically into the electrode may produce a signal with similar symmetries [9,49], while the ordinary Hall effect from the stray fields of the ferromagnetic electrode may also produce such a signal [50]. It is possible to separate these different contributions using a combination of systematic geometric dependencies and finite element simulations [16]. Here, these spurious effects can be ruled out using the temperature dependence of $\Delta R_T$ (Figure 2e). As shown in Figure 1d, the Hall effect is almost temperature independent. Moreover, the anomalous Hall effect of CoFe does not vanish at 30 K [51]. Therefore, the ordinary and anomalous Hall effects cannot be at the origin of the observed transverse signal, which therefore comes from a spin-charge interconversion effect in Sb2Te3. The absence of a transverse signal when no TiOx barrier is present proves that a physical separation between the ferromagnetic injection electrode and the Sb2Te3 is necessary to avoid intermixing at the interface [16], as well as spin backflow and shunting in the ferromagnet [12].
Effect of the patterning on the Sb2Te3 layer structure
It would be tempting at this point to consider that we succeeded in obtaining an interconversion signal in the topological surface states of a Sb2Te3 film of high structural quality. However, it is known that these materials are sensitive to nanofabrication processes that tend to create disorder. Limiting this disorder is fundamental for electrical transport to be driven by topological surface states [42] , and to achieve a high spin-charge interconversion efficiency [17] .
To investigate how the nanofabrication process affects the structural quality of Sb2Te3, we performed Scanning Transmission Electron Microscopy (STEM) measurements on our devices.
This analysis was performed on two representative regions of the sample (points A and B in Figure 2b).
At point A, Sb2Te3 retains its structural quality, as the Sb-Te quintuple layers are clearly visible (Figure 3a). In addition, an Energy-Dispersive X-ray (EDX) spectroscopy map (Figure 3c) shows that the TiOx barrier is continuous and prevents the Te atoms from diffusing into the CoFe layer, while the Ti, Co and Fe atoms do not diffuse into the Sb2Te3 layer. The EDX maps of all atomic species are shown in Figure S3 of the supporting information. The structure of Sb2Te3 under the CoFe electrode, i.e. at point B (Figure 3b) where the interconversion takes place, is, however, more relevant for the spin-charge interconversion. Here, while the Sb and Te planes are still visible, they appear to be blurred and non-parallel to the plane of the substrate.
This indicates that the nanofabrication process has induced a disorientation of the Sb2Te3 grains, whose c-axis is no longer perpendicular to the substrate. In addition, the EDX maps of Figure 3d show that the continuity of the TiOx layer has been broken at some points and that a large number of magnetic Fe atoms are present in Sb2Te3 (see Figure S3 and Figure S4 in the supporting information for the extensive EDX data set).

Points A and B of the device underwent the same EBL processes and have similar in-plane feature sizes. Therefore, it is likely that the lower quality of the Sb2Te3 film at point B compared to point A comes from their different etch/deposit history. In order to investigate the effect of the different etching and deposition steps on the structural quality of Sb2Te3, we performed XRD measurements in the θ−2θ configuration on unpatterned full-sheet samples that underwent the same etching and deposition steps as points A and B of the device shown in Figure 2b. For this purpose, we prepared a set of three 1 cm by 1 cm macroscopic samples. Sample 1 is the 15 nm pristine Sb2Te3 sample, used as a reference. Sample 2 is the 15 nm Sb2Te3 film on which the low-energy IBE step used to remove the Sb2Te3 surface oxide was applied, followed by the deposition of the TiOx(1 nm)/CoFe(5 nm) layer. Finally, sample 3 underwent the same process as sample 2, with the addition of a high-energy etch followed by the deposition of a 20 nm thick CoFe layer. The etching and deposition history of sample 2, respectively 3, is thus the same as that of point A, respectively B, in Figure 2b.

The measured XRD patterns are shown in Figure 4 and Figure S7a (supporting information). Out-of-plane oriented Sb2Te3 crystallites are detected in all samples. The rocking curves of samples 1 (reference) and 2 are similar; in sample 3, the rocking curve is a single peak with a FWHM of 1.9°. The diffraction peaks observed in samples 1 and 2 have the same positions and widths; the only change between the two samples is an overall decrease in peak intensity in sample 2, by a factor of 1.35 compared to the reference sample (Figure S7a, supporting information). In contrast, large changes are observed between the XRD patterns of samples 1 and 3. A strong decrease in the intensity of the diffraction peaks is observed in sample 3 compared to the reference sample (Figure 4a). Besides, the diffraction peaks in sample 3 are observed at larger diffraction angles than in the reference, and their width is larger, the latter effect being particularly marked for the peaks observed at large diffraction angles. The loss of intensity in sample 3 with respect to sample 1 can be explained by the fact that in sample 3 most of the Sb2Te3 crystallites are strongly tilted: only crystallites with the c-axis close to the perpendicular to the substrate are detected in the θ−2θ configuration used for the acquisition of the XRD patterns. The small reduction of intensity between the reference and sample 2 suggests that tilted crystallites also exist in sample 2, but their number is much smaller than in sample 3. The c parameter of the Sb2Te3 crystallites in sample 3 is slightly smaller (by about 0.4%) than the c parameter measured in sample 1 (Figure S7b, supporting information), while the c parameters of samples 1 and 2 are identical. This result proves that the 20 nm thick CoFe layer in sample 3 induces a uniform strain on the out-of-plane oriented Sb2Te3 crystallites, whereas the thinner (5 nm thick) CoFe layer in sample 2 does not induce any significant strain effect.

Besides, a Williamson-Hall analysis of the FWHM of the diffraction peaks [52,53] (Figure 4b) shows that, while the size of the Sb2Te3 crystallites in the direction normal to the substrate remains the same in all samples within the limits of experimental accuracy, large inhomogeneous strains are present in sample 3 but not in the reference and in sample 2. All these results show that the deposition of a 20 nm thick CoFe layer strongly alters the structural state of the Sb2Te3 film: it induces a strong tilt of a majority of the Sb2Te3 crystallites and induces uniform and non-uniform strains on the remaining out-of-plane oriented crystallites.

We therefore attribute the tilt of the Sb2Te3 crystallites observed in the HAADF-STEM image at point B in the device (Figure 2b) to the strain applied by the thick CoFe electrode used to inject the spin current into Sb2Te3. Interestingly, despite the tilt of a majority of the Sb2Te3 crystallites, a sizeable spin-charge interconversion signal was measured. Assuming that this interconversion is driven only by out-of-plane oriented crystallites, a nanofabrication process that would limit the tilt of the Sb2Te3 crystallites would lead to a much larger transverse signal in the spin-orbit readout device presented in Figure 2a.
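For reference, a minimal version of such a Williamson-Hall extraction is sketched below; the peak positions and widths are synthetic, and the factor conventions follow the fit formula quoted in the caption of Figure 4.

```python
import numpy as np

LAM = 1.79e-10                                   # Co K-alpha wavelength [m]

def williamson_hall(two_theta_deg, fwhm_deg):
    """Fit K^2 = eps^2 * S^2 + (2*0.9/D)^2 with K = (2*beta/lam)*cos(theta0)
    and S = (4/lam)*sin(theta0); returns (rms strain eps, domain size D)."""
    th = np.radians(np.asarray(two_theta_deg) / 2.0)
    beta = np.radians(np.asarray(fwhm_deg))      # FWHM in radians
    K = (2.0 * beta / LAM) * np.cos(th)
    S = (4.0 / LAM) * np.sin(th)
    slope, intercept = np.polyfit(S**2, K**2, 1)
    return np.sqrt(slope), 2.0 * 0.9 / np.sqrt(intercept)

# synthetic self-check: D = 14 nm, eps = 2e-2 (values quoted for sample 3)
two_theta = np.array([20.0, 40.0, 60.0, 80.0])
th = np.radians(two_theta / 2.0)
S = (4.0 / LAM) * np.sin(th)
K = np.sqrt((2e-2 * S) ** 2 + (2.0 * 0.9 / 14e-9) ** 2)
fwhm = np.degrees(K * LAM / (2.0 * np.cos(th)))
print(williamson_hall(two_theta, fwhm))          # ~ (0.02, 1.4e-08)
```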
Conclusion
In conclusion, we have presented the growth of the topological insulator Sb2Te3 on a large scale using industrially compatible processes, with a structural quality comparable to MBE-deposited films. Furthermore, the density of defects comes closer to that of MBE-deposited films than any previous attempt to deposit topological insulators using magnetron sputtering.
We then patterned Sb2Te3 into nanoscale spin-orbit readout devices, with a geometry compatible with the spin-orbit readout block of the MESO device. We obtained a sizeable spin-charge interconversion signal at low temperature by introducing a TiOx barrier between the CoFe ferromagnetic electrode and the Sb2Te3 film. Finally, we studied the effect of our nanofabrication processes on the structural quality of the Sb2Te3 film. We observed that the presence of the thick CoFe layer used as a spin-injection electrode creates a stress that induces disorder in the underlying Sb2Te3 layer.
The recent proposal of a MESO device for low-energy spin-logic applications has renewed the interest in spin-charge interconversion processes, and new types of materials are being investigated to achieve higher spin-charge interconversion efficiencies. However, it is worth noting that, while tremendous progress has been made in this area, a double challenge remains for their use in the spin-orbit readout block of the MESO device. First, the materials studied must be grown on a large scale using industrially compatible processes: although interesting from a fundamental point of view, the fabrication of exfoliated and MBE-grown materials cannot be easily scaled up, and large-scale deposition methods must be developed. Second, interconversion studies focus primarily on microscopic [54,55] or macroscopic systems [56,57,58,59].
Although these methods efficiently identify materials of interest from the spin-charge interconversion point of view, their integration in nanodevices with a geometry compatible with the spin-orbit readout block of the MESO device remains scarce [16] and challenging, as illustrated in this paper.
Here, we have addressed both aspects of this problem for a specific material, Sb2Te3. It is clear that the disorder level in the material, as well as the nanofabrication processes, can still be optimized in order to obtain a larger spin-to-charge interconversion signal using topological surface states. In addition, these considerations are also relevant for alternative logic architectures, such as the recently introduced Ferroelectric Spin-Orbit devices [60], which use the ferroelectric polarization of bulk Rashba semiconductors such as GeTe to store the information [61]. This study provides unique insights to overcome the challenges limiting the integration of the recently discovered spin-charge interconversion materials into spin-logic circuits and spin-orbit torque based memories.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author
Supporting information
Supporting information for: Spin-orbit readout using thin films of topological insulator Sb2Te3 deposited by industrial magnetron sputtering

S. Teresi, N. Sebe, T. Frottier, A. Kandazoglou, P. Noël, P. Sgarro, D. Terebenec, N. Bernier, F. Hippert, J.-P. Attané, L. Vila, P. Noé and M. Cosset-Chéneau*

1. Analysis of the pristine film

The XRD analysis of the pristine Sb2Te3 film was carried out in the out-of-plane θ−2θ configuration using a Bruker D8 diffractometer equipped with a Ge monochromator selecting the Cu (Kα1) radiation (λ = 1.5406 Å). The degree of out-of-plane orientation of the crystallites was determined by measuring a rocking curve around the 009 peak (Figure S1b). The detection of Laue oscillations, shown in Figure S2 for the 006 peak, indicates the high structural quality of the film. The thickness of the film was evaluated by analyzing the intensity profile of the Laue oscillations, which follows the interference function $\sin^2(Nqc/2)/\sin^2(qc/2)$, with $q$ the scattering vector, $c$ the hexagonal lattice parameter, and $N$ the number of unit cells in the direction perpendicular to the film. The obtained thickness (Nc ≈ 14 nm) is close to the value deduced from XRR. The in-plane XRD pattern shown in Figure S1a was measured using a Rigaku Smartlab diffractometer with the Cu (Kα) radiation, with no rotation of the film during the measurement. It shows that the film has a fiber texture.
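The thickness extraction from the Laue fringes reduces to the fringe-period relation sketched below; the fringe positions are synthetic placeholders for the minima measured around the 006 peak.

```python
import numpy as np

def thickness_from_fringes(q_minima):
    """Laue fringes around a Bragg peak are spaced by dq = 2*pi/(N*c),
    so the coherent film thickness is t = N*c = 2*pi / dq."""
    dq = np.mean(np.diff(np.sort(q_minima)))
    return 2 * np.pi / dq

# placeholder fringe minima around the 006 peak for a 14 nm film
t_true = 14e-9                                  # m
q0 = 2 * np.pi * 6 / 3.0537e-9                  # 006 peak position [1/m]
q_minima = q0 + np.arange(1, 5) * 2 * np.pi / t_true
print(f"t = {thickness_from_fringes(q_minima) * 1e9:.1f} nm")   # 14.0 nm
```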
2. XRD analysis of the patterned Sb2Te3 films
The measurements presented in Figure 4 of the main text were performed in the out-of-plane θ−2θ configuration using a Panalytical Empyrean diffractometer equipped with a cobalt source (Kα radiation, λ = 1.79 Å) and a Kβ filter on the diffracted beam. The instrument contribution to the width of the diffraction peaks is negligible. The XRD patterns for samples 1 and 2 are shown in Figure S7a. In Figure S7b we show the Sb2Te3 c parameter extracted from the fit of the XRD patterns of Figure S7a and Figure 4a of the main text. The D value is found to be the same in all films within the error bars, and equal to the thickness (14 nm) of the pristine reference film determined by XRR. For the reference and sample 2, $\varepsilon \sim 5 \times 10^{-3}$, while for sample 3, $\varepsilon = (2 \pm 0.5) \times 10^{-2}$. Even though the large error bars and small slope prevent a correct estimation of the strain in the reference and sample 2, it is clear from Figure 4b that the strain is much larger in sample 3 than in samples 1 and 2.
Figure 1: Structural and electrical characterization of the pristine Sb2Te3 film. (a) Scanning Transmission Electron Microscopy image acquired in High Angle Annular Dark Field (HAADF) mode and (b) XRD pattern measured in the θ−2θ geometry using a Cu (Kα1) radiation source (λ = 1.5406 Å), as a function of $q = 4\pi\sin(\theta)/\lambda$. The number of ~1 nm thick Sb2Te3 quintuple layers (QLs) separated by Te-Te van der Waals gaps observed in (a) is lower than that deduced from X-Ray Reflectivity (XRR) because of a partial surface oxidation of the film during Focused Ion Beam (FIB) preparation of the sample lamella prior to the STEM measurement. (c) Sheet resistance, (d) Hall measurement signal and (e) change of conductivity versus magnetic field, measured at different temperatures. The even and odd in field signals were removed from the Hall and weak-antilocalization data, respectively.
Figure 2: (a) Schematic representation of the spin-orbit readout device. The charge current is injected between the ferromagnetic electrode (red) and the inner leg of the T-shaped Sb2Te3 structure (orange). The polarization of the spin current generated at the interface between the electrode and Sb2Te3 is along the ± magnetization of the electrode (black arrow). The spin-to-charge interconversion in Sb2Te3 creates a transverse charge current in the two outer legs of the Sb2Te3 structure, with a sign that depends on the direction of the magnetization (black dotted arrows). (b) SEM image of the device presented in (a), with the connections used to flow the charge current and measure the transverse voltage $V_T$. Points A and B correspond to the Scanning Transmission Electron Microscopy observation areas presented in Figure 3. (c) Transverse resistance versus magnetic field measured at 10 K using the connection configuration presented in (b). (d) Evolution of the transverse resistance versus magnetic field with temperature. (e) Transverse resistance signal and coherence length as a function of temperature.
Figure 3: STEM-HAADF images of the FIB lamellae extracted at point A (a) and point B (b) in Figure 2b. Several Sb2Te3 grains are visible in (b), but with no longer a preferred orientation with respect to the plane of the substrate. (c) and (d) HAADF images and corresponding EDX maps acquired at points A and B, respectively. The white bars correspond to 7 and 10 nm scales in (c) and (d), respectively.
Figure 4: (a) Comparison of the XRD patterns obtained for sample 1 (reference) and for sample 3 in the θ−2θ configuration, as a function of $q = 4\pi\sin(\theta)/\lambda$. The measurements were carried out using a Co (Kα) radiation source (λ = 1.79 Å). (b) Williamson-Hall plot for the reference, sample 2 and sample 3. $K^2$ is plotted as a function of $S^2$, with $K = (2\beta/\lambda)\cos(\theta_0)$ and $S = (4/\lambda)\sin(\theta_0)$, where $2\theta_0$ is the center of a diffraction peak and $\beta$ its FWHM in radians. The linear behavior is fitted using $K^2 = \varepsilon^2 S^2 + (2 \times 0.9/D)^2$, with $D$ the diffracting coherent domain size of the Sb2Te3 crystallites; $\varepsilon$ is related to the random mean square width of the strain distribution. For the reference and sample 2, $\varepsilon \sim 5 \times 10^{-3}$, while for sample 3, $\varepsilon = (2 \pm 0.5) \times 10^{-2}$.
(supporting information). Out-of-plane oriented Sb2Te3 crystallites are detected in all samples. The rocking curves of samples 1 (reference) and 2 are similar. In sample 3 the rocking curve is a single peak with a FWHM of 1.9°. The diffraction peaks observed in samples 1 and 2 have the same positions and widths. The only change between the two samples is an overall decrease in peak intensity in sample 2, by a factor of 1.35 compared to the reference sample (Figure S7a, supporting information). In contrast, large changes are observed between the XRD patterns of samples 1 and 3. A strong decrease in the intensity of the diffraction peaks is observed in sample 3 compared to the reference sample (Figure 4a). Besides, the diffraction peaks in sample 3 are observed at larger diffraction angles than in the reference and their width is larger, the latter effect being particularly marked for the peaks observed at large diffraction angles. The loss of intensity in sample 3 with respect to sample 1 can be explained by the fact that in sample 3 most of the Sb2Te3 crystallites are strongly tilted. Only crystallites with the c-axis close to the perpendicular to the substrate are detected in the θ−2θ configuration used for acquisition of the XRD patterns. The small reduction of intensity between the reference and sample 2 suggests that tilted crystallites also exist in sample 2, but their number is much smaller than in sample 3. The c parameter of the Sb2Te3 crystallites in sample 3 is slightly smaller (by about 0.4%) than the c parameter measured in sample 1 (Figure S7b, supporting information). The c parameters of samples 1 and 2 are identical. This result proves that the 20 nm thick CoFe layer in sample 3 induces a uniform strain on the out-of-plane oriented Sb2Te3 crystallites, whereas the thinner (5 nm thick) CoFe layer in sample 2 does not induce any significant strain effect.
that while the size of the Sb2Te3 crystallites in the direction normal to the substrate remains the same in all samples, within the limits of experimental accuracy, large inhomogeneous strains are present in sample 3 but not in the reference and in sample 2. All these results show that the deposition of a 20 nm thick CoFe layer strongly alters the structural state of the Sb2Te3 film. It induces a strong tilt of a majority of the Sb2Te3 crystallites and induces uniform and non-uniform strains on the remaining out-of-plane oriented crystallites.
, N. Sebe, T. Frottier, A. Kandazoglou, P. Noël, P. Sgarro, D. Terebenec, N. Bernier, F. Hippert, J.-P. Attané, L. Vila, P. Noé and M. Cosset-Chéneau*

1. Analysis of the pristine film

The XRD analysis on the pristine Sb2Te3 film was carried out in the out-of-plane θ−2θ configuration using a Bruker D8 diffractometer equipped with a Ge monochromator selecting the Cu (Kα1) radiation (λ = 1.5406 Å). The degree of out-of-plane orientation of the crystallites was determined by measuring a rocking curve around the 009 peak (Figure S1b). The detection of Laue oscillations, shown in Figure S2 for the 006 peak, indicates the high structural quality of the film. The thickness of the film was evaluated by analyzing the intensity profile of the Laue oscillations, which varies as sin²(Nqc/2)/sin²(qc/2), with c the lattice parameter and N the number of unit cells in the direction perpendicular to the film. The obtained thickness (Nc ≈ 14 nm) is close to the value deduced from XRR. The in-plane XRD pattern shown in Figure S1a was measured using a Rigaku Smartlab diffractometer with the Cu (Kα) radiation, with no rotation of the film during the measurement.
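To make the thickness extraction concrete, the short Python sketch below estimates Nc from the period of the Laue fringes: for the interference profile quoted above, consecutive fringe maxima are separated by Δq = 2π/(Nc). The fringe positions used here are purely illustrative stand-ins, not the measured data.

```python
import numpy as np

# Hypothetical fringe-maximum positions (1/nm) read off around a 00l peak;
# for the N-slit profile sin^2(N q c / 2)/sin^2(q c / 2) the fringe period
# is Delta q = 2*pi/(N*c), so the film thickness is t = N*c = 2*pi/Delta q.
q_fringes = np.array([19.20, 19.65, 20.10, 20.55, 21.00])  # illustrative only

dq = np.mean(np.diff(q_fringes))      # average fringe spacing (1/nm)
thickness = 2 * np.pi / dq            # t = N*c in nm
print(f"fringe spacing {dq:.3f} nm^-1 -> thickness {thickness:.1f} nm")
```

With evenly spaced fringes every 0.45 nm⁻¹ this returns about 14 nm, consistent with the value quoted above.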
Figure S1: (a) In-plane X-ray diffraction pattern measured on the pristine Sb2Te3 film using a Cu (Kα) radiation source. Only h k 0 diffraction peaks of Sb2Te3 are detected (hexagonal indexation of the rhombohedral structure of Sb2Te3). (b) Rocking curve (ω scan at fixed 2θ, with ω the incidence angle) measured for the 009 reflection of the pristine Sb2Te3 film. Dashed lines represent the best fit to the data using two Gaussian curves. The instrument contribution to the FWHM is 0.1°. (c) Atomic Force Microscopy image of the pristine Sb2Te3 film. (d) Height profile of the Sb2Te3 film measured along the white line in (c).
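A minimal sketch of the two-Gaussian decomposition used for the rocking curve in (b) is given below; it fits a narrow and a broad component sharing the same center and reports their FWHMs. The synthetic data simply mimic the 0.8° and 3.8° widths quoted in the main text and stand in for the measured ω scan.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 2 * np.sqrt(2 * np.log(2))        # FWHM = C * sigma for a Gaussian

def two_gaussians(w, a1, s1, a2, s2, w0):
    # narrow + broad component centred at the same omega_0
    return (a1 * np.exp(-0.5 * ((w - w0) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((w - w0) / s2) ** 2))

# synthetic rocking curve standing in for the measured 009 omega scan
w = np.linspace(-5, 5, 401)
y = two_gaussians(w, 1.0, 0.8 / C, 0.3, 3.8 / C, 0.0)
y += 0.01 * np.random.default_rng(0).normal(size=w.size)

popt, _ = curve_fit(two_gaussians, w, y, p0=[1, 0.3, 0.3, 1.5, 0])
print("FWHM narrow/broad (deg):", C * abs(popt[1]), C * abs(popt[3]))
```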
Figure S2: Zoom on the (006) reflection of the out-of-plane X-ray diffraction pattern of the pristine Sb2Te3 film shown in Fig. 1b of the main text. The intensity is plotted as a function of the scattering vector q. The instrument contribution to the broadening of the peaks is negligible.
The TEM lamella was prepared by Ga+ Focused Ion Beam (FIB) milling using a FEI Strata 400 machine. Prior to milling, a thin layer of marker pen is written onto the specimen surface over the region of interest, and then a protective W layer is deposited in the FIB machine using ion-beam-assisted deposition. The specimens are plasma cleaned prior to the TEM analysis in order to remove the ink and leave a region of vacuum near the region of interest, such that the protective layers do not interfere with the top layer. For the thin lamella preparation, we used a 16 kV operation voltage for the initial thinning and finished with a low beam energy in the range 5-8 kV to reduce FIB-induced damage. The thin foil was observed in STEM mode at 200 kV using a convergence semi-angle of ~ 18 mrad for the incident electron probe in a probe-corrected ThermoFisher Titan Themis microscope equipped with the Super-X detector system for Energy Dispersive X-ray (EDX) spectrometry. The Super-X system comprises four 30 mm² windowless silicon drift detectors placed at an elevation angle of 18° from the horizontal, with a symmetrical distribution along the beam axis and a 0.64 ± 0.06 sr total solid angle. EDX hypermaps were acquired with a pixel size less than 0.1 nm, a pixel dwell time of ~ 50 µs and for a total acquisition time of ~ 15 min. Hypermaps were processed in the Bruker Esprit v2.2 software using standard TEM recipes for background subtraction and Gaussian peak deconvolution. STEM HAADF images were acquired with a camera length of 86 mm, corresponding to inner and outer collection angles of the HAADF detector (Fischione Model M3000) of ~ 78 and 230 mrad. Gun lens and spot size values were selected to provide a probe current of approximately 30 pA. We paid attention to verify that there was no electron-beam-induced damage by comparing STEM images before and after each STEM/EDX acquisition.
Figure S1: HAADF image and EDX maps of the atomic species present in the device at point A of Fig. 2b. The limited interdiffusion of the Sb (red), Te (green) and Ti (yellow) atoms is visible in (a). The good crystallinity of the Sb2Te3 can be observed in (b). (c) shows Si on top of the device (right), which is due to the recipe used for the FIB lamella preparation. A small quantity of Co and Fe atoms is present in the Sb2Te3 layer, (e) and (i), which can also be due to the sample lamella preparation. The Ti and CoFe layers are shown to be oxidized in (g), even though the ferromagnetic layers seem to have undergone only a partial oxidation, with the presence of non-oxidized grains as indicated by the darker line in (g) at the position of the CoFe layer.
Figure S2: HAADF image and EDX maps of the atomic species present in the device at point B of Figure 2b in the main text. (a) shows the presence of several Sb2Te3 grains under the thick CoFe electrode, with different out-of-plane orientations. In (d), the TiOx barrier appears broken in the region indicated by the white arrow. This leads to an increased diffusion of Co (f) and Fe (h) atoms in the Sb2Te3 layer and to a local oxidation of the film.
Figure S3: (a) Comparison of the XRD patterns obtained for sample 1 (reference) and for sample 2 in the θ−2θ configuration. (b) c parameters extracted from the XRD measurements presented in Figure 4a of the main text and Figure S7a.

The broadening of the (00l) diffraction peaks as a function of the scattering angle 2θ has been studied for the three samples. The peak profiles are close to a Gaussian line and the FWHM of each peak has been obtained by a Gaussian fit, considering the Kα1 and Kα2 contributions. The instrument contribution to the measured FWHM is negligible. The FWHMs in samples 1 and 2 are identical within the error bars and they only slightly increase with 2θ. In sample 3 the FWHM steadily increases with 2θ. Qualitatively, this trend reveals the existence of a distribution of c values around the mean value determined above. In order to separate the non-uniform strain contribution from the size contribution, we have plotted δq² as a function of q², where δq = (2π/λ) cos(θ₀) Δ(2θ) and q = (4π/λ) sin(θ₀), with 2θ₀ the center of a diffraction peak and Δ(2θ) its FWHM in radians (Figure 4b in the main text, Williamson-Hall type plot assuming Gaussian profiles). For all films, the data can be fitted by a linear law that can be interpreted as δq² = ε²q² + (2π × 0.9/D)², where D is the size of a coherently diffracting domain in the direction normal to the film and ε is related to the random mean square width of the strain distribution.
Electron-Beam Lithography (EBL). For this purpose, the oxidized top Sb2Te3 layer is first removed through the EBL resist using a low-energy Ion Beam Etching (IBE), and a TiOx(1 nm)/CoFe(5 nm) hard mask is then deposited on the deoxidized Sb2Te3 surface by electron beam evaporation, without breaking the vacuum. All metal deposition steps are carried out using electron beam evaporation. The uncovered Sb2Te3 area is then removed by a soft IBE after lift-off of the hard mask. The ferromagnetic electrode is patterned in a second EBL step. A high-energy IBE is first applied to remove the CoFe oxide layer from the hard mask, before the deposition of the 20 nm thick CoFe electrode, without breaking vacuum. This procedure ensures a good quality of spin contact through TiOx between CoFe and Sb2Te3. The hard mask of CoFe on Sb2Te3 is finally partially removed by IBE, leaving an oxidized thin TiOx/CoFe layer on top of the region of Sb2Te3 which is not covered by the ferromagnetic electrode. A typical scanning microscopy image of the final device is shown in
Acknowledgements

We acknowledge support from the Institut Universitaire de France, from the Project
| [] |
[
"Fermion-boson duality in integrable quantum field theory",
"Fermion-boson duality in integrable quantum field theory"
] | [
"P Baseilhac ",
"V A Fateev \nOn leave of absence from L. D. Landau Institute for Theoretical Physics, ul. Kosygina 2\n117940MoscowRussia\n",
"\nLaboratoire de Physique Mathématique\nUniversité Montpellier II Place E. Bataillon\n34095MontpellierFrance\n"
] | [
"On leave of absence from L. D. Landau Institute for Theoretical Physics, ul. Kosygina 2\n117940MoscowRussia",
"Laboratoire de Physique Mathématique\nUniversité Montpellier II Place E. Bataillon\n34095MontpellierFrance"
] | [] | We introduce and study a one-parameter family of integrable quantum field theories. This family has a Lagrangian description in terms of massive Thirring fermions ψ, ψ† and charged bosons χ, χ̄ of the complex sinh-Gordon model coupled with BC_n affine Toda theory. Perturbative calculations, analysis of the factorized scattering theory and the Bethe ansatz technique are applied to show that under the duality transformation, which relates the weak and strong coupling regimes of the theory, the fermions ψ, ψ† transform into the bosons χ, χ̄ and vice versa. The scattering amplitudes of neutral particles in this theory coincide exactly with the S-matrix of particles in pure BC_n Toda theory, i.e. the contributions of charged bosons and fermions to these amplitudes exactly cancel each other. We describe and discuss the symmetry responsible for this compensation property. | 10.1142/s0217732398002989 | [
"https://arxiv.org/pdf/hep-th/9905221v1.pdf"
] | 3,155,869 | hep-th/9905221 | 8ab92f681ba9007a9b93666c1db47ba6946da123 |
Fermion-boson duality in integrable quantum field theory
May 1999
P Baseilhac
V A Fateev
On leave of absence from L. D. Landau Institute for Theoretical Physics, ul. Kosygina 2
117940MoscowRussia
Laboratoire de Physique Mathématique
Université Montpellier II Place E. Bataillon
34095MontpellierFrance
Fermion-boson duality in integrable quantum field theory
May 1999. arXiv:hep-th/9905221v1
We introduce and study a one-parameter family of integrable quantum field theories. This family has a Lagrangian description in terms of massive Thirring fermions ψ, ψ† and charged bosons χ, χ̄ of the complex sinh-Gordon model coupled with BC_n affine Toda theory. Perturbative calculations, analysis of the factorized scattering theory and the Bethe ansatz technique are applied to show that under the duality transformation, which relates the weak and strong coupling regimes of the theory, the fermions ψ, ψ† transform into the bosons χ, χ̄ and vice versa. The scattering amplitudes of neutral particles in this theory coincide exactly with the S-matrix of particles in pure BC_n Toda theory, i.e. the contributions of charged bosons and fermions to these amplitudes exactly cancel each other. We describe and discuss the symmetry responsible for this compensation property.
1 Introduction
Duality plays an important role in the analysis of statistical and quantum field theory (QFT) systems. It maps a weak coupling region of one theory to a strong coupling region of the other and makes it possible to use perturbative and semiclassical methods for the study of dual systems in different regions of the coupling constants. For example, the well-known duality between the sine-Gordon and massive Thirring models [1] plays a crucial role in the study of many two-dimensional quantum systems. The phenomenon of electric-magnetic duality in four-dimensional gauge theories, conjectured in [2] and developed in [3], opens the possibility for the non-perturbative analysis of the spectrum and the phase structure in supersymmetric Yang-Mills theory.
Known for many years, the phenomenon of duality in QFT still looks rather mysterious and needs further study. This analysis simplifies considerably for two-dimensional integrable relativistic theories. These QFTs, besides the Lagrangian formulation, also possess an unambiguous definition in terms of factorized scattering theory (FST). The FST, i.e. the explicit description of the spectrum of particles and their scattering amplitudes, contains all information about the QFT. These data permit one to use non-perturbative methods for the analysis of integrable QFT and make it possible in some cases to justify the existence of two different (dual) representations for the Lagrangian description of the theory. An interesting example of duality in two-dimensional integrable systems is the weak coupling - strong coupling flow from the affine Toda theories (ATT) to the same theories with the dual affine Lie algebra [4]. The duality in rank r non-simply laced ATTs coupled with the massive Thirring model was studied in [7]. It was shown there that the dual theory can be formulated as the non-linear sigma-model with Witten's Euclidean black hole metric [6] (complex sinh-Gordon theory) coupled with non-simply laced ATTs. The Lie algebras of these "dual" ATTs belong to the dual series of affine algebras but have the smaller rank r' = r − 1.
In this paper, in section 2, we consider a one-parameter family of integrable QFT, which has a Lagrangian formulation in terms of a complex fermion field (ψ, ψ†), a complex boson field (χ, χ̄) and n scalar fields ϕ = (ϕ_1, ..., ϕ_n). This QFT possesses a U(1) ⊗ U(1) symmetry generated by the fermion and boson charges Q_ψ and Q_χ. It can be considered as the BC_n ATT coupled with the massive Thirring and complex sinh-Gordon (CSG) [5] models. In the weak coupling region this QFT admits a perturbative analysis. There the spectrum of particles, besides the charged fermions (ψ, ψ†) and bosons (χ, χ̄), contains the scalar neutral particles M_a with the masses characteristic of the BC_n ATT. Perturbative calculations show that the classical mass ratios are not destroyed by quantum corrections and that the charged particles possess non-diagonal scattering. The scattering amplitudes of the charged particles can be expressed through a solution of the factorization (Yang-Baxter) equation. The perturbative analysis together with the U(1) ⊗ U(1) symmetry fixes this solution up to one parameter, which depends on the coupling constant.
In section 3 we introduce an external field A coupled with the charges Q_ψ and Q_χ. We use the standard Bethe ansatz (BA) technique to find the exact relation between the coupling constant and the parameter of the FST. We show that in the strong coupling regime the behavior of the fermions ψ in the external field is similar to the behavior of weakly coupled bosons χ, and vice versa. The resulting FST possesses the property of self-duality together with the fermion-boson transformation ψ ↔ χ. Another remarkable property of this QFT is the exact coincidence of the scattering amplitudes S_ab of the neutral particles M_a with the S-matrix of the pure BC_n ATT. It means that the fermion and boson contributions to these amplitudes exactly cancel each other. We describe and discuss the symmetry responsible for this compensation property.
2 Integrable deformation of BC_n Toda theory and factorized scattering theory
In this section we consider the QFT which can be described by a Dirac fermion ψ, a complex scalar field χ and n scalar fields ϕ = (ϕ_1, ..., ϕ_n) with the action :
A_n = \int d^2x\, \Big\{ \frac{1}{2}\, \frac{\partial_\mu \bar{\chi}\, \partial^\mu \chi}{1 + (\beta/2)^2 |\chi|^2} + i\bar{\psi}\gamma^\mu \partial_\mu \psi - \frac{g}{2}\, (\bar{\psi}\gamma^\mu \psi)^2 - \frac{M_0^2}{2} |\chi|^2 e^{\beta\varphi_1} - M_0\, \bar{\psi}\psi\, e^{-\beta\varphi_n} + \frac{1}{2} (\partial_\mu \varphi)^2 - \frac{M_0^2}{2\beta^2} \Big[ 2 e^{\beta\varphi_1} + 2 \sum_{i=1}^{n-1} e^{\beta(\varphi_{i+1} - \varphi_i)} + e^{-2\beta\varphi_n} \Big] \Big\} ,   (1)
where g/π = −β²/[4π(1 + β²/4π)]. The last term −(M_0²/2β²) exp(−2βϕ_n) in (1) plays the role of the usual contact counterterm which cancels the divergencies coming from the fermion loop. With this term the action (1) has the form of the BC_n affine Toda theory coupled with the massive Thirring and CSG [5] models. Following the notations of ref. [7] we denote this QFT as BC_n(ψ, χ, β). It possesses a U(1) ⊗ U(1) symmetry, generated by the charges

Q_\psi = \int dx\, \bar{\psi}\gamma^0\psi ; \qquad Q_\chi = \int dx\, \frac{\bar{\chi}\,\partial_0\chi - \chi\,\partial_0\bar{\chi}}{2i\,\big(1 + (\beta/2)^2 |\chi|^2\big)} .   (2)
The QFT (1) is integrable. It possesses local integrals P_s with odd (Lorentz) integer spins s. The explicit form of these integrals is not within the scope of this paper. An additional symmetry of the BC_n(ψ, χ, β) model, generated by conserved charges with half-integer spin s = n + 1/2, is described in the last section. We also checked at the tree level that multiparticle amplitudes factorize into two-particle ones.
For small β we can use perturbation theory for the analysis of the QFT (1). Its spectrum contains charged fermions (ψ, ψ†), charged bosons (χ, χ̄) with mass M and neutral particles M_a, a = 1, ..., n. In the one-loop approximation the mass ratios in the BC_n(ψ, χ, β) theories are not destroyed by the quantum corrections and have the classical values :
M_a = 2M \sin\Big( \frac{\pi a}{h} \Big) , \qquad a = 1, \dots, n ;   (3)
here and later h = 2n + 1. The scattering theory in the integrable theory (1) is completely defined by the two-particle S-matrix. Non-diagonal scattering is possible only between the particles ψ, ψ†, χ, χ̄ of equal mass. All other amplitudes S_aχ, S_aψ and S_ab are pure phases. The scattering matrix of charged particles with U(1) ⊗ U(1) symmetry (ψ → e^{iη}ψ, χ → e^{iξ}χ) and C, P, T invariance is characterized by the following amplitudes :
|\psi(\theta_1)\psi(\theta_2)\rangle_{in} = S_\psi(\theta)\, |\psi(\theta_2)\psi(\theta_1)\rangle_{out} ;
|\chi(\theta_1)\chi(\theta_2)\rangle_{in} = S_\chi(\theta)\, |\chi(\theta_2)\chi(\theta_1)\rangle_{out} ;
|\psi(\theta_1)\chi(\theta_2)\rangle_{in} = \gamma(\theta)\, |\chi(\theta_2)\psi(\theta_1)\rangle_{out} + \delta(\theta)\, |\psi(\theta_2)\chi(\theta_1)\rangle_{out} ;
|\psi^\dagger(\theta_1)\chi(\theta_2)\rangle_{in} = \alpha(\theta)\, |\chi(\theta_2)\psi^\dagger(\theta_1)\rangle_{out} + \beta(\theta)\, |\psi^\dagger(\theta_2)\chi(\theta_1)\rangle_{out} ;
|\psi(\theta_1)\psi^\dagger(\theta_2)\rangle_{in} = T_\psi(\theta)\, |\psi^\dagger(\theta_2)\psi(\theta_1)\rangle_{out} + R_\psi(\theta)\, |\psi(\theta_2)\psi^\dagger(\theta_1)\rangle_{out}   (4)
\qquad\qquad + \mu(\theta)\, |\chi^\dagger(\theta_2)\chi(\theta_1)\rangle_{out} + \nu(\theta)\, |\chi(\theta_2)\chi^\dagger(\theta_1)\rangle_{out} ;
|\chi(\theta_1)\chi^\dagger(\theta_2)\rangle_{in} = T_\chi(\theta)\, |\chi^\dagger(\theta_2)\chi(\theta_1)\rangle_{out} + R_\chi(\theta)\, |\chi(\theta_2)\chi^\dagger(\theta_1)\rangle_{out} + \mu(\theta)\, |\psi^\dagger(\theta_2)\psi(\theta_1)\rangle_{out} + \nu(\theta)\, |\psi(\theta_2)\psi^\dagger(\theta_1)\rangle_{out} .

All the amplitudes (4) depend on the rapidity difference θ = θ_1 − θ_2. They satisfy the following crossing symmetry condition :
S_\psi(i\pi - \theta) = T_\psi(\theta) ; \quad S_\chi(i\pi - \theta) = T_\chi(\theta) ; \quad R_\psi(i\pi - \theta) = R_\psi(\theta) ;   (5)
R_\chi(i\pi - \theta) = R_\chi(\theta) ; \quad \alpha(i\pi - \theta) = \gamma(\theta) ; \quad \beta(i\pi - \theta) = \mu(\theta) ; \quad \delta(i\pi - \theta) = \nu(\theta) .
The perturbative expansion of these amplitudes to first order in β² has the form :
-S_\psi(\theta) = 1 + \frac{i\beta^2}{4h}\Big( h\coth\frac{h\theta}{2} - \frac{2}{\sinh\theta} \Big) + O(\beta^4) = -T_\psi(i\pi - \theta) ;
S_\chi(\theta) = 1 - \frac{i\beta^2}{4h}\Big( h\coth\frac{h\theta}{2} + \frac{2}{\sinh\theta} \Big) + O(\beta^4) = T_\chi(i\pi - \theta) ;
R_\psi(\theta) = R_\chi(\theta) = -\frac{i\beta^2}{2\sinh(h\theta)} + O(\beta^4) ;
\alpha(\theta) = \gamma(\theta) = 1 - \frac{i\beta^2}{2h\sinh\theta} + O(\beta^4) ;   (6)
\beta(\theta) = \delta(\theta) = \mu(i\pi - \theta) = \nu(i\pi - \theta) = -\frac{i\beta^2}{4\sinh(h\theta/2)} + O(\beta^4) .
The factorization property imposes non-trivial limitations on the scattering amplitudes. They should satisfy the functional Yang-Baxter (factorization) relations. There are two types of C, P, T invariant solutions of the Yang-Baxter equations with U(1) ⊗ U(1) symmetry. The first one corresponds to the case S_ψ = S_χ and is expressed through the direct product of two sine-Gordon S-matrices. It follows from eq. (6) that in the BC_n(ψ, χ, β) model the other case, S_ψ ≠ S_χ, is realized. The solution of the factorization equation with this property possesses one arbitrary parameter x (besides the scale of θ) and has the form :
S_\psi(\theta) = -\frac{\sinh(\lambda\theta + i\pi x)}{\sinh(\lambda\theta)}\, Y_n(\theta) ; \quad S_\chi(\theta) = \frac{\sinh(\lambda\theta - i\pi x)}{\sinh(\lambda\theta)}\, Y_n(\theta) ;
T_\psi(\theta) = -\frac{\cosh(\lambda\theta - i\pi x)}{\cosh(\lambda\theta)}\, Y_n(\theta) ; \quad T_\chi(\theta) = \frac{\cosh(\lambda\theta + i\pi x)}{\cosh(\lambda\theta)}\, Y_n(\theta) ;
R_\psi(\theta) = R_\chi(\theta) = -\frac{2i\sin\pi x}{\sinh(2\lambda\theta)}\, Y_n(\theta) ; \quad \alpha(\theta) = \gamma(\theta) = Y_n(\theta) ;   (7)
\beta(\theta) = \delta(\theta) = -\frac{i\sin\pi x}{\sinh(\lambda\theta)}\, Y_n(\theta) ; \quad \mu(\theta) = \nu(\theta) = \frac{p\sin\pi x}{\cosh(\lambda\theta)}\, Y_n(\theta) , \quad p^2 = 1 .
The solution (7) is consistent with crossing symmetry (5) only if
\lambda = \frac{h}{2} = n + \frac{1}{2} , \qquad p = (-1)^{n+1} .   (8)
Function Y n (θ) satisfies the unitarity and crossing symmetry relations :
Y_n(\theta)\, Y_n(-\theta) = \frac{\sin^2\pi x}{\sinh^2\theta + \sin^2\pi x} ; \qquad Y_n(\theta) = Y_n(i\pi - \theta) .   (9)
The minimal solution of eqs. (9) is known [8] and has the form :
Y_n(\theta) = R(\theta)\, R(i\pi - \theta) ; \qquad R(\theta) = \prod_{l=0}^{\infty} \frac{F_l(\theta, x)\, F_l(\theta, 1-x)}{F_l(\theta, 0)\, F_l(\theta, 1)} ; \qquad F_l(\theta, x) = \frac{\Gamma\big( \frac{h\theta}{4\pi i} + \frac{hl}{2} + x \big)}{\Gamma\big( \frac{h\theta}{4\pi i} + \frac{h}{2}(l + \frac{1}{2}) + x \big)} .   (10)
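As a numerical sanity check, the sketch below evaluates R(θ) by truncating the Gamma-function product at a finite number of terms and tests the unitarity and crossing relations (9). It relies on scipy's complex loggamma; the values h = 5 (n = 2) and x = 0.3 are purely illustrative, and the truncation makes the agreement approximate.

```python
import numpy as np
from scipy.special import loggamma

h, x = 5.0, 0.3                     # n = 2, illustrative x(beta)

def logF(theta, a, ls):
    # log F_l(theta, a) of eq. (10), vectorized over the index array ls
    z = h * theta / (4j * np.pi)
    return loggamma(z + h * ls / 2 + a) - loggamma(z + h * (ls + 0.5) / 2 + a)

def R(theta, L=400):
    ls = np.arange(L)
    s = (logF(theta, x, ls) + logF(theta, 1 - x, ls)
         - logF(theta, 0.0, ls) - logF(theta, 1.0, ls))
    return np.exp(s.sum())

def Y(theta):
    return R(theta) * R(1j * np.pi - theta)

th = 0.7
lhs = Y(th) * Y(-th)
rhs = np.sin(np.pi * x) ** 2 / (np.sinh(th) ** 2 + np.sin(np.pi * x) ** 2)
print(lhs, rhs)                     # unitarity relation (9)
print(Y(th), Y(1j * np.pi - th))    # crossing relation (9)
```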
In particular the amplitudes S_ψ and S_χ, which are pure phases, can be represented as :

-S_\psi(\theta) = \exp\Big[ i \int_{-\infty}^{\infty} \frac{d\omega}{\omega}\, \sin(\omega\theta)\, \frac{\sinh(\pi x \omega/h)\, \cosh[\pi\omega(h + 2(1-x))/2h]}{\sinh(\pi\omega/h)\, \cosh(\pi\omega/2)} \Big] ;   (11)

-S_\chi(\theta) = \exp\Big[ i \int_{-\infty}^{\infty} \frac{d\omega}{\omega}\, \sin(\omega\theta)\, \frac{\sinh(\pi(1-x)\omega/h)\, \cosh[\pi\omega(h + 2x)/2h]}{\sinh(\pi\omega/h)\, \cosh(\pi\omega/2)} \Big] .

We note that the function Y_n(θ) is invariant under the transformation x → 1 − x. The S-matrix (7) is invariant under this transformation together with the fermion-boson transformation ψ ↔ χ.
For small x function Y n (θ) has the expansion :
Y_n(\theta) = 1 - \frac{2\pi i x}{h \sinh\theta} + O(x^2) .   (12)
We can see that small x expansion of amplitudes (7) coincides with perturbative expansion (6) if
x = \frac{\beta^2}{4\pi}\, \big( 1 + O(\beta^2) \big) .   (13)
The amplitudes (7) possess poles on the physical strip 0 < Im θ < π, located at the points θ_a, where :
\theta_a = i\pi\Big( 1 - \frac{2a}{h} \Big) , \qquad a = 1, \dots, n .   (14)
These poles correspond to the neutral bound states M a with masses (3). The scattering amplitudes S aψ , S aχ , S ab including these particles can be obtained by the usual fusion procedure. For this purpose it is convenient to represent the particles M a in the form :
M_a(\theta) = \psi\big(\theta + \tfrac{\theta_a}{2}\big)\, \psi^\dagger\big(\theta - \tfrac{\theta_a}{2}\big) + \psi^\dagger\big(\theta + \tfrac{\theta_a}{2}\big)\, \psi\big(\theta - \tfrac{\theta_a}{2}\big) + (-1)^a \Big[ \chi\big(\theta + \tfrac{\theta_a}{2}\big)\, \chi^\dagger\big(\theta - \tfrac{\theta_a}{2}\big) + \chi^\dagger\big(\theta + \tfrac{\theta_a}{2}\big)\, \chi\big(\theta - \tfrac{\theta_a}{2}\big) \Big] ,   (15)
and use for the particles ψ(θ), χ(θ) the commutation relations with S-matrix (7). In this way we obtain the following expression for the amplitudes S aψ , S aχ :
S_{a\psi}(\theta) = S_{a\chi}(\theta) = S_\psi\big(\theta + \tfrac{\theta_a}{2}\big)\, T_\psi\big(\theta - \tfrac{\theta_a}{2}\big) + R_\psi\big(\theta + \tfrac{\theta_a}{2}\big)\, R_\psi\big(\theta - \tfrac{\theta_a}{2}\big) + 2\beta\big(\theta + \tfrac{\theta_a}{2}\big)\, \mu\big(\theta - \tfrac{\theta_a}{2}\big) .   (16)
In particular the amplitudes S 1ψ = S 1χ can be written as :
S_{1\psi}(\theta) = S_{1\chi}(\theta) = \frac{\sinh\theta - i\cos(\pi(1 - 2x)/h)}{\sinh\theta + i\cos(\pi(1 - 2x)/h)} \cdot \frac{\sinh\theta + i\cos(\pi/h)}{\sinh\theta - i\cos(\pi/h)} .   (17)
To describe all two particle amplitudes we introduce the notations [4] :
(z) = \frac{\sinh\big( \frac{\theta}{2} + \frac{i\pi z}{2h} \big)}{\sinh\big( \frac{\theta}{2} - \frac{i\pi z}{2h} \big)} , \qquad \{z\} = \frac{(z-1)(z+1)}{(z-1+2x)(z+1-2x)} ,   (18)

then

S_{a\psi}(\theta) = S_{a\chi}(\theta) = \prod_{p=1}^{a} \Big\{ \frac{h}{2} + 2p - a - 1 \Big\} .   (19)
The scattering amplitudes of neutral particles M a have the form :
S_{ab}(\theta) = \prod_{p = |a-b|+1,\ \mathrm{step}\ 2}^{a+b-1} \{p\}\, \{h-p\} .   (20)
The amplitudes (19), (20) are invariant under the transformation x → 1 − x. They are also in agreement with the first order of perturbation theory if x has the form (13). To find the exact relation between the parameter x and the coupling constant β in the action (1) we have to use a non-perturbative approach to the QFT BC_n(ψ, χ, β).
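The building blocks (18)-(20) are straightforward to evaluate numerically. The sketch below assembles S_ab(θ) from the blocks {z}, checks that it is a pure phase for real θ, and lists the bound-state masses (3); the values h = 5 and x = 0.3 are again illustrative.

```python
import numpy as np

n = 2
h = 2 * n + 1
x = 0.3                                  # illustrative value of x(beta)

def block(z, theta):                     # (z) of eq. (18)
    return (np.sinh(theta / 2 + 1j * np.pi * z / (2 * h))
            / np.sinh(theta / 2 - 1j * np.pi * z / (2 * h)))

def curly(z, theta):                     # {z} of eq. (18)
    return (block(z - 1, theta) * block(z + 1, theta)
            / (block(z - 1 + 2 * x, theta) * block(z + 1 - 2 * x, theta)))

def S_ab(a, b, theta):                   # eq. (20), product in steps of 2
    s = 1.0 + 0j
    for p in range(abs(a - b) + 1, a + b, 2):
        s *= curly(p, theta) * curly(h - p, theta)
    return s

theta = 0.9
for a in range(1, n + 1):
    for b in range(a, n + 1):
        amp = S_ab(a, b, theta)
        print(a, b, abs(amp), np.angle(amp))   # |S_ab| = 1: a pure phase

M = 1.0
print([2 * M * np.sin(np.pi * a / h) for a in range(1, n + 1)])  # masses (3)
```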
3 Non-perturbative consideration
The QFT (1) possesses the symmetry U(1) ⊗ U(1) generated by the charges Q_ψ and Q_χ (2) and admits the introduction of external fields A_ψ and A_χ coupled with these charges. For simplicity we consider the configurations with only one non-zero field, A_ψ or A_χ, which we denote as A. The Hamiltonian H_ψ (H_χ) in the external field A has an additional term equal to −AQ_ψ (−AQ_χ) :
H_\psi = H_0 - A Q_\psi ; \qquad H_\chi = H_0 - A Q_\chi ,   (21)
where H_0 is the Hamiltonian of the QFT (1). To find the exact relation between x and β we calculate the specific ground state energy E_ψ(A) (E_χ(A)) in the limit A → ∞ from the Hamiltonian and from the S-matrix data. The calculation of these asymptotics from the Hamiltonian (21) follows exactly the lines of ref. [7], where similar calculations were done, so we only reproduce here the result :
E_\psi(\beta, A \to \infty) = -\frac{A^2 (1 + \beta^2/4\pi)}{2\pi} , \qquad E_\chi(\beta, A \to \infty) = -\frac{2 A^2 (1 + \beta^2/4\pi)}{\beta^2} .   (22)
We now calculate the same values from the S-matrix using the BA approach (see for example refs. [9]). We consider the case corresponding to the ground state energy E_ψ, taking into account that for the function E_χ the whole consideration differs only by the notations (ψ → χ). Due to the additional term −AQ_ψ, every positively (negatively) charged particle ψ(θ) (ψ†(θ)) acquires the additional energy A (−A). For A > M the ground state contains a sea of positively charged particles ψ(θ), which fill all possible states inside some interval −B < θ < B. The distribution ε_ψ(θ) of particles within this interval is determined by their scattering amplitude S_ψ(θ). The specific ground state energy can be expressed through the function ε_ψ(θ) as :
E_\psi(A) - E_\psi(0) = -\frac{M}{2\pi} \int_{-B}^{B} \cosh\theta\; \epsilon_\psi(\theta)\, d\theta ,   (23)
where the non-negative function ε_ψ(θ) satisfies, in the interval −B < θ < B, the BA equation :
\int_{-B}^{B} \tilde{K}_\psi(\theta - \theta')\, \epsilon_\psi(\theta')\, d\theta' = A - M \cosh\theta ,   (24)
where the kernel K̃_ψ(θ) in (24) is related to the ψψ scattering phase by :

\tilde{K}_\psi(\theta) = \delta(\theta) - \frac{1}{2\pi i}\, \frac{d}{d\theta} \log S_\psi(\theta) ,   (25)
and the parameter B is determined by the boundary conditions ε_ψ(±B) = 0. The Fourier transform K_ψ(ω) of the kernel (25) can be obtained from eq. (11) and has the form :
K_\psi(\omega) = \frac{\sinh[\pi\omega(1-x)/h]\; \cosh[\pi\omega(h+2x)/2h]}{\cosh(\pi\omega/2)\; \sinh(\pi\omega/h)} .   (26)
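The linear integral equation (24) is easy to solve numerically once the kernel is transformed back to rapidity space. A minimal sketch: discretize θ on a uniform grid, build K̃(θ−θ') = δ(θ−θ') + k(θ−θ') from the Fourier transform (26), solve the linear system, and tune B so that ε_ψ(±B) ≈ 0. All numerical values (A, M, h, x, grid sizes) below are illustrative.

```python
import numpy as np

h, x = 5.0, 0.3
A, M = 3.0, 1.0

def K_omega(w):
    # Fourier transform (26); note K_omega(0) = 1 - x
    w = np.where(np.abs(w) < 1e-9, 1e-9, w)
    return (np.sinh(np.pi * w * (1 - x) / h)
            * np.cosh(np.pi * w * (h + 2 * x) / (2 * h))
            / (np.cosh(np.pi * w / 2) * np.sinh(np.pi * w / h)))

def k_theta(t, wmax=60.0, nw=3000):
    # k(theta) = (1/pi) * int_0^inf (K_omega(w) - 1) cos(w*theta) dw
    w = np.linspace(1e-6, wmax, nw)
    return (((K_omega(w) - 1.0) * np.cos(np.outer(t, w))).sum(axis=1)
            * (w[1] - w[0]) / np.pi)

def solve(B, npts=161):
    th = np.linspace(-B, B, npts)
    dth = th[1] - th[0]
    kvals = k_theta(dth * np.arange(-(npts - 1), npts))
    i = np.arange(npts)
    kmat = np.eye(npts) + kvals[i[:, None] - i[None, :] + npts - 1] * dth
    return th, dth, np.linalg.solve(kmat, A - M * np.cosh(th))

# pick B so that eps(+-B) ~ 0, then evaluate eq. (23)
B = min(np.linspace(1.0, 3.0, 41), key=lambda b: abs(solve(b)[2][-1]))
th, dth, eps = solve(B)
E = -M / (2 * np.pi) * np.sum(np.cosh(th) * eps) * dth
print(f"B ~ {B:.2f},  E_psi(A) - E_psi(0) ~ {E:.4f}")
```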
For the function E_χ(β, A) we obtain exactly the same equations, with the function ε_χ(θ) satisfying eq. (24), where the kernel K̃_χ(θ) is related to the amplitude S_χ by eq. (25). The amplitudes S_ψ and S_χ (11) are connected by the transformation x → 1 − x. It means that :
K_\chi(\omega) = \frac{\sinh(\pi\omega x/h)\; \cosh[\pi\omega(h+2(1-x))/2h]}{\cosh(\pi\omega/2)\; \sinh(\pi\omega/h)} .   (27)
The main term of the asymptotics of the function E_ψ (E_χ) at A → ∞ can be expressed explicitly through the kernel K_ψ(ω) (K_χ(ω)) by the relation [10] :
E_\psi(\beta, A \to \infty) = -\frac{A^2}{2\pi K_\psi(0)} = -\frac{A^2}{2\pi(1-x)} ; \qquad E_\chi(\beta, A \to \infty) = -\frac{A^2}{2\pi K_\chi(0)} = -\frac{A^2}{2\pi x} .   (28)
Comparing eqs. (22) and (28) we find the exact value of the parameter x(β) :
x(\beta) = \frac{\beta^2}{4\pi + \beta^2} = 1 - x\Big( \frac{4\pi}{\beta} \Big) .   (29)
The term E_ψ(0) = E_0 in eq. (23) is the bulk vacuum energy of the QFT (1). It can also be expressed through the kernel K_ψ(ω) (K_χ(ω)) by the relation :
E_0 = -\frac{M^2}{8} \Big[ K_{\psi,\chi}(\omega) \cosh\frac{\pi\omega}{2} \Big]^{-1} \Big|_{\omega = i} = \frac{M^2 \sin(\pi/h)}{8 \sin(\pi x/h)\, \sin(\pi(1-x)/h)} .   (30)
The bulk vacuum energy is symmetric under the transformation β → 4π/β. It follows from eqs. (26), (27) and (29) that this transformation relates the functions E_ψ and E_χ :
E_\psi(\beta, A) = E_\chi\Big( \frac{4\pi}{\beta},\, A \Big) .   (31)
It means that the strong coupling behavior (β >> 1) of the fermions ψ, ψ† in the external field A coincides with the weak coupling behavior (β << 1) of the bosons χ, χ̄, and vice versa.
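Both statements are easy to verify numerically from (29) and (30), as in the sketch below (M = 1 and β = 1.3 are illustrative): x(4π/β) = 1 − x(β), and E_0 is unchanged under β → 4π/β.

```python
import numpy as np

h = 5.0                                  # h = 2n + 1 with n = 2

def x_of(beta):                          # eq. (29)
    return beta ** 2 / (4 * np.pi + beta ** 2)

def E0(beta, M=1.0):                     # eq. (30)
    x = x_of(beta)
    return (M ** 2 * np.sin(np.pi / h)
            / (8 * np.sin(np.pi * x / h) * np.sin(np.pi * (1 - x) / h)))

beta = 1.3
print(x_of(beta) + x_of(4 * np.pi / beta))   # = 1, i.e. x -> 1 - x
print(E0(beta), E0(4 * np.pi / beta))        # equal: E0 is self-dual
```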
In the weak coupling region the behavior of the functions E_ψ and E_χ is rather different. In this limit K_ψ(ω) = 1 + O(β²) and the function E_ψ(A) can be easily calculated. As a function of β it has a smooth behavior at β << 1 and can be written in parametric form as :
E_\psi(A) - E(0) = -\frac{A^2}{2\pi} \Big[ 1 - 2u + 2u(1-u) \log\frac{u}{1-u} \Big] + O(\beta^2) , \qquad \frac{M^2}{(2A)^2} = u(1-u) .   (32)
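The parametric form (32) is convenient because the branch with u → 0 as A → ∞ can be inverted in closed form, as in this short sketch (M = 1, units absorbed):

```python
import numpy as np

def E_psi_weak(A, M=1.0):
    # invert M^2/(2A)^2 = u(1-u) on the branch u -> 0 as A -> infinity
    u = 0.5 * (1.0 - np.sqrt(1.0 - (M / A) ** 2))
    return -(A ** 2 / (2 * np.pi)) * (1 - 2 * u
                                      + 2 * u * (1 - u) * np.log(u / (1 - u)))

for A in (1.01, 2.0, 10.0, 100.0):
    print(A, E_psi_weak(A))   # vanishes at threshold, -> -A^2/(2 pi) for A >> M
```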
The kernel K_χ(ω) in the weak coupling limit is not trivial and has the form :
K_\chi(\omega) \simeq \frac{\beta^2 \omega}{4h}\; \frac{\cosh[\pi\omega(h+2)/2h]}{\cosh(\pi\omega/2)\; \sinh(\pi\omega/h)} .   (33)
The BA equations with the kernel (33) were studied in [7], where the function E_χ(A) corresponding to this kernel was calculated. It can also be written in parametric form:
E_\chi(A) - E(0) = -\frac{2A^2}{\beta^2} \Big\{ 1 - 2u - h\, u(1-u) \Big[ 1 - \Big( \frac{u}{1-u} \Big)^{2/h} \Big] \Big\} + O(1) , \qquad \frac{M^2}{(2A)^2} = u(1-u) \Big( \frac{u}{1-u} \Big)^{2/h} .   (34)
It was shown in [7] that the function (34) coincides with the classical minimum of the functional H_χ (21), where in the weak coupling limit we can neglect the fermionic terms. This gives us an additional test relating the QFT (1) and the FST (7). The singular behavior (O(1/β²)) of the function E_χ reflects the instability of weakly coupled bosons with respect to the introduction of the external field. The threshold behavior (Δ = (A − M)/M << 1) of the functions (32) and (34) is also rather different. The first function has there a singularity ∼ Δ^{3/2}, characteristic of fermionic particles. The second one exhibits there a behavior ∼ Δ², characteristic of the weak coupling limit of a bosonic theory. It follows from eq. (31) that the properties of the particles ψ and χ change drastically under the flow from weak to strong coupling (fermions transform into bosons and vice versa).
The scattering theory (7) of the charged particles ψ and χ is invariant under the transformation β → 4π/β together with the fermion-boson transformation ψ ↔ χ. The amplitudes S_aψ and S_aχ (19) are invariant under this transformation. The amplitudes S_ab(θ) of scattering of the neutral particles M_a(θ) are also self-dual. The remarkable property of the amplitudes S_ab(θ) is that, with the function x(β) defined by eq. (29), they coincide exactly with the scattering matrix of particles in the pure BC_n ATT proposed in ref. [4]. The pure BC_n ATT can be obtained from the QFT (1) by the reduction ψ = χ = 0. It means that all contributions to the amplitudes S_ab of neutral particles coming from the charged particles χ and ψ exactly cancel each other. This cancellation (which can be checked in perturbation theory) cannot be accidental. There should be a symmetry responsible for this compensation property. We discuss this symmetry in the next section.
4 Concluding remarks
The symmetry responsible for the exact compensation of the fermion and boson contributions to the amplitudes of neutral particles should relate the ψ and χ particles. This is possible only if the symmetry is generated by conserved charges with half-integer spins. It should also be consistent with the FST (7). One can check that this scattering matrix commutes with the symmetry algebra T_n, generated by the charges Q_± (Q̄_±) with (Lorentz) spin s equal to h/2 = n + 1/2 (−n − 1/2), and the "fermion number" F. The charges Q_±, Q̄_±, F possess the following commutation relations :
Q_\pm^2 = \bar{Q}_\pm^2 = \{ Q_+, \bar{Q}_- \} = \{ Q_-, \bar{Q}_+ \} = 0 ; \qquad [F, Q_\pm] = \pm Q_\pm ; \qquad [F, \bar{Q}_\pm] = \mp \bar{Q}_\pm ;   (35)
\{ Q_+, Q_- \} = P_h ; \qquad \{ \bar{Q}_+, \bar{Q}_- \} = \bar{P}_h ,
where P_h (P̄_h) is the right (left) component of the local integral of motion with spin h. This local charge acts on asymptotic (in, out) states of the charged particles ψ(θ) and χ(θ) with the eigenvalue λ²(θ) (λ̄²(θ)) :
\lambda(\theta) = (M e^{\theta})^{h/2} ; \qquad \bar{\lambda}(\theta) = (M e^{\theta})^{-h/2} .   (36)
The action of the operators Q_± and e^{iπF} on the one-particle states is defined as :
Q_+ |\psi(\theta)\rangle = \lambda(\theta)\, |\chi(\theta)\rangle ; \qquad Q_+ |\chi(\theta)\rangle = \lambda(\theta)\, |\psi^\dagger(\theta)\rangle ;
Q_- |\psi^\dagger(\theta)\rangle = \lambda(\theta)\, |\chi^\dagger(\theta)\rangle ; \qquad Q_- |\chi^\dagger(\theta)\rangle = \lambda(\theta)\, |\psi(\theta)\rangle ;   (37)
e^{i\pi F} |\chi(\theta)\rangle = e^{i\pi x} |\chi(\theta)\rangle ; \qquad e^{i\pi F} |\psi(\theta)\rangle = -e^{-i\pi x} |\psi(\theta)\rangle ;
e^{i\pi F} |\chi^\dagger(\theta)\rangle = e^{-i\pi x} |\chi^\dagger(\theta)\rangle ; \qquad e^{i\pi F} |\psi^\dagger(\theta)\rangle = -e^{i\pi x} |\psi^\dagger(\theta)\rangle .
The operators Q̄_∓ act in the same way as Q_± with the substitution λ → λ̄. The action of the charges Q_± (Q̄_∓) on many-particle states can be defined from the following co-product rules :
\Delta(Q_\pm) = Q_\pm \otimes I + e^{\pm i\pi F} \otimes Q_\pm , \qquad \Delta(\bar{Q}_\mp) = \bar{Q}_\mp \otimes I + e^{\pm i\pi F} \otimes \bar{Q}_\mp .   (38)
Using eqs. (37), (38) and the condition that the algebra T_n commutes with the S-matrix (4), we can derive all the ratios of the amplitudes (7). The eigenvalues of the operator exp(iπF) on the states |χ⟩ (|ψ⟩) move from 1 (−1) at β = 0 to −1 (1) at β = ∞. The particles χ (ψ) have fractional values of the fermion number, equal to x(β) (1 − x(β)). At the "self-dual" point β² = 4π the fermion numbers of the particles ψ and χ are equal to 1/2. At this point we have the symmetry ψ ↔ χ. The amplitudes S_ψ and S_χ coincide and the S-matrix (7) of the QFT BC_n(ψ, χ, √4π) can be expressed as the direct product of two-particle S-matrices of the sine-Gordon model :
S(\theta) = \prod_{a=0}^{n} \frac{\sinh\theta - i \sin(\pi a/h)}{\sinh\theta + i \sin(\pi a/h)}\; S_n(\theta) \otimes S_n(\theta) ,   (39)
where S_n(θ) is the S-matrix of the sine-Gordon model corresponding to the coupling constant β²_SG/8π = 2/(2n+3). The FST (39) coincides with the S-matrix of the C^{(1)}_{n+1}(ψ_{1,2}, √4π) model considered in [7]. It means that the corresponding QFTs also coincide at the point β² = 4π.
To construct the currents T^{(±)}_{n+3/2} (T̄^{(∓)}_{n+3/2}) generating the conserved charges Q_± (Q̄_∓), it is convenient to rewrite the action (1) in terms of other fields. We introduce the scalar field Φ related to the fields ψ, ψ† by the usual bosonization rules [1], and the fields Φ_0, ϕ_0 which describe the dual representation of the CSG model [7,11]. In terms of these fields the action (1) can be represented as :
A_n = \int d^2x\, \Big[ \frac{1}{2} (\partial_\mu \Phi_0)^2 + \frac{1}{2} (\partial_\mu \varphi_0)^2 - M_0 \cos(\alpha' \Phi_0)\, e^{\beta' \varphi_0} + \frac{1}{2} (\partial_\mu \Phi)^2 - M_0 \cos(\alpha \Phi)\, e^{-\beta\varphi_n} + \frac{1}{2} (\partial_\mu \varphi)^2 - \frac{M_0^2}{2\beta^2} \sum_{i=0}^{n-1} e^{\beta(\varphi_{i+1} - \varphi_i)} \Big] ,   (40)
where parameters α, α ′ , β ′ are defined by the relations :
\alpha^2 - \beta^2 = 4\pi , \qquad \alpha'^2 - \beta'^2 = 4\pi , \qquad \beta' = \frac{4\pi}{\beta} .   (41)
The first three terms in (40), which we denote as A_σ(Φ_0, ϕ_0), correspond to the first (sigma-model) term in the action (1). With the action A_σ(Φ_0, ϕ_0) the fields χ, χ̄ and χ̄χ can be represented as :
\chi \sim \exp\Big( \frac{2i\pi}{\alpha'} \Phi_0 - \frac{2\pi}{\beta'} \varphi_0 \Big) ; \qquad \bar{\chi} \sim \exp\Big( -\frac{2i\pi}{\alpha'} \Phi_0 - \frac{2\pi}{\beta'} \varphi_0 \Big) ;   (42)
\bar{\chi}\chi - \mathrm{const} \sim \exp\Big( -\frac{4\pi}{\beta'} \varphi_0 \Big) = \exp(-\beta\varphi_0) .
The action (40) is not suitable for ordinary perturbation theory in the coupling constant β; however, with this action the QFT BC_n(ψ, χ, √4π) can be treated as a perturbed conformal field theory. Using this approach we introduce the fields φ_0 (φ̄_0), φ (φ̄), which are the right (left) chiral components of the fields Φ_0, Φ, and the right (left) derivatives ∂ = ∂_0 + ∂_1 (∂̄ = ∂_0 − ∂_1). A straightforward calculation shows that the spin n + 3/2 (−(n + 3/2)) currents T^{(±)}_{n+3/2} (T̄^{(∓)}_{n+3/2}), which generate the conserved charges Q_± (Q̄_∓), can be written in the form :
T^{(\pm)}_{n+3/2} = \exp\Big( \pm\frac{4\pi i}{\alpha} \phi \Big)\, (\kappa\partial - \partial\varphi_n) \cdots (\kappa\partial - \partial\varphi_1)(\kappa\partial - \partial\varphi_0)\, \exp\Big( \pm\frac{4\pi i}{\alpha'} \phi_0 \Big) ,   (43)

where \kappa = \frac{\beta}{4\pi} + \frac{1}{\beta}.
The currents T̄^{(∓)}_{n+3/2} can be obtained from eq. (43) by the substitution φ_0 → φ̄_0, φ → φ̄, ∂ → ∂̄.
The fermion number F can be expressed through the charges Q ψ and Q χ (2) as :
F = x(\beta)\, Q_\chi + \big( 1 - x(\beta) \big)\, Q_\psi .   (44)
Together with the charges Q_± and Q̄_∓ it generates the algebra T_n. It follows from the representation (15) for the neutral particles M_a(θ) and eq. (36) that all these particles are annihilated by the local conserved charge P_h :

P_h\, |M_{a_1}(\theta_1) \dots M_{a_n}(\theta_n)\rangle_{in(out)} = 0 .
The neutral boson sector N of the theory, which contains only the asymptotic particles M_a(θ), is defined by the kernel of the conserved charge P_h. The restriction of the scattering theory to the neutral sector N defines a self-consistent FST. The QFT corresponding to this FST possesses a Lagrangian description with a local action, which can be obtained from the action (1) by the reduction ψ = χ = 0. At the end we note that, besides this reduction, the QFT (1) possesses several other integrable reductions. For example: χ = 0, ψ ≠ 0; ψ = 0, χ ≠ 0; ψ = 0, χ is real; χ = 0, ψ is a Majorana fermion; χ is real, ψ is a Majorana fermion; and so on. The FSTs corresponding to these reductions are described in refs. [4,7,8].
On leave of absence from L. D. Landau Institute for Theoretical Physics, ul. Kosygina 2, 117940 Moscow, Russia
[1] S. Coleman, Phys. Rev. D 11 (1975) 2088; S. Mandelstam, Phys. Rev. D 11 (1975) 3026.
[2] C. Montonen and D. Olive, Phys. Lett. B 78 (1977) 117; P. Goddard, J. Nuyts and D. Olive, Nucl. Phys. B 125 (1977) 1.
[3] N. Seiberg and E. Witten, Nucl. Phys. B 426 (1994) 19; Nucl. Phys. B 431 (1994) 484.
[4] H.W. Braden, E. Corrigan, P.E. Dorey and R. Sasaki, Nucl. Phys. B 338 (1990) 689; G.W. Delius, M.T. Grisaru and D. Zanon, Nucl. Phys. B 382 (1992) 365.
[5] K. Pohlmayer, Comm. Math. Phys. 46 (1976) 207; F. Lund and T. Regge, Phys. Rev. D 14 (1976) 1524; H.J. de Vega and J.M. Maillet, Phys. Lett. B 101 (1981) 302; Phys. Rev. D 28 (1983) 1441; C. Bonneau and F. Delduc, Nucl. Phys. B 245 (1985) 561.
[6] E. Witten, Phys. Rev. D 44 (1991) 314.
[7] V.A. Fateev, Nucl. Phys. B 479 (1996) 594.
[8] G.W. Delius, M.T. Grisaru and D. Zanon, Phys. Lett. B 256 (1991) 164; C. Destri, H.J. de Vega and V.A. Fateev, Phys. Lett. B 256 (1991) 173.
[9] G. Japaridze, A. Nersesyan and P. Wieghmann, Nucl. Phys. B 230 (1984) 511; P. Hasenfratz, M. Maggiore and F. Niedermayer, Phys. Lett. B 245 (1990) 522; Al. Zamolodchikov, Int. J. Mod. Phys. A 10 (1995) 1125.
[10] V.A. Fateev, E. Onofri and Al. Zamolodchikov, Nucl. Phys. B 406 (1993) 521.
[11] V.A. Fateev, Phys. Lett. B 357 (1995) 397.
| [] |
[
"A scalar field matter model for dark halos of galaxies and gravitational redshift",
"A scalar field matter model for dark halos of galaxies and gravitational redshift"
] | [
"Franz E Schunck \nInstitute for Theoretical Physics\nAstronomy Centre\nSchool of Chemistry, Physics and Environmental Science\nUniversity of Cologne\nD-50923KölnGermany\n\nUniversity of Sussex\nBN1 9QJFalmer, BrightonUnited Kingdom\n"
] | [
"Institute for Theoretical Physics\nAstronomy Centre\nSchool of Chemistry, Physics and Environmental Science\nUniversity of Cologne\nD-50923KölnGermany",
"University of Sussex\nBN1 9QJFalmer, BrightonUnited Kingdom"
] | [] | We analyze the spherically symmetric Einstein field equation with a massless complex scalar field. We can use the Newtonian solutions to fit the rotation curve data of spiral and dwarf galaxies. From the general relativistic solutions, we can derive high gravitational redshift values. PACS no.: 95.35.+d, 04.40.Nr, 98.54.Aj | null | [
"https://export.arxiv.org/pdf/astro-ph/9802258v1.pdf"
] | 119,473,345 | astro-ph/9802258 | 8f58001a382768d0744e06994b19c260da4a3c1c |
A scalar field matter model for dark halos of galaxies and gravitational redshift
19 Feb 1998
Franz E Schunck
Institute for Theoretical Physics
Astronomy Centre
School of Chemistry, Physics and Environmental Science
University of Cologne
D-50923KölnGermany
University of Sussex
BN1 9QJFalmer, BrightonUnited Kingdom
A scalar field matter model for dark halos of galaxies and gravitational redshift
19 Feb 1998 (March 21, 2022)
We analyze the spherically symmetric Einstein field equation with a massless complex scalar field. We can use the Newtonian solutions to fit the rotation curve data of spiral and dwarf galaxies. From the general relativistic solutions, we can derive high gravitational redshift values. PACS no.: 95.35.+d, 04.40.Nr, 98.54.Aj
I. INTRODUCTION
The rotation curves for galaxies or galaxy clusters should show a Keplerian decrease v ∝ 1/√x at the point where the luminous matter ends. Instead one observes flat rotation curves beyond the galaxies [1]. A linear radial increase of the mass function of galaxies and galaxy clusters has been derived from these observations [2]: M = v²_limit x. Several models have been discussed where either non-Newtonian gravity [3] or non-interacting matter, dark matter [4], is introduced to solve this problem. For several classes of gravitational theories, it was recently shown that the introduction of dark matter is necessary [5]; cf. [6]. Massive compact halo objects, so-called MACHOs, consisting of baryonic matter are also not able to solve this problem [7].
In 1933 Zwicky [8] was the first to suggest the existence of dark matter in galaxy clusters, by investigating the Coma cluster. The total mass needed to gravitationally bind this cluster exceeds the amount of the luminous matter by roughly an order of magnitude. Three years later Smith proved this for the Virgo cluster [9]. At the beginning of the 1970s one was able to extend the measurements of the rotation curves of galaxies so that higher mass-to-luminosity relations could be found: beyond some radius the rotation curves revealed that there is more mass than contained within the luminous matter [10]. The explanation of a linearly increasing mass was first given by Freeman [11] by providing a spherical halo. The investigations for determining the radius of a dark matter halo have to go beyond the HI measurements of the 1980s [12], e.g. by studying satellite galaxies [13], using the weak lensing of background galaxies by foreground dark halos [14], or looking into quasar absorption lines [15]; cf. [16]. From these investigations, halo radii of more than 200 kpc are inferred, for our Galaxy 230 kpc [17], and recent results from satellite galaxies of a set of spiral galaxies show even more than 400 kpc [18]. Recently, measurements of rotation curves of high redshift galaxies have been carried out [19].
We present a solution class of a massless complex scalar field minimally coupled to the Einstein equation [20]. For the Newtonian types of these solutions, we can fit rotation curve data of spiral and dwarf galaxies. The limiting value of the orbital velocity is determined by the central amplitude of the scalar field. The frequency of the scalar field determines the halo characteristics of mass and density near the center.
For the general-relativistic (GR) solutions, we can show that they provide large gravitational redshifts. We discuss how emission and absorption lines produced in the highly relativistic potential of our model can be understood. Near the center of these solutions, we find rotation velocities of about 10^5 km/s, so that high luminosities can be expected. Spacetime singularities do not appear within these GR solutions.
A self-gravitating massive complex scalar field is utilized for the so-called boson stars [21][22][23][24]. These boson stars have no physical singularities, just as our solutions, or as in the case of neutron stars. However, real massless [25] or massive [26] scalar fields (in each case without a conserved Noether current) cannot prevent the formation of a singularity (cf. also the exact solution in [27]). Such behavior of singular solutions is supported by analytical investigations of Christodoulou [28] and numerical calculations of Choptuik [29].
II. EINSTEIN-SCALAR-FIELD EQUATIONS
The Lagrange density of a massless complex selfgravitating scalar field reads
$$\mathcal{L} = \tfrac{1}{2}\sqrt{|g|}\left[\tfrac{1}{\kappa}R + g^{\mu\nu}(\partial_\mu\Phi^*)(\partial_\nu\Phi)\right], \tag{1}$$
where R is the curvature scalar, κ = 8πG, G the gravitational constant (ℏ = c = 1), g the determinant of the metric $g_{\mu\nu}$, and Φ the massless complex scalar field. Then we find the coupled system
$$R_{\mu\nu} - \tfrac{1}{2}g_{\mu\nu}R = -\kappa T_{\mu\nu}(\Phi), \tag{2}$$
$$\Box\Phi = 0, \tag{3}$$
where
$$T_{\mu\nu} = (\partial_\mu\Phi^*)(\partial_\nu\Phi) - \tfrac{1}{2}g_{\mu\nu}\left[g^{\sigma\kappa}(\partial_\sigma\Phi^*)(\partial_\kappa\Phi)\right] \tag{4}$$
is the energy-momentum tensor and
$$\Box = \partial_\mu\left(\sqrt{|g|}\,g^{\mu\nu}\partial_\nu\right)/\sqrt{|g|} \tag{5}$$
the generally covariant d'Alembertian.
For spherically symmetric solutions we use the following static line element
$$ds^2 = e^{\nu(r)}dt^2 - e^{\lambda(r)}dr^2 - r^2\left(d\vartheta^2 + \sin^2\vartheta\, d\varphi^2\right) \tag{6}$$
and, for the scalar field, the ansatz
$$\Phi(r,t) = P(r)\,e^{-i\omega t}, \tag{7}$$
where ω is the frequency of the scalar field. The non-vanishing components of the energy-momentum tensor are
$$T_0^{\ 0} = \rho = -T_1^{\ 1} = p_r = \tfrac{1}{2}\left[\omega^2 P^2(r)e^{-\nu} + P'^2(r)e^{-\lambda}\right], \tag{8}$$
$$T_2^{\ 2} = T_3^{\ 3} = -p_\perp = -\tfrac{1}{2}\left[\omega^2 P^2(r)e^{-\nu} - P'^2(r)e^{-\lambda}\right], \tag{9}$$
where $' = d/dr$. As equation of state, we find $\rho = p_r = p_\perp + P'^2(r)e^{-\lambda}$. The decisive non-vanishing components of the Einstein equation are
$$\nu' + \lambda' = \kappa(\rho + p_r)\,r\,e^{\lambda}, \tag{10}$$
$$\lambda' = \kappa\rho\, r\,e^{\lambda} - \tfrac{1}{r}\left(e^{\lambda} - 1\right), \tag{11}$$
and two further identical components which are fulfilled because of the Bianchi identities. The differential equation for the scalar field is
$$P''(r) + \left[\frac{\nu' - \lambda'}{2} + \frac{2}{r}\right]P'(r) + e^{\lambda-\nu}\omega^2 P(r) = 0. \tag{12}$$
A typical behavior of the scalar field of Newtonian kind is demonstrated in Fig. 1; the metric potentials are almost constant so that we do not show them here; cf. Section III. General relativistic solutions are shown in Section IX.
For the rest of our paper, we employ the redefined quantities $x := \omega r$ and $\sigma := \sqrt{\kappa/2}\,P$. The numerical calculation was carried out by using a Runge-Kutta routine of the Fortran libraries IMSL/NAG. In order to get regular solutions at the origin for the system of differential equations (10), (11), and (12), we have to impose the initial conditions $\sigma'(0) = 0$ and $\lambda(0) = 0$.
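For readers who wish to reproduce such solutions, here is a minimal Python sketch of this integration; it is our own reformulation of (10)-(12) in the variables x, σ, ν, λ and uses SciPy instead of the IMSL/NAG routines, and the small starting offset and tolerances are our choices, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y):
    """ODE system (10)-(12) in the dimensionless variables x = omega*r,
    sigma = sqrt(kappa/2)*P; state y = [sigma, dsigma/dx, nu, lam]."""
    sigma, dsigma, nu, lam = y
    # kappa*rho/omega^2 in these units (rho = p_r for this field)
    krho = sigma**2 * np.exp(-nu) + dsigma**2 * np.exp(-lam)
    dlam = krho * x * np.exp(lam) - (np.exp(lam) - 1.0) / x        # Eq. (11)
    dnu = 2.0 * krho * x * np.exp(lam) - dlam                      # Eq. (10)
    d2sigma = (-((dnu - dlam) / 2.0 + 2.0 / x) * dsigma
               - np.exp(lam - nu) * sigma)                         # Eq. (12)
    return [dsigma, d2sigma, dnu, dlam]

sigma0, nu0 = 1e-3, 0.0       # free initial data, cf. Section III
x0, x_max = 1e-6, 500.0       # start slightly off the (regular) origin
sol = solve_ivp(rhs, (x0, x_max), [sigma0, 0.0, nu0, 0.0],
                method="RK45", rtol=1e-10, atol=1e-12, dense_output=True)
```

Because the system is singular at x = 0, the integration starts at a small offset where the regular initial data σ'(0) = 0 and λ(0) = 0 are imposed.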
The system (10), (11), and (12) possesses the self-similarity $x \to kx$, $\lambda \to \lambda$, $e^\nu \to k^2 e^\nu$, and $\sigma \to \sigma$. This self-similarity means that one has only one free parameter, namely the initial value of the scalar field. One scales the solution simply with the second initial value for ν.
III. NEWTONIAN SOLUTION
The Newtonian solutions of our Lagrangian (1) are characterised by almost constant metric potentials ν, λ. The scalar field equation (12) can then be rewritten in the following form:
$$\sigma'' + \frac{2\sigma'}{x} + \sigma = 0, \tag{13}$$
($' = d/dx$), which has the solution
$$\sigma(x) = \frac{1}{x}\left[A\sin(x) + B\cos(x)\right], \tag{14}$$
where A, B are some constants. For $B \neq 0$, the solution is singular at the origin, which is why we rule it out and set $B = 0$. The Newtonian form of the energy density reads
$$\rho(x) = \frac{A^2}{x^2}\left[1 - \frac{\sin(2x)}{x} + \frac{\sin^2(x)}{x^2}\right]. \tag{15}$$
For small x, we have $\rho \propto A^2\left[1 - 2x^2/9 + x^4/45\right]$, i.e. it approaches a constant. For dwarf galaxies, one observes such a behavior of the density [30], where good fitting results are obtained by using the empirical isothermal density profile $\rho(x) \propto 1/(x_c^2 + x^2)$, where $x_c$ is the core radius. Hence, our constant A plays a role analogous to a core radius. By applying a maximum halo model, i.e. one without baryonic matter, the value of A is higher than for a model including baryonic matter. Therefore the core radius increases if one adds baryonic matter, as found in [30]; cf. [31].
The general solution of Eq. (11) is
$$e^{-\lambda} = 1 - \frac{M(x)}{x} \tag{16}$$
with the mass function $M(x) = \int_0^x \rho(\zeta)\,\zeta^2\,d\zeta$. We find the Newtonian formula (cf. Fig. 2)
$$M(x) = A^2\left[x + \frac{\cos(2x) - 1}{2x}\right]. \tag{17}$$
Comparing with the expected Newtonian result $M = v_{\rm limit}^2\, x$, we see that $v_{\rm limit} = A$; hence, the amplitude of the scalar field at the origin determines the limiting orbital velocity. For small x, the mass function behaves like
$$M(x) \propto A^2 x^3/3 - 2A^2 x^5/45 + O(x^7). \tag{18}$$
This corresponds to the constant density at the center. At higher radial distances, the mass function shows the linear behavior. Asymptotically, following (16), the metric potential $e^{-\lambda}$ approaches the value $C^2 := 1 - A^2$, where $C^2 < 1$. After a redefinition of the coordinate $x \to C^{-1}x$, the asymptotic space has a deficit solid angle. The area of a sphere of radius x is not $4\pi x^2$, but $4\pi C^2 x^2$; cf., e.g., analogous results for global monopoles and global textures [32], where one also finds a linear increase of the mass function. We show in Sections VII and VIII how one can find a closure of the solutions and avoid this asymptotic problem.
Following (10), the second metric potential behaves asymptotically like $e^\nu \to x^K$, where $K = 2A^2/C^2 = 2(1/C^2 - 1) > 0$. The behavior of both metric potentials could be confirmed numerically.
A question is the validity of the formulas (14), (15), and (17). For $\sigma(0) = 10^{-3}$ and $\nu(0) = 0$, we found a deviation of 0.001% at $x = 10$, of 0.3% at $x = 100$, and of about 4% at $x = 1000$. This shows clearly that these formulas should be used only near the center. All figures in this paper were produced by using the numerical solutions. If one is interested in the asymptotic behavior of the solutions, the formulas can still be used asymptotically (e.g. for the calculations in the Tables), because one can still derive the order of magnitude from them, as we confirmed numerically. For higher initial values of σ, only numerically determined solutions can be used.
IV. ROTATION CURVES
In this Section we shall model rotation curves of dwarf and spiral galaxies. Observations show that rotation curves become flat (constant orbital velocity) in the surrounding region of galaxies where data are received from the 21 cm wavelength of neutral hydrogen (HI). But the mass density of the neutral hydrogen is not sufficient to explain this velocity behavior. Therefore, we introduce dark matter consisting of a massless complex scalar field which interacts with the luminous matter exclusively by the gravitational force. We apply only Newtonian solutions of our model.
For the static spherically symmetric metric (6) considered here, circular orbit geodesics obey
$$v_\varphi^2 = \tfrac{1}{2}r\nu' e^\nu = \tfrac{1}{2}e^\nu\left(e^\lambda - 1\right) + \tfrac{1}{2}\kappa p_r r^2 e^{\lambda+\nu} \simeq \frac{M(r)}{r} + \tfrac{1}{2}\kappa p_r r^2 e^{\lambda+\nu}, \tag{19}$$
which reduces outside of matter, for a weak gravitational field, to the Newtonian form $v_{\varphi,\rm Newt}^2 = M(r)/r$. But, for $p_r \neq 0$, we have to use the general-relativistic formula. By using the Newtonian solutions of Section III within (19), we find
$$v_\varphi^2 = A^2\left[1 - \frac{\sin(2x)}{2x}\right]; \tag{20}$$
cf. a generic rotation curve in Fig. 3.
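For illustration, the Newtonian rotation curve (20) and the mass function (17) can be evaluated in a few lines of Python; the conversion to km/s through the speed of light and the value A = 10⁻³ (which gives v_limit ≈ 300 km/s, cf. Table IV) are our own choices:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s; v in Eq. (20) is in units of c

def v_phi_kms(x, A):
    """Rotation curve of Eq. (20): v_phi = A*sqrt(1 - sin(2x)/(2x)), in km/s."""
    x = np.asarray(x, dtype=float)
    # np.sinc(t) = sin(pi*t)/(pi*t), hence np.sinc(2*x/np.pi) = sin(2x)/(2x)
    return A * np.sqrt(1.0 - np.sinc(2.0 * x / np.pi)) * C_KMS

def mass_fn(x, A):
    """Newtonian mass function of Eq. (17) in the dimensionless units of the paper."""
    x = np.asarray(x, dtype=float)
    return A**2 * (x + (np.cos(2.0 * x) - 1.0) / (2.0 * x))

x = np.linspace(0.05, 30.0, 600)
print(v_phi_kms(x, A=1e-3)[-1])  # oscillates around A*c ~ 300 km/s at large x
```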
Asymptotically, from (19), we have
$$v_\varphi^2 = e^\nu\left[A^2/2 + A^2/2\right], \tag{21}$$
which means that the first, Newtonian part $e^\nu(e^\lambda - 1)/2$ and the second, matter part contribute the same amount to the rotation velocity. For $e^\nu = 1$, we have $v_\varphi = A$, as we had already derived from the general Newtonian solution for the mass function. The next step is now to take observational data for spiral and dwarf galaxies and try to model them by using our model together with a model which describes the luminous matter distribution. For spiral galaxies, we use both the universal rotation curves of Persic, Salucci, and Stel [33] and some individual ones. Persic et al. confirmed by investigating data of 967 spirals that the structural properties of dark and visible matter are linked together. This means that a spiral galaxy with low luminosity is more strongly dominated by a dark matter halo than one with high luminosity. This is also one possible statement of the Tully-Fisher relation [1]. From this, it follows that a low luminosity spiral has a rather increasing rotation curve and a high luminosity spiral galaxy a rather decreasing rotation curve.
In the following, we model the universal rotation curves by a combination of a stellar disk and our halo. The rotation curve for the stellar disk follows from an exponential thin disk light distribution [33]:
$$v_{\rm disk}^2(x)/v^2(x_{\rm opt}) = \left[0.72 + \frac{0.325}{2.5}\left(M_B^* - M_B\right)\right]\frac{1.97\,x^{1.22}}{\left(x^2 + 0.78^2\right)^{1.43}}, \tag{22}$$
where $M_B^* = -20.5$ is the absolute magnitude in the blue band (corresponding to $\log L^* = 10.4$) and $M_B = -0.38 + 0.92 M_I$ ($M_I$ from the I band); this formula can be used within the range $0.04 \lesssim x/x_{\rm opt} \leq 2$. The x-dependent factor arises from approximations of modified Bessel functions (see below); the constants in the $M_B$-dependent factor are arranged such that it gives the best fit for our universal rotation curves; cf. [33]. The total rotation curves then result from
$$v_{\rm total} = \sqrt{v_{\rm disk}^2 + v_{\rm halo}^2}. \tag{23}$$
The outcome can be seen in Fig. 4. One recognizes a very good agreement with the data. From this fit, we find that the amount of disk matter has to decrease with decreasing luminosity, i.e. decreasing absolute magnitude. This is a consequence of the Tully-Fisher relation, but it is verified here by fitting the data. From [33], it follows that the rotation curves are self-similar, i.e. only one parameter, the luminosity or the limiting rotation velocity, completely establishes the properties of the halo far from the luminous matter region. This property is also revealed in our model, where asymptotically only the parameter A determines the halo; cf. the Newtonian solutions of Section III. Moreover, we can connect this one free parameter with the amplitude of the scalar field at the center.
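A small Python sketch of the decomposition (22)-(23) follows; the disk factor implements Eq. (22) as reconstructed above, the halo term v2_halo would come from the scalar-field model (e.g. Eq. (20)), and all numerical values below are purely illustrative:

```python
import numpy as np

def v2_disk_urc(x, M_B, M_B_star=-20.5):
    """Stellar-disk term of the universal rotation curve, Eq. (22),
    normalized to v^2(x_opt); x in units of x_opt, valid for 0.04 <~ x <= 2."""
    beta = 0.72 + (0.325 / 2.5) * (M_B_star - M_B)
    return beta * 1.97 * x**1.22 / (x**2 + 0.78**2)**1.43

def v_total_urc(x, v2_halo, M_B):
    """Quadrature sum of Eq. (23), in units of v(x_opt)."""
    return np.sqrt(v2_disk_urc(x, M_B) + v2_halo)

x = np.linspace(0.05, 2.0, 100)
print(v_total_urc(x, v2_halo=0.5, M_B=-19.0)[:3])  # illustrative halo term and M_B
```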
With the improvements of measuring rotation curves, HI rotation curve data have been found probing higher distances from the galaxy center. For an exponentially thin matter disk with a surface density $\Sigma(x/x_0) = \Sigma_0\exp(-x/x_0)$, where $x_0$ is the disk scale length and $\Sigma_0$ some constant at $x_0$, the contribution to the rotation velocity can be determined by [11] (cf. also [34])
$$v_{\rm disk,gas}^2(r) = \frac{\kappa\Sigma_0 r^2}{8 r_0}\left[I_0\!\left(\frac{r}{2r_0}\right)K_0\!\left(\frac{r}{2r_0}\right) - I_1\!\left(\frac{r}{2r_0}\right)K_1\!\left(\frac{r}{2r_0}\right)\right], \tag{24}$$
where $I_n$ and $K_n$ are modified Bessel functions. Equation (22) is actually an approximation of this exact result. This formula shall be used in the following for the contribution of stars and gas in individual galaxies, with specific constants $r_0$ and $\Sigma_0$. The gas part contributes a summand $v_{\rm gas}^2$ under the square root in (23). In some galaxies, a clear bulge can be read off the light curve. For these cases, we use the formula from Kent [35] in order to calculate the circular velocities from the observed surface density $\sigma_{\rm bulge}(r)$:
$$v_{\rm bulge}^2(r) = \frac{\kappa}{4r}\int_0^r \zeta\,\sigma_{\rm bulge}(\zeta)\,d\zeta + \frac{\kappa}{2\pi r}\int_r^\infty\left[\arcsin\!\left(\frac{r}{\zeta}\right) - \frac{r}{\sqrt{\zeta^2 - r^2}}\right]\zeta\,\sigma_{\rm bulge}(\zeta)\,d\zeta. \tag{25}$$
Corrections for flattening of bulges are ignored by this formula.
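Since the paper sets κ = 8πG (c = 1), Eq. (24) is the classical exponential-disk result, which can be evaluated with SciPy's modified Bessel functions; the following sketch is our own, with an assumed astrophysical unit convention and illustrative disk parameters:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G_ASTRO = 4.30091e-6  # G in kpc (km/s)^2 / M_sun -- our unit choice

def v2_disk(r_kpc, sigma0, r0):
    """Eq. (24) with kappa = 8*pi*G, written as 4*pi*G*Sigma0*r0*y^2*(I0*K0 - I1*K1)
    for y = r/(2*r0); sigma0 in M_sun/kpc^2, r0 in kpc, result in (km/s)^2."""
    y = r_kpc / (2.0 * r0)
    return 4.0 * np.pi * G_ASTRO * sigma0 * r0 * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

r = np.linspace(0.1, 30.0, 300)            # radii in kpc
v2 = v2_disk(r, sigma0=5.0e8, r0=3.0)      # Sigma_0 = 500 M_sun/pc^2, r_0 = 3 kpc
print(round(float(np.sqrt(v2.max())), 1), "km/s")  # peak disk circular velocity
```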
The galaxies we choose belong to a sample of 11 galaxies of different Hubble types and absolute magnitudes which fulfill several strong requirements [36]. All rotation curve data are measured in the 21-cm line of neutral hydrogen, so that the gas distribution extends far beyond the optical disc (at least 8 scale lengths) and the necessity of dark matter becomes obvious. Isolation of the galaxies is another constraint, so that perturbative effects of nearby situated galaxies are negligible. Besides that, high quality data are required.
For the calculation of the neutral hydrogen gas for the galaxies of this sample, a further remark is in order. In [37], it was shown that the HI data can be decomposed into a sum of two or three exponential disks. We shall use the results of this paper. We now start fitting dwarf galaxies.
TABLE I. Data for rotation curve fits: luminosity L, disk scale length r0, scalar field frequency ω, mass-to-light ratio for disk and bulge, initial value of the scalar field σ. In all cases, we have used ν(0) = −5 × 10⁻⁶. The parameters for the decomposed HI gas surface densities can be found in [37]; we have multiplied the total NGC3109 HI mass by a factor of 1.67 because of some missing 21 cm line flux at the VLA where the NGC3109 observations were made [42,36]. [Hubble parameter H0 = 75 km/(s Mpc)]. Columns: L, r0, ω⁻¹, (M/L)_d, (M/L)_b, σ(0).
FIG. 7. Spiral galaxies: Rotation curve fits for NGC2403, NGC2903, NGC3198, and NGC7331 with halo (long-dashed), stars (dashed), and HI gas (short-dashed); NGC7331 also has a bulge included, which is truncated at 6.2 kpc [43]. The velocity v is measured in [km/s] and the radial coordinate r in [kpc]. The parameters for this fit are summarized in Table I.
Dwarf galaxies have the characteristic of very low luminosities [30]. Following empirical models, their rotation curves differ from the ones of spiral galaxies in such a way that instead of the Hernquist profile [38] an isothermal density profile has to be applied [30]; cf. also [39]. From the rotation curves, it can be derived that the dark matter density near the center is almost constant, i.e. dark matter has a core in these galaxies. Our Newtonian solutions for the massless complex scalar field reveal a constant density near the center, so it is not surprising that good fits for five dwarf galaxies from the Begeman et al. sample have been found (Figs. 5 and 6) [40][41][42][43]. In all cases, it is recognizable that the dark matter halo dominates the luminous parts of the galaxies. Especially remarkable is the maximum of the DDO154 data, which can be matched perfectly by our dark matter halo; hence, decreasing rotation curve data, at the end of observational resolution, can be explained by using only a dominating dark matter component. The prediction of our model is to find oscillations in rotation curve data at high distances from the center of galaxies.
Our results for the sample of spiral galaxies are summarized in Fig. 7, cf. [43,44]. In all cases, we are able to produce good fits. Models using an approximate isothermal sphere or modified Newtonian dynamics (MOND) were also applied to these galaxies, e.g. [45,36]. Conformal gravitation theory has been explored in [27,47,37,48]; the disadvantage of that model is its increasing rotation curves.
By building universal rotation curves, a smoothing method is used [33]. Important information about the distribution of the dark matter within individual rotation curves could be lost by this method. One can find an oscillating behavior within the optical rotation curves of some galaxies, for example NGC2998 [49]; cf. 'ripples' in the light curve profile [35]. The explanation is that across a spiral arm, positive velocity gradients are observed; the velocity decrease from the outer edge of one arm to the inner edge of the next arm is in some galaxies faster than Keplerian, which is taken as compelling evidence for noncircular velocities. In Fig. 8, we fit the rotation curve data from Rubin et al. [49]. We have taken into account four components: a truncated bulge, a thin disk of stars, an HI component, and a dominating dark halo. What one can recognize is that the data are not fit too well, but the maxima and minima of the fit and the data are at the same place. The dark halo has to be the main part in order to find a good fit. (The partly non-smooth behavior of the curves results from numerical problems because of the scaling of our solution by the scalar field frequency ω.) Further investigations will show whether cylindrically or axially symmetric solutions can improve the fit or rule out our model. NGC2998 belongs to a sample of a further 22 spiral galaxies for which the MOND model was applied [50]. Oscillations ('wiggles') have been found also in Hα rotation curves [51].
V. SCALING OF THE SOLUTIONS AND THE MASS DISTRIBUTION
As we have seen from the redefined quantity x = ωr, the frequency ω scales the physical dimension of the solution. Additionally, the mass is scaled with 1/ω and the energy density with ω 2 . Table II shows for different values of ω the mass and the density at different radii.
There exist two ways how one can interpret the values of Table II. The last row (ω = 10⁻²¹/cm) shows a halo having a central density of 10⁻²¹ g/cm³; at x = 20, which means r = 20 kLy, the density decreases to a value of 10⁻²³ g/cm³, and within a sphere of this radius the halo has a mass of 10⁴³ g = 10¹⁰ M⊙.
But one can use Table II also in the vertical direction. A solution with ω = 1/cm (first row) has within a sphere of radius 20 cm a mass of 10²⁴ g, within a sphere of 20 × 10¹⁵ cm (20 times the diameter of our solar system; second row) a mass of 10³⁷ g, within a sphere of 20 Ly (third row) a mass of 10⁴⁰ g, and so on. The same procedure is valid for the density columns. The reason why one can do this is that one has an ω-independence of the density and the mass at high radii, but an ω-dependence at small radii. For the mass (17), we find
$$M(r) = \frac{4\pi}{\kappa}A^2\left[r + \frac{\cos(2\omega r) - 1}{2\omega^2 r}\right], \tag{26}$$
which goes over for small r values into
$$M(r) \propto \frac{4\pi}{\kappa}A^2\left[\frac{1}{3}\omega^2 r^3 - \frac{2}{45}\omega^4 r^5\right]. \tag{27}$$
The same is valid for the density ρ. This result shows that the parameter A determines the behavior of the halo at high distances from the center (namely the limiting orbital velocity), but near the center also the frequency of the scalar field influences the behavior of the mass and the density. Rewriting Eq. (26) into
$$\frac{2\omega^2 r}{A^2}\left[\frac{\kappa M(r)}{4\pi} - A^2 r\right] = \cos(2\omega r) - 1, \tag{28}$$
we immediately see that the left-hand side has to be non-positive, from which we can find the inequality
$$\frac{M(R)}{R} \leq A^2, \tag{29}$$
at an arbitrary radius R. This inequality states that at every radius R, the ratio between the Schwarzschild radius and the radius of the object is smaller than the square of the central value of the scalar field, which is 10⁻⁶ for a galaxy with v_max = 300 km/s. Let us, for example, take the value ω = 10⁻²¹/cm of the last row of Table II. To calculate the mass and the density for smaller radii, we have to use the formulae (15) and (17). The result can be seen in Table III. In comparison with the values of Table II, we recognize substantially smaller values.
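The scalings (26)-(27) can be made concrete in cgs units with a few lines of Python; the restoration of the factors of c (κ = 8πG/c² once c is reinstated) is our own bookkeeping, and the printed values reproduce the orders of magnitude of Tables II and III:

```python
import numpy as np

G = 6.674e-8    # cm^3 g^-1 s^-2
C = 2.998e10    # cm/s

def mass_g(r_cm, A, omega_cm):
    """Mass function Eq. (26) in grams, using 4*pi/kappa -> c^2/(2G)."""
    x = omega_cm * r_cm
    return (C**2 / (2.0 * G)) * A**2 * (
        r_cm + (np.cos(2.0 * x) - 1.0) / (2.0 * omega_cm**2 * r_cm))

def rho_cgs(r_cm, A, omega_cm):
    """Energy density Eq. (15) in g/cm^3, using 1/kappa -> c^2/(8*pi*G)."""
    x = omega_cm * r_cm
    return (C**2 / (8.0 * np.pi * G)) * (A**2 / r_cm**2) * (
        1.0 - np.sin(2.0 * x) / x + np.sin(x)**2 / x**2)

omega = 1e-21               # 1/cm, last row of Table II
r20 = 20.0 / omega          # radius corresponding to x = 20
print(mass_g(r20, 1e-3, omega))   # order 1e43-1e44 g, cf. Table II
print(rho_cgs(r20, 1e-3, omega))  # order 1e-24-1e-23 g/cm^3, cf. Table II
```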
The order of magnitude values of Table III for the mass and the density also show a possibility of a hierarchy of objects. Recently [52], it has been detected that the dark matter distribution in the Fornax cluster of galaxies is a mixture of two distinct components on different scales (not necessarily distinct matter components). One part is associated with the cluster and the second, smaller one with the galaxy NGC1399. This observation was done by looking into the X-ray-emitting diffuse plasma of temperatures 10⁷-10⁸ K which is widely found in elliptical galaxies and clusters of galaxies. This gas is likely to be in hydrostatic equilibrium, where the thermal pressure is balanced by the gravity of the cluster. In this way, the X-ray emission traces the distributions of the dark matter in regions of higher density where more gas is trapped. This observation is the first evidence for a hierarchical nature of the dark matter distribution. Future investigations will show whether two of our dark matter halos can be combined: one for the cluster, the other for the single galaxies.
VI. ENERGY DENSITY, TANGENTIAL PRESSURE, AND REGULARITY
An investigation of the energy density ρ (i.e. the radial pressure $p_r$) shows a terraced decrease (Fig. 9). Furthermore, we find the relation $|\rho| \geq |p_\perp|$, where the tangential pressure $p_\perp$ oscillates sinusoidally around zero. From the equation of state, one recognizes that at each extremum of the scalar field σ the difference between radial and tangential pressure vanishes. Hence, on these spheres, the equation of state ρ = p of a stiff fluid arises.
TABLE III. Order-of-magnitude estimation for ω = 10⁻²¹/cm. In comparison with Table II, we find smaller densities and masses at smaller distances.
We checked numerically the quadratic curvature invariant (Kretschmann scalar) $R^2_{\rm inv} = R_{\lambda\sigma\mu\nu}R^{\lambda\sigma\mu\nu}$, revealing its global regularity for our solutions [20]. The same was found for the two invariants $R_{\mu\nu}R^{\mu\nu}$ of the Ricci tensor and $R^2$ of the curvature scalar. The invariants of the irreducible decomposition of Riemann's curvature tensor also show no singularity [20]. Hence our solutions possess no physically relevant singularity.
VII. CONSTRUCTION OF A SURFACE
A massive complex scalar field with mass m can be used for constructing a surface of our halo. This massive scalar field has an exponential decrease and produces a finite mass. We have two different coordinates, $x_1 = m r_1$ for the surface with the massive complex scalar field and $x_2 = \omega r_2$ in the interior with the massless complex scalar field. At the connection we require $x_1 = x_2$, hence $\omega/m = r_1/r_2$. We need an equilibrium of the radial pressures and continuity of the metric potentials. Because of the different scalings in the two regions, we find for the radial pressures $\omega(2)/m = p_r(1)/p_r(2)$ [the argument (1), e.g. in $\omega(1)$, refers to the model of the massive complex scalar field and the argument (2) to that of the massless one], i.e., if we have equality of the metric potentials, the relation between the pressures gives us the relation between the frequency of the massless scalar field and the mass of the massive scalar field. It was possible to find such solutions numerically, for example: initial values for the interior region: σ(0) = 0.2333, ν(0) = −0.4055; for the surface region: σ(0) = 0.1, ν(0) = −0.14035418; equality of the metric potentials was looked for at x = 3, and we find ω(2)/m = 0.3203. Within the surface, the metric potentials of the massive scalar field go over to the Schwarzschild metric. Hence, we have an object with a finite mass. In the next Section, we shall show another possibility to terminate the halo.
VIII. COVER FOR THE HALO
The energy density ρ of the Newtonian solution decreases in leading order as A²/x². Therefore, we can come to the conclusion that the energy density will eventually attain a hydrostatic equilibrium with another matter form at the same pressure value, e.g. the cosmic microwave background radiation (CMBR), which is about 10⁻³⁴ g/cm³ (seas of neutrinos and supersymmetric particles would increase this density). Alternatively, a cosmological constant Λ of the order of magnitude 10⁻²⁹ g/cm³ would produce much smaller halos; for estimates of Λ see [53]. From Table IV, we derive a radius of about 10⁶ pc for Λ and about 10⁸ pc for the CMBR value (cf. the discussion in [54]). Recent observations show halo radii of about 400 kpc [18]. Outside the halo, within the spacetime with a constant density, we have a Schwarzschild-de Sitter metric.
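The radii of Table IV follow from equating the leading-order density A²/x², i.e. c²A²/(8πGr²) in physical units (our restoration of c), with the ambient density; a minimal Python check:

```python
import numpy as np

G = 6.674e-8           # cm^3 g^-1 s^-2
C = 2.998e10           # cm/s
CM_PER_MPC = 3.0857e24

def halo_radius_mpc(A, rho_ambient):
    """Radius where c^2*A^2/(8*pi*G*r^2) = rho_ambient (rho in g/cm^3)."""
    r_cm = C * A / np.sqrt(8.0 * np.pi * G * rho_ambient)
    return r_cm / CM_PER_MPC

print(halo_radius_mpc(1e-3, 1e-29))   # ~2.3 Mpc, first row of Table IV
print(halo_radius_mpc(4e-4, 1e-34))   # ~330 Mpc, last row of Table IV
```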
IX. GRAVITATIONAL REDSHIFT AND ROTATION VELOCITY
When the initial value of the scalar field σ(0) is increased, the Newtonian description breaks down, i.e. the formulae from Section III are no longer valid. The metric potentials ν and λ are not constant, and the full nonlinearity of Einstein's theory has to be taken into account.
The mass and the density function change their behavior smoothly. They still increase together with σ(0)², but one recognizes an additional decrease of the x-dependent terms; Table V shows the deviation from Newtonian behavior for the mass function with increasing σ(0). This means that one can still use the orders of magnitude of Table II (which was for σ(0) = 10⁻³), but one has to multiply the values of the mass and the density by 10⁴ for σ(0) = 0.1 and by 10⁶ for σ(0) = 1, for example. The deviation from the formula is so pronounced that one has to use numerical calculations.
TABLE V. Deviation of M/σ²(0) from the Newtonian formula (17) with increasing σ(0); the mass still increases with σ(0). The values for x = 0.1 have to be multiplied by 10⁻³. Columns: x, Newtonian, σ(0) = 0.1, ….
The strong curvature connected with these general relativistic (GR) solutions also produces a gravitational redshift z which, for a static spherically symmetric mass distribution, is given by
$$z = e^{[\nu(A_1) - \nu(A_0)]/2} - 1, \tag{30}$$
where $A_0$ corresponds to the place of the emitter and $A_1$ to that of the receiver, both at rest. Table VI shows results for different values of σ(0) up to x = 500. The redshift increases with growing σ(0) and with growing radius. Redshift values reached by an emitting gas cloud at the center of the halo can achieve very high values; we have not found any limits numerically. The z values for different x show the gravitational redshift which appears if our GR halo ends at x. In Table VII, we assume that the gas clouds are at rest within the gravitational halo potential (more likely they circulate around the center, so that an additional Doppler redshift has to be taken into account). Then, the redshifts can be interpreted as emission or absorption lines of the corresponding gas clouds. An interpretation of both Tables could be (for σ(0) = 1.0): some radiation comes from the center and is gravitationally redshifted by z = 4.76, some part of the radiation is absorbed by a gas cloud at a distance x = 1 (z = 3.398), another part at x = 100 (z = 0.087), and finally the halo ends at x = 500 so that gravitational redshift takes no further effect. Near the center (x = 1), we recognize a steep decrease of redshift values (cf. Tables VI and VII). The scale of such a halo is again defined by the frequency ω. If one assumes ω = 10⁻¹⁸/cm, then the halo has a diameter of r = 500 Ly; hence, not very far away from the source, at about r = 400 Ly, one already has small gravitational redshift values. How fast does a mass rotate in such a spacetime? Figure 10 shows a numerically calculated solution. We see that velocities of more than 10⁵ km/s are reached. This means that 6% of the rest mass energy is stored in kinetic energy. If we assume that per year a mass of 1 M⊙ transfers its kinetic energy into radiation, then we find a luminosity of 10⁴⁴ erg/s.
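Given the numerical solution from the integration sketch of Section II (the `solve_ivp` result `sol`, with ν stored as the third state component, an assumption of that sketch), the redshift (30) is obtained directly; a minimal Python sketch:

```python
import numpy as np

def redshift(sol, x_emit, x_obs):
    """Gravitational redshift of Eq. (30) between an emitter at rest at
    x_emit and a receiver at rest at x_obs; nu is state component 2."""
    nu_emit = sol.sol(x_emit)[2]
    nu_obs = sol.sol(x_obs)[2]
    return np.exp((nu_obs - nu_emit) / 2.0) - 1.0

# emission from near the center, halo assumed to end at x = 500 (cf. Tables VI, VII)
print(redshift(sol, x_emit=1.0, x_obs=500.0))
```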
X. DISCUSSION
We have shown that we were able to fit the universal rotation curves of Persic and Salucci and those of a selection of dwarf and spiral galaxies which are especially suitable for finding out the characteristics of isolated dark matter halos. In one case, we could match the oscillations of the rotation curve data with our model.
We have to use Newtonian solutions and to fix two parameters. There is the central amplitude of the scalar field, the parameter A, which determines the limiting orbital velocity. The second parameter, the frequency of the scalar field, ω, varies the values of the mass and the density near the center of the galaxies.
Because our solution consists of a condensed 'star'-like object (with increasing mass), the energy contribution of the radial pressure $p_r$ also plays a role for the rotation velocity. We found that this part contributes the same order of magnitude as the Newtonian part coming from the 'normal' energy density ρ. Therefore, general relativity is just the correct theory for the dark matter part, provided a compact object with internal pressure is present.
Another issue is the physical basis of the Tully-Fisher relation. It is obvious that in a deeper gravitational potential, i.e. with more dark matter, also more luminous matter can find place. Our best fits for the rotation curves are found precisely if we add, for low luminosity galaxies, an almost dominating scalar field halo, and vice versa. The contribution of our dark matter form is determined by the parameter A, the values of which lie in the orders of magnitude 10⁻⁴ and 10⁻³. In a future paper, we will investigate why such values are preferably found.
In a recent paper [55], the reason for the Tully-Fisher relation is divided into two parts: (a) a mass-to-light ratio of the luminous matter and (b) a relation between the luminous mass and the rotation velocity. It is shown that there is a smaller scatter in part (b) than (a), because (a) depends on the present star formation rate. The conclusion is that part (b) in combination with a well-behaved relation between luminous and dark matter (producing flat and smooth rotation curves, i.e. the 'conspiracy') is the physical basis of the Tully-Fisher relation. The conspiracy maintains a luminous-mass-rotation-velocity relation with a slope of 4. The issue is whether one can reveal a reason from our model. We will investigate this point in a future work. Providing this halo model is realized in nature and one finds a radius of the dark matter halos of galaxies, then this radius could be a hint for a cosmological constant and its order of magnitude.
During the completion of this paper, investigations with massive scalar particles were carried out. In [56,57], an isothermal ideal Bose gas which is degenerate in the center of galaxies was used to fit rotation curves of 36 spiral galaxies. The authors found the best fits with a boson particle mass of about 60 eV. In [58], excited Newtonian boson star solutions and, in [59], excited general relativistic boson stars were used as dark halos. In the latter case, the contribution of the radial pressure to the rotation velocity was wrongly neglected. It is well-known that excited boson stars are unstable against small radial perturbations (see e.g. [23]); for a recent numerical investigation of the instability of excited boson stars, see [60]. Under the assumption that boson stars are transparent (as the solutions of this paper), maximal gravitational redshift values of about 0.7 have been revealed [61]. In [62], it was shown numerically and analytically that the Newtonian solutions of our model are stable.
FIG. 1. The massless complex scalar field σ = √(κ/2) P depending on the dimensionless coordinate x = ωr with the initial values σ(0) = 3.8 × 10⁻⁶ [σ(0)(x_opt)] and ν(0) = 0. This solution belongs to the rotation curve with M_I = −18.29 in Fig. 4.
FIG. 2. The mass function in units of [σ²(0)(x_opt)/(ωκ)] increases near the center like x³ and at higher radial values linearly. We use here the same initial values as in Fig. 1, so that this curve shows the behavior of the mass for the rotation curve with M_I = −18.29 in Fig. 4.
FIG. 3. A generic rotation curve for an initial value of the scalar field σ(0) = A = 3 × 10⁻³. The Newtonian (M/x) and the pressure (px) contributions are shown separately. In the rotation curve fits, mainly the first part up to the maximum is used. The rotation curve oscillates around the asymptotic value of 90 km/s. The velocity v is measured in [km/s] while the dimension of the radial coordinate r depends on the choice of ω; x is dimensionless.
FIG. 4. Eight universal rotation curves with different absolute magnitudes M_I introduced in [33]. The radial coordinate x is in units of the optical radius x_opt, which encompasses 83% of the total integrated light, and the velocity v is in units of v(x_opt). For the halo (long-dashed curve), the initial values are σ(0) with M_I = −18.29 for the first and M_I = −23.08 for the last value.
FIG. 5. Dwarf galaxies: Rotation curve fits for NGC1560, UGC2259, DDO154, and DDO170 with halo (long-dashed), stars (dashed), and HI gas (short-dashed). The velocity v is measured in [km/s] and the radial coordinate r in [kpc]. The parameters for these fits are summarized in Table I.
FIG. 6. Dwarf galaxy: Rotation curve fit for NGC3109 with halo (long-dashed), stars (dashed), and HI gas (short-dashed). The velocity v is measured in [km/s] and the radial coordinate r in [kpc]. The parameters for this fit are summarized in Table I.
FIG. 8. Spiral galaxy with oscillating rotation curves: NGC2998 with dominating halo (long-dashed), stars (short-dashed), HI gas (dotted), and bulge (dashed). The velocity v is given in [km/s] and the radial coordinate r in units of [kpc]. In order to find a good correspondence with the data, it was necessary to have a dominating halo. The extrema of data and halo are at about the same values of r. The bulge is truncated at 2.9 kpc; the data for the bulge are taken from [35], but notice a different value for the Hubble constant there. The HI gas contribution [46] was decomposed into σ_HI = 55 exp(−r/12 kpc) − 49 exp(−r/9 kpc) [M⊙/pc²]. For other parameters, see Table I.
TABLE II. Order-of-magnitude estimation for A = 10⁻³. We put the frequency 1/ω = 1 cm, 10¹⁵ cm (about the radius of our solar system), 10¹⁸ cm = 1 Ly, and 10²¹ cm = 1 kLy. Further: ω in units of eV; density ρ at the center and at a distance of x = 20; the mass within a sphere of radius x = 20. Columns: ω [1/cm], ω [eV], ρ(0) [g/cm³], ρ(20) [g/cm³], M(20).
FIG. 9. The energy density ρ (solid curve) and the tangential pressure p⊥ (dotted curve) [both in units of ω²/κ] for the interval x = 200 up to x = 300. One recognizes very clearly the terraced decrease of the energy density. The matter behaves like stiff matter at the saddle points of ρ. [Initial values: σ(0) = 1 and ν(0) = 1.]
FIG. 10. The scalar field σ, the metric potentials e^ν and e^λ, and the rotation velocity v (in units of [km/s]) for highly relativistic initial values σ(0) = 1 and ν(0) = −3.5. Near the center, particles in circular orbits have velocities of more than one third of the velocity of light.
TABLE IV. This table shows the radius R at which our dark matter halo reaches the density ρ. v is the limiting orbital velocity.

σ(0)       v [km/s]   ρ [g/cm³]   R [Mpc]
10⁻³       300        10⁻²⁹       2.3
                      10⁻³⁴       830
4 × 10⁻⁴   120        10⁻²⁹       0.9
                      10⁻³⁴       330
TABLE VI. The redshift function z(x) = exp[ν(x) − ν(0)] − 1 for solutions with different initial values of the scalar field (ν(0) = 0 in each case).

z        σ(0) = 0.25   0.5     0.75    1.      1.25    1.5
z(10)    0.16          0.66    1.58    3.16    5.88    9.52
z(50)    0.25          0.97    2.13    4.04    7.83    16.17
z(100)   0.29          1.08    2.31    4.29    8.29    17.57
z(200)   0.32          1.18    2.47    4.50    8.66    18.65
z(300)   0.35          1.24    2.56    4.62    8.86    19.17
z(400)   0.36          1.28    2.61    4.70    8.98    19.52
z(500)   0.37          1.31    2.66    4.76    9.08    19.77
TABLE VII. Redshift values for absorption lines z(x) = exp[ν(500) − ν(x)] − 1 for different initial values of the scalar field (ν(0) = 0 in each case). We assume that the halo ends at x = 500.

z        σ(0) = 0.25   0.5     0.75    1.
z(1)     0.350         1.147   2.124   3.398
z(5)     0.229         0.536   0.630   0.669
z(10)    0.183         0.386   0.417   0.383
z(50)    0.096         0.170   0.168   0.141
z(100)   0.064         0.108   0.105   0.087
z(200)   0.034         0.056   0.054   0.045
z(300)   0.019         0.030   0.029   0.024
z(400)   0.008         0.012   0.012   0.010
z(500)   0.            0.      0.      0.
ACKNOWLEDGMENTS
We would like to thank Peter Baekler, John D. Barrow
P.J.E. Peebles: Principles of Physical Cosmology (Princeton University Press, Princeton, 1993); E.W. Kolb and M.S. Turner: The Early Universe (Addison Wesley Publ., Redwood City, 1990); D.W. Sciama: Modern Cosmology and the Dark Matter Problem (Cambridge University Press, Cambridge, 1993); R. Smith: Observational Astrophysics (Cambridge University Press, Cambridge, 1995).
J.P. Ostriker, P.J.E. Peebles, and A. Yahil, Astrophys. J. 193, L1 (1974).
M. Milgrom, Astrophys. J. 270, 365, 371, 384 (1983); J. Bekenstein and M. Milgrom, Astrophys. J. 286, 7 (1984); M. Milgrom, Astrophys. J. 333, 689 (1988); R. Brada and M. Milgrom, Mon. Not. R. Astron. Soc. 276, 453 (1995); M. Milgrom, preprint astro-ph/9606148.
V. Trimble, Ann. Rev. Astron. Astrophys. 187, 425 (1987).
V.V. Zhytnikov and J.M. Nester, Phys. Rev. Lett. 73, 2950 (1994).
D.H. Weinberg, preprint astro-ph/9610003.
C. Alcock et al., Phys. Rev. Lett. 74, 2867 (1995).
F. Zwicky, Helv. Phys. Acta 6, 110 (1933).
S. Smith, Astr. J. 83, 23 (1936).
V.C. Rubin and W.K. Ford, Astrophys. J. 159, 379 (1970); D.H. Rogstad and G.S. Shostak, Astrophys. J. 176, 315 (1972).
K.C. Freeman, Astrophys. J. 160, 811 (1970).
T.S. van Albada and R. Sancisi, Phil. Trans. R. Soc. Lond. A 320, 447 (1986).
D. Zaritzky, R. Smith, C.S. Frenk, and S.D.M. White, Astrophys. J. 405, 464 (1993); D. Zaritzky and S.D.M. White, Astrophys. J. 435, 599 (1994).
T. Brainerd, R. Blanford, and I. Smail, Astrophys. J. 466, 623 (1996).
X. Barcons, K.M. Lazetta, and J.K. Webb, Nature 376, 321 (1995).
P.T.P. Viana: Inflationary dark matter models for the formation of large scale structure in the universe, Ph.D. thesis, University of Sussex, http://star-www.maps.susx.ac.uk/people/ptpv.html (1996).
A.S. Kulessa and D. Lynden-Bell, Mon. Not. R. Astr. Soc. 255, 105 (1992); C.S. Kochanek, Astrophys. J. 457, 228 (1996).
D. Zaritzky, R. Smith, C.S. Frenk, and S.D.M. White, preprint astro-ph/9611199.
N.P. Vogt, D.A. Forbes, A.C. Phillips, C. Gronwall, S.M. Faber, G.D. Illingworth, and D.C. Koo, Astrophys. J. 465, L15 (1996); D.C. Koo, N.P. Vogt, A.C. Phillips, R. Guzmán, K.L. Wu, S.M. Faber, C. Gronwall, D.A. Forbes, and G.D. Illingworth, ibid. 469, 535 (1996).
F.E. Schunck: Selbstgravitierende bosonische Materie [Self-gravitating bosonic matter], Ph.D. thesis, University of Cologne (Cuvillier Verlag, Göttingen, 1996).
T.D. Lee and Y. Pang, Phys. Rep. 221, 251 (1992).
Ph. Jetzer, Phys. Rep. 220, 163 (1992).
F.V. Kusmartsev, E.W. Mielke, and F.E. Schunck, Phys. Rev. D 43, 3895 (1991); Phys. Lett. A 157, 465 (1991); F.V. Kusmartsev and F.E. Schunck, Physica B 178, 24 (1992); F.E. Schunck, F.V. Kusmartsev, and E.W. Mielke, "Stability of charged boson stars and catastrophe theory", in: Approaches to Numerical Relativity, R. d'Inverno (ed.) (Cambridge University Press, Cambridge, 1992), pp. 130-140; F.V. Kusmartsev and F.E. Schunck, "Stars, stability, and catastrophe theory", in: Classical and Quantum Systems - Foundations and Symmetries, Proceedings of the 2nd International Wigner Symposium (Goslar, 16-20 July 1991), H.D. Doebner, W. Scherer, and F.E. Schroeck (eds.) (World Scientific Publ., Singapore, 1993), pp. 766-769.
E.W. Mielke and R. Scherzer, Phys. Rev. D 24, 2111 (1981).
P. Baekler, E.W. Mielke, R. Hecht, and F.W. Hehl, Nucl. Phys. B 288, 800 (1987).
M. Fabbrichesi and R. Iengo, Phys. Lett. B 292, 262 (1992); Ph. Jetzer and D. Scialom, Phys. Lett. A 169, 12 (1992).
Ph.D. Mannheim and D. Kazanas, Astrophys. and Space Science 185, 167 (1991).
D. Christodoulou, Comm. Math. Phys. 105, 337 (1986); Comm. Math. Phys. 109, 613 (1987).
M.W. Choptuik, Phys. Rev. Lett. 70, 9 (1993).
B. Moore, Nature 370, 629 (1994).
R. Flores and J.R. Primack, Astrophys. J. 427, L1 (1994).
M. Barriola and A. Vilenkin, Phys. Rev. Lett. 63, 341 (1989); N. Turok and D. Spergel, Phys. Rev. Lett. 64, 2736 (1990).
M. Persic, P. Salucci, and F. Stel, preprint astro-ph/9506004.
C. Carignan, Astrophys. J. 299, 59 (1985).
S.M. Kent, Astron. J. 91, 1301 (1986).
K.G. Begeman, A.H. Broeils, and R.H. Sanders, Mon. Not. R. Astr. Soc. 249, 523 (1991).
Ph.D. Mannheim and J. Kmetko, astro-ph/9602094.
J. Dubinski and R. Carlberg, Astrophys. J. 378, 496 (1991).
D. Puche and C. Carignan, Astrophys. J. 378, 487 (1991).
G. Lake, R.A. Schommer, and J.H. van Gorkum, Astron. J. 99, 547 (1990).
A.H. Broeils, Astron. Astrophys. 256, 19 (1992).
M. Jobin and C. Carignan, Astrophys. J. 100, 648 (1990).
K.G. Begeman: HI rotation curves of spiral galaxies, Ph.D. thesis, University of Groningen (1987).
T.S. van Albada, J.N. Bahcall, K. Begeman, and R. Sanscisi, Astrophys. J. 295, 305 (1985).
S.M. Kent, Astron. J. 93, 816 (1987).
A. Broeils: Dark and visible matter in spiral galaxies, Ph.D. thesis, University of Groningen (1992).
Ph.D. Mannheim, Astrophys. J. 419, 150 (1993).
Ph.D. Mannheim, astro-ph/9605085.
V.C. Rubin, W.K. Ford, and N. Thonnard, Astrophys. J. 238, 471 (1980).
R.H. Sanders, Astrophys. J. 473, 117 (1996).
M. Marcelin, A.R. Petrosian, P. Amram, and J. Boulesteix, Astron. Astrophys. 282, 363 (1994); P. Amram, E. le Coarer, M. Marcelin, C. Balkowski, W.T. Sullivan III, and V. Cayatte, Astron. Astrophys. Suppl. Ser. 94, 175 (1992); P. Amram, J. Boulesteix, M. Marcelin, C. Balkowski, V. Cayatte, and W.T. Sullivan III, Astron. Astrophys. Suppl. Ser. 113, 35 (1995).
Y. Ikebe, H. Ezawa, Y. Fukazawa, M. Hirayama, Y. Ishisaki, K. Kikuchi, H. Kubo, K. Makishima, K. Matsushita, T. Ohashi, T. Takahashi, and T. Tamura, Nature 379, 427 (1996).
S.M. Carroll, W.H. Press, and E.L. Turner, Annu. Rev. Astron. Astrophys. 30, 499 (1992).
D. Burstein, V.C. Rubin, N. Thonnard, and W.K. Ford, Astrophys. J. 253, 70 (1982).
M.-H. Rhee: A physical basis of the Tully-Fisher relation, Ph.D. thesis, University of Groningen, http://kapteyn.astro.rug.nl/thesis/theses.html (1996).
H. Dehnen and B. Rose, Astrophys. and Space Science 207, 133 (1993).
H. Dehnen, B. Rose, and K. Amer, Astrophys. and Space Science 234, 69 (1995).
S.-J. Sin, Phys. Rev. D 50, 3650 (1994); S.U. Ji and S.-J. Sin, Phys. Rev. D 50, 3655 (1994).
J. Lee and I. Koh, Phys. Rev. D 53, 2236 (1995).
J. Balakrishna, E. Seidel, and W.-M. Suen, gr-qc/9712064.
F.E. Schunck and A.R. Liddle, Phys. Lett. B 404, 25 (1997).
J. Balakrishna and F.E. Schunck, in preparation.
| [] |
[
"LQG Control Over SWIPT-enabled Wireless Communication Network",
"LQG Control Over SWIPT-enabled Wireless Communication Network"
] | [
"Huiwen Yang ",
"Lingying Huang ",
"Yuzhe Li ",
"Subhrakanti Dey ",
"Ling Shi "
] | [] | [] | In this paper, we consider using simultaneous wireless information and power transfer (SWIPT) to recharge the sensor in the LQG control, which provides a new approach to prolonging the network lifetime. We analyze the stability of the proposed system model and show that there exist two critical values for the power splitting ratio α. Then, we propose an optimization problem to derive the optimal value of α. This problem is non-convex but its numerical solution can be derived by our proposed algorithm efficiently. Moreover, we provide the feasible condition of the proposed optimization problem. Finally, simulation results are presented to verify and illustrate the main theoretical results.Index Terms-Networked control systems, SWIPT, packet drop, stability.I. INTRODUCTIONSensing and actuation are essential factors for the control of complex dynamical networks. As one of the most fundamental optimal controller in control theory, the linearquadratic-Gaussian (LQG) controller, which is a combination of a Kalman filter with a linear-quadratic regulator (LQR), has been studied for decades [1]-[5]. With the extensive use of wireless devices (e.g., sensors, actuators, and remote controllers), sensing signals and control signals are transmitted over wireless communication networks, where packets may be lost or delayed. The effect of packet loss on Kalman filtering was studied in the seminal paper[6]. Later, the impact of the network reliability on control and estimation was comprehensively and systematically analyzed by[3].In real applications, sensors are usually battery-powered and their battery capabilities are limited. As a result, sensor scheduling and power management are important for reducing power consumption and prolonging the lifetime of a network. Many existing works have considered the sensor scheduling and power control problems for remote state estimation with Kalman filter [7]-[17]. The sensor scheduling with limited communication energy was investigated by [7]-[9]. A deterministic event-based scheduling mechanism to achieve the trade-off between communication rate and estimation quality was proposed in [10], and it was extended to a stochastic event-triggering mechanism in[11]. The power control problems under more practical communication models were studied by [12]-[16]. To further increase the life span of networks, many researchers considered employing energyharvesting sensors [18]-[20]. As a result of the development of radio frequency (RF) energy harvesting circuit design, wireless sensors equipped with an RF energy receiver have the ability to harvest energy from the RF signals transmitted by some wired-supplied devices, which may have unlimited | 10.48550/arxiv.2303.15131 | [
"https://export.arxiv.org/pdf/2303.15131v1.pdf"
] | 257,766,791 | 2303.15131 | ba280a4cfd067cb7343b7acc0f75ade8eba9b91a |
LQG Control Over SWIPT-enabled Wireless Communication Network
Huiwen Yang
Lingying Huang
Yuzhe Li
Subhrakanti Dey
Ling Shi
LQG Control Over SWIPT-enabled Wireless Communication Network
Index Terms: Networked control systems, SWIPT, packet drop, stability.
In this paper, we consider using simultaneous wireless information and power transfer (SWIPT) to recharge the sensor in the LQG control, which provides a new approach to prolonging the network lifetime. We analyze the stability of the proposed system model and show that there exist two critical values for the power splitting ratio α. Then, we propose an optimization problem to derive the optimal value of α. This problem is non-convex but its numerical solution can be derived by our proposed algorithm efficiently. Moreover, we provide the feasible condition of the proposed optimization problem. Finally, simulation results are presented to verify and illustrate the main theoretical results.

Index Terms: Networked control systems, SWIPT, packet drop, stability.

I. INTRODUCTION

Sensing and actuation are essential factors for the control of complex dynamical networks. As one of the most fundamental optimal controllers in control theory, the linear-quadratic-Gaussian (LQG) controller, which is a combination of a Kalman filter with a linear-quadratic regulator (LQR), has been studied for decades [1]-[5]. With the extensive use of wireless devices (e.g., sensors, actuators, and remote controllers), sensing signals and control signals are transmitted over wireless communication networks, where packets may be lost or delayed. The effect of packet loss on Kalman filtering was studied in the seminal paper [6]. Later, the impact of the network reliability on control and estimation was comprehensively and systematically analyzed by [3].

In real applications, sensors are usually battery-powered and their battery capacities are limited. As a result, sensor scheduling and power management are important for reducing power consumption and prolonging the lifetime of a network. Many existing works have considered the sensor scheduling and power control problems for remote state estimation with the Kalman filter [7]-[17]. Sensor scheduling with limited communication energy was investigated by [7]-[9]. A deterministic event-based scheduling mechanism to achieve the trade-off between communication rate and estimation quality was proposed in [10], and it was extended to a stochastic event-triggering mechanism in [11]. Power control problems under more practical communication models were studied by [12]-[16]. To further increase the life span of networks, many researchers considered employing energy-harvesting sensors [18]-[20]. As a result of the development of radio frequency (RF) energy harvesting circuit design, wireless sensors equipped with an RF energy receiver have the ability to harvest energy from the RF signals transmitted by some wired-supplied devices, which may have unlimited
energy supply but have limited transmission power. Different from the energy harvested from external environments (e.g., solar energy, wind energy, body heat, etc.), the energy harvested from RF signals is more predictable and controllable, since the transmission power of the energy supplier can be controlled and adjusted. In the era of the fifth generation (5G) of wireless communication, there is an increasing demand for a technology that can transfer both information and power simultaneously to end-devices [21]. As a result, the concept of simultaneous wireless information and power transfer (SWIPT) was introduced in [22]. In recent years, SWIPT has aroused great interest in wireless communication networks [23]-[28], and has also been considered in networked control systems [29]. In [29], a novel power control problem for remote state estimation was formulated, where the signal transmitted by the remote estimator served as the energy source of the sensor, and the transmission power allocations of the remote estimator and the sensor were jointly optimized. In this paper, we consider applying SWIPT to the LQG control. The control signals are transmitted by a transmitter, which has a maximum transmission power. One part of the transmitted power will be used to decode the expected control input, and the other part of the transmitted power will be harvested by the sensor for the subsequent transmission of its measurements.
The contributions of this paper are threefold. First, this is the first work that considers using SWIPT to recharge the sensor in the LQG control, which provides a new approach to prolonging the network lifetime. Second, we show that there exist two critical values for the power splitting ratio α, and the cost of the infinite horizon LQG control is bounded if and only if α is between these two critical values. Third, we propose an optimization problem to derive the optimal value of α. This problem is non-convex but its numerical solution can be derived by our proposed algorithm efficiently.
The remainder of this paper is organized as follows. Section II presents the system model and the communication model. Section III introduces necessary preliminaries and provides the main theoretical results. Section IV presents the simulation results to verify the main theorems in Section III. Section V concludes this paper and presents some future work.
Notations: $\mathbb{R}$ is the set of real numbers, $\mathbb{R}^n$ is the n-dimensional Euclidean space, and $\mathbb{R}^{n\times m}$ is the set of real matrices of size n × m. For a matrix X, X > 0 (X ≥ 0) denotes that X is a positive definite (positive semidefinite) matrix, $\lambda_i^u(X)$ denotes the unstable eigenvalues of X, and Tr(X) denotes the trace of X. $\mathbb{E}[\cdot]$ is the expectation of a random variable. $\mathbb{P}(\cdot\mid\cdot)$ refers to conditional probability.

II. PROBLEM FORMULATION

A. System Model

Consider the following discrete-time LTI system:
$$x_{k+1} = A x_k + B u_k^a + w_k, \tag{1}$$
where $A \in \mathbb{R}^{n\times n}$ and $B \in \mathbb{R}^{n\times q}$ are constant matrices, $x_k \in \mathbb{R}^n$ is the system state, $u_k^a \in \mathbb{R}^q$ is the actual control input exerted on the system by the actuator, and $w_k \in \mathbb{R}^n$ is a zero-mean Gaussian noise with covariance Q ≥ 0. Moreover, we assume $x_0$ is Gaussian with mean $\bar{x}_0$ and covariance $P_0$.
The controller and the estimator are colocated with a transmitter, which transmits control signals to the actuator and transfers power to the sensor simultaneously. The control signal $u_k \in \mathbb{R}^q$ is sent over a lossy channel. A sequence of independent and identically distributed (i.i.d.) Bernoulli random variables, i.e., $\{\eta_k\}$, is used to model the control packet loss. If the control signal is successfully received by the actuator at time k, then $\eta_k = 1$; otherwise, $\eta_k = 0$. Then we have
$$u_k^a = \eta_k u_k. \tag{2}$$
The sensor can obtain the measurement of the system, i.e.,
$$y_k = C x_k + v_k, \tag{3}$$
where $C \in \mathbb{R}^{p\times n}$ is a constant matrix, $y_k \in \mathbb{R}^p$ is the sensor measurement, and $v_k \in \mathbb{R}^p$ is a zero-mean Gaussian noise with covariance R > 0. The sensor sends $y_k$ to the estimator over a lossy channel, which can be modeled by a sequence of i.i.d. Bernoulli random variables $\{\gamma_k\}$. If $y_k$ is received successfully by the estimator, then $\gamma_k = 1$; otherwise, $\gamma_k = 0$.
In this paper, we consider that the transmission control protocol (TCP) is adopted by the system, which means the senders know whether the packet delivery was successful within the same sampling period. Denote the information set available to the estimator at time k by $\mathcal{L}_k = \{\gamma_1 y_1, \ldots, \gamma_k y_k, \gamma_1, \ldots, \gamma_k, \eta_1, \ldots, \eta_{k-1}\}$. The estimator needs to determine the optimal state estimate $\hat{x}_k$ based on $\mathcal{L}_k$, and the controller needs to calculate the optimal control input $u_k$ based on $\hat{x}_k$.
B. Simultaneous Wireless Information and Power Transfer
By simultaneous wireless information and power transfer (SWIPT), the transmitter at the estimator/controller side can transmit the control signal to the actuator and transmit power to the sensor at the same time. The transmitter uses a constant transmission power p. The portion of the power used for transmitting the control signal is denoted by αp, and the portion for recharging the sensor is (1 − α)p. The actuator will decode the received control signal and the sensor will use all the harvested power to transmit its local estimate at each time.
The probability of successful control signal reception for the actuator is given by
$\mathbb{P}(\eta_k = 1 \mid h_a, \alpha, p) \triangleq \eta(h_a \alpha p), \quad (4)$
where $h_a$ is the fading of the channel from the transmitter to the actuator, and $\eta(\cdot): [0, \infty) \to [0, 1]$ is a monotonically increasing continuous function determined by the particular digital modulation mode and the channel state from the controller to the actuator. For simplicity, we write $\eta$ for this probability when the power $p$ and the ratio $\alpha$ are fixed. The harvested energy $r$ is characterized as a function of the received power $h_s(1-\alpha)p$, i.e.,
$r = \psi(h_s (1 - \alpha) p), \quad (5)$
where $h_s$ is the fading of the channel from the transmitter to the sensor, and $\psi$ is a nondecreasing function determined by the energy harvesting circuit. Similarly, the probability of successful reception of the sensor's local estimate at the estimator is given by
$\mathbb{P}(\gamma_k = 1 \mid h_e, r) \triangleq \gamma(h_e r), \quad (6)$
where $h_e$ is the fading of the channel from the sensor to the estimator, and $\gamma(\cdot): [0, \infty) \to [0, 1]$ is also a monotonically increasing continuous function determined by the particular digital modulation mode and the channel state from the sensor to the estimator; we write $\gamma$ for this probability for simplicity.
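As a toy illustration of how the split ratio α trades the two link qualities against each other (the functional forms below are our own assumptions; the paper only requires η and γ to be monotonically increasing and continuous, and ψ nondecreasing):

```python
import math

def eta_prob(h_a, alpha, p_tx):
    """Control-packet success probability, cf. (4); the exponential ramp is
    an arbitrary monotone map into [0, 1], used only for illustration."""
    return 1.0 - math.exp(-h_a * alpha * p_tx)

def harvested_power(h_s, alpha, p_tx, xi=1.0):
    """Harvested power r, cf. (5), using a linear psi (as in Section IV)."""
    return xi * h_s * (1.0 - alpha) * p_tx

def gamma_prob(h_e, r):
    """Estimate-packet success probability, cf. (6)."""
    return 1.0 - math.exp(-h_e * r)
```

Increasing α improves eta_prob but lowers harvested_power and hence gamma_prob, which is exactly the trade-off analyzed below.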
III. LQG CONTROL WITH SWIPT
A. Preliminaries of LQG control
In this subsection, we present some useful lemmas, which are necessary for the derivation of the main results.
We first define the following variables:
$\hat{x}_k \triangleq \mathbb{E}[x_k \mid \mathcal{L}_k], \quad (7)$
$P_k \triangleq \mathbb{E}[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T \mid \mathcal{L}_k]. \quad (8)$
The optimal estimator can be derived as
$\hat{x}^-_{k+1} \triangleq A\,\mathbb{E}[x_k \mid \mathcal{L}_k] + \eta_k B u_k = A \hat{x}_k + \eta_k B u_k, \quad (9)$
$P^-_{k+1} \triangleq \mathbb{E}[(x_{k+1} - \hat{x}^-_{k+1})(x_{k+1} - \hat{x}^-_{k+1})^T \mid \mathcal{L}_k] \quad (10)$
$= A P_k A^T + Q. \quad (11)$
For the TCP-like system, the optimal estimator is the following:
$\hat{x}_{k+1} = \hat{x}^-_{k+1} + \gamma_{k+1} K_{k+1} (y_{k+1} - C \hat{x}^-_{k+1}), \quad (12)$
$P_{k+1} = P^-_{k+1} - \gamma_{k+1} K_{k+1} C P^-_{k+1}, \quad (13)$
where $K_{k+1} = P^-_{k+1} C^T (C P^-_{k+1} C^T + R)^{-1}$ is the Kalman gain.
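The resulting intermittent Kalman filter is standard; a sketch combining the time update (9)-(11) with the measurement update (12)-(13) (names are ours):

```python
import numpy as np

def estimator_step(A, B, C, Q, R, x_hat, P, u, eta, y, gamma):
    """One step of the TCP-like optimal estimator, cf. (9)-(13)."""
    # Time update (9), (11): eta_k is known to the estimator under TCP.
    x_pred = A @ x_hat + eta * (B @ u)
    P_pred = A @ P @ A.T + Q
    if gamma == 1:
        # Measurement received: Kalman correction (12)-(13).
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        return x_pred + K @ (y - C @ x_pred), P_pred - K @ C @ P_pred
    # Packet dropped: keep the prediction.
    return x_pred, P_pred
```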
Consider the following cost function:
$J_N(\bar{x}_0, P_0) = \mathbb{E}\Big[x_N^T W_N x_N + \sum_{k=0}^{N-1} \big(x_k^T W_k x_k + (u^a_k)^T U_k u^a_k\big) \,\Big|\, \bar{x}_0, P_0\Big], \quad (14)$
then, to obtain the optimal control input sequence, the following optimization problem should be solved
$\min_{u_k,\ k=0,1,\ldots,N-1} J_N(\bar{x}_0, P_0). \quad (15)$
Define the optimal value function $V_k(x_k)$ as follows:
$V_N(x_N) \triangleq \mathbb{E}[x_N^T W_N x_N \mid \mathcal{L}_N], \quad (16)$
$V_k(x_k) \triangleq \min_{u_k} \mathbb{E}[x_k^T W_k x_k + \eta_k u_k^T U_k u_k + V_{k+1}(x_{k+1}) \mid \mathcal{L}_k], \quad (17)$
where $k = 1, 2, \ldots, N-1$. Using dynamic programming, it can be shown that $J_N^* \triangleq \min_{u_k,\ k=0,1,\ldots,N-1} J_N(\bar{x}_0, P_0) = V_0(x_0)$. For the TCP-like system, the following lemma holds.

Lemma 3.1: [3, Lemma 5.1] The value function $V_k(x_k)$ in (17) has the following form:
$V_k(x_k) = \mathbb{E}[x_k^T S_k x_k \mid \mathcal{L}_k] + c_k, \quad k = 0, 1, \ldots, N, \quad (18)$
where
$S_k = A^T S_{k+1} A + W_k - \eta A^T S_{k+1} B (B^T S_{k+1} B + U_k)^{-1} B^T S_{k+1} A, \quad (19)$
$c_k = \mathrm{Tr}\big((A^T S_{k+1} A + W_k - S_k) P_k\big) + \mathrm{Tr}(S_{k+1} Q) + \mathbb{E}[c_{k+1} \mid \mathcal{L}_k], \quad (20)$
with initial values $S_N = W_N$ and $c_N = 0$, and the optimal control input is given by
$u_k = -(B^T S_{k+1} B + U_k)^{-1} B^T S_{k+1} A \hat{x}_k = L_k \hat{x}_k. \quad (21)$

Since $J_N^*(\bar{x}_0, P_0) = V_0(x_0)$, we have
$J_N^*(\bar{x}_0, P_0) = \bar{x}_0^T S_0 \bar{x}_0 + \mathrm{Tr}(S_0 P_0) + \sum_{k=0}^{N-1} \mathrm{Tr}(S_{k+1} Q) + \sum_{k=0}^{N-1} \mathrm{Tr}\big((A^T S_{k+1} A + W_k - S_k)\, \mathbb{E}_\gamma[P_k]\big), \quad (22)$
where $\mathbb{E}_\gamma[\cdot]$ explicitly indicates that the expectation is calculated with respect to the arrival sequence $\{\gamma_k\}$.
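The recursion (19) and the gains (21) can be evaluated numerically; a sketch under the additional assumption of constant weights $W_k = W$, $U_k = U$:

```python
import numpy as np

def backward_riccati(A, B, W, U, eta, N, W_N):
    """Backward recursion (19) with terminal condition S_N = W_N; returns
    S_0 and the feedback gains L_k of (21) for k = 0, ..., N-1."""
    S = W_N
    gains = []
    for _ in range(N):
        M = B.T @ S @ B + U
        L = -np.linalg.solve(M, B.T @ S @ A)      # gain L_k in (21)
        # (19): note that B @ L = -B (B^T S B + U)^{-1} B^T S A
        S = A.T @ S @ A + W + eta * A.T @ S @ B @ L
        gains.append(L)
    return S, gains[::-1]   # gains ordered k = 0, ..., N-1
```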
The infinite-horizon LQG cost can be obtained by taking the limit $N \to +\infty$ in the previous equations.
Define the Modified Algebraic Riccati Equation (MARE) as
$S = \Pi(S, A, B, W, U, \eta), \quad (23)$
where $\Pi(S, A, B, W, U, \eta) \triangleq A^T S A + W - \eta A^T S B (B^T S B + U)^{-1} B^T S A$. Then, we have the following lemmas.

Lemma 3.2: [3, Lemma 5.4] Consider the modified Riccati equation defined in (23). Let $A$ be unstable, $(A, B)$ be controllable, and $(A, W^{1/2})$ be observable. Then, the MARE has a unique strictly positive definite solution $S_\infty$ if and only if $\eta > \eta_c$, where $\eta_c$ is the critical arrival probability defined as
$\eta_c \triangleq \inf_\eta \{0 < \eta < 1 \mid S = \Pi(S, A, B, W, U, \eta),\ S \geq 0\}. \quad (24)$

Lemma 3.3: [3, Theorem 5.5] Assume that $(A, Q^{1/2})$ is controllable, $(A, C)$
is observable, and $A$ is unstable. Then there exists a critical observation arrival probability $\gamma_c$ such that the expected estimation error covariance is bounded if and only if the observation arrival probability is greater than the critical arrival probability, i.e.,
$\mathbb{E}_\gamma[P_k] \leq M\ \ \forall k \iff \gamma > \gamma_c, \quad (25)$
where $M$ is a positive definite matrix possibly dependent on $P_0$. Moreover, it is possible to compute a lower and an upper bound for the critical observation arrival probability $\gamma_c$, i.e.,
$p_{\min} \leq \gamma_c \leq \gamma_{\max} \leq p_{\max}$, where
$p_{\min} \triangleq 1 - \frac{1}{\max_i |\lambda^u_i(A)|^2}, \qquad p_{\max} \triangleq 1 - \frac{1}{\prod_i |\lambda^u_i(A)|^2},$
$\gamma_{\max} \triangleq \inf_\gamma \{0 \leq \gamma \leq 1 \mid P = \Pi(P, A^T, C^T, Q, R, \gamma),\ P \geq 0\}.$
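The eigenvalue bounds on $\gamma_c$ are straightforward to evaluate; a sketch:

```python
import numpy as np

def gamma_c_bounds(A):
    """Bounds p_min <= gamma_c <= p_max of Lemma 3.3, computed from the
    unstable eigenvalues of A (A is assumed unstable, so the list is nonempty)."""
    lam_u = [abs(l) for l in np.linalg.eigvals(A) if abs(l) > 1.0]
    p_min = 1.0 - 1.0 / max(lam_u) ** 2
    p_max = 1.0 - 1.0 / float(np.prod([l ** 2 for l in lam_u]))
    return p_min, p_max

# For the scalar A = 1.2 used in Section IV both bounds coincide:
# gamma_c_bounds(np.array([[1.2]])) gives (1 - 1/1.44, 1 - 1/1.44).
```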
B. Properties of optimal control
As introduced in the last subsection, there exists a critical value for the probability of successful control signal reception, i.e., $\eta_c$ defined in (24), and a critical value for the probability of successful reception of the sensor's local estimate, i.e., $\gamma_c$ defined in (25). In this subsection, we show that there exist two critical values, i.e., the left critical value $\underline{\alpha}$ and the right critical value $\overline{\alpha}$, for the power splitting ratio $\alpha$. The existence of the two critical values results from the fact that $\alpha$ realizes the trade-off between control and estimation. The infinite-horizon average cost of the LQG control will be unbounded if $\alpha$ is smaller than $\underline{\alpha}$ or larger than $\overline{\alpha}$. Moreover, the convergence property of the cost under the optimal control law is analyzed.
Define the following functions:
$\tilde{h}(X) \triangleq A X A^T + Q, \quad (26)$
$\hat{h}(X) \triangleq A^T X A + W, \quad (27)$
$\tilde{g}(\alpha, X) \triangleq X - \gamma(r)\, X C^T (C X C^T + R)^{-1} C X, \quad (28)$
$\hat{g}(\alpha, X) \triangleq X - \eta(\alpha)\, X B (B^T X B + U)^{-1} B^T X. \quad (29)$
For the sake of simplicity, we use $\tilde{g}_\alpha(X)$ and $\hat{g}_\alpha(X)$ to denote $\tilde{g}(\alpha, X)$ and $\hat{g}(\alpha, X)$, respectively. Also, define
$S(\alpha) = \lim_{k \to +\infty} (\hat{h} \circ \hat{g}_\alpha)^k(S_N), \quad (30)$
$\tilde{P}(\alpha) = \lim_{k \to +\infty} (\tilde{g}_\alpha \circ \tilde{h})^k(P_0), \quad (31)$
where $S_N = W$. Then, we have the following lemma.
Lemma 3.4: If $\alpha_1 \leq \alpha_2$, then $S(\alpha_1) \leq S(\alpha_2)$ and $\tilde{P}(\alpha_1) \geq \tilde{P}(\alpha_2)$.
Proof: See Appendix A.
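For numerical work, the operators (26)-(29) translate directly into code; a sketch (our naming), where gam and eta stand for the α-dependent probabilities $\gamma(r)$ and $\eta(h_a \alpha p)$:

```python
import numpy as np

def h_tilde(X, A, Q):        # (26)
    return A @ X @ A.T + Q

def h_hat(X, A, W):          # (27)
    return A.T @ X @ A + W

def g_tilde(X, C, R, gam):   # (28)
    return X - gam * X @ C.T @ np.linalg.inv(C @ X @ C.T + R) @ C @ X

def g_hat(X, B, U, eta):     # (29)
    return X - eta * X @ B @ np.linalg.inv(B.T @ X @ B + U) @ B.T @ X
```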
Theorem 3.5: The cost $J_N^*$ can be bounded as follows: $J_N^{\min} \leq J_N^* \leq J_N^{\max}$, where
$J_N^{\min} = \bar{x}_0^T S_0 \bar{x}_0 + \mathrm{Tr}(S_0 P_0) + \sum_{k=0}^{N-1} \mathrm{Tr}(S_{k+1} Q) + \big(1 - \gamma(h_e \psi(h_s(1-\alpha)p))\big) \sum_{k=0}^{N-1} \mathrm{Tr}\big((A^T S_{k+1} A + W_k - S_k)\,\tilde{P}_k\big), \quad (32)$
$J_N^{\max} = \bar{x}_0^T S_0 \bar{x}_0 + \mathrm{Tr}(S_0 P_0) + \sum_{k=0}^{N-1} \mathrm{Tr}(S_{k+1} Q) + \sum_{k=0}^{N-1} \mathrm{Tr}\Big((A^T S_{k+1} A + W_k - S_k)\big(\bar{P}_k - \gamma(h_e \psi(h_s(1-\alpha)p))\,\bar{P}_k C^T (C \bar{P}_k C^T + R)^{-1} C \bar{P}_k\big)\Big), \quad (33)$
and
$\tilde{P}_k = \big(1 - \gamma(h_e \psi(h_s(1-\alpha)p))\big) A \tilde{P}_{k-1} A^T + Q, \quad (34)$
$\bar{P}_{k+1} = A \bar{P}_k A^T + Q - \gamma(h_e \psi(h_s(1-\alpha)p))\, A \bar{P}_k C^T (C \bar{P}_k C^T + R)^{-1} C \bar{P}_k A^T. \quad (35)$
Proof: See Appendix B.
Theorem 3.6: Consider a system with $W_k = W$, $\forall k$ and $U_k = U$, $\forall k$. There exist two critical values $\underline{\alpha}$ and $\overline{\alpha}$ such that the infinite-horizon average cost $J_\infty^* \triangleq \lim_{N \to \infty} \frac{1}{N} J_N^*(\bar{x}_0, P_0)$ can be bounded if and only if $\underline{\alpha} < \alpha < \overline{\alpha}$. Moreover, $J_\infty^{\min} \leq J_\infty^* \leq J_\infty^{\max}$, where
$J_\infty^{\min} \triangleq \mathrm{Tr}(SQ) + \big(1 - \gamma(h_e \psi(h_s(1-\alpha)p))\big)\,\mathrm{Tr}\big((A^T S A + W - S)\,\tilde{P}\big), \quad (36)$
$J_\infty^{\max} \triangleq \mathrm{Tr}(SQ) + \mathrm{Tr}\Big((A^T S A + W - S)\big(\bar{P} - \gamma(h_e \psi(h_s(1-\alpha)p))\,\bar{P} C^T (C \bar{P} C^T + R)^{-1} C \bar{P}\big)\Big), \quad (37)$
and
$S = A^T S A + W - \eta(h_a \alpha p) A^T S B (B^T S B + U)^{-1} B^T S A, \quad (38)$
$\tilde{P} = \big(1 - \gamma(h_e \psi(h_s(1-\alpha)p))\big) A \tilde{P} A^T + Q, \quad (39)$
$\bar{P} = A \bar{P} A^T + Q - \gamma(h_e \psi(h_s(1-\alpha)p))\, A \bar{P} C^T (C \bar{P} C^T + R)^{-1} C \bar{P} A^T. \quad (40)$

Moreover, the critical values $\underline{\alpha}$ and $\overline{\alpha}$ are the solutions of the following equations:
$\eta(h_a \underline{\alpha} p) = \eta_c, \quad (41)$
$\gamma(h_e \psi(h_s(1-\overline{\alpha})p)) = \gamma_c, \quad (42)$
where $\eta_c$ and $\gamma_c$ are the critical probabilities mentioned in Lemma 3.2 and Lemma 3.3, respectively.
Proof: See Appendix C.
Remark 3.7: If $\alpha$ is smaller than the left critical value $\underline{\alpha}$, the power for transmitting the control signals will be relatively low. As a result, the cost is unbounded due to inadequate control. Similarly, if $\alpha$ is larger than the right critical value $\overline{\alpha}$, the power for transmitting the measurements will be relatively low. Consequently, the cost will be unbounded due to inaccurate estimates. Note that the exact value of $\overline{\alpha}$ cannot be obtained since $\gamma_c$ cannot be computed exactly either, but upper and lower bounds on $\overline{\alpha}$ can be found from the bounds on $\gamma_c$ in Lemma 3.3.
C. Optimization Problem
When the power splitting ratio $\alpha$ satisfies $\underline{\alpha} < \alpha < \overline{\alpha}$, the LQG cost $J_\infty^*$ is bounded. However, different values of $\alpha$ result in different costs. In this subsection, an optimization problem is proposed as an aid to the selection of $\alpha$. The intention is to minimize the cost $J_\infty^*$ as much as possible. However, $J_\infty^*$ cannot be minimized directly since it depends on the specific realization of the sequence $\{\gamma_k\}$ and cannot be computed analytically. From the perspective of robustness, it is therefore reasonable to minimize the upper bound of $J_\infty^*$, i.e., $J_\infty^{\max}$. Based on the above, we have the following optimization problem:
$\min_{\alpha, \bar{P}, S} \; J_\infty^{\max}, \quad (43)$
$\text{s.t.} \quad S = A^T S A + W - \eta(h_a \alpha p) A^T S B (B^T S B + U)^{-1} B^T S A, \quad (44)$
$\qquad \bar{P} = A \bar{P} A^T + Q - \gamma(h_e \psi(h_s(1-\alpha)p))\, A \bar{P} C^T (C \bar{P} C^T + R)^{-1} C \bar{P} A^T, \quad (45)$
$\qquad 0 \leq \alpha \leq 1, \quad (46)$
where $p$ is a positive constant, and $\eta(\cdot): [0, \infty) \to [0, 1]$ and $\gamma(\cdot): [0, \infty) \to [0, 1]$ are monotonically nondecreasing continuous concave functions.

Proposition 3.8: Problem (43) is equivalent to the following problem:
$\min_{\alpha, \tilde{P}, S} \; \mathrm{Tr}\big((A^T S A + W - S)\tilde{P}\big) + \mathrm{Tr}(SQ), \quad (47)$
$\text{s.t.} \quad S = \hat{h} \circ \hat{g}_\alpha(S), \quad (48)$
$\qquad \tilde{P} = \tilde{g}_\alpha \circ \tilde{h}(\tilde{P}), \quad (49)$
$\qquad 0 \leq \alpha \leq 1. \quad (50)$
Proof: See Appendix D.
Remark 3.9: In this paper, we only consider the situation where problem (43) is feasible, i.e., $\underline{\alpha} \leq \overline{\alpha}$. Otherwise, the infinite-horizon average cost $J_\infty^{\max}$ is unbounded and there is no need to study this problem.
At first sight, problem (43) is non-convex and difficult to convert into a convex optimization problem. However, its numerical solution can be easily obtained. First, we discretize $\alpha$ with interval size $\delta$. With $\alpha$ fixed, $S$ and $\tilde{P}$ can be uniquely determined by solving equations (48) and (49), respectively, via the iterative method, i.e., executing the following iterations until $S$ and $\tilde{P}$ converge to their steady-state values:
$S_{k+1} = \hat{h} \circ \hat{g}_\alpha(S_k), \quad (51)$
$\tilde{P}_{k+1} = \tilde{g}_\alpha \circ \tilde{h}(\tilde{P}_k), \quad (52)$
where $S_0$ and $\tilde{P}_0$ should be initialized. Note that $S$ and $\tilde{P}$ converge to their steady-state values exponentially fast [2]; therefore, they can be calculated efficiently by the iterative method. Then, we substitute $\alpha$, $S$ and $\tilde{P}$ into the objective function of (43) and obtain the corresponding objective value. By iterating through all the discretized values, i.e., $\alpha \in \{\underline{\alpha} + k\delta,\ k = 0, 1, \ldots \mid \underline{\alpha} \leq \underline{\alpha} + k\delta \leq \overline{\alpha}\}$, the value of $\alpha$ that minimizes the objective function can be found. The search procedure for obtaining the numerical solution of problem (43) is summarized in Algorithm 1.
Algorithm 1 Obtain a numerical solution of problem (43)

Require: $\delta$, $\underline{\alpha}$, $\overline{\alpha}$, $\tilde{P}_0$, $S_0$, $A$, $B$, $C$, $Q$, $R$, $W$, $U$
Ensure: $\alpha$, $J$
  initialize $\tilde{\alpha} = \underline{\alpha}$, $\alpha = \underline{\alpha}$, $J = 0$
  while $\tilde{\alpha} \in [\underline{\alpha}, \overline{\alpha}]$ do
    $\tilde{P} = \tilde{P}_0$, $S = S_0$
    repeat $\tilde{P} = \tilde{g}_{\tilde{\alpha}} \circ \tilde{h}(\tilde{P})$ until convergence
    repeat $S = \hat{h} \circ \hat{g}_{\tilde{\alpha}}(S)$ until convergence
    compute $J_\infty^{\max}$ according to (37)
    if $\tilde{\alpha} = \underline{\alpha}$ then $J = J_\infty^{\max}$
    else if $J_\infty^{\max} < J$ then $J = J_\infty^{\max}$, $\alpha = \tilde{\alpha}$
    end if
    $\tilde{\alpha} = \tilde{\alpha} + \delta$
  end while
  return $\alpha$, $J$
Remark 3.10: The accuracy of the obtained numerical solution depends on the selected parameter $\delta$. The optimality gap is bounded by $\delta$, i.e., $|\alpha - \alpha^*| < \delta$, where $\alpha$ is the output of Algorithm 1 and $\alpha^*$ is the optimal solution of problem (43). A smaller $\delta$ brings higher accuracy, but also leads to more computational overhead.
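A compact Python rendering of Algorithm 1, reusing the operator sketches given after Lemma 3.4; eta_of and gamma_of map α to the two reception probabilities, and j_max is an assumed helper evaluating (37):

```python
import numpy as np

def algorithm1(alpha_lo, alpha_hi, delta, S0, P0, A, B, C, Q, R, W, U,
               eta_of, gamma_of, j_max, tol=1e-9, max_iter=100_000):
    """Grid search over alpha: iterate (51)-(52) to the fixed points S and
    P-tilde, evaluate the bound (37), and keep the minimizing alpha."""
    best_alpha, best_J = alpha_lo, np.inf
    alpha = alpha_lo
    while alpha <= alpha_hi:
        S, P = S0, P0
        for _ in range(max_iter):            # fixed-point iterations (51)-(52)
            S_new = h_hat(g_hat(S, B, U, eta_of(alpha)), A, W)
            P_new = g_tilde(h_tilde(P, A, Q), C, R, gamma_of(alpha))
            converged = (np.linalg.norm(S_new - S) < tol
                         and np.linalg.norm(P_new - P) < tol)
            S, P = S_new, P_new
            if converged:
                break
        J = j_max(alpha, S, P)               # upper bound (37) at this alpha
        if J < best_J:
            best_alpha, best_J = alpha, J
        alpha += delta
    return best_alpha, best_J
```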
IV. SIMULATION
In this section, we choose $A = 1.2$ and $B = C = Q = R = W = U = 1$. Moreover, we set the channel fading to $h_s = h_a = 0$ dB and $h_e = -3$ dB. The transmission power of the transmitter is $p = 0.3$ mW. In this simulation, we adopt the linear energy harvesting model [29], i.e.,
$r = \xi(h_s (1 - \alpha) p + \sigma_e^2), \quad (53)$
where $\sigma_e^2$ denotes the noise power introduced by the receiver antenna and $\xi$ denotes the energy conversion efficiency, a constant characterizing the energy loss of converting the harvested energy to electrical energy. For convenience, $\xi$ is assumed to be 1. In practice, $\sigma_e^2$ is much smaller than $h_s(1-\alpha)p$, so it can be neglected. Thus, for simplicity, we assume $\sigma_e^2 = 0$, i.e.,
$r = h_s (1 - \alpha) p. \quad (54)$
The measurements $y_k$ and the control signals $u_k$ are transmitted via the binary phase shift keying (BPSK) transmission scheme with $B$ bits per packet (symbol). Then we have
$\mathbb{P}(\eta_k = 1 \mid h_a, \alpha, p) = \eta(h_a \alpha p) = \Bigg( \int_{-\infty}^{\sqrt{\frac{2 h_a \alpha p T_s}{B N_0}}} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx \Bigg)^{B}, \quad (55)$
and
$\mathbb{P}(\gamma_k = 1 \mid h_e, r) = \gamma(h_e r) = \Bigg( \int_{-\infty}^{\sqrt{\frac{2 h_e r T_s}{B N_0}}} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx \Bigg)^{B}, \quad (56)$
where $N_0/2$ is the two-sided noise power spectral density and $T_s$ is the symbol transmission time. We choose $B = 2$, $T_s = 2 \times 10^{-7}$ s, and $N_0 = 2 \times 10^{-8}$ W/Hz in the simulations.

First, we set the time horizon $T = 500$ and conduct a Monte Carlo experiment with 1000 runs. Fig. 2 shows the empirical cost for different values of the power splitting ratio $\alpha$. It can be seen that there exist two critical values for the ratio, i.e., $\underline{\alpha} = 0.012$ and $\overline{\alpha} = 0.994$. When $\alpha$ is less than the left critical value or larger than the right critical value, the cost is unbounded due to inadequate control and inaccurate estimation, respectively.

Then, we present some simulation results to illustrate the main results of Section III. Figs. 3-6 plot the probability of successful packet reception (left) and the upper bound $J_\infty^{\max}$ and lower bound $J_\infty^{\min}$ of $J_\infty^*$ defined in Theorem 3.6 (right) under different simulation parameter settings. Clearly, two critical values for $\alpha$ can also be observed. Figs. 3-6 also show the value of $\alpha$ found by Algorithm 1 ($\delta = 0.02$). By comparing Fig. 3 and Fig. 4, one can see that the stability of the system affects the two critical values of $\alpha$: a more unstable system leads to a lower $\overline{\alpha}$ and a higher $\underline{\alpha}$. By comparing Fig. 3 and Fig. 5, it can be seen that the value of $\alpha$ minimizing $J_\infty^{\max}$ increases when the cost of control is increased. Since inadequate control causes more cost than inaccurate estimation, the transmitter tends to use more energy to transmit the control signal to avoid a higher total cost. Conversely, when the cost of estimation is increased, the transmitter tends to transfer more energy to the sensor. By comparing Fig. 3 and Fig. 6, the influence of the channel states can be observed. When the state of the channel from the transmitter to the actuator, i.e., $h_a$, gets worse, the transmitter uses more energy to transmit the control signal to guarantee the control performance; therefore, the value of $\alpha$ minimizing $J_\infty^{\max}$ increases. Similarly, when $h_s$ and $h_e$ get worse, the transmitter transfers more energy to the sensor to ensure the estimation performance, and the value of $\alpha$ minimizing $J_\infty^{\max}$ decreases.
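For reference, the reception probabilities (55)-(56) are simply a standard normal CDF raised to the $B$-th power; a sketch using the constants above:

```python
import math
from scipy.stats import norm

def bpsk_success(h, power, Ts=2e-7, B=2, N0=2e-8):
    """Packet success probability for a B-bit BPSK packet, cf. (55)-(56)."""
    upper = math.sqrt(2.0 * h * power * Ts / (B * N0))
    return norm.cdf(upper) ** B

# Control link: bpsk_success(h_a, alpha * p); sensor link: bpsk_success(h_e, r).
```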
V. CONCLUSION
In this paper, we consider using a novel technology, the so-called SWIPT, to recharge the sensor in LQG control, which provides a new approach to prolonging the network lifetime. We show that there exist two critical values for the power splitting ratio $\alpha$, and that the cost of the infinite-horizon LQG control is bounded if and only if $\alpha$ is between these two critical values. Then, we propose an optimization problem to derive the optimal value of $\alpha$. This problem is non-convex, but its numerical solution can be derived efficiently by our proposed algorithm. Moreover, we provide the feasibility condition of the proposed optimization problem. Simulation results are presented to verify and illustrate the main theoretical results. One direction of future work is to consider beamforming when devices are equipped with multiple antennas, which can further improve the spectral efficiency of the system. Another direction is to consider the scenario where there are multiple sensors and multiple actuators.

APPENDIX A
PROOF OF LEMMA 3.4

We will prove this lemma by induction. The proof consists of two steps:
S1. If $0 \leq \alpha_1 \leq \alpha_2 \leq 1$, then $\hat{h} \circ \hat{g}_{\alpha_1}(X) \leq \hat{h} \circ \hat{g}_{\alpha_2}(X)$;
S2. If $(\hat{h} \circ \hat{g}_{\alpha_1})^k(X) \leq (\hat{h} \circ \hat{g}_{\alpha_2})^k(X)$, then $(\hat{h} \circ \hat{g}_{\alpha_1})^{k+1}(X) \leq (\hat{h} \circ \hat{g}_{\alpha_2})^{k+1}(X)$, $\forall\, 0 \leq \alpha_1 \leq \alpha_2 \leq 1$.
The first step can be easily completed. We will mainly focus on the second step.
Proposition A.1: If $X \geq Y \geq 0$, then $\hat{h} \circ \hat{g}_\alpha(X) \geq \hat{h} \circ \hat{g}_\alpha(Y)$.
If $(\hat{h} \circ \hat{g}_{\alpha_1})^k(X) \leq (\hat{h} \circ \hat{g}_{\alpha_2})^k(X)$, we have
$(\hat{h} \circ \hat{g}_{\alpha_1})^{k+1}(X) = \hat{h} \circ \hat{g}_{\alpha_1}\big((\hat{h} \circ \hat{g}_{\alpha_1})^k(X)\big) \leq \hat{h} \circ \hat{g}_{\alpha_1}\big((\hat{h} \circ \hat{g}_{\alpha_2})^k(X)\big) \leq \hat{h} \circ \hat{g}_{\alpha_2}\big((\hat{h} \circ \hat{g}_{\alpha_2})^k(X)\big) = (\hat{h} \circ \hat{g}_{\alpha_2})^{k+1}(X), \quad (57)$
where the first inequality follows from Proposition A.1 and the second from S1.

APPENDIX B
PROOF OF THEOREM 3.5

First, we have the following lemma.
Lemma B.1: The expected error covariance matrix $\mathbb{E}_\gamma[P_k]$ satisfies the following bounds:
$\tilde{P}_k \leq \mathbb{E}_\gamma[P_k] \leq \hat{P}_k, \quad (58)$
where
$\tilde{P}_k = \big(1 - \gamma(h_e \psi(h_s(1-\alpha)p))\big)\, \bar{P}_k, \quad (59)$
$\hat{P}_k = \bar{P}_k - \gamma(h_e \psi(h_s(1-\alpha)p))\, \bar{P}_k C^T (C \bar{P}_k C^T + R)^{-1} C \bar{P}_k, \quad (60)$
with $\bar{P}_k$ given by (35).
Proof: This lemma can be proved based on the observation that the matrices $P^-_{k+1}$ and $P_k$ are concave and monotonic functions of $P^-_k$. The proof follows directly from Lemma 5.2 in [3] and is thus omitted.
Then, Theorem 3.5 holds.
APPENDIX C
PROOF OF THEOREM 3.6

Following Lemma 3.2 and Lemma 3.3, $\underline{\alpha}$ and $\overline{\alpha}$ must respectively satisfy equation (41) and equation (42). Since $\eta(\cdot)$ and $\gamma(\cdot)$ are monotonically increasing continuous functions, equations (41) and (42) each have a unique solution. Then we have $\eta > \eta_c$ if and only if $\alpha > \underline{\alpha}$. Similarly, we have $\gamma > \gamma_c$ if and only if $\alpha < \overline{\alpha}$. Since $\lim_{k \to \infty} \tilde{P}_k = \tilde{P}$ and $\lim_{k \to \infty} \bar{P}_k = \bar{P}$, the lower bound $J_\infty^{\min} = \lim_{N \to \infty} \frac{1}{N} J_N^{\min}$ and the upper bound $J_\infty^{\max} = \lim_{N \to \infty} \frac{1}{N} J_N^{\max}$ can be derived as (36) and (37) based on Theorem 3.5, respectively. One can see that when $\gamma > \gamma_c > p_{\min}$, the solution of equation (39) exists. According to Lemma 3.2 and Lemma 3.3, the MAREs (38) and (40) have a unique positive definite solution if and only if $\eta > \eta_c$ and $\gamma > \gamma_c$, respectively. Based on all of the above, Theorem 3.6 holds.

APPENDIX D
PROOF OF PROPOSITION 3.8

It is easy to see that
$S = \hat{h} \circ \hat{g}_\alpha(S), \quad (61)$
$\bar{P} = \tilde{h} \circ \tilde{g}_\alpha(\bar{P}). \quad (62)$
Let $\tilde{P} = \tilde{g}_\alpha(\bar{P})$; then we have $\tilde{P} = \tilde{g}_\alpha(\tilde{h} \circ \tilde{g}_\alpha(\bar{P})) = \tilde{g}_\alpha \circ \tilde{h} \circ \tilde{g}_\alpha(\bar{P}) = \tilde{g}_\alpha \circ \tilde{h}(\tilde{P}).$
Then it can be seen that problem (43) is equivalent to problem (47).
Fig. 2. Empirical cost $J_{\mathrm{emp}}$ for different values of the power splitting ratio $\alpha$.
Fig. 3. Simulation results with $A = 1.2$, $B = Q = R = W = U = 1$, $h_s = h_a = 0$ dB and $h_e = -3$ dB.
Fig. 4. Simulation results with $A = 1.3$, $B = Q = R = W = U = 1$, $h_s = h_a = 0$ dB and $h_e = -3$ dB.
Fig. 5. Simulation results with $A = 1.2$, $B = Q = R = W = 1$, $U = 10$, $h_s = h_a = 0$ dB and $h_e = -3$ dB.
Fig. 6. Simulation results with $A = 1.2$, $B = Q = R = W = U = 1$, $h_s = 0$ dB, $h_a = -3$ dB and $h_e = -3$ dB.
Fig. 1. System Model
REFERENCES

[1] J. Nilsson, B. Bernhardsson, and B. Wittenmark, "Stochastic analysis and control of real-time systems with random time delays," Automatica, vol. 34, no. 1, pp. 57-64, 1998.
[2] B. D. Anderson and J. B. Moore, Optimal Filtering. Courier Corporation, 2012.
[3] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. S. Sastry, "Foundations of control and estimation over lossy networks," Proceedings of the IEEE, vol. 95, no. 1, pp. 163-187, 2007.
[4] V. Gupta, B. Hassibi, and R. M. Murray, "Optimal LQG control across packet-dropping links," Systems & Control Letters, vol. 56, no. 6, pp. 439-446, 2007.
[5] J. Xu, G. Gu, Y. Tang, and F. Qian, "Channel modeling and LQG control in the presence of random delays and packet drops," Automatica, vol. 135, p. 109967, 2022.
[6] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. Sastry, "Kalman filtering with intermittent observations," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1453-1464, 2004.
[7] L. Shi, P. Cheng, and J. Chen, "Sensor data scheduling for optimal state estimation with communication energy constraint," Automatica, vol. 47, no. 8, pp. 1693-1698, 2011.
[8] D. Han, P. Cheng, J. Chen, and L. Shi, "An online sensor power schedule for remote state estimation with communication energy constraint," IEEE Transactions on Automatic Control, vol. 59, no. 7, pp. 1942-1947, 2013.
[9] Z. Ren, P. Cheng, J. Chen, L. Shi, and Y. Sun, "Optimal periodic sensor schedule for steady-state estimation under average transmission energy constraint," IEEE Transactions on Automatic Control, vol. 58, no. 12, pp. 3265-3271, 2013.
[10] J. Wu, Q.-S. Jia, K. H. Johansson, and L. Shi, "Event-based sensor data scheduling: Trade-off between communication rate and estimation quality," IEEE Transactions on Automatic Control, vol. 58, no. 4, pp. 1041-1046, 2012.
[11] D. Han, Y. Mo, J. Wu, S. Weerakkody, B. Sinopoli, and L. Shi, "Stochastic event-triggered sensor schedule for remote state estimation," IEEE Transactions on Automatic Control, vol. 60, no. 10, pp. 2661-2675, 2015.
[12] Y. Li, D. E. Quevedo, V. Lau, and L. Shi, "Optimal periodic transmission power schedules for remote estimation of ARMA processes," IEEE Transactions on Signal Processing, vol. 61, no. 24, pp. 6164-6174, 2013.
[13] Y. Li, D. E. Quevedo, V. Lau, and L. Shi, "Multi-sensor transmission power scheduling for remote state estimation under SINR model," in 53rd IEEE Conference on Decision and Control, pp. 1055-1060, IEEE, 2014.
[14] J. Wu, Y. Li, D. E. Quevedo, V. Lau, and L. Shi, "Data-driven power control for state estimation: A Bayesian inference approach," Automatica, vol. 54, pp. 332-339, 2015.
[15] K. Ding, Y. Li, S. Dey, and L. Shi, "Multi-sensor transmission management for remote state estimation under coordination," IFAC-PapersOnLine, vol. 50, no. 1, pp. 3829-3834, 2017.
[16] K. Ding, X. Ren, H. Qi, G. Shi, X. Wang, and L. Shi, "Dynamic pricing for power control in remote state estimation," IFAC-PapersOnLine, vol. 53, no. 2, pp. 11038-11043, 2020.
[17] L. Huang, J. Wu, Y. Mo, and L. Shi, "Joint sensor and actuator placement for infinite-horizon LQG control," IEEE Transactions on Automatic Control, vol. 67, no. 1, pp. 398-405, 2021.
[18] A. Nayyar, T. Başar, D. Teneketzis, and V. V. Veeravalli, "Optimal strategies for communication and remote estimation with an energy harvesting sensor," IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2246-2260, 2013.
[19] M. Nourian, A. S. Leong, and S. Dey, "Optimal energy allocation for Kalman filtering over packet dropping links with imperfect acknowledgments and energy harvesting constraints," IEEE Transactions on Automatic Control, vol. 59, no. 8, pp. 2128-2143, 2014.
[20] Y. Li, D. E. Quevedo, V. Lau, S. Dey, and L. Shi, "Transmission power scheduling for energy harvesting sensor in remote state estimation," IFAC Proceedings Volumes, vol. 47, no. 3, pp. 122-127, 2014.
[21] T. D. Ponnimbaduge Perera, D. N. K. Jayakody, S. K. Sharma, S. Chatzinotas, and J. Li, "Simultaneous wireless information and power transfer (SWIPT): Recent advances and future challenges," IEEE Communications Surveys & Tutorials, vol. 20, no. 1, pp. 264-302, 2018.
[22] L. R. Varshney, "Transporting information and energy simultaneously," in 2008 IEEE International Symposium on Information Theory, pp. 1612-1616, 2008.
[23] H. Lee, K.-J. Lee, H. Kim, and I. Lee, "Joint transceiver optimization for MISO SWIPT systems with time switching," IEEE Transactions on Wireless Communications, vol. 17, no. 5, pp. 3298-3312, 2018.
[24] J. Tang, A. Shojaeifard, D. K. C. So, K.-K. Wong, and N. Zhao, "Energy efficiency optimization for CoMP-SWIPT heterogeneous networks," IEEE Transactions on Communications, vol. 66, no. 12, pp. 6368-6383, 2018.
[25] K. W. Choi, S. I. Hwang, A. A. Aziz, H. H. Jang, J. S. Kim, D. S. Kang, and D. I. Kim, "Simultaneous wireless information and power transfer (SWIPT) for Internet of Things: Novel receiver design and experimental validation," IEEE Internet of Things Journal, vol. 7, no. 4, pp. 2996-3012, 2020.
[26] H. Yang, Y. Ye, X. Chu, and M. Dong, "Resource and power allocation in SWIPT-enabled device-to-device communications based on a nonlinear energy harvesting model," IEEE Internet of Things Journal, vol. 7, no. 11, pp. 10813-10825, 2020.
[27] W. Wang, J. Tang, N. Zhao, X. Liu, X. Y. Zhang, Y. Chen, and Y. Qian, "Joint precoding optimization for secure SWIPT in UAV-aided NOMA networks," IEEE Transactions on Communications, vol. 68, no. 8, pp. 5028-5040, 2020.
[28] H. Yang, X. Xia, J. Li, P. Zhu, and X. You, "Joint transceiver design for network-assisted full-duplex systems with SWIPT," IEEE Systems Journal, vol. 16, no. 1, pp. 1206-1216, 2022.
[29] H. Yang, M. Huang, Y. Li, S. Dey, and L. Shi, "Joint power allocation for remote state estimation with SWIPT," IEEE Transactions on Signal Processing, 2022.
| [] |
[
"A-MuSIC: An Adaptive Ensemble System For Visual Place Recognition In Changing Environments",
"A-MuSIC: An Adaptive Ensemble System For Visual Place Recognition In Changing Environments"
] | [
"Bruno Arcanjo ",
"Bruno Ferrarini ",
"Michael Milford ",
"Klaus D Mcdonald-Maier ",
"Shoaib Ehsan "
] | [] | [] | Visual place recognition (VPR) is an essential component of robot navigation and localization systems that allows them to identify a place using only image data. VPR is challenging due to the significant changes in a place's appearance under different illumination throughout the day, with seasonal weather and when observed from different viewpoints. Currently, no single VPR technique excels in every environmental condition, each exhibiting unique benefits and shortcomings. As a result, VPR systems combining multiple techniques achieve more reliable VPR performance in changing environments, at the cost of higher computational loads. Addressing this shortcoming, we propose an adaptive VPR system dubbed Adaptive Multi-Self Identification and Correction (A-MuSIC). We start by developing a method to collect information of the runtime performance of a VPR technique by analysing the frame-to-frame continuity of matched queries. We then demonstrate how to operate the method on a static ensemble of techniques, generating data on which techniques are contributing the most for the current environment. A-MuSIC uses the collected information to both select a minimal subset of techniques and to decide when a re-selection is required during navigation. A-MuSIC matches or beats state-of-theart VPR performance across all tested benchmark datasets while maintaining its computational load on par with individual techniques. | 10.48550/arxiv.2303.14247 | [
"https://export.arxiv.org/pdf/2303.14247v1.pdf"
] | 257,766,814 | 2303.14247 | 89c9345ab90b1747bd46ec7658e29c5f39f0eefa |
A-MuSIC: An Adaptive Ensemble System For Visual Place Recognition In Changing Environments
Bruno Arcanjo
Bruno Ferrarini
Michael Milford
Klaus D Mcdonald-Maier
Shoaib Ehsan
A-MuSIC: An Adaptive Ensemble System For Visual Place Recognition In Changing Environments
Visual place recognition (VPR) is an essential component of robot navigation and localization systems that allows them to identify a place using only image data. VPR is challenging due to the significant changes in a place's appearance under different illumination throughout the day, with seasonal weather and when observed from different viewpoints. Currently, no single VPR technique excels in every environmental condition, each exhibiting unique benefits and shortcomings. As a result, VPR systems combining multiple techniques achieve more reliable VPR performance in changing environments, at the cost of higher computational loads. Addressing this shortcoming, we propose an adaptive VPR system dubbed Adaptive Multi-Self Identification and Correction (A-MuSIC). We start by developing a method to collect information of the runtime performance of a VPR technique by analysing the frame-to-frame continuity of matched queries. We then demonstrate how to operate the method on a static ensemble of techniques, generating data on which techniques are contributing the most for the current environment. A-MuSIC uses the collected information to both select a minimal subset of techniques and to decide when a re-selection is required during navigation. A-MuSIC matches or beats state-of-theart VPR performance across all tested benchmark datasets while maintaining its computational load on par with individual techniques.
I. INTRODUCTION
Visual place recognition (VPR) aims to solve the localization component of Simultaneous Localization and Mapping (SLAM) using image information, an attractive approach due to the low cost, versatility and availability of cameras [1]. However, VPR is challenging due to the variety of ways in which a place's appearance can change. Changes in illumination [2], seasonal variations [3], and varying viewpoints [4] can make the same place appear vastly different. Many techniques have been proposed to tackle these challenges, but no standalone technique excels in every viewing condition [5].
Combining multiple techniques into a single VPR algorithm can compensate for individual weaknesses, as demonstrated by systems such as [6], [7], [8]. However, these systems either run a static number of techniques for every query frame [6], [7], potentially wasting computational resources, or rely on extra ground-truth information of the runtime environment [8].
In this work, we address the shortcomings of existing combination methods by proposing an adaptive VPR system that selects the minimal optimal subset of techniques based on their runtime VPR performance, without extra ground-truth information. The core of the proposed approach consists of selecting the most competent techniques for the current viewing conditions and triggering a re-selection when those conditions change enough to degrade the VPR capabilities of the running subset. We develop Self-Identification and Correction (SIC), an algorithm for assessing the runtime performance of a technique without extra ground-truth information. SIC identifies when a technique has wrongly matched a query frame, proposes an alternative prediction and quantifies this correction. SIC leverages the frame-to-frame similarity continuity that a correct match should present in sequential navigation to filter out erroneous prediction candidates proposed by a VPR technique.
SIC can operate with multiple VPR techniques, a process dubbed Multi-SIC (MuSIC). MuSIC runs and corrects all VPR techniques individually, while generating their respective correction information, and then selects which technique to trust on a per-frame basis. This per-frame selection information is used to identify which techniques are significantly contributing to the current navigation.
Finally, we present our adaptive VPR system, dubbed Adaptive Multi-SIC (A-MuSIC), whose functionality is visible in Fig. 1. A-MuSIC continuously analyses the SIC correction being performed by the active technique subset. A statistically significant change in correction signifies a change in the VPR performance of the active subset, triggering a re-selection. During the re-selection stage, all techniques in the ensemble are evaluated and only the techniques most selected by MuSIC remain active until the next re-selection stage.
A-MuSIC effectively saves computational resources by only running the minimally required subset of techniques for the current viewing conditions, while still enjoying the VPR performance benefits of multi-technique systems. It is capable of operating with as little as a single technique up to the entire ensemble, requiring no extra ground-truth information of the runtime environment.
Accordingly, we claim the following contributions:
• A novel algorithm, SIC, which identifies incorrect VPR technique matches, proposes new predictions and assesses online technique performance.
• MuSIC, an extension of SIC that leverages a static ensemble of VPR techniques to determine the most suitable match for a given query frame, quantifying technique contributions.
• A-MuSIC, an adaptive VPR system that optimizes the selection of VPR techniques during runtime to ensure that only the currently necessary techniques are active, saving computational resources while maintaining performance benefits.

The rest of this letter is organized as follows. In Section II we provide an overview of different approaches to VPR, with a focus on technique combination and sequence-based methods. Section III describes our methodology, starting from the online check and correction of a single technique, followed by the extension to a static number of techniques, and finally the implementation of the adaptive selection algorithm. In Section IV we detail the settings of our experimentation. We show and analyse our results in Section V. In Section VI, we highlight the benefits and limitations of our method and suggest future research paths.
II. RELATED WORK
Appearance-based localization continues being an important research topic, with several approaches being proposed in the literature. The underlying technology on which these techniques are based on varies widely.
[9] utilizes hand-crafted feature descriptors, such as the Scale-Invariant Feature Transform [10] and Speeded-Up Robust Features [11], to successfully build a landmark representation of the environment. While the use of local features improves resilience against viewpoint variations, it makes the method sensitive to appearance changes such as different illumination conditions [12]. Well-studied global descriptors like the Histogram of Oriented Gradients (HOG) [13] and [14] have also been employed in VPR [15] but struggle with viewpoint changes. CoHOG [16] utilizes a region-of-interest approach in conjunction with HOG to minimize this shortcoming. [17] is another popular VPR technique, based on the computation of regions of interest in an image.
Image features retrieved from the inner layers of Convolutional Neural Networks (CNNs) have been shown to outperform hand-crafted features [18], [19]. Techniques such as HybridNet, AMOSNet [20] and NetVLAD [21] have utilized these CNN-extracted features to perform state-of-the-art VPR. Computational efficiency is another added consideration, with CALC [22] being an example of a CNN-based VPR technique designed to perform lightweight VPR.
The variety in available image-processing methods, with different sets of strengths and weaknesses, has led to recent research on the combination of multiple techniques to perform VPR in changing environments. Multi-process fusion (MPF) [6] proposes a system which combines four VPR methods utilizing a Hidden Markov Model to further infuse sequential information. [7] combines techniques in a hierarchical structure, passing only the top place candidates of upper level techniques down to the lower tiers. SwitchHit [8] instead starts by running a single technique and, using prior knowledge of VPR performance in the environment, decides if another technique should be run, repeating the process until a satisfactory confidence threshold is achieved.
In this letter, we propose a multi-technique VPR system which is able to dynamically select which VPR techniques should be used in the current environmental conditions. Unlike [6], [7], our system can operate with as low as a single technique if the remaining are not contributing to the VPR task. Moreover, it does not require any additional ground-truth information about the deployment environment nor technique complementarity, a major benefit over [8].
III. METHODOLOGY

Our proposed adaptive system A-MuSIC relies on (a) identifying the online performance of VPR techniques and (b) determining which techniques are necessary for reliable VPR in the current environment. Point (a) is directly addressed by SIC, which corrects a VPR technique by analysing the similarity continuity of its past queries and quantifies the performed correction. Point (b) is then tackled by MuSIC, which selects the most trustworthy technique on a per-frame basis and is thus able to quantify the contribution of individual techniques. In this section, we detail the operation of SIC, MuSIC and how A-MuSIC makes use of the collected information to perform adaptive VPR.
A. Self-Identification and Correction (SIC)
The ability to identify a mismatched place at runtime remains an open but important problem in VPR, as it allows for assessing the online performance of a running VPR technique. Moreover, detecting wrongly matched frames indicates that the query should be re-matched, directly increasing VPR performance given a successful correction. In real-world applications, ground-truth information of the deployment environment is rarely available, hence extra information and assumptions are utilized to estimate when an online match is incorrect. Our proposed SIC method quantifies the sequential consistency of a technique's place-matching similarity distributions over a number of past query frames, both to identify incorrect matches and to attempt their correction.
During navigation, when VPR is performed for a query frame q, the employed technique generates a score distribution S_q where each score S_{q,i} corresponds to one of the N reference places. With descriptor-based methods, such as HybridNet or NetVLAD, these scores are often the cosine similarities between the query frame and the reference images. The usual approach is to take the reference place associated with the highest score in S_q, i.e. argmax(S_q), as the place prediction for the evaluated frame [23]. However, we empirically observe that, even in incorrect matches, the score of the correct reference place is often relatively high. As the correct match is often found among the top scores, we interpret the top K values as possible match candidates for the current frame, possibly misplaced due to visual noise. We evaluate the sequential consistency sc of each of these top-scoring candidates by referring to the similarity distributions of previous queries. This concept is illustrated in Fig. 2, where a match is successfully corrected by computing the sc of the top-1 and top-2 scores.
We now detail SIC for the correction of a query frame q. S is a matrix where each row vector S_q is a score distribution, one per frame for which VPR was performed during the current navigation. Importantly, these rows are ordered by time of observation, so S_{q,i} represents the i-th score of the q-th performed query. The top K scores are selected for analysis from S_q, where K is a hyperparameter. F, also a hyperparameter, denotes the maximum number of past queries considered when computing the sc of each top candidate. For each of these candidates c, a lower bound f_lb is set to prevent looking at observation indices that do not exist, i.e. before the start of navigation. Starting from the observation vector S_q, the algorithm shifts δ observations in S to S_{q+δ}, for all valid δ values. Note that δ takes on negative values, representing a shift to previously performed queries stored in S. With each query shift, SIC also shifts the reference index by δ from the current top candidate c within the vector S_{q+δ} (Fig. 2). The score S_{q+δ, c+δ} is added to the rolling sum sc. Finally, we take the candidate which achieved the highest average sc as the final place prediction. If this candidate is not the originally top-ranked score, we consider the original matching an error which was corrected.
Assuming sequential navigation, SIC looks δ query results back from the current frame; the positions of the related score peaks are expected to appear back-shifted accordingly by δ indices. By repeating the shift and summing the scores at the shifted indices, we quantify the candidate's frame-to-frame continuity, i.e. its sc. The difference between the average sc of the erroneous match and that of the corrected prediction can then be collected as a quantification of the correction, for assessing online performance. Moreover, the correction process itself can improve VPR performance as more places are matched correctly, as observed in Fig. 4.
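A minimal sketch of SIC (variable names are ours; S is the query-by-reference matrix of similarity scores, with rows ordered by query time; the defaults mirror the values selected later in Section IV):

```python
import numpy as np

def sic_correct(S, q, K=50, F=1000):
    """Re-rank the top-K candidates of query q by their average
    sequential-continuity score sc; returns (prediction, sc, corrected)."""
    candidates = np.argsort(S[q])[::-1][:K]   # top-K scoring reference places
    top1 = int(candidates[0])
    best_c, best_sc = top1, -np.inf
    for c in candidates:
        c = int(c)
        f_lb = -min(F, q, c)                  # never index before navigation start
        sc = 0.0
        for delta in range(f_lb, 1):          # delta = f_lb, ..., -1, 0
            sc += S[q + delta, c + delta]     # shifted score: continuity of the peak
        sc /= (1 - f_lb)                      # average over the window
        if sc > best_sc:
            best_c, best_sc = c, sc
    return best_c, best_sc, best_c != top1
```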
B. Multi-Technique SIC (MuSIC)
The motivation for utilizing multiple VPR techniques in unison is increasing tolerance against various types of visual changes. Specifically in Multi-SIC, techniques that perform well within the current environment achieve higher sc than techniques that perform poorly. Hence, MuSIC operates by comparing the sc values of each technique's corrected prediction. Furthermore, there is the added benefit of a higher chance that the correct match is within the top K candidates of at least one of the techniques.
Different techniques have different ranges of output similarity scores. Since sc is computed directly from these scores, we scale the score vectors to allow for a fair comparison. Therefore, each score S q,i is normalized to the same value range using the following equation:
$\hat{S}_{q,i} = \frac{S_{q,i} - \mu}{\sigma}, \quad (1)$
where $\hat{S}_{q,i}$ is the scaled value, and µ and σ are the mean and standard deviation of the vector S_q currently being scaled, respectively. Note that this operation does not affect the behaviour of SIC, but it is crucial for comparing sc values between different techniques.
Each technique then independently corrects itself and proposes its top candidate with highest sc. The technique whose candidate achieves the highest sc is chosen and its proposed prediction is trusted to be the correct prediction for the given query frame. Since MuSIC only chooses one technique per frame, keeping track of its choices during navigation allows for quantifying the contributions of each individual technique.
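Putting the normalization (1) and the per-technique correction together, MuSIC's per-frame choice can be sketched as follows (sic_correct is the SIC sketch above):

```python
import numpy as np

def music_match(score_matrices, q):
    """Z-score each technique's scores with (1), correct each with SIC, and
    keep the (technique, prediction) pair with the highest sc."""
    best = None
    for t, S in enumerate(score_matrices):    # one score matrix per technique
        mu = S.mean(axis=1, keepdims=True)
        sigma = S.std(axis=1, keepdims=True)
        S_hat = (S - mu) / sigma              # row-wise scaling, cf. (1)
        pred, sc, _ = sic_correct(S_hat, q)
        if best is None or sc > best[2]:
            best = (t, pred, sc)
    return best[0], best[1]                   # chosen technique, place prediction
```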
C. Adaptive MuSIC (A-MuSIC)
By using a static ensemble of techniques, MuSIC achieves higher VPR performance across different environments (Fig. 5). However, using all techniques for every frame is computationally expensive and potentially wasteful if a smaller subset would have sufficed for the current visual conditions. Our adaptive system A-MuSIC addresses this drawback by dynamically selecting the optimal subset of techniques to be run in the current environment. The next subsections detail the selection process and the re-selection mechanism triggered upon a detriment in VPR performance; a code sketch of the selection follows at the end of this subsection.

1) Technique Selection: As explained in III-B, MuSIC performs a post-correction selection of the place prediction of a set, T, of VPR techniques. MuSIC tracks the history, th, of the selected techniques over the past M frames. Every element in th corresponds to a VPR technique in T. From th, we calculate the proportion of selection of each technique, dubbed coverage, and immediately add the technique with highest coverage to the subset of active techniques T̂. We repeatedly add techniques to T̂ in order of coverage until their cumulative coverage reaches a pre-designated threshold, E. Higher E values favour VPR performance, usually at increased computational cost, as the system is more likely to select multiple techniques. Note that it is possible to choose as little as one technique even for a high E value, in the case that a single technique achieves high enough coverage. Conversely, it is also possible that all techniques are selected, depending on the coverage distribution per technique.
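The coverage-based selection can be sketched as follows (names are ours):

```python
from collections import Counter

def select_subset(th, E=0.7):
    """Add techniques in decreasing order of coverage until their cumulative
    coverage over the selection history th reaches the threshold E."""
    counts = Counter(th)          # th holds one chosen technique index per frame
    subset, covered = [], 0.0
    for tech, n in counts.most_common():
        subset.append(tech)
        covered += n / len(th)
        if covered >= E:
            break
    return subset
```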
2) Re-Selection Trigger: In the first M frames of navigation, all techniques in T are utilized and SIC generates a correction history vector ch of size M per technique. Each element ch_m in ch is the per-technique sc difference between the corrected prediction for frame m and its uncorrected prediction. Therefore, ch_m is 0 when no correction was performed, with larger values indicating a larger discrepancy between the corrected and uncorrected prediction. A selection is then performed, constructing the first subset T̂, and the ch vectors of the selected techniques are concatenated into a single vector.

During navigation, every M frames, the techniques in T̂ each generate a new correction history vector ĉh, which are again concatenated. The goal is to identify when the previous correction information stored in ch differs significantly from the current correction stored in ĉh. We use a paired-sample, two-tailed t-test between the two distributions at a significance level of 5%. The null hypothesis H0 is that the population means of ch and ĉh are equal, and the alternative hypothesis H1 is that they are significantly different. Importantly, the same technique subset T̂ computes both ch and ĉh. Therefore, rejecting the null hypothesis represents a significant change in the correction behaviour of the techniques in T̂, which is interpreted as a change in the VPR performance of the subset. When H0 is rejected, a new selection process is triggered using the same M frames that indicated the change of environment, resulting in a possibly new T̂. When a re-selection is not triggered, ch is simply updated to equal ĉh. If a re-selection does take place, ch is updated to equal the correction vectors of the newly selected techniques.
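The trigger itself amounts to one paired t-test per window; a sketch using SciPy:

```python
from scipy.stats import ttest_rel

def reselection_needed(ch, ch_new, significance=0.05):
    """Paired-sample, two-tailed t-test between the previous and current
    concatenated correction vectors; rejecting H0 (equal means) signals a
    change in the active subset's correction behaviour."""
    _, p_value = ttest_rel(ch, ch_new)
    return p_value < significance
```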
IV. EXPERIMENTAL SETUP
We conduct experiments to evaluate single-technique SIC, MuSIC and the adaptive system A-MuSIC, analysing how each increment of the system affects VPR performance and computational demand. The rest of this section gives details on the datasets, baseline VPR techniques, hyperparameter ablation studies and evaluation metrics.
A. Datasets
We evaluate our approach on five different benchmark datasets: Nordland Winter and Fall [24], Gardens Point, St. Lucia [25], 17 Places [26] and Berlin [27]. Table I provides details on how we utilize these datasets. We use a ground-truth tolerance of 1 frame for all datasets except 17 Places, where we use a tolerance of 10 frames.
B. Evaluation

1) Precision-Recall: VPR performance is often quantified using Precision-Recall (PR) curves and the area under these curves (AUC) [28]. The use of PR curves and the respective AUC is favoured for class-imbalanced datasets. In the VPR context, a small set of correct predictions and a much larger set of incorrect predictions exists for each query frame, resulting in a strongly imbalanced dataset.
2) Accuracy: When evaluating SIC, we are more interested in its match-correction capabilities than in practical VPR performance. While correcting wrong matches leads to improved VPR performance captured by AUC, accuracy is more suitable for analysing the percentage of frames corrected by SIC.
3) Quantifying Adaptability: The main advantage of an adaptive VPR system is achieving tolerance against various types of visual changes without relying on the brute-force usage of multiple baseline techniques. To assess the VPR performance benefits of A-MuSIC in changing environments, we additionally compare the average VPR performance of the tested techniques across datasets. Furthermore, we compute the proportion of technique runs, PTR, for A-MuSIC, given by
$PTR = \frac{1}{Q \cdot |T|} \sum_{q=1}^{Q} |\hat{T}_q| \quad (2)$
where Q is the total number of queries, |T| is the cardinality of the starting set of techniques T, and |T̂_q| is the cardinality of the subset of techniques T̂ used for query q. The maximum PTR value is 1, attained when all techniques in T are used for every query frame. The minimum PTR value is given by
$MinPTR = \frac{1}{|T|} \quad (3)$
which occurs when only one technique is employed for every query frame. With our configuration of four VPR techniques, the minimum possible PTR is 0.25. We employ PTR to analyse the computational benefits of A-MuSIC and how changes in total technique usage affect VPR performance and prediction computation time.
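Equation (2) in code form (names are ours; subsets_per_query holds the subset T̂_q used for each query):

```python
def ptr(subsets_per_query, ensemble_size):
    """Proportion of technique runs, eq. (2): 1.0 when the full ensemble is
    run for every frame, 1/|T| when a single technique always suffices (3)."""
    Q = len(subsets_per_query)
    return sum(len(s) for s in subsets_per_query) / (Q * ensemble_size)
```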
C. System Configuration

1) Baseline Techniques: A-MuSIC is not bound to any specific ensemble of VPR techniques, with any combination of any size being possible. In our experiments, we employ a starting ensemble of four VPR techniques: HOG, NetVLAD, CoHOG and CALC, and we use the default implementations provided in [29]. We note that neither CALC nor NetVLAD was directly trained on the employed benchmark datasets.
2) A-MuSIC Settings: As detailed in Section III, SIC contains several hyperparameters, and we conduct ablation studies to find the best values in terms of VPR performance and computational efficiency. Rather than fine-tuning SIC for each technique and benchmark dataset, we select global parameter values that result in the best average trade-off between VPR performance and computation. The results of the ablation study are presented in Fig. 3. We observe that most of the performance increase can be obtained with low K values, with heavily diminishing returns for higher settings. Furthermore, due to the vectorization of the implementation, larger F settings do not have a detrimental impact on the computational cost of SIC. Overhead computation time per correction is mostly affected by the number of top K candidates, ranging from less than 1 to 21 milliseconds. Since SIC is performed on top of multiple techniques and most of the performance benefit is seen at lower settings, we choose a relatively conservative K value of 50. As F does not significantly affect runtime speed, we set its maximum value to 1000 past observation frames.

Fig. 3: Ablation study on the SIC hyperparameters K and F and their effects on average VPR accuracy and prediction time per frame.
As detailed in Section III-C, the hyperparameters E and M are introduced with the implementation of A-MuSIC. The parameter E controls the target coverage threshold when selecting a subset of techniques. We set E to 0.7, requiring that at least 70% of the selection frames be covered by the subset of techniques. The setting M defines how many selection frames are used, and we select a value of 10.
V. RESULTS & DISCUSSION
In this section, we present the results obtained by the intermediary components SIC and MuSIC as well as the fully-fledged A-MuSIC system.
A. SIC
In Fig. 4, we compare the VPR accuracies of the standalone VPR techniques against their SIC-corrected counterparts. Nearly all techniques achieve higher VPR accuracy across all datasets, showing that SIC is able to successfully correct a large portion of wrong matches. The only exception is HOG on the Berlin dataset, where no accuracy improvement was achieved. This behaviour is likely due to how poorly HOG performs on the dataset, leaving no useful frame-to-frame continuity information for SIC to exploit. The accuracy increase, i.e. more correct matches, also results in increased VPR performance, which can be observed in Table II, with higher AUC values across the board.
Also in Table II, we observe the impact of SIC on the prediction time of the individual techniques. For techniques with high prediction times, such as NetVLAD or CoHOG on larger datasets, the addition of SIC is negligible, being within run-to-run variation. In the case of faster techniques, such as HOG and CALC, a small average increase of 5 milliseconds is reported.
B. MuSIC
MuSIC achieves stronger VPR performance across all datasets, observable in the PR curves of Fig. 5. In Fig. 6, we can observe how each technique contributed to the increased performance of MuSIC. Note how in datasets where there is a clear dominant technique, MuSIC's PR-curve closely follows that technique's curve, and performance is similar. For example, in the 17 Places dataset, NetVLAD is almost exclusively chosen by MuSIC for every frame (Fig. 6) and their PR-curves are completely overlapped in Fig. 5f. On the other hand, when there is a larger variety of technique choice by MuSIC, there is a larger increase in VPR performance and its PR-curve does not follow any single technique. Such is the case of Winter, where all four techniques contribute to some extent, the performance increase is the most significant and MuSIC's curve quickly diverges from the remaining in Fig. 5a.
The increased VPR performance of MuSIC provides evidence that the method is able to choose the correct technique to place a particular match. However, MuSIC requires running all techniques for every query, which results in extremely high prediction times. As expected, in Table II, MuSIC's prediction time is by far the largest, as every technique in the ensemble must be executed for each frame.
C. A-MuSIC
In Table II, we observe that A-MuSIC achieves higher average VPR performance than every SIC-corrected technique. The same is true for performance on individual datasets, with the exception of Fall. We can also see very similar average VPR performance between A-MuSIC and MuSIC, with a small drop of 0.01 AUC for the former. Looking at the Berlin, Night-Right, 17 Places and St. Lucia datasets, A-MuSIC achieves the same VPR performance as MuSIC, indicating that the sub-selection of techniques (Fig. 7) is working correctly. This is also visible in the corresponding PR curves in Fig. 5, where the curve for A-MuSIC closely follows that of MuSIC. Winter is the dataset with the largest performance drop compared to MuSIC, from an AUC value of 0.95 to 0.90, which is consistent with our findings in Fig. 6. Since Winter benefits the most from using multiple techniques, A-MuSIC's restriction on the number of techniques used at any given time hurts its performance the most. Fall is the only dataset where A-MuSIC performs slightly worse than the best individual techniques, indicating that NetVLAD was erroneously selected for a portion of this dataset, visible in Fig. 7 at the start of the Fall sequence.
By looking at the selection pattern of A-MuSIC, represented in Fig. 7, we can better understand the VPR performance of the adaptive method. Winter is the dataset where more techniques are selected in unison, with CALC still being on its own more often. On Night-Right, apart from re-selection periods, only NetVLAD is run for the entire dataset. Fall's pattern shows that only one technique was selected at any point in time, but it alternated between NetVLAD, CoHOG and CALC. There was a short delay in the triggering of a re-selection when entering the Fall sequence, allowing NetVLAD to perform VPR on its own for the beginning of the dataset and resulting in the small AUC drop reported. 17 Places clearly shows A-MuSIC's ability to use as little as a single technique for long periods, with only NetVLAD being used for the entire dataset. While St. Lucia displays more re-selection stages, only CALC was ever selected by the system to perform VPR. The sub-selection of techniques described results in significant computation benefits, with the average prediction time of A-MuSIC being almost 1 second less than that of MuSIC (Table II). Moreover, the average prediction time of A-MuSIC is similar to that of NetVLAD, even being faster than the individual technique on some datasets. In Table III, we observe that the overall PTR of A-MuSIC is 0.41, effectively cutting 59% of technique runs when compared to the non-adaptive MuSIC.
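The overall PTR can be sanity-checked from the dataset sizes in Table I, assuming PTR is the fraction of technique runs actually executed out of the total a non-adaptive four-technique ensemble would perform (our reading of the metric):

```python
queries = {"Winter": 1000, "Fall": 1000, "Berlin": 250,
           "Night-Right": 200, "St. Lucia": 1100, "17 Places": 2000}
ensemble_size = 4  # HOG, CALC, CoHOG, NetVLAD
total_possible_runs = sum(queries.values()) * ensemble_size  # 22200
actual_runs = 9000  # overall technique runs reported in Table III
print(round(actual_runs / total_possible_runs, 2))  # -> 0.41
```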
VI. CONCLUSIONS AND FUTURE WORK
In this work, we propose a novel multi-technique VPR system capable of selecting a minimal optimal subset of techniques for the current environment without prior ground-truth knowledge. The size of the selection ranges from a single technique to the entire ensemble. Our approach analyses the sequential continuity of top candidates to identify incorrect matches at runtime, allowing a technique to assess its own online performance and attempt self-correction. Additionally, frame-by-frame analysis enables the correction method to utilize multiple techniques, improving overall VPR performance and gathering data on the most significant contributors in the current environment. The adaptive system examines both the correction information and contribution proportions, determining which techniques to run and identifying when a re-selection is necessary.
However, our adaptive system has clear limitations. It assumes sequential navigation, as the underlying correction method relies on analysing the sequential continuity of top predictions. The work could be improved by introducing a dynamic selection frame window for faster detection of changes in correction and triggering re-selection. Moreover, the quantification of correction should be refined for more detailed information on how techniques cope with the current environment, leading to more accurate re-selection timings.
This work demonstrates the advantages of an onlineadaptive VPR system, increasing VPR performance by combining multiple techniques while minimizing unnecessary computation.
Fig. 1: A-MuSIC uses correction information obtained by SIC to detect a detriment in VPR performance, triggering a re-selection. Only the minimal amount of techniques needed for the current viewing conditions is selected, saving computational resources.
Fig. 2: The top match for frame q (solid red) suggests a wrong match and SIC corrects it to the second highest score (solid green).
Fig. 5: Precision-Recall Curves
Fig. 7: A-MuSIC technique selections and re-selection timings.
TABLE I: Dataset Details

Dataset         | Condition                           | Reference Traverse            | Query Traverse                | Number of Images
----------------|-------------------------------------|-------------------------------|-------------------------------|-----------------
Nordland Winter | Extreme seasonal                    | Summer                        | Winter                        | 1000
Nordland Fall   | Moderate seasonal                   | Summer                        | Fall                          | 1000
Berlin          | Strong viewpoint                    | halen.-2, kudamm-1 and A100-1 | halen.-1, kudamm-2 and A100-2 | 250
Night-Right     | Outdoor Illumination; Lateral Shift | Day-Left                      | Night-Right                   | 200
St. Lucia       | Daylight; Dynamic Elements          | Afternoon                     | Morning                       | 1100
17 Places       | Indoor Illumination                 | Day                           | Night                         | 2000
TABLE II: VPR Performance (AUC) and Prediction Time (ms). Each cell gives AUC / prediction time in ms.

Technique   | Winter      | Fall        | Berlin      | Night-Right | 17 Places   | St. Lucia   | Average
------------|-------------|-------------|-------------|-------------|-------------|-------------|------------
HOG         | 0.29 / 49   | 0.84 / 48   | 0.03 / 16   | 0.03 / 15   | 0.34 / 96   | 0.56 / 47   | 0.35 / 45
CALC        | 0.30 / 168  | 0.88 / 168  | 0.06 / 157  | 0.13 / 161  | 0.38 / 170  | 0.48 / 165  | 0.37 / 165
CoHOG       | 0.23 / 795  | 0.85 / 785  | 0.28 / 225  | 0.45 / 185  | 0.31 / 1525 | 0.37 / 729  | 0.42 / 707
NetVLAD     | 0.28 / 723  | 0.68 / 719  | 0.81 / 722  | 0.54 / 717  | 0.48 / 727  | 0.35 / 721  | 0.52 / 722
HOG+SIC     | 0.76 / 54   | 0.98 / 55   | 0.02 / 20   | 0.40 / 17   | 0.46 / 104  | 0.84 / 52   | 0.58 / 50
CALC+SIC    | 0.86 / 172  | 0.99 / 175  | 0.56 / 163  | 0.71 / 163  | 0.67 / 177  | 0.92 / 170  | 0.79 / 170
CoHOG+SIC   | 0.67 / 747  | 0.99 / 776  | 0.80 / 218  | 0.87 / 184  | 0.76 / 1518 | 0.84 / 718  | 0.82 / 694
NetVLAD+SIC | 0.78 / 747  | 0.94 / 743  | 0.96 / 736  | 0.97 / 742  | 0.82 / 745  | 0.85 / 717  | 0.89 / 738
MuSIC       | 0.95 / 1790 | 1.00 / 1748 | 0.95 / 1164 | 0.97 / 1094 | 0.86 / 2544 | 0.92 / 1937 | 0.94 / 1713
A-MuSIC     | 0.90 / 621  | 0.97 / 649  | 0.95 / 558  | 0.98 / 766  | 0.85 / 982  | 0.92 / 747  | 0.93 / 721
Fig. 4: VPR accuracies of individual VPR techniques and their SIC corrected counterparts.
TABLE III: A-MuSIC prediction time ratio (PTR), technique runs, and re-selections per dataset.

Dataset     | PTR  | Technique Runs | Re-Selections
------------|------|----------------|--------------
Winter      | 0.45 | 1810           | 28
Fall        | 0.36 | 1450           | 12
Berlin      | 0.31 | 310            | 28
Night-Right | 0.40 | 320            | 15
St. Lucia   | 0.51 | 2240           | 235
17 Places   | 0.36 | 2870           | 86
Overall     | 0.41 | 9000           | 404
Fig. 6: MuSIC chosen candidate source technique proportion per dataset.
| [] |
[
"Minimizing Energy Use of Mixed-Fleet Public Transit for Fixed-Route Service",
"Minimizing Energy Use of Mixed-Fleet Public Transit for Fixed-Route Service"
] | [
"Amutheezan Sivagnanam \nUniversity of Houston\n\n",
"Afiya Ayman \nUniversity of Houston\n\n",
"Michael Wilbur \nVanderbilt University\n\n",
"Philip Pugliese \nChattanooga Area Regional Transportation Authority\n\n",
"Abhishek Dubey \nVanderbilt University\n\n",
"Aron Laszka \nUniversity of Houston\n\n"
] | [
"University of Houston\n",
"University of Houston\n",
"Vanderbilt University\n",
"Chattanooga Area Regional Transportation Authority\n",
"Vanderbilt University\n",
"University of Houston\n"
] | [] | Public transit can have significantly lower environmental impact than personal vehicles; however, it still uses a substantial amount of energy, causing air pollution and greenhouse gas emission. While electric vehicles (EVs) can reduce energy use, most public transit agencies have to employ them in combination with conventional, internalcombustion engine vehicles due to the high upfront costs of EVs. To make the best use of such a mixed fleet of vehicles, transit agencies need to optimize route assignments and charging schedules, which presents a challenging problem for large public transit networks. We introduce a novel problem formulation to minimize fuel and electricity use by assigning vehicles to transit trips and scheduling them for charging while serving an existing fixedroute transit schedule. We present an integer program for optimal discrete-time scheduling, and we propose polynomial-time heuristic algorithms and a genetic algorithm for finding solutions for larger networks. We evaluate our algorithms on the transit service of a mid-size U.S. city using operational data collected from public transit vehicles. Our results show that the proposed algorithms are scalable and achieve near-minimum energy use. | 10.1609/aaai.v35i17.17752 | [
"https://arxiv.org/pdf/2004.05146v1.pdf"
] | 215,737,227 | 2004.05146 | bf6b9fe0ea3ebf01e112e21be133d1bc23a234f2 |
Minimizing Energy Use of Mixed-Fleet Public Transit for Fixed-Route Service
Amutheezan Sivagnanam
University of Houston
Afiya Ayman
University of Houston
Michael Wilbur
Vanderbilt University
Philip Pugliese
Chattanooga Area Regional Transportation Authority
Abhishek Dubey
Vanderbilt University
Aron Laszka
University of Houston
Minimizing Energy Use of Mixed-Fleet Public Transit for Fixed-Route Service
Public transit can have significantly lower environmental impact than personal vehicles; however, it still uses a substantial amount of energy, causing air pollution and greenhouse gas emission. While electric vehicles (EVs) can reduce energy use, most public transit agencies have to employ them in combination with conventional, internalcombustion engine vehicles due to the high upfront costs of EVs. To make the best use of such a mixed fleet of vehicles, transit agencies need to optimize route assignments and charging schedules, which presents a challenging problem for large public transit networks. We introduce a novel problem formulation to minimize fuel and electricity use by assigning vehicles to transit trips and scheduling them for charging while serving an existing fixedroute transit schedule. We present an integer program for optimal discrete-time scheduling, and we propose polynomial-time heuristic algorithms and a genetic algorithm for finding solutions for larger networks. We evaluate our algorithms on the transit service of a mid-size U.S. city using operational data collected from public transit vehicles. Our results show that the proposed algorithms are scalable and achieve near-minimum energy use.
Introduction
28% of total energy use in the U.S. is for transportation, which results in immense environmental impact, including urban air pollution and greenhouse gas emission [EIA, 2018]. Switching from personal vehicles to public transit systems can reduce this environmental impact. However, even public transit requires a significant amount of energy; for example, bus transit services in the U.S. may be responsible for up to 54 million metric tons of CO₂ emission every year.
Electric vehicles (EVs) can have much lower environmental impact during operation than comparable internal combustion engine vehicles (ICEVs), especially in urban areas. Unfortunately, EVs are also much more expensive than ICEVs (typically, diesel transit buses cost less than $500K, while electric ones cost more than $700K, or around $1M with charging infrastructure). As a result, many public transit agencies can afford only mixed fleets of transit vehicles, which may consist of EVs, hybrids (HEVs), and ICEVs.
Transit agencies that operate such a mixed fleet of vehicles face a challenging optimization problem. First, they need to decide which vehicles are assigned to serving which transit trips. Since the advantage of EVs over ICEVs varies depending on the route and time of day (e.g., the advantage of EVs is higher in slower traffic with frequent stops, and lower on highways), the assignment can have a significant effect on energy use and, hence, environmental impact. Second, they need to schedule when to charge electric vehicles because EVs have limited battery capacity and driving range, and may need to be recharged during the day between serving transit trips. Because transit agencies often have limited charging capabilities (e.g., limited number of charging poles, or limited maximum power to avoid high peak loads on the electric grid), charging constraints can significantly increase the complexity of the assignment and scheduling problem.
Contributions: While an increasing number of transit agencies face these problems, there exist no practical solutions to the best of our knowledge. In this paper, we introduce a novel problem formulation and algorithms for assigning a mixed fleet of transit vehicles to trips and for scheduling the charging of electric vehicles. We develop this problem formulation in collaboration with the public transit agency of a mid-size U.S. city, which operates a fleet of EVs, HEVs, and ICEVs. To solve the problem, we introduce an integer program as well as domain specific heuristic and genetic algorithms. We evaluate these algorithms using real data collected from our partner agency (e.g., vehicle energy consumption data, transit routes and schedules) and from other sources (e.g., elevation and street maps).
Our problem formulation applies to transit agencies that have to serve fixed-route transit networks. The objective is to minimize energy consumption (i.e., fuel and electricity use), which can be used to capture minimizing operating costs and/or environmental impact with the appropriate cost factors. Our formulation considers assigning and scheduling for a single day (it may be applied to any number of consecutive days one-by-one), and permits any physically possible re-assignment during the day. Our formulation also allows capturing constraints on charging; for example, our partner agency aims to charge only one vehicle at a time to avoid demand charges from the electric utility.
Organization: In Section 2, we describe our model and problem formulation. In Section 3, we introduce a mixedinteger program as well as heuristic and genetic algorithms. In Section 4, we provide numerical results based on realworld data from our partner agency. In Section 5, we present a brief overview of related work. Finally, in Section 6, we summarize our findings and provide concluding remarks.
Transit Model and Problem Formulation
Vehicles We consider a transit agency that operates a set of buses V. Note that we will use the terms bus and vehicle interchangeably. Each bus v ∈ V belongs to a vehicle model M_v ∈ M, where M is the set of all vehicle models in operation. We divide the set of vehicle models M into two disjoint subsets: liquid-fuel models M^gas (e.g., diesel, hybrid) and electric models M^elec. Based on discussions with our partner agency, we assume that vehicles belonging to a liquid-fuel model can operate all day without refueling. On the other hand, vehicles belonging to an electric model have limited battery capacity, which might not be enough for a whole day. For each electric vehicle model m ∈ M^elec, we let C_m denote the battery capacity of vehicles of model m.

Locations Locations L include bus stops, garages, and charging stations in the transit network.

Trips During the day, the agency has to serve a given set of transit trips T using its buses. Based on discussions with our partner agency, we assume that all the locations and time schedules are fixed for all the trips. A bus serving trip t ∈ T leaves from trip origin t^origin ∈ L at time t^start and arrives at destination t^destination ∈ L at time t^end. Between t^origin and t^destination, the bus must pass through a series of stops at fixed times; however, since we cannot re-assign a bus during a transit trip, the locations and times of these stops are inconsequential to our model. Finally, we assume that any bus may serve any trip. Note that it would be straightforward to extend our model and algorithms to consider constraints on which buses may serve a trip (e.g., based on passenger capacity).

Charging To charge its electric buses, the agency operates a set of charging poles CP, which are typically located at bus garages or charging stations in practice. We let cp^location ∈ L denote the location of charging pole cp. For the sake of computational tractability, we use a discrete-time model to schedule charging, which divides time into uniform-length time slots S. A time slot s ∈ S begins at s^start and ends at s^end. In Section 4, we will present numerical results on the practical impact of varying the length of time slots. A charging pole cp ∈ CP can charge P(cp, M_v) energy to one electric bus v in one time slot. We call the pair of a charging pole cp ∈ CP and a time slot s ∈ S a charging slot, and we let C = CP × S denote the set of charging slots.

Non-Service Trips Besides serving transit trips, buses may also need to drive between trips and charging poles. For example, if a bus has to serve a trip that starts from a location that is different from the destination of the previous trip, the bus first needs to drive to the origin of the next trip. An electric bus may also need to drive to a charging pole after serving a transit trip to recharge, and then drive from the pole to the origin of the next transit trip. We will refer to these deadhead trips, which are driven outside of revenue service, as non-service trips. We let T(l₁, l₂) denote the non-service trip from location l₁ ∈ L to l₂ ∈ L, and we let D(l₁, l₂) denote the time duration of this non-service trip.
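To fix the notation, the following Python dataclasses sketch the entities of the model; the field names are ours, and a charging slot is simplified to carry the per-slot energy P(cp, M_v) directly:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trip:                  # a fixed-route transit trip t ∈ T
    origin: str              # t^origin ∈ L
    destination: str         # t^destination ∈ L
    start: float             # t^start, e.g. minutes from midnight
    end: float               # t^end

@dataclass(frozen=True)
class Bus:                   # a vehicle v ∈ V
    vid: str
    model: str               # M_v ∈ M
    electric: bool           # whether M_v ∈ M^elec
    capacity: float = 0.0    # C_m in kWh, meaningful only for EVs

@dataclass(frozen=True)
class ChargingSlot:          # a pair (cp, s) ∈ C = CP × S
    pole: str                # charging pole cp
    location: str            # cp^location ∈ L
    start: float             # s^start
    end: float               # s^end
    power: float             # energy charged in this slot, P(cp, M_v)
```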
Solution Space
Our primary goal is to assign a bus to each transit trip. Additionally, electric buses may also need to be assigned to charging slots to prevent them from running out of power.
Solution Representation
We represent a solution as a set of assignments A. For each trip t ∈ T, a solution assigns exactly one bus v ∈ V to serve t; this assignment is represented by the relation A_t → v. Secondly, each electric bus v must be charged when its battery state of charge drops below the safe level for operation. A solution assigns at most one electric bus v to each charging slot (cp, s) ∈ C; this assignment is represented by the relation A_{(cp,s)} → v. We assume that when a bus is assigned for charging, it remains at the charging pole for the entire duration of the corresponding time slot.
Constraints If a bus is assigned to serve an earlier transit trip t₁ and a later trip t₂, then the duration of the non-service trip from t₁^destination to t₂^origin must be less than or equal to the time between t₁^end and t₂^start. Otherwise, it would not be able to serve t₂ on time. We formulate this constraint as follows:

∀ t₁, t₂ ∈ T, t₁^start ≤ t₂^start, A_{t₁} → v, A_{t₂} → v :
    D(t₁^destination, t₂^origin) ≤ t₂^start − t₁^end    (1)

Note that if the constraint is satisfied by every pair of consecutive trips assigned to a bus, then it is also satisfied by every pair of non-consecutive assigned trips.
We need to formulate similar constraints for non-service trips to, from, and between charging slots:

∀ t ∈ T, (cp, s) ∈ C, t^start ≤ s^start, A_t → v, A_{(cp,s)} → v :
    D(t^destination, cp^location) ≤ s^start − t^end    (2)

∀ (cp, s) ∈ C, ∀ t ∈ T, s^start ≤ t^start, A_{(cp,s)} → v, A_t → v :
    D(cp^location, t^origin) ≤ t^start − s^end    (3)

∀ (cp₁, s₁), (cp₂, s₂) ∈ C, s₁^start ≤ s₂^start, A_{(cp₁,s₁)} → v, A_{(cp₂,s₂)} → v :
    D(cp₁^location, cp₂^location) ≤ s₂^start − s₁^end    (4)
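Constraints (1)-(4) all follow one pattern — the dead-head duration must fit into the idle gap between two consecutive assignments — so a single check suffices. A sketch using the Trip and ChargingSlot classes above, with D a duration table indexed by location pairs:

```python
def timing_ok(first, second, D):
    """Shared form of Equations (1)-(4): the non-service trip between
    two consecutive assignments of one bus must fit between them.

    `first` and `second` are each either a Trip or a ChargingSlot;
    D[(l1, l2)] gives the dead-head duration from l1 to l2."""
    loc_out = first.destination if isinstance(first, Trip) else first.location
    loc_in = second.origin if isinstance(second, Trip) else second.location
    return D[(loc_out, loc_in)] <= second.start - first.end
```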
We also need to ensure that electric buses never run out of power. First, we let N(A, v, s) denote the set of all non-service trips that bus v needs to complete by the end of time slot s according to the set of assignments A. In other words, N(A, v, s) is the set of all non-service trips to the origins of transit trips that start by s^end and to the locations of charging slots that start by s^end. Next, we let E(v, t) denote the amount of energy used by bus v to drive a transit or non-service trip t. Then, we let e(A, v, s) be the amount of energy used by bus v for all trips completed by the end of time slot s:

e(A, v, s) = Σ_{t ∈ N(A,v,s)} E(v, t) + Σ_{t ∈ T : A_t → v, t^end ≤ s^end} E(v, t)    (5)
Similarly, we let r(A, v, s) be the amount of energy charged to bus v by the end of time slot s:

r(A, v, s) = Σ_{(cp, ŝ) ∈ C : A_{(cp,ŝ)} → v, ŝ^end ≤ s^end} P(cp, M_v)    (6)
Since a bus can be charged only for complete time slots, both the minimum and maximum of the battery level will be reached at the end of a time slot. Therefore, we can express the constraint that the battery level of bus v must always remain between 0 and the battery capacity C_{M_v} as

∀ s ∈ S, ∀ v ∈ V : 0 < r(A, v, s) − e(A, v, s) ≤ C_{M_v}.    (7)
Note that we can give vehicles an initial battery charge by adding dummy charging slots before the day starts.
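The bookkeeping behind Equations (5)-(7) can be sketched as a single pass over one bus's chronologically sorted assignments (Trips and ChargingSlots from the earlier sketch). As in the text, an initial dummy charging slot at the head of the list supplies the starting charge; energy(bus, l1, l2) is our stand-in for the predicted energy use E:

```python
def battery_feasible(assignments, bus, energy):
    """Check Equation (7) for one electric bus: the battery level must
    stay in (0, C_{M_v}] after every service, dead-head, and charging
    event. `assignments` is sorted by start time."""
    out = lambda x: x.destination if isinstance(x, Trip) else x.location
    into = lambda x: x.origin if isinstance(x, Trip) else x.location
    level, prev = 0.0, None
    for item in assignments:
        if prev is not None:                # dead-head leg in between
            level -= energy(bus, out(prev), into(item))
            if level <= 0:
                return False
        if isinstance(item, Trip):          # service trip
            level -= energy(bus, item.origin, item.destination)
            if level <= 0:
                return False
        else:                               # charging slot: add P(cp, M_v)
            level = min(level + item.power, bus.capacity)
        prev = item
    return True
```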
Objective
Our objective is to minimize the energy use of the vehicles. This objective can minimize both environmental impact and operating costs by imposing the appropriate cost factors on the energy use of liquid-fuel and electric vehicles. We let K^gas and K^elec denote the unit costs of energy use for liquid-fuel and electric vehicles, respectively. Then, by applying the earlier notation e(A, v, s) to all vehicles, we can express our objective as

min_A  Σ_{v ∈ V : M_v ∈ M^gas} K^gas · e(A, v, s_∞) + Σ_{v ∈ V : M_v ∈ M^elec} K^elec · e(A, v, s_∞)    (8)

where s_∞ denotes the last time slot of the day.
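Equation (8) then reduces to a one-line evaluation; energy_used(v) stands in for e(A, v, s_∞), bus v's total service plus dead-head energy over the day (helper name ours):

```python
def total_cost(buses, K_gas, K_elec, energy_used):
    """Objective (8): weighted energy use over the whole fleet."""
    return sum((K_elec if v.electric else K_gas) * energy_used(v)
               for v in buses)
```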
Algorithms
First, we present an integer program to find optimal solutions (Section 3.1), whose linear relaxation we will also use as a lower bound in our numerical evaluation. Since the integer program does not scale well, we will also introduce efficient heuristic (Section 3.2) and genetic algorithms (Section 3.3).
Integer Program
Variables Our integer program has five sets of variables. Three of them are binary to indicate assignments and non-service trips. First, a_{v,t} = 1 (or 0) indicates that trip t is assigned to bus v (or that it is not). Second, a_{v,(cp,s)} = 1 (or 0) indicates that charging slot (cp, s) is assigned to electric bus v (or not). Third, m_{v,x₁,x₂} = 1 (or 0) indicates that bus v takes the non-service trip between a pair x₁ and x₂ of transit trips and/or charging slots (or not). Note that for requiring non-service trips (see Equations (1) to (4)), we will treat transit trips and charging slots similarly since they induce analogous constraints. There are also two sets of continuous variables. First, c_s^v ∈ [0, C_{M_v}] represents the amount of energy charged to electric bus v in time slot s. Second, e_s^v ∈ [0, C_{M_v}] represents the battery level of electric bus v at the start of time slot s (considering energy use only due to trips that have ended by that time). Due to the continuous variables, our program is actually a mixed-integer program.
Constraints First, we ensure that every transit trip is served by exactly one bus:

∀ t ∈ T : Σ_{v ∈ V} a_{v,t} = 1
Second, we ensure that each charging slot is assigned at most one electric vehicle:

∀ (cp, s) ∈ C : Σ_{v ∈ V : M_v ∈ M^elec} a_{v,(cp,s)} ≤ 1
Next, we ensure that Equations (1) to (4) are satisfied. We let F(x₁, x₂) be true if a pair x₁, x₂ of transit trips and/or charging slots satisfies the applicable one from Equations (1) to (4), and let it be false otherwise. Then, we can express these constraints as follows:

∀ v ∈ V, ∀ x₁, x₂ with ¬F(x₁, x₂) : a_{v,x₁} + a_{v,x₂} ≤ 1
When a bus v is assigned to both x₁ and x₂, but it is not assigned to any other transit trips or charging slots in between (i.e., if x₁ and x₂ are consecutive assignments), then bus v needs to take a non-service trip:

m_{v,x₁,x₂} ≥ a_{v,x₁} + a_{v,x₂} − 1 − Σ_{x ∈ T ∪ C : x₁^start < x^start < x₂^start} a_{v,x}
Note that if x₁ and x₂ have the same location, then the non-service trip will take zero time and energy. Finally, we ensure that the battery levels of electric buses remain between zero and capacity. First, for each time slot s and electric bus v, the amount of energy charged c_s^v is subject to

c_s^v ≤ Σ_{cp : (cp,s) ∈ C} a_{v,(cp,s)} · P(cp, M_v).
Then, for the (n+1)-th time slot and for an electric bus v, we can express the variable e_{s_{n+1}}^v as

e_{s_{n+1}}^v = e_{s_n}^v + c_{s_n}^v − Σ_{t ∈ T : s_n^start < t^end ≤ s_n^end} a_{v,t} · E(v, t) − Σ_{x₁,x₂ : s_n^start < x₂^start ≤ s_n^end} m_{v,x₁,x₂} · E(v, T(x₁, x₂))
where s_n is the n-th time slot. Note that since e_s^v ∈ [0, C_{M_v}], this constraint ensures that Equation (7) is satisfied.

Objective We can express Equation (8) as minimizing

Σ_{v ∈ V} K_{M_v} ( Σ_{t ∈ T} a_{v,t} · E(v, t) + Σ_{x₁,x₂ ∈ T ∪ C} m_{v,x₁,x₂} · E(v, T(x₁, x₂)) )

where K_{M_v} is K^elec if M_v ∈ M^elec and K^gas otherwise.
Complexity The integer program contains both variables and constraints in the order of O(|V| · |T | 2 ).
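For reference, a skeleton of the assignment part of the program using the open-source PuLP modeller (the paper itself solves the program with IBM CPLEX); the non-service-trip variables m and the continuous battery variables c, e are omitted for brevity, and cost[v, t] stands in for K_{M_v} · E(v, t):

```python
import pulp

def build_assignment_ip(buses, trips, charging_slots, cost):
    """Binary assignment variables and the two counting constraints of
    the integer program; buses/trips/charging_slots are hashable ids."""
    prob = pulp.LpProblem("mixed_fleet", pulp.LpMinimize)
    a = {(v, t): pulp.LpVariable(f"a_{v}_{t}", cat="Binary")
         for v in buses for t in trips}
    b = {(v, c): pulp.LpVariable(f"b_{v}_{c}", cat="Binary")
         for v in buses for c in charging_slots}
    for t in trips:           # every transit trip is served by one bus
        prob += pulp.lpSum(a[v, t] for v in buses) == 1
    for c in charging_slots:  # at most one bus per charging slot
        prob += pulp.lpSum(b[v, c] for v in buses) <= 1
    prob += pulp.lpSum(cost[v, t] * a[v, t]
                       for v in buses for t in trips)
    return prob, a, b
```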
Heuristic Algorithms
Next, we introduce two polynomial-time heuristic algorithms. Due to lack of space, we will publish the pseudocode of the second in an online appendix; here, we describe its principle.

Feasibility Both heuristic algorithms use Algorithm 1 to ensure that buses are assigned to trips without violating Equations (1) to (4) and (7). Given an electric bus v and trip t, the algorithm first checks whether bus v would have enough energy if we extended the current assignments A by assigning bus v to trip t (energy_feasible(A, v, t)). If this would violate Equation (7), then the algorithm tries to assign bus v to the first available charging slot (assign_charging(A, C, v)), and then checks again whether bus v would have enough energy to serve trip t (energy_feasible(A, v, t)). If it would not, then the charging assignment is removed, and assigning bus v to trip t is deemed infeasible. Otherwise, the algorithm checks whether assigning bus v to trip t would violate any of Equations (1) to (4) (assign_feasible(A, v, t)). For liquid-fuel vehicles, only the last step is performed.
for t ∈ sortedByTime(T) do
    V ← shuffle(V)
    for v ∈ V do
        feasible ← FEASIBLE(A, C, v, t)
        if feasible then
            A ← A ∪ {v, t}
Result: A
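Algorithm 1 itself is only described in prose above; a Python rendering of that feasibility check, with the three prose helpers passed in as functions and A represented as a mutable set of assignments (our representation choice):

```python
def FEASIBLE(A, C, v, t, energy_feasible, assign_charging, assign_feasible):
    """energy_feasible checks Equation (7) for A extended with (v, t);
    assign_charging books the first available charging slot for v (adding
    it to A) or returns None; assign_feasible checks Equations (1)-(4)."""
    if v.electric:
        if not energy_feasible(A, v, t):
            slot = assign_charging(A, C, v)
            if slot is None or not energy_feasible(A, v, t):
                if slot is not None:
                    A.remove((v, slot))  # undo the tentative charging booking
                return False
    return assign_feasible(A, v, t)
```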
Heuristic by Location (Heuristic L): The motivation of this approach is to minimize energy costs by reducing non-service trips. Algorithm 2 first groups together all trips that share an origin or destination location. Then, it iterates over the groups in a random order. For each group, it sorts the trips according to their start times, and then assigns vehicles to the trips one-by-one, always choosing a feasible vehicle at random. The time complexity of the algorithm is O(|T| · log |T|).
Heuristic by Bus (Heuristic B):
The motivation of this approach is to optimize the utilization of every bus. First, the algorithm sorts all transit trips based on their start time. Then, it iterates over the buses in a random order. For each bus, it tries to assign every trip to the bus, going over the trips one-by-one. The time complexity of this approach is O(|T| · log |T|).
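A compact sketch of Heuristic B; FEASIBLE is a callable wrapping Algorithm 1 (e.g., the earlier sketch with its helper arguments bound via functools.partial), and trips expose a start attribute:

```python
import random

def heuristic_by_bus(buses, trips, C, FEASIBLE):
    """Visit buses in random order; greedily pack each bus with every
    remaining trip, in start-time order, that Algorithm 1 accepts."""
    A = set()
    remaining = sorted(trips, key=lambda t: t.start)  # the sort dominates cost
    for v in random.sample(buses, len(buses)):
        still_open = []
        for t in remaining:
            if FEASIBLE(A, C, v, t):
                A.add((v, t))
            else:
                still_open.append(t)
        remaining = still_open
    return A
```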
Genetic Algorithm
Building on the two heuristic algorithms, we introduce a genetic algorithm, which uses the heuristic algorithms for its initial population P₀, but improves upon them using iterative random search. The time complexity of each iteration is O(|T| · |P₀| · log |T|).
Initialization The genetic algorithm starts with a fixed-size initial population P 0 of solutions. We generate each member of the initial population using the two heuristic approaches.
Selection The algorithm computes the energy cost of each solution in the current population P i , and then chooses the N lowest-cost solutions as the basis for the next generation of the population. To create the next generation, the algorithm performs mutation and crossover.
Algorithm 3: MUTATION(P_i, C)
    A ← random(P_i)
    mCount ← max(1, |A| · mutation_prob)
    for 1 → mCount do
        v₁, v₂ ← random(V)
        t₁ ← random(A, v₁)
        t₂ ← random(A, v₂)
        A ← A − {v₁, t₁} − {v₂, t₂}
        feasible₁ ← FEASIBLE(A, C, v₁, t₂)
        feasible₂ ← FEASIBLE(A, C, v₂, t₁)
        feasible ← feasible₁ ∧ feasible₂
        if feasible then
            A ← A ∪ {v₁, t₂} ∪ {v₂, t₁}
    Result: A
Mutation (Algorithm 3)
Mutation first selects one solution A at random from the basis of the next generation. Then, it selects two buses v 1 and v 2 at random, and selects a transit trip t 1 at random from the trips assigned to v 1 by A, and trip t 2 at random from the trips assigned to v 2 . If the assignments of trips t 1 and t 2 can be switched between buses v 1 and v 2 without violating any constraints, then it switches them. The algorithm repeats from selecting two buses at random, until a desired number of mutation attempts is reached.
Crossover (Algorithm 4) Crossover first selects two solutions A₁ and A₂ at random from the basis of the next generation, and chooses a crossover point at random from (0, 1), which is used to divide the day into two parts at random. Then, it splits each solution Aᵢ into two subsets of assignments based on the crossover point: assignments that belong to the first part of the day form the first subset, while assignments that belong to the second part form the second subset. Next, it merges the four parts by swapping the parts of the two solutions. Finally, it selects the two lowest-cost solutions out of the initial solutions and the merged solutions, automatically discarding infeasible ones.
Algorithm 4: CROSSOVER(P_i, C)
    P_c ← ∅
    A₁ ← random(P_i)
    A₂ ← random(P_i − A₁)
    crossover_point ← random(0, 1)
    A1a, A1b ← split(A₁, crossover_point)
    A2a, A2b ← split(A₂, crossover_point)
    A′₁ ← merge(A1a, A2b, C)
    A′₂ ← merge(A2a, A1b, C)
    P_c ← select({A₁, A₂, A′₁, A′₂}, 2)
    Result: P_c
Iteration and Termination
In each iteration, the genetic algorithm generates a new generation based on selection, mutation, and crossover. The algorithm terminates when there is no decrease in the minimum energy cost over a number of new generations, which indicates that the algorithm has converged.
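Putting the pieces together, the outer loop can be sketched as follows; energy_cost evaluates Equation (8), MUTATION and CROSSOVER are Algorithms 3 and 4, and the population sizing and patience value are illustrative choices of ours:

```python
def genetic_loop(P0, C, N, energy_cost, MUTATION, CROSSOVER, patience=50):
    """Keep the N cheapest solutions, refill the population with
    offspring, and stop once the best cost has not improved for
    `patience` consecutive generations."""
    population, best, stall = list(P0), float("inf"), 0
    while stall < patience:
        survivors = sorted(population, key=energy_cost)[:N]
        offspring = [MUTATION(survivors, C)
                     for _ in range(max(0, len(P0) - N - 2))]
        offspring += list(CROSSOVER(survivors, C))  # yields two solutions
        population = survivors + offspring
        new_best = min(energy_cost(s) for s in population)
        stall = 0 if new_best < best else stall + 1
        best = min(best, new_best)
    return min(population, key=energy_cost)
```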
Numerical Results
We evaluate our algorithms using real data from our partner transit agency.
Dataset
Public Transit Schedule We obtain the schedule of the transit agency in GTFS format, which includes all trips, time schedules, bus stop locations, etc. Trips are organized into 19 bus lines (i.e., bus routes) throughout the city. For our numerical evaluation, we consider trips served during weekdays (Monday to Friday) since these are the busiest days. Each weekday, the agency must serve 1,320 trips using 3 electric buses of model BYD K9S, and 50 diesel and hybrid buses.
Energy Use Prediction
To estimate the energy use of each transit and non-service trip, we use a neural network based prediction model that we train on high-resolution historical data. Our partner agency has installed sensors on its mixed fleet of vehicles, and it has been collecting data continuously for over a year at 1-second intervals from 3 electric, 41 diesel, and 6 hybrid buses. This dataset includes location traces from GPS, real-time fuel and electricity use, battery charge, etc. To train our predictor, we select two months of data from 3 electric and 3 diesel vehicles. In total, we obtain around 6.6 million datapoints for electric buses and 1.1 million datapoints for diesel buses (fuel data was recorded less frequently).
We augment this dataset with additional features related to weather, road, and traffic conditions to improve our energy-use predictor. We incorporate hourly predictions of weather features, which are based on data collected using the Dark Sky API [Sky, 2019] at 5-minute intervals. Weather features include temperature, humidity, pressure, wind speed, and precipitation. We include road-condition features based on a street-level map from OpenStreetMap. We also include road gradients, which we compute along transit routes using an elevation map that is based on high-accuracy LiDAR data from the state government. Finally, we incorporate predictions of traffic conditions, which are based on data obtained using the HERE Maps API [HERE, 2020].
In total, we use 26 different features to train a neural network model for energy prediction. We chose this model based on its accuracy after comparing it with various other regression models. Our neural network has one input, two hidden, and one output layer, all using sigmoid activation. We train a different prediction model for each vehicle model, which we then use to predict energy use for every trip.
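A minimal Keras sketch of one such per-vehicle-model predictor; the layer widths, optimizer, and loss are our assumptions (the paper specifies only the depth and the sigmoid activations), and targets are assumed normalized to [0, 1] to match the sigmoid output:

```python
import tensorflow as tf

def build_energy_predictor(n_features=26):
    """One input, two hidden, and one output layer, all sigmoid."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="sigmoid",
                              input_shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="sigmoid"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```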
Non-Service Trips Since non-service trips are not part of the transit schedule, we need to plan their routes and estimate their durations. For this, we use the Google Directions API, which we query for all 2,070 possible non-service trips (i.e., for every pair of locations in the network) for each 1-hour interval of a selected weekday from 5am to 11pm. The response to each query includes an estimated duration as well as a detailed route, which we combine with our other data sources and then feed into our energy-use predictors.
Charging Rate and Energy Costs Electric buses of model BYD K9S have a battery capacity of 270 kWh, and the charging poles of the agency can charge a BYD K9S model bus at a rate of 65 kW. We consider 3 charging poles for our numerical evaluation. Finally, based on data from the transit agency, we consider electricity cost to be $9.602 per 100 kWh and diesel cost to be $2.05 per gallon.

Results

We first study how well our algorithms scale with increasing problem sizes. To this end, we measure the computation times of our algorithms with 1 to 5 bus lines (selected from the real bus lines), and 10 selected trips for each line. For each case, we assume that the agency has 5 times as many vehicles as bus lines, and that the agency has 3 EVs regardless of the number of lines. We choose the length of time slots to be one hour. We solve the integer program (IP) and its linear relaxation using IBM CPLEX. We run all algorithms on a machine with a Xeon E5-2680 CPU, which has 28 cores, and 128 GB of RAM. Figure 1 shows the computation time for the IP, its linear relaxation (LP), and the Heuristic L algorithm. As expected, the time to solve the IP is significantly higher and increases rapidly with the number of lines. On the other hand, the heuristic algorithm is orders of magnitude faster and scales well. Note that we observe similar results for the other heuristic and genetic algorithms; we omit these results for ease of presentation. Next, we evaluate the performance of our algorithms with respect to solution quality, that is, energy cost. We use the exact same setting as in the previous experiment (Figure 1), except that now we increase the number of bus lines up to 10. For such larger instances, solving the IP is infeasible. Figure 2 shows that our efficient algorithms perform well: the genetic algorithm performs almost as well as the LP lower bound (the difference remains below 10%), and the heuristic algorithms perform only slightly worse. For larger instances, the ratio between the performance of our heuristic algorithms and the LP remains stable, which suggests that our heuristic algorithms still perform close to optimal. We introduced uniform-length time slots for the sake of computational tractability. Now, we study whether discretizing time has a significant impact on solution quality by comparing various time-slot lengths. Since the IP can find optimal solutions for small instances, we analyze the performance of the IP with various slot lengths for 1 or 2 lines with 10 trips for each line. Figure 3 shows that the loss in solution quality is very small even with longer slots, such as 1 hour.
Finally, we compute assignments for the complete schedule of 1,320 trips with 3 electric and 50 liquid-fuel buses using the heuristic and genetic algorithms. We were able to assign the full schedule using the Heuristic B algorithm in around 3 minutes, resulting in a total energy cost of $4,618. Meanwhile, the genetic algorithm runs for around 3 days (around 3,500 iterations) and results in an energy cost of $4,616. Since an agency might need to find a new assignment every day (e.g., because some buses are unavailable due to maintenance), the heuristic algorithm can be a better option.
Related Work
GPS data, bus stop data, bus transaction data, traffic data, and electricity consumption data have been collected to generate simulation models for energy prediction and optimization in transit networks [Wang et al., 2018; Tian et al., 2016; Wang et al., 2017]. Several approaches have been applied in the domain of energy optimization for transit networks, such as Markov decision processes [Wang et al., 2018], neural networks [Nageshrao et al., 2017], k-greedy algorithms [Paul and Yamada, 2014], genetic algorithms [Durango-Cohen and McKenzie, 2018], and evolutionary algorithms [Santos et al., 2016]. Some works propose solutions that reduce energy costs by changing bus schedules or routes [Hassold and Ceder, 2014; Wang et al., 2018], which can cause inconvenience to passengers.
However, very few research efforts have considered mixed fleets of vehicles [Santos et al., 2016]. Allowing a bus to serve multiple lines instead of limiting it to a single line can reduce energy cost [Kliewer et al., 2006; Kliewer et al., 2008]; we also incorporate a similar approach. Trips may be grouped as origin-destination pairs and assigned to vehicles, which also reduces energy costs; we again explore a similar approach. Our work considers a general formulation of the problem, and we evaluate it based on real data.
To reduce electricity cost at charging stations or garages, [Jahic et al., 2019] applied preemptive, quasi-preemptive, and non-preemptive approaches to optimally utilize the maximum load. Since charging time occupies a considerable portion of the daily routine, [Chao and Xiaohong, 2013] proposes a battery replacement technique, which reduces the complexity of scheduling with respect to charging, but is not feasible when the number of electric buses is small, since purchasing additional batteries would not be cost effective. [Murphey et al., 2012] presents the development of a machine learning framework for energy management optimization in an HEV, developing algorithms based on long- and short-term knowledge about the driving environment. For the long-term knowledge, the framework uses a neural network (NN) to model the road environment of a driving trip as a sequence of different roadway types and different traffic congestion levels. For short-term knowledge, it uses an additional NN to model the driver's instantaneous reaction to the driving environment. Then, using the predicted values, an additional set of NNs learn to emulate the optimal energy management strategy.
Conclusion
Due to the high upfront costs of EVs, many public transit agencies are forced to operate mixed fleets of EVs, HEVs, and ICEVs. In this paper, we formulated the novel problem of minimizing operating costs and environmental impact for mixed fleets of public transit vehicles, and provided heuristic and genetic algorithms for the problem. Based on real-world data, we demonstrated that these algorithms scale well for larger instances and can provide near-optimal solutions.
Figure 1: Computation times of various algorithms. Please note the logarithmic scale on the vertical axis.

Figure 2: Energy costs for allocations using various algorithms. Please note the logarithmic scale on the vertical axis.

Figure 3: Energy cost using allocations found by the integer program for various time-slot lengths.
Algorithm 2: HEURISTIC BY LOCATION(V, T, C)
    stop_pairs ← {}
    for t ∈ T do
        stop_pair ← {t^origin, t^destination}
        T ← stop_pairs.get(stop_pair)
        T ← T ∪ {t}
        stop_pairs ← stop_pairs ∪ {stop_pair, T}
    stop_pairs' ← shuffle(stop_pairs)
    for stop_pair ∈ stop_pairs' do
        T ← stop_pairs.get(stop_pair)
Acknowledgment This material is based upon work supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE), under Award Number DE-EE0008467. Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
References

[Chao and Xiaohong, 2013] Zhu Chao and Chen Xiaohong. Optimizing battery electric bus transit vehicle scheduling with battery exchanging: Model and case study. Procedia - Social and Behavioral Sciences, 96:2725-2736, 2013.

[Durango-Cohen and McKenzie, 2018] Pablo L. Durango-Cohen and Elaine C. McKenzie. Trading off costs, environmental impact, and levels of service in the optimal design of transit bus fleets. Transportation Research Part A: Policy and Practice, 114:354-363, 2018.

[EIA, 2018] EIA. U.S. Energy Information Administration: Use of energy explained - energy use for transportation (2018). https://www.eia.gov/energyexplained/use-of-energy/transportation.php, Accessed: January 21st, 2020.

[Hassold and Ceder, 2014] Stephan Hassold and Avishai Ceder. Improving energy efficiency of public transport bus services by using multiple vehicle types. Transportation Research Record, 2415(1):65-71, 2014.

[HERE, 2020] HERE. HERE Maps API. https://developer.here.com/, Accessed: January 21st, 2020.

[Jahic et al., 2019] Amra Jahic, Mina Eskander, and Detlef Schulz. Preemptive vs. non-preemptive charging schedule for large-scale electric bus depots. In 2019 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe), pages 1-5. IEEE, 2019.

[Kliewer et al., 2006] Natalia Kliewer, Taieb Mellouli, and Leena Suhl. A time-space network based exact optimization model for multi-depot bus scheduling. European Journal of Operational Research, 175(3):1616-1627, 2006.

[Kliewer et al., 2008] Natalia Kliewer, Vitali Gintner, and Leena Suhl. Line change considerations within a time-space network based multi-depot bus scheduling model. In Computer-aided Systems in Public Transport, pages 57-70. Springer, 2008.

[Li et al., 2019] Lu Li, Hong K. Lo, and Feng Xiao. Mixed bus fleet scheduling under range and refueling constraints. Transportation Research Part C: Emerging Technologies, 104:443-462, 2019.

[Murphey et al., 2012] Yi Lu Murphey, Jungme Park, Zhihang Chen, Ming L. Kuang, M. Abul Masrur, and Anthony M. Phillips. Intelligent hybrid vehicle power control - part I: Machine learning of optimal vehicle power. IEEE Transactions on Vehicular Technology, 61(8):3519-3530, 2012.

[Nageshrao et al., 2017] Subramanya P. Nageshrao, Jubin Jacob, and Steven Wilkins. Charging cost optimization for EV buses using neural network based energy predictor. IFAC-PapersOnLine, 50(1):5947-5952, 2017.

[Paul and Yamada, 2014] Topon Paul and Hisashi Yamada. Operation and charging scheduling of electric buses in a city bus route network. In 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), pages 2780-2786. IEEE, 2014.

[Santos et al., 2016] Diogo Santos, Zafeiris Kokkinogenis, Jorge Freire de Sousa, Deborah Perrotta, and Rosaldo J. F. Rossetti. Towards the integration of electric buses in conventional bus fleets. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pages 88-93. IEEE, 2016.

[Sky, 2019] Dark Sky. API documentation. https://darksky.net/dev/docs, Accessed: January 21st, 2020, 2019.

[Tian et al., 2016] Zhiyong Tian, Taeho Jung, Yi Wang, Fan Zhang, Lai Tu, Chengzhong Xu, Chen Tian, and Xiang-Yang Li. Real-time charging station recommendation system for electric-vehicle taxis. IEEE Transactions on Intelligent Transportation Systems, 17(11):3098-3109, 2016.

[Wang et al., 2017] Yuan Wang, Dongxiang Zhang, Lu Hu, Yang Yang, and Loo Hay Lee. A data-driven and optimal bus scheduling model with time-dependent traffic and demand. IEEE Transactions on Intelligent Transportation Systems, 18(9):2443-2452, 2017.

[Wang et al., 2018] Guang Wang, Xiaoyang Xie, Fan Zhang, Yunhuai Liu, and Desheng Zhang. bCharge: Data-driven real-time charging scheduling for large-scale electric bus fleets. In 2018 IEEE Real-Time Systems Symposium (RTSS), pages 45-55. IEEE, 2018.
| [] |
[
"The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization",
"The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization"
] | [
"Aaron Defazio \nFacebook AI Research New York\n\n",
"Robert M Gower Facebook \nFacebook AI Research New York\n\n",
"A I Research \nFacebook AI Research New York\n\n",
"New York \nFacebook AI Research New York\n\n"
] | [
"Facebook AI Research New York\n",
"Facebook AI Research New York\n",
"Facebook AI Research New York\n",
"Facebook AI Research New York\n"
] | [] | The convergence rates for convex and nonconvex optimization methods depend on the choice of a host of constants, including step sizes, Lyapunov function constants and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient and the stochastic variance reduced method (SVRG).2. We leverage factorial powers to prove tighter anytime convergence rates for SGD with momentum in the non-smooth convex and strongly-convex cases, see rows 1 and 2 inTable 1.3. We describe a novel SVRG variant with inner-loop factorial power momentum, which improves upon the SVRG++ [Allen Zhu and Yuan, 2016] method in both the convex and strongly convex case, see rows 3 and 4 inTable 1. | null | [
"https://export.arxiv.org/pdf/2006.01244v3.pdf"
] | 224,815,911 | 2006.01244 | 99882599b5de7ee5649bcb2431aaa6d19de5f28d |
The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization

Aaron Defazio and Robert M. Gower
Facebook AI Research, New York
Abstract

The convergence rates for convex and nonconvex optimization methods depend on the choice of a host of constants, including step sizes, Lyapunov function constants and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient and the stochastic variance reduced method (SVRG).
Introduction
Consider the stochastic optimization problem
$$x^* \in \arg\min_{x \in C} f(x) = \mathbb{E}_{\xi}[f(x, \xi)], \qquad (1)$$
where each f (x, ξ) is convex but potentially nonsmooth in x and C ⊂ R d is a bounded convex set.
To solve (1) we use an iterative method that at the kth iteration samples a stochastic (sub-)gradient ∇f (x k , ξ) and uses this gradient to compute a new, and hopefully improved, x k+1 iterate. The simplest of such methods is Stochastic Gradient Descent (SGD) with projection:
$$x_{k+1} = \Pi_C\big(x_k - \eta_k \nabla f(x_k, \xi)\big), \qquad (2)$$
where Π C is the projection onto C and η k is a sequence of step-sizes. Both the variance from the sampling procedure, as well as the non-smoothness of f prevent the sequence of x k iterates from converging. The two most commonly used tools to deal with this variance are iterate averaging techniques [Polyak, 1964] and decreasing step-sizes [Robbins and Monro, 1951]. By carefully choosing a sequence of averaging parameters and decreasing step-sizes we can guarantee that the variance of SGD will be kept under control and the method will converge. In this work we focus on an alternative to averaging: momentum. Momentum can be used as a replacement for averaging for non-smooth problems, both in the stochastic and non-stochastic setting. Projected SGD with momentum can be written as
$$m_{k+1} = \beta m_k + (1-\beta)\nabla f(x_k, \xi_k), \qquad x_{k+1} = \Pi_C\big(x_k - \alpha_k m_{k+1}\big), \qquad (3)$$
where α k and β are step-size and momentum parameters respectively. Using averaging and momentum to handle variance introduces a new problem: choosing and tuning the additional sequence of parameters. In this work we introduce the use of factorial powers for the averaging, momentum, and step-size parameters. As we will show, the use of factorial powers simplifies and strengthens the convergence rate proofs.
Contributions
1. We introduce factorial powers as a tool for providing tighter or more elegant proofs for the convergence rates of methods using averaging, including dual averaging and Nesterov's accelerated gradient method; see row 5 in Table 1.

2. We leverage factorial powers to prove tighter anytime convergence rates for SGD with momentum in the non-smooth convex and strongly-convex cases; see rows 1 and 2 in Table 1.

3. We describe a novel SVRG variant with inner-loop factorial power momentum, which improves upon the SVRG++ [Allen Zhu and Yuan, 2016] method in both the convex and strongly convex case; see rows 3 and 4 in Table 1.

4. We identify and unify a number of existing results in the literature that make use of factorial power averaging, momentum or step-sizes.
Factorial Powers
The (rising) factorial powers [Graham et al., 1994] are typically defined using a positive integer r and a nonnegative integer k as
$$k^{\overline{r}} = k(k+1)\cdots(k+r-1) = \prod_{i=0}^{r-1}(k+i). \qquad (4)$$
Their behavior is similar to the simple powers $k^r$ as $k^{\overline{r}} = O(k^r)$, and as we will show, they can typically replace the use of simple powers in proofs. They are closely related to the simplicial polytopic numbers $P_r(k)$, such as the triangular numbers $\frac{1}{2}k(k+1)$ and the tetrahedral numbers $\frac{1}{6}k(k+1)(k+2)$, by the relation $P_r(k) = \frac{1}{r!}k^{\overline{r}}$. See the left of Figure 1 for contour plots comparing factorial and simple powers.
The advantage of $k^{\overline{r}}$ over the simple power $k^r$ is that in many cases that arise in proofs, additive, rather than multiplicative, operations are applied to the constants. As we show in Section 3, summation and difference operations applied to $k^{\overline{r}}$ result in other factorial powers; that is, factorial powers are closed under summation and differencing. In contrast, when summing or subtracting simple powers of the form $k^r$, the resulting quantities are polynomials rather than simple powers. It is this closure under summation and differencing that allows us to derive improved convergence rates when choosing step-sizes and momentum parameters based on factorial powers.
Our theory will use a generalization of the factorial powers to non-integers r ∈ R and integers k ≥ 1 such that k + r > 0, using the Gamma function $\Gamma(k) := \int_0^{\infty} x^{k-1} e^{-x}\,dx$, so that
$$k^{\overline{r}} := \frac{\Gamma(k+r)}{\Gamma(k)}. \qquad (5)$$
We also use the convention that $0^{\overline{r}} = 0$ except for $0^{\overline{0}} = 1$. This is a proper extension because, when k is an integer, we have that $\Gamma(k) = (k-1)!$ and consequently (5) is equal to (4). This generalized sequence is particularly useful for the values r = 1/2 and r = −1/2, as they may replace the use of $\sqrt{k}$ and $1/\sqrt{k}$ respectively in proofs.
The factorial powers can be computed efficiently using the log-gamma function to prevent overflow. Using the factorial powers as step sizes or momentum constants adds no computational overhead as they may be computed recursively using simple algebraic operations as we show below. The base values for the recursion may be precomputed as constants to avoid the overhead of gamma function evaluations entirely.
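As a concrete illustration, here is a minimal Python sketch of both computations; the function names are ours for illustration and only the standard library is assumed:

import math

def factorial_power(k, r):
    # Rising factorial power k^(r) = Gamma(k + r) / Gamma(k), computed in
    # log-space with math.lgamma to prevent overflow; valid for k >= 1 and k + r > 0.
    if k == 0:
        return 1.0 if r == 0 else 0.0  # convention: 0^(0) = 1 and 0^(r) = 0
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

def factorial_power_sequence(r, n):
    # Maintain k^(r) for k = 1, ..., n recursively via property (8):
    # (k + 1)^(r) = ((k + r) / k) * k^(r); only the base value needs lgamma.
    vals = [factorial_power(1, r)]
    for k in range(1, n):
        vals.append(vals[-1] * (k + r) / k)
    return vals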
Notation and Assumptions
We assume throughout that f(x, ξ) is convex in x. Let ∇f(x, ξ_k) denote the subgradient of f(x, ξ_k) given to the optimization algorithm at step k. Let C ⊂ R^d be a convex set and let R > 0 be the radius of the smallest Euclidean-norm ball around the origin that contains the set C. We define the projection onto C as $\Pi_C(x) := \arg\min_{z \in C} \|z - x\|$. In addition to assuming that f(x, ξ) is convex, we will use one of the following two sets of assumptions depending on the setting.

Non-smooth functions. The function f(·, ξ) is Lipschitz with constant G > 0 for all ξ, that is,
$$|f(x, \xi) - f(y, \xi)| \le G\|x - y\|, \quad \forall x, y \in \mathbb{R}^d. \qquad (6)$$

Smooth functions. The gradient ∇f(·, ξ) is Lipschitz with constant L > 0 for all ξ, that is,
$$\|\nabla f(x, \xi) - \nabla f(y, \xi)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^d. \qquad (7)$$
We assume that σ² < ∞, where $\sigma^2 = \mathbb{E}_{\xi}\|\nabla f(x^*, \xi)\|^2$.

Strongly convex functions. We say that f(x) is µ-strongly convex if $f(x) - \frac{\mu}{2}\|x\|^2$ is convex.

We use the shorthand notation $\mathbb{E}_{\xi}\|\cdot\|^2 = \mathbb{E}_{\xi}\big[\|\cdot\|^2\big]$ and will write E instead of E_ξ when the conditional context is clear. We defer all proofs to the supplementary material.
Properties of Factorial Powers
The factorial powers obey a number of properties, see Table 2. These properties allow for a type of "finite" or "umbral" calculus that uses sums instead of integrals [Graham et al., 1994]. A few of these properties, such as the summation and differencing, are given in Chapter 2.6 for integer values in [Graham et al., 1994]. We carefully extend these properties to the non-integer setting. All the proofs of these properties can be found in Section A in the supplementary material.
These properties are key for deriving simple and tight convergence proofs. For instance, when using telescoping in a proof of convergence, we often need a summation property. For the factorial powers we have the simple formula (10). This shows that the factorial powers are closed under summation, because on both sides of (10) we have factorial powers. This formula is a discrete analogue of the definite integral $\int_a^b x^r\,dx = \frac{1}{r+1}b^{r+1} - \frac{1}{r+1}a^{r+1}$. In contrast, when summing power sequences, we rely on Faulhaber's formula:
$$\sum_{i=1}^{k} i^r = \frac{k^{r+1}}{r+1} + \frac{1}{2}k^r + \sum_{i=2}^{r} \frac{B_i}{i!}\,\frac{r!}{(r-i+1)!}\,k^{r-i+1}, \qquad (14)$$
which involves the Bernoulli numbers $B_j := \sum_{i=0}^{j} \frac{1}{i+1} \sum_{\nu=0}^{i} (-1)^{\nu} \binom{i}{\nu} (\nu+1)^{j}$. This is certainly not as simple as (10). Furthermore, extending (14) to non-integer r complicates matters further [McGown and Parks, 2007]. In contrast, the summation property (10) holds for non-integer values.
Another common property used in telescoping arguments is the difference property (11). Once again we have that factorial powers are closed under differencing. In contrast, the simple powers instead require the use of inequalities such as
$$r x^{r-1} \le (x+1)^r - x^r \le r(x+1)^{r-1} \quad \text{for } r < 0 \text{ or } r > 1,$$
$$r(x+1)^{r-1} \le (x+1)^r - x^r \le r x^{r-1} \quad \text{for } r \in (0, 1).$$
Using the above bounds adds slack into the convergence proof and ultimately leads to suboptimal convergence rates.
Half-Powers
The factorial half-powers $k^{\overline{1/2}}$ and $k^{\overline{-1/2}}$ are particularly interesting since they can be used to set the learning rate of the momentum method in lieu of the standard $O(1/\sqrt{k})$ learning rate, as we will show in Theorem 3. The factorial half-powers are similar to the standard half-powers in that their growth is sandwiched by the standard half-powers, as illustrated in Figure 1, where we show that
$$\sqrt{k - 1/2} \le k^{\overline{1/2}} \le \sqrt{k}, \qquad (15)$$
$$\frac{1}{\sqrt{k - 1/2}} < k^{\overline{-1/2}} < \frac{1}{\sqrt{k-1}}. \qquad (16)$$
We also believe this is the first time factorial half-powers have been used in the optimization literature.
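The bounds (15) and (16) are easy to check numerically; a small sketch, assuming the same log-gamma implementation as above:

import math

def fp(k, r):
    # rising factorial power k^(r) = Gamma(k + r) / Gamma(k)
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

for k in range(2, 200):
    assert math.sqrt(k - 0.5) <= fp(k, 0.5) <= math.sqrt(k)              # bound (15)
    assert 1 / math.sqrt(k - 0.5) < fp(k, -0.5) < 1 / math.sqrt(k - 1)   # bound (16)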
From Averaging to Momentum
Here we show that averaging techniques and momentum techniques have a deep connection. We use this connection to motivate the use of factorial power momentum. Our starting point for this is SGD with averaging which
Table 2: Fundamental properties of the factorial powers. Properties (8), (9), (11) and (12) hold for k + r > 0 and k ≥ 1. The summation property (10) holds for a + r > 0 and a ≥ 1. The inversion property (13) holds for k > r and k ≥ 1.

Recursion:    $(k+1)^{\overline{r}} = \frac{k+r}{k}\, k^{\overline{r}}$   (8)
              $(k+1)^{\overline{r}} = (k+r)\,(k+1)^{\overline{r-1}}$   (9)
Summation:    $\sum_{i=a}^{b} i^{\overline{r}} = \frac{1}{r+1} b^{\overline{r+1}} - \frac{1}{r+1} (a-1)^{\overline{r+1}}$   (10)
Differences:  $(k+1)^{\overline{r}} - k^{\overline{r}} = r\,(k+1)^{\overline{r-1}}$   (11)
Ratios:       $\frac{k^{\overline{r+q}}}{k^{\overline{r}}} = (k+r)^{\overline{q}}$   (12)
Inversion:    $k^{\overline{-r}} = \frac{1}{(k-r)^{\overline{r}}}$   (13)
can be written using the online updating form
$$x_{k+1} = \Pi_C\big(x_k - \eta_k \nabla f(x_k, \xi_k)\big), \qquad \bar{x}_{k+1} = (1 - c_{k+1})\bar{x}_k + c_{k+1} x_{k+1}. \qquad (17)$$
At first glance (17) is unrelated to SGD with momentum (3). But surprisingly, SGD with momentum can be re-written in the strikingly similar iterate averaging form given by
$$z_{k+1} = \Pi_C\big(z_k - \eta_k \nabla f(x_k, \xi_k)\big), \qquad x_{k+1} = (1 - c_{k+1}) x_k + c_{k+1} z_{k+1}, \qquad (18)$$
as we prove in the following theorem. This equivalence only holds without the projection operation in Equation 3. We are not aware of any analysis of Equation 3's convergence with the projection operation included, and we believe that incorporating projection as we do in Equation 18 is better, given that it is much more amenable to analysis.
Theorem 1. If C = R^d then the x_k iterates of (3) and (18) are the same so long as z_0 = x_0, c_1 ∈ (0, 1) and
$$\eta_k = \frac{\alpha_k}{c_{k+1}}(1 - \beta), \qquad c_{k+1} = \beta \frac{\alpha_k}{\alpha_{k-1}} \frac{c_k}{1 - c_k}. \qquad (19)$$
Proof. The proof is by induction.
Base case k = 0. From (3) we have that
$$x_1 = x_0 - \alpha_0 m_1 = x_0 - \alpha_0(1-\beta)\nabla f(x_0, \xi_0), \qquad (20)$$
where we used that m_0 = 0. Similarly, for (18) we have that
$$x_1 = (1 - c_1)x_0 + c_1 z_1 = (1 - c_1)x_0 + c_1\big(x_0 - \eta_0 \nabla f(x_0, \xi_0)\big) = x_0 - c_1 \eta_0 \nabla f(x_0, \xi_0), \qquad (21)$$
where we used that z_0 = x_0. Now (21) and (20) are equivalent since $c_1 \eta_0 = \alpha_0(1-\beta)$.
Induction step.
Suppose that the x_k iterates in (3) and (18) are equivalent up to step k, and let us consider the k + 1 step. Let
$$z_{k+1} = x_k - \frac{\alpha_k}{c_{k+1}} m_{k+1}. \qquad (22)$$
Consequently,
$$\begin{aligned}
z_{k+1} &= x_k - \frac{\alpha_k}{c_{k+1}} m_{k+1} \\
&= (x_{k-1} - \alpha_{k-1} m_k) - \frac{\alpha_k}{c_{k+1}}\big(\beta m_k + (1-\beta)\nabla f(x_k, \xi_k)\big) \\
&= x_{k-1} - \Big(\alpha_{k-1} + \beta \frac{\alpha_k}{c_{k+1}}\Big) m_k - \frac{\alpha_k}{c_{k+1}}(1-\beta)\nabla f(x_k, \xi_k) \\
&= x_{k-1} - \frac{\alpha_{k-1}}{c_k} m_k - \eta_k \nabla f(x_k, \xi_k) \\
&= z_k - \eta_k \nabla f(x_k, \xi_k),
\end{aligned}$$
where in the last step but one we used that $c_{k+1} = \beta \frac{\alpha_k}{\alpha_{k-1}} \frac{c_k}{1-c_k}$, which when rearranged gives $\alpha_{k-1} + \beta \frac{\alpha_k}{c_{k+1}} = \frac{\alpha_{k-1}}{c_k}$. Finally,
$$x_{k+1} = x_k - \alpha_k m_{k+1} = x_k - c_{k+1}(x_k - z_{k+1}) = (1 - c_{k+1}) x_k + c_{k+1} z_{k+1}.$$
Which concludes the induction step and the proof.
Due to this equivalence, we refer to (18) as the projected SGDM method. The x k update (18) is similar to the moving average in (17), but now the averaging occurs directly on the x k sequence that the gradient is evaluated on. As we will show, convergence rates of the SGDM method can be shown for the x k sequence, with no additional averaging necessary. This method is also known as primal-averaging, and under this name it was explored by Sebbouh et al. [2020] in the context of smooth optimization and by Tao et al. [2020] and Taylor and Bach [2019] without drawing an explicit link to stochastic momentum methods.
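To make the method concrete, here is a minimal Python sketch of (18); `stoch_grad` and `project` are assumed user-supplied callables, and the step size schedule shown is the one analyzed in Theorem 3 below:

import math
import numpy as np

def fp(k, r):
    # rising factorial power k^(r) = Gamma(k + r) / Gamma(k)
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

def projected_sgdm(stoch_grad, project, x0, n_steps, eta=1.0):
    # Projected SGDM in iterate-averaging form (18), with the Theorem 3
    # schedule: eta_k = eta * (k + 1)^(-1/2) and c_{k+1} = 1 / (k + 1).
    x = np.array(x0, dtype=float)
    z = x.copy()
    for k in range(n_steps):
        z = project(z - eta * fp(k + 1, -0.5) * stoch_grad(x))  # z-step
        c = 1.0 / (k + 1)
        x = (1.0 - c) * x + c * z                               # averaging step
    return x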
Factorial powers play a key role in the choice of the momentum parameters c_{k+1}, and the resulting convergence rate of (17). Standard (equal-weighted) averaging, given by
$$\bar{x}_k := \frac{1}{k+1}\sum_{i=0}^{k} x_i, \quad \text{or equivalently} \quad \bar{x}_k := \Big(1 - \frac{1}{k+1}\Big)\bar{x}_{k-1} + \frac{1}{k+1} x_k, \qquad (23)$$
results in a sequence that "forgets the past" at a rate of 1/k. Indeed, if we choose an arbitrary initial point x_0 (or at least without any special insight), to converge to the solution we must "forget" x_0. To forget x_0 faster, we can use a weighted average that puts more weight on recent iterates. We propose the use of the factorial powers to define a family of such weights that allows us to tune how fast we forget the past. In particular, we propose the use of momentum constants as described in the following proposition.

Proposition 2. Let x_k ∈ R^n for k = 1, … be a sequence of iterates, and let r > −1 be a real number. For k ≥ 0, the factorial power average
$$\bar{x}_k = \frac{r+1}{(k+1)^{\overline{r+1}}} \sum_{i=0}^{k} (i+1)^{\overline{r}}\, x_i \qquad (24)$$
is equal to the moving average
$$\bar{x}_{k+1} = (1 - c_k)\bar{x}_k + c_k x_{k+1}, \qquad (25)$$
where $c_k := \frac{r+1}{k+r+1}$.
Proof. We proceed by induction. For the base case, consider k = 0. Then:
$$\bar{x}_0 = (1 - c_0)\bar{x}_{-1} + c_0 z_0 = \Big(1 - \frac{r+1}{r+1}\Big)\bar{x}_{-1} + \frac{r+1}{r+1} z_0 = z_0.$$
Likewise, we have that
$$\bar{x}_0 = \frac{r+1}{1^{\overline{r+1}}} \sum_{i=0}^{0} (i+1)^{\overline{r}} z_i = (r+1)\frac{1^{\overline{r}}}{1^{\overline{r+1}}} z_0 = z_0,$$
where we used the recursive property $(k+1)^{\overline{r}} = (k+r)(k+1)^{\overline{r-1}}$ to simplify.

For the inductive case, consider k ≥ 1 and suppose that $\bar{x}_{k-1} = \frac{r+1}{k^{\overline{r+1}}}\sum_{i=0}^{k-1}(i+1)^{\overline{r}} z_i$. We may write the update as
$$\begin{aligned}
\bar{x}_k &= \frac{r+1}{(k+1)^{\overline{r+1}}} \sum_{i=0}^{k} (i+1)^{\overline{r}} z_i \\
&= \frac{r+1}{(k+1)^{\overline{r+1}}} \sum_{i=0}^{k-1} (i+1)^{\overline{r}} z_i + \frac{(r+1)(k+1)^{\overline{r}}}{(k+1)^{\overline{r+1}}} z_k \\
&= \frac{k^{\overline{r+1}}}{(k+1)^{\overline{r+1}}} \cdot \frac{r+1}{k^{\overline{r+1}}} \sum_{i=0}^{k-1} (i+1)^{\overline{r}} z_i + \frac{(r+1)(k+1)^{\overline{r}}}{(k+1)^{\overline{r+1}}} z_k \\
&= \frac{k^{\overline{r+1}}}{(k+1)^{\overline{r+1}}} \bar{x}_{k-1} + \frac{(r+1)(k+1)^{\overline{r}}}{(k+1)^{\overline{r+1}}} z_k,
\end{aligned}$$
where in the last line we used the induction hypothesis.

To show the equivalence to the moving average form $\bar{x}_k = (1 - c_k)\bar{x}_{k-1} + c_k z_k$, we just need to show that
$$c_k = \frac{(r+1)(k+1)^{\overline{r}}}{(k+1)^{\overline{r+1}}} \quad \text{and} \quad 1 - c_k = \frac{k^{\overline{r+1}}}{(k+1)^{\overline{r+1}}},$$
where $c_k = \frac{r+1}{k+r+1}$. By applying the recursive property (9), the first identity follows since
$$\frac{(r+1)(k+1)^{\overline{r}}}{(k+1)^{\overline{r+1}}} = \frac{(r+1)(k+1)^{\overline{r}}}{(k+r+1)(k+1)^{\overline{r}}} = c_k.$$
For the second identity we use (8), so that
$$\frac{k^{\overline{r+1}}}{(k+1)^{\overline{r+1}}} = \frac{k^{\overline{r+1}}}{\frac{k+r+1}{k} k^{\overline{r+1}}} = \frac{k}{k+r+1} = 1 - \frac{r+1}{k+r+1} = 1 - c_k.$$
This concludes the inductive step and the proof.
Shamir and Zhang [2013] introduced the polynomial-decay averaging (25) for averaged SGD under the restriction that r is a positive integer. Proposition 2 extends the result to non-integer values over the range r > −1.
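The equivalence in Proposition 2 can also be verified numerically; a small sketch with an arbitrary non-integer r (all names are ours):

import math
import numpy as np

def fp(k, r):
    # rising factorial power k^(r) = Gamma(k + r) / Gamma(k)
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

rng = np.random.default_rng(0)
xs = rng.standard_normal(50)
r = 1.5  # any real r > -1 works, integer or not

# Weighted-average form (24), with k = len(xs) - 1
k = len(xs) - 1
direct = (r + 1) / fp(k + 1, r + 1) * sum(fp(i + 1, r) * xs[i] for i in range(k + 1))

# Moving-average form (25), with c_k = (r + 1) / (k + r + 1)
avg = xs[0]
for j in range(1, len(xs)):
    c = (r + 1) / (j + r + 1)
    avg = (1 - c) * avg + c * xs[j]

assert abs(direct - avg) < 1e-10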
Next we use factorial power averaging to get state-ofthe-art convergence results for SGDM .
Applying factorial powers
The any-time convergence of SGDM is a good case study for the application of the half-factorial powers.
Theorem 3. Let f(x, ξ) be G-Lipschitz and convex in x. The projected SGDM method (18) with $\eta_k = \eta(k+1)^{\overline{-1/2}}$ for η > 0 and $c_{k+1} = 1/(k+1)$ converges according to
$$\mathbb{E}[f(x_n) - f(x^*)] \le \tfrac{1}{2}\big(\eta^{-1}R^2 + 2\eta G^2\big)(n+2)^{\overline{-1/2}}.$$
Furthermore, optimizing over η gives $\eta = \frac{1}{\sqrt{2}}\frac{R}{G}$ and the resulting convergence
$$\mathbb{E}[f(x_n) - f(x^*)] \le \sqrt{2}RG\,(n+2)^{\overline{-1/2}}.$$
This result is strictly tighter than the $\sqrt{2}RG/\sqrt{n+1}$ convergence rate that arises from the use of square-root sequences (see Theorem 19 in the appendix) as used by Tao et al. [2020]. The use of half-factorial powers also yields more direct proofs, as inequalities are replaced with equalities in many places. For instance, when $\eta_k = \eta/\sqrt{k+1}$, a bound of the following form arises in the proof:
$$\sqrt{k+1} - \sqrt{k} \le \frac{1}{2\sqrt{k}}.$$
If factorial power step sizes $\eta_k = \eta(k+1)^{\overline{-1/2}}$ are used instead, then this bounding operation is replaced with an equality that we call the inverse difference property:
$$\frac{1}{(k+1)^{\overline{-1/2}}} - \frac{1}{k^{\overline{-1/2}}} = \frac{1}{2}\cdot\frac{1}{k^{\overline{1/2}}}.$$
The standard proof also requires summing the step sizes, requiring another bounding operation
$$\sum_{i=0}^{k} \frac{1}{\sqrt{i+1}} \le 2\sqrt{k+1}.$$
Again, when the factorial power step sizes are used instead, this inequality is replaced by the equality $\sum_{i=0}^{k} (i+1)^{\overline{-1/2}} = 2(k+1)^{\overline{1/2}}$.
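Both equalities can be checked numerically, for instance with the following sketch:

import math

def fp(k, r):
    # rising factorial power k^(r) = Gamma(k + r) / Gamma(k)
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

k = 25
# inverse difference property: 1/(k+1)^(-1/2) - 1/k^(-1/2) = (1/2) * 1/k^(1/2)
lhs = 1.0 / fp(k + 1, -0.5) - 1.0 / fp(k, -0.5)
assert abs(lhs - 0.5 / fp(k, 0.5)) < 1e-12

# summation identity: sum_{i=0}^{k} (i+1)^(-1/2) = 2 * (k+1)^(1/2)
total = sum(fp(i + 1, -0.5) for i in range(k + 1))
assert abs(total - 2.0 * fp(k + 1, 0.5)) < 1e-10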
We can also use factorial power momentum with r = 3 to show that SGDM converges at a rate of O(1/n) for strongly-convex non-smooth problems in the following theorem.
Theorem 4. Let f(x, ξ) be G-Lipschitz and µ-strongly convex in x for every ξ. The projected SGDM method (18) with $\eta_k = \frac{1}{\mu(k+1)}$ and $c_{k+1} = \frac{4}{k+4}$ satisfies
$$\mathbb{E}[f(x_n) - f(x^*)] \le \frac{2G^2}{\mu}(n+2)^{\overline{-1}} = \frac{2G^2}{\mu(n+1)}.$$
This O(1/n) rate of convergence is the fastest possible in this setting [Agarwal et al., 2009]. This rate of convergence has better constants than that established by using a different momentum scheme in Tao et al. [2020]. Higher order averaging is also necessary to obtain this rate for the averaged SGD method, as established by Lacoste-Julien et al. [2012] and Shamir and Zhang [2013], however in that case only r = 1 averaging is necessary to obtain the same rate.
From Momentum to Acceleration
A higher order r for the factorial powers is useful when the goal is to achieve convergence rates of the order O(1/n r+1 ). Methods using equal weighted r = 0 momentum can not achieve convergence rates faster than O(1/n), since that is the rate that they "forget" the initial conditions. To see this, note that in a sum 1/(n + 1) n i=0 z i , the z 0 value decays at a rate of O(1/n). When using the order r factorial power for averaging (24), the initial conditions are forgotten at a rate of O(1/n r+1 ). The need for r = 1 averaging arises in a natural way when developing accelerated optimization methods for non-strongly convex optimization, where the best known rates are of the order O(1/n 2 ) obtained by Nesterov's method. As with the SGDM method, Nesterov's method can also be written in an equivalent iterate averaging form [Auslender and Teboulle, 2006]:
$$\begin{aligned} y_k &= (1 - c_{k+1})x_k + c_{k+1} z_k, \\ z_{k+1} &= z_k - \rho_k \nabla f(y_k), \\ x_{k+1} &= (1 - c_{k+1})x_k + c_{k+1} z_{k+1}, \end{aligned} \qquad (26)$$
where ρ k are the step sizes, and initially z 0 = x 0 . In this formulation of Nesterov's method we can see that the x k sequence uses iterate averaging of the form (18). To achieve accelerated rates with this method, the standard approach is to use ρ k = 1/(Lc k+1 ) and to choose momentum constants c k that satisfy the inequality
$$c_k^{-2} - c_k^{-1} \le c_{k-1}^{-2}.$$
This inequality is satisfied with equality when using the following recursive formula:
$$c_{k+1}^{-1} = \frac{1}{2}\Big(1 + \sqrt{1 + 4c_k^{-2}}\Big),$$
but the opaque nature and lack of closed form for this sequence is unsatisfying. Remarkably, the sequence $c_{k+1} = 2/(k+2)$ also satisfies this inequality, as pointed out by Tseng [2008], which is a simple application of r = 1 factorial power momentum. We show in the supplementary material how using factorial powers together with the iterate averaging form of momentum gives a simple proof of convergence for this method, which uses the same proof technique and Lyapunov function as the proof of convergence of the regular momentum method SGDM. By leveraging the properties of factorial powers, the proof follows straightforwardly with no "magic" steps.
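Before stating the resulting rate, here is a minimal Python sketch of the iterate-averaging form (26) with these factorial power settings; `grad` is an assumed user-supplied gradient oracle:

import numpy as np

def nesterov_averaged(grad, x0, n_steps, L=1.0):
    # Iterate-averaging form (26) with c_{k+1} = 2/(k + 2) and
    # rho_k = (k + 1)/(2L), the factorial power settings of Theorem 5.
    x = np.array(x0, dtype=float)
    z = x.copy()
    for k in range(n_steps):
        c = 2.0 / (k + 2)
        y = (1.0 - c) * x + c * z
        z = z - (k + 1) / (2.0 * L) * grad(y)
        x = (1.0 - c) * x + c * z
    return x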
Theorem 5. Let x_k be given by (26). Let f(x, ξ) be L-smooth and convex. If we set $c_k = 2/(k+2)$ and $\rho_k = (k+1)/(2L)$ then
$$f(x_n) - f(x^*) \le \frac{2L}{n^2}\|x_0 - x^*\|^2. \qquad (27)$$

This matches the rate given by Beck and Teboulle [2009] asymptotically, and is faster than the rate given by Nesterov's estimate sequence approach [Nesterov, 2013] by a constant factor.

Variance Reduction with Momentum

Since factorial power momentum has clear advantages in situations where averaging of the iterates is otherwise used, we further explore a problem where averaging is necessary and significantly complicates matters: the stochastic variance-reduced gradient method (SVRG). The SVRG method [Johnson and Zhang, 2013] is a double loop method, where the iterations in the inner loop resemble SGD steps, but with an additional additive variance-reducing correction. In each outer loop, the average of the iterates from the inner loop is used to form a new "snapshot" point. We propose the SVRGM method (Algorithm 1), which modifies the improved SVRG++ formulation of Allen Zhu and Yuan [2016] to further include the use of iterate-averaging style momentum in the inner loop.

Algorithm 1: Our proposed SVRGM method
    Initialize: $z^0_{m_0-1} = x^0_{m_0-1} = x^0$
    for s = 1, 2, … do   (outer loop)
        $\bar{x}^{s-1} = x^{s-1}_{m_{s-1}-1}$, $x^s_0 = x^{s-1}_{m_{s-1}-1}$, $z^s_0 = z^{s-1}_{m_{s-1}}$
        $\nabla f(\bar{x}^{s-1}) = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\bar{x}^{s-1})$   (precompute)
        for t = 0, 1, …, m_s − 1 do   (inner loop)
            Sample j uniformly at random
            $g^s_t = \nabla f_j(x^s_t) - \big(\nabla f_j(\bar{x}^{s-1}) - \nabla f(\bar{x}^{s-1})\big)$
            $z^s_{t+1} = z^s_t - \eta g^s_t$
            $x^s_{t+1} = (1 - c_{t+1}) x^s_t + c_{t+1} z^s_{t+1}$   (averaging)
        end for
    end for

Our formulation has a number of advantages over existing schemes. In terms of simplicity, it includes no resetting operations¹, so the x and z sequences start each outer loop at the values from the end of the previous one. Additionally, the snapshot $\bar{x}$ is up-to-date, in the sense that it matches the final output point x from the previous step, rather than being set to an average of points as in SVRG/SVRG++.
The non-strongly convex case is an application of non-integer factorial power momentum. Using a large step size η = 1/(6L), we show in Theorem 6 that Algorithm 1 converges at a favourable rate if we choose the momentum parameters c_k corresponding to a $(k+1)^{\overline{1/2}}$ factorial power averaging of the iterates. The strongly convex case in Theorem 7 uses fixed momentum (i.e., an exponential moving average), since no rising factorial sequence can give linear convergence rates. Both of these rates improve the constants non-trivially over the SVRG++ method.
Theorem 6 (non-strongly convex case). Let $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ where each f_i is L-smooth and convex. By setting $c_t = \frac{1/2+1}{t+1/2+1}$, $\eta = \frac{1}{6L}$, and $m_s = 2m_{s-1}$ in Algorithm 1, we have that
$$\mathbb{E}\big[f(x^S_{m_s-1})\big] - f^* \le \frac{f(x^0) - f^*}{2^S} + \frac{9L\|x^0 - x^*\|^2}{2^S m_0}.$$

The non-strongly convex convergence rate is linear in the number of epochs; however, each epoch is twice as long as the previous one, resulting in an overall 1/t rate.
Theorem 7 (strongly convex case). Let $f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ where each f_i is L-smooth and µ-strongly convex. Let κ = L/µ. By setting $m_s = 6\kappa$, $c_k = \frac{5}{3}\cdot\frac{1}{4\kappa+1}$, and $\eta_k = 1/(10L)$ in Algorithm 1, we have that
$$\mathbb{E}[f(x^s) - f^*] \le \Big(\frac{3}{5}\Big)^S \Big[f(x^0) - f(x^*) + \frac{3}{4}\mu\,\delta_0\Big], \quad \text{where } \delta_0 := \|x^0 - x^*\|^2.$$
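As a rough illustration only (not the code used in our experiments), here is a Python sketch of Algorithm 1 under the Theorem 6 settings; the list `grads` of per-example gradient callables, the initial inner-loop length `m0`, and all other names are our own illustrative assumptions:

import numpy as np

def svrgm(grads, x0, n_epochs, m0=10, L=1.0):
    # Sketch of Algorithm 1 (SVRGM) under the Theorem 6 settings:
    # eta = 1/(6L), c_{t+1} = (1/2 + 1)/(t + 1/2 + 1), m_s = 2 * m_{s-1}.
    # grads[i](x) is assumed to return the gradient of f_i at x.
    eta = 1.0 / (6.0 * L)
    n = len(grads)
    rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    z = x.copy()
    m = m0
    for s in range(n_epochs):
        snapshot = x.copy()                            # up-to-date snapshot, no resets of x or z
        g_full = sum(g(snapshot) for g in grads) / n   # precompute the full gradient
        for t in range(m):
            j = rng.integers(n)
            g = grads[j](x) - (grads[j](snapshot) - g_full)  # variance-reduced gradient
            z = z - eta * g
            c = (0.5 + 1.0) / (t + 0.5 + 1.0)
            x = (1.0 - c) * x + c * z                  # factorial power momentum (averaging)
        m *= 2                                         # doubling inner-loop lengths
    return x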
Further Applications
Factorial powers have applications across many areas of optimization theory. We detail two further instances of popular first order methods where factorial powers are particularly useful. vowel Figure 2: Training loss sub-optimality on 4 LIBSVM test problems, comparing SGD, SGD with r = 1 post-hoc averaging to SGD with factorial power momentum.
Dual Averaging
Classical (non-stochastic) dual averaging uses updates of the form [Nesterov, 2009]:
$$s_{k+1} = s_k + \nabla f(x_k), \qquad x_{k+1} = \arg\min_x \Big\{\langle s_{k+1}, x\rangle + \hat{\beta}_{k+1}\frac{\gamma}{2}\|x - x_0\|^2\Big\}, \qquad (28)$$
where the sequence $\hat{\beta}_k$ is defined recursively with $\hat{\beta}_0 = \hat{\beta}_1 = 1$ and $\hat{\beta}_{k+1} = \hat{\beta}_k + 1/\hat{\beta}_k$, so that $\sum_{i=0}^{k} 1/\hat{\beta}_i = \hat{\beta}_{k+1}$. This sequence grows approximately following the square root, as
$$\sqrt{2k-1} \le \hat{\beta}_{k+1} \le \frac{1}{1+\sqrt{3}} + \sqrt{2k-1} \quad \text{for } k \ge 1.$$
Nesterov's sequence has the disadvantage of not having a simple closed form, but it otherwise provides tighter bounds than using $\hat{\beta}_k = \sqrt{k+1}$. In particular, the precise bound on the duality gap (as we show in Theorem 27 in the supplementary material) is given by
$$\max_{\|x\| \le R} \frac{1}{n+1}\sum_{i=0}^{n} \langle \nabla f(x_i), x_i - x \rangle \le \Big[\frac{\sqrt{2}}{1+\sqrt{3}}\cdot\frac{1}{n+1} + \frac{2}{\sqrt{n+1}}\Big] RG.$$
The factorial powers obey a similar summation relation, and they have the advantage of an explicit closed form, which we exploit to give a strictly tighter convergence rate.
Theorem 8. After n steps of the dual averaging method (28) with $\hat{\beta}_k = 1/(k+1)^{\overline{-1/2}}$ and γ = G/R, we have that
$$\max_{\|x\| \le R} \frac{1}{n+1}\sum_{i=0}^{n} \langle \nabla f(x_i), x_i - x \rangle \le 2RG\,(n+2)^{\overline{-1/2}} < \frac{2RG}{\sqrt{n+1}}.$$
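A minimal sketch of (28) with these parameters; we additionally assume the constraint set is the Euclidean R-ball, so that the prox step has a closed form followed by a projection:

import math
import numpy as np

def fp(k, r):
    # rising factorial power k^(r) = Gamma(k + r) / Gamma(k)
    return math.exp(math.lgamma(k + r) - math.lgamma(k))

def dual_averaging(grad, x0, n_steps, R=1.0, G=1.0):
    # Dual averaging (28) with beta_hat_k = 1/(k+1)^(-1/2) and gamma = G/R,
    # as in Theorem 8. For the Euclidean regularizer the argmin has the
    # closed form x0 - s/(beta*gamma); we assume the R-ball constraint and
    # project onto it.
    gamma = G / R
    x0 = np.array(x0, dtype=float)
    x = x0.copy()
    s = np.zeros_like(x0)
    for k in range(n_steps):
        s = s + grad(x)
        beta = 1.0 / fp(k + 2, -0.5)      # beta_hat_{k+1} = 1/(k+2)^(-1/2)
        y = x0 - s / (beta * gamma)       # unconstrained minimizer of the prox step
        nrm = np.linalg.norm(y)
        x = y if nrm <= R else y * (R / nrm)  # projection onto the R-ball
    return x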
Conditional Gradient Method
Factorial power step size schemes have also arisen for the conditional gradient method
$$p_{k+1} = \arg\min_{p \in C} \langle p, \nabla f(x_k) \rangle, \qquad x_{k+1} = (1 - c_{k+1}) x_k + c_{k+1} p_{k+1}.$$
For this method the most natural step sizes satisfy the following recurrence ("open loop" step sizes), $c_{k+1} = c_k - \frac{1}{2}c_k^2$, which Dunn and Harshbarger [1978] note may be replaced with $c_{k+1} = 1/(k+1)$. Another approach that more closely approximates the open-loop steps is the factorial power weighting $c_{k+1} = 2/(k+2)$, as used in Jaggi [2013] and Bach [2015].
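A minimal sketch with the factorial power weighting; `lmo` is an assumed linear minimization oracle over C:

import numpy as np

def frank_wolfe(grad, lmo, x0, n_steps):
    # Conditional gradient with the factorial power weighting c_{k+1} = 2/(k+2).
    # lmo(g) is assumed to return argmin_{p in C} <p, g>.
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        p = lmo(grad(x))
        c = 2.0 / (k + 2)
        x = (1.0 - c) * x + c * p
    return x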
Experiments
For our experiments we compared the performance of factorial power momentum on a strongly-convex but non-smooth machine learning problem: regularized multi-class support vector machines. We consider two problems from the LIBSVM [Chang and Lin, 2011] repository, PROTEIN and USPS, and two from the UCI [Dua and Graff, 2017] repository, GLASS and VOWEL. We used batch-size 1 and the step sizes recommended by the theory for both SGD with r = 1 averaging and SGD with factorial power momentum as developed in Theorem 4. We induced strong convexity by using weight decay of strength 0.001. The median as well as interquartile range bars from 40 runs are shown. Since our theory suggests r = 3, we tested r = 0, 1, 3, 5 to verify that r = 3 is the best choice. The results are shown in Figure 2. We see that when using factorial power momentum, r = 0 and r = 1 are worse than r = 3, and r = 5 is no better than r = 3, so the results agree with our theory. The momentum method also performs a little better than SGD with post-hoc averaging; however, it does appear to be substantially more variable between runs, as the interquartile range shows. We provide further experiments covering the SVRGM method in the supplementary material.
Conclusion
Factorial powers are a flexible and broadly applicable tool for establishing tight convergence rates as well as simplifying proofs. As we have shown, they have broad applicability both for stochastic optimization and beyond.
References

Alekh Agarwal, Martin J. Wainwright, Peter L. Bartlett, and Pradeep K. Ravikumar. Information-theoretic lower bounds on the oracle complexity of convex optimization. In Advances in Neural Information Processing Systems 22, pages 1-9, 2009.

Zeyuan Allen Zhu and Yang Yuan. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of the 33rd International Conference on Machine Learning (ICML), volume 48, pages 1080-1089, 2016.

Alfred Auslender and Marc Teboulle. Interior gradient and proximal methods for convex and conic optimization. SIAM Journal on Optimization, 2006.

Francis Bach. Duality between subgradient and conditional gradient methods. SIAM Journal on Optimization, 25(1):115-129, 2015.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2009.

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

J. C. Dunn and S. Harshbarger. Conditional gradient algorithms with open loop step size rules. Journal of Mathematical Analysis and Applications, 1978.

Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics. Addison-Wesley, 2nd edition, 1994.

Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning. PMLR, 2013.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of the 26th International Conference on Neural Information Processing Systems, 2013.

Simon Lacoste-Julien, Mark Schmidt, and Francis Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. arXiv, 2012.

Kevin J. McGown and Harold R. Parks. The generalization of Faulhaber's formula to sums of non-integral powers. Journal of Mathematical Analysis and Applications, 330(1):571-575, 2007.

Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 2009.

Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2013.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4:1-17, 1964.

H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951.

Othmane Sebbouh, Nidham Gazagnadou, Samy Jelassi, Francis Bach, and Robert M. Gower. Towards closing the gap between the theory and practice of SVRG. NeurIPS, 2019.

Othmane Sebbouh, Robert M. Gower, and Aaron Defazio. On the convergence of the stochastic heavy ball method, 2020.

Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: convergence results and optimal averaging schemes. In Proceedings of the 30th International Conference on Machine Learning, 2013.

W. Tao, Z. Pan, G. Wu, and Q. Tao. Primal averaging: A new gradient evaluation step to attain the optimal individual convergence. IEEE Transactions on Cybernetics, 50(2):835-845, 2020.

Adrien Taylor and Francis Bach. Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pages 2934-2992, June 2019.

Paul Tseng. On accelerated proximal gradient methods for convex-concave optimization. Technical report, MIT, 2008.
A Proof of Properties of Factorial Powers
We recall that we define the factorial powers using the gamma function
$$k^{\overline{r}} := \frac{\Gamma(k+r)}{\Gamma(k)}, \quad \text{where } \Gamma(k) := \int_{0}^{\infty} x^{k-1} e^{-x}\,dx, \quad \text{for } k + r > 0 \text{ and } k \ge 1. \qquad (29)$$
We also extend the definition and set $0^{\overline{r}} = 0$ except for $0^{\overline{0}} = 1$. We restrict k + r > 0 and k ≥ 1 in (29) because the gamma function Γ(z) is only well defined for z > 0.
We will use the following well known property of the gamma function
Γ(k + 1) = kΓ(k),(30)
that follows by integration by parts.
We now give the proof of all the properties in Table 2.
Proposition 9. For k ≥ 1 and k + r > 0 we have that following recursive properties:
$$(k+1)^{\overline{r}} = \frac{k+r}{k}\,k^{\overline{r}}, \qquad (31)$$
$$(k+1)^{\overline{r}} = (k+r)\,(k+1)^{\overline{r-1}}. \qquad (32)$$
Proof. Using the definition directly
(k + 1) r = Γ(k + r + 1) Γ(k + 1) (30) = Γ(k + r) (k + r) Γ(k)k = k + r k k r ,
and (k + 1) r = Γ(k + 1 + r) Γ(k + 1)
(30) = Γ(k + r)(k + r) Γ(k + 1) = (k + r) (k + 1) r−1 .
Proposition 10. For k ≥ 1 and k + r > 0 we have that following difference property
(k + 1) r − k r = r (k + 1) r−1 .(33)
Proof. We apply the recursive property in k, then in r:
$$(k+1)^{\overline{r}} - k^{\overline{r}} = \frac{k+r}{k}k^{\overline{r}} - k^{\overline{r}} = \frac{r}{k}k^{\overline{r}} = \frac{r}{k}\cdot\frac{k}{k+r}(k+1)^{\overline{r}} = \frac{r}{k+r}(k+1)^{\overline{r}} = r\,(k+1)^{\overline{r-1}}.$$
Proposition 11. For k ≥ 1, k + r > 0 and k + r + q > 0 we have that following ratio property
k r+q k r = (k + r) q , Proof. k r+q k r = Γ(k+r+q) Γ(k) Γ(k+r) Γ(k) = Γ(k + r + q) Γ(k + r) = (k + r) q .
Proposition 12. For integers b ≥ a ≥ 1 such that a + r > 0 we have the following summation property
b i=a i r = 1 r + 1 b r+1 − 1 r + 1 (a − 1) r+1 .
Proof. This property is a direct consequence of telescoping the difference property:
$$(r+1)\sum_{i=a}^{b} i^{\overline{r}} = \sum_{i=a-1}^{b-1} (r+1)(i+1)^{\overline{r}} = \sum_{i=a-1}^{b-1} \big[(i+1)^{\overline{r+1}} - i^{\overline{r+1}}\big] = b^{\overline{r+1}} - (a-1)^{\overline{r+1}}.$$
Proposition 13. For k ≥ 1 we have that following inverse difference property 1 (k + 1)
−1/2 − 1 k −1/2 = 1 2 1 k 1/2 .(34)
Proof. We apply the inversion property, followed by the difference property, then the inversion property again:
$$\frac{1}{(k+1)^{\overline{-1/2}}} - \frac{1}{k^{\overline{-1/2}}} = \Big(k+\tfrac{1}{2}\Big)^{\overline{1/2}} - \Big(k-\tfrac{1}{2}\Big)^{\overline{1/2}} = \frac{1}{2}\Big(k+\tfrac{1}{2}\Big)^{\overline{-1/2}} = \frac{1}{2}\cdot\frac{1}{k^{\overline{1/2}}}.$$
Lemma 14. Let k ≥ 1, r ≥ 0 and j ≥ 0. Consider the sequence
c k = r + 1 k + j + r . It follows that 1 − c k c k (k + j) r = 1 c k−1 (k + j − 1) r .
Proof. Simplifying:
1 c k − 1 (k + j) r = k + j + r r + 1 − 1 (k + j) r = k + j + r − r − 1 r + 1 (k + j) r = k + j − 1 r + 1 (k + j) r = k + j + r − 1 r + 1 k + j − 1 k + j + r − 1 (k + j) r = 1 c k−1 k + j − 1 k + j + r − 1 (k + j) r .
Now applying the recursion property Eq. (8) gives:
k + j − 1 k + j + r − 1 (k + j) r = (k + j − 1) r ,
giving the result.
B Convergence Theorems for the Projected SGDM
Theorem 15. Consider the projected SGDM method
x k = (1 − c k ) x k−1 + c k z k ,(35)z k+1 = Π C (z k − η k ∇f (x k , ξ k )) , where 0 < c k ≤ 1. If each f (·, ξ) is convex and G-Lipschitz then z k+1 − x * 2 ≤ z k − x * 2 + η 2 k G 2 − 2 1 c k η k [f (x k , ξ k ) − f (x * , ξ k )] + 2 1 c k − 1 η k [f (x k−1 , ξ k ) − f (x * , ξ k )] .
Proof. We start with z k+1 instead of the usual expansion in terms of x k+1 :
z k+1 − x * 2 = Π C (z k − η k ∇f (x k , ξ k )) − Π C (x * ) 2 ≤ z k − η k ∇f (x k , ξ k ) − x * 2 = z k − x * 2 − 2η k ∇f (x k , ξ k ), z k − x * + η 2 k G 2 = z k − x * 2 − 2η k ∇f (x k , ξ k ), x k − 1 c k − 1 (x k−1 − x k ) − x * + η 2 k G 2 = z k − x * 2 + η 2 k G 2 − 2η k ∇f (x k , ξ k ), x k − x * − 2η k 1 c k − 1 ∇f (x k , ξ k ), x k − x k−1
Using the following two convexity inequalities
∇f (x k , ξ k ) , x * − x k ≤ f (x * , ξ k ) − f (x k , ξ k ) ∇f (x k , ξ k ) , x k−1 − x k ≤ f (x k−1 , ξ k ) − f (x k , ξ k ) combined with (1/c k − 1) ≥ 0 gives z k+1 − x * 2 ≤ z k − x * 2 + η 2 k G 2 − 2η k [f (x k , ξ k ) − f (x * , ξ k )] − 2 1 c k − 1 η k [f (x k , ξ k ) − f (x k−1 , ξ k )] .
Now rearranging further gives the result
z k+1 − x * 2 ≤ z k − x * 2 + η 2 k G 2 − 2 1 c k η k [f (x k , ξ k ) − f (x * , ξ k )] + 2 1 c k − 1 η k [f (x k−1 , ξ k ) − f (x * , ξ k )] .
Corollary 16. Consider the Lyapunov function:
A k = z k − x * 2 + 2 c k−1 η k−1 [f (x k−1 ) − f (x * )] If for k ≥ 2, 1 c k − 1 η k ≤ 1 c k−1 η k−1 ,(36)
and for k = 1 we have 1 c1 − 1 η 1 ≤ 0, then SGDM steps statisfy the following relation for k ≥ 1.
E ξ k [A k+1 ] ≤ A k + η 2 k G 2 ,
when each f (·, ξ) is convex and G-Lipschitz.
Corollary 17. Let E[·]
denote the expectation with respect to all ξ i , with i ≤ n. Suppose that the constraint set C is contained in an R-ball around the origin. Then telescoping and applying the law of total expectation gives:
E z n+1 − x * 2 + 2 c n η n E [f (x n ) − f (x * )] ≤ R 2 + n i=0 η 2 i G 2(37)
B.1 Proof of Theorem 3: Any-time convergence with factorial power step sizes
Theorem 18. Consider the projected SGDM method (18). When $\eta_k = \frac{1}{\sqrt{2}}\frac{R}{G}(k+1)^{\overline{-1/2}}$ and $c_k = 1/(k+1)$, when each f(x, ξ) is G-Lipschitz, convex and the constraint set C is contained within an R-ball around x_0, then:
$$\mathbb{E}[f(x_n) - f(x^*)] \le \sqrt{2}RG\,(n+2)^{\overline{-1/2}} \le \frac{\sqrt{2}RG}{\sqrt{n+1}}.$$
Proof. Consider Theorem 15 in expectation conditioned on ξ k :
E z k+1 − x * 2 ≤ z k − x * 2 + η 2 k G 2 − 2 1 c k η k (f (x k ) − f (x * )) + 2 1 c k − 1 η k (f (x k−1 ) − f (x * )).
We will use a step size η k = η(k + 1) −1/2 for some constant η, and multiply this expression by 1/(k + 1) −1/2 :
1 (k + 1) −1/2 E z k+1 − x * 2 ≤ 1 (k + 1) −1/2 z k − x * 2 + (k + 1) −1/2 η 2 G 2 − 2 1 c k η (f (x k ) − f (x * )) + 2 1 c k − 1 η (f (x k−1 ) − f (x * )) .(38)
Now we prove the result by induction. First consider the base case k = 0. Since 1 −1/2 = Γ(1/2) Γ(1) = √ π which follows since Γ(1/2) = √ π and Γ(1) = 1 we have that
1 1 −1/2 z 0 − z * 2 = 1 √ π z 0 − z * 2 ≤ 1 √ π R 2 .
The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization
Consequently taking k = 0 in (38) gives
1 1 −1/2 E z 1 − x * 2 ≤ 1 1 −1/2 z 1 − z * 2 + (1) −1/2 η 2 G 2 − 2 1 c 0 η (f (x 0 ) − f (x * )) + 2 1 c 0 − 1 η (f (x 0−1 ) − f (x * )) ≤ 1 √ π R + 1 −1/2 η 2 G 2 − 2η (f (x 0 ) − f (x * )) .(39)
Inductive case: consider the case k ≥ 1. To facilitate telescoping we want 1 k −1/2 z k − z * 2 on the right, so to this end we rewrite
1 (k + 1) −1/2 z k − z * 2 = 1 k −1/2 z k − z * 2 + 1 (k + 1) −1/2 − 1 k −1/2 z k − z * 2 ≤ 1 k −1/2 z k − z * 2 + 1 (k + 1) −1/2 − 1 k −1/2 R 2 .(40)
Now since k ≥ 1 we can apply the inverse difference property 1 (k + 1)
−1/2 − 1 k −1/2 = 1 2 1 k 1/2
which when used with (40) and then inserting the result in (38) gives
1 (k + 1) −1/2 E z k+1 − x * 2 ≤ 1 k −1/2 z k − z * 2 + 1 2 1 k 1/2 R 2 + (k + 1) −1/2 η 2 G 2 − 2 1 c k η (f (x k ) − f (x * )) + 2 1 c k − 1 η (f (x k−1 ) − f (x * )) .
Since c k = 1/(k + 1) and 1 k 1/2 = k + 1 2 −1/2 we have that 1 (k + 1)
−1/2 E z k+1 − x * 2 ≤ 1 k −1/2 z k − z * 2 + 1 2 k + 1 2 −1/2 R 2 + (k + 1) −1/2 η 2 G 2 − 2(k + 1)η (f (x k ) − f (x * )) + 2kη (f (x k−1 ) − f (x * )) .(41)
Now taking expectation and adding up both sides of (41) from 1 to n and using telescopic cancellation gives 1 (n + 1)
−1/2 E z n+1 − x * 2 ≤ 1 1 −1/2 E z 1 − z * 2 + 1 2 R 2 n i=1 i + 1 2 −1/2 + n i=1 (i + 1) −1/2 η 2 G 2 + 2η (f (x 0 ) − f (x * )) − 2(n + 1)ηE[f (x n ) − f (x * )].
Now using the base case (39) we have that 1 (n + 1)
−1/2 E z n+1 − x * 2 ≤ R 2 + 1 −1/2 η 2 G 2 − 2η (f (x 0 ) − f (x * )) − 2(n + 1)ηE[f (x n ) − f (x * )] + 1 2 R 2 n i=1 i + 1 2 −1/2 + n i=1 (i + 1) −1/2 η 2 G 2 + 2η (f (x 0 ) − f (x * )) = 1 √ π R 2 + 1 2 R 2 n i=1 i + 1 2 −1/2 + n i=0 (i + 1) −1/2 η 2 G 2 − 2(n + 1)ηE[f (x n ) − f (x * )].(42)
Using the summation property Eq. (10) we have that n i=1 (i + 1/2) −1/2 = 2 (n + 1/2) 1/2 − 2 (3/2) 1/2 = 2 (n + 1/2) 1/2 − 4 √ π ≤ 2 (n + 1) 1/2 − 4 √ π and furthermore n i=0
(i + 1) −1/2 = n+1 i=1 i −1/2 ≤ 2 (n + 1) 1/2 .
So after dividing by 2(n + 1)η:
E [f (x n ) − f (x * )] ≤ 1 2 1 η R 2 + 2ηG 2 (n + 1) 1/2 n + 1
We now use the ratio property on:
(n + 1) 1/2 n + 1 = (n + 1) 1−1/2 (n + 1) 1 = (n + 2) −1/2 , and solve for the best step size η, which is η = 1/2 R G giving:
E [f (x n ) − f (x * )] ≤ √ 2RG (n + 2) −1/2 < √ 2RG √ n + 1 .
B.2 Any-time convergence with standard step sizes:
Theorem 19. Let f(x, ξ) be G-Lipschitz and convex for every ξ. When $\eta_k = \frac{R}{G\sqrt{2(k+1)}}$ and $c_k = \frac{1}{k+1}$ in the projected SGDM method (18), we have that
$$\mathbb{E}[f(x_n) - f(x^*)] \le \frac{\sqrt{2}RG}{\sqrt{n+1}}. \qquad (43)$$
Proof. We use η k = η/ √ k + 1 and c k = 1 k+1 in the result from Theorem 15, taking expectation and multiplying both sides by √ k + 1 gives
√ k + 1E z k+1 − x * 2 ≤ √ k + 1 z k − x * 2 + 1 √ k + 1 η 2 G 2 − 2(k + 1)ηE [f (x k ) − f (x * )] + 2kηE [f (x k−1 ) − f (x * )] .(44)
For k = 0 the above gives
E z 1 − x * 2 ≤ R 2 + η 2 G 2 − 2ηE [f (x 0 ) − f (x * )] .(45)
For k ≥ 1, from concavity of the square root function
√ k + 1 − √ k ≤ 1 2 √ k ,(46)
we have that
√ k + 1 z k − x * 2 ≤ √ k + 1 2 √ k z k − x * 2 ≤ √ k z k − x * 2 + 1 2 √ k R 2 .
Plugging the above into (44) gives
√ k + 1E z k+1 − x * 2 ≤ √ k z k − x * 2 + 1 2 √ k R 2 + 1 √ k + 1 η 2 G 2 − 2(k + 1)ηE [f (x k ) − f (x * )] + 2kηE [f (x k−1 ) − f (x * )] .
Now we telescope for 1 to n giving:
√ n + 1E z n+1 − x * 2 ≤ z 1 − x * 2 + n i=1 1 2 √ i R 2 + n i=1 1 √ i + 1 η 2 G 2 − 2(n + 1)ηE [f (x n ) − f (x * )] + 2ηE [f (x 0 ) − f (x * )] .
Using the base case (45) we have that
√ n + 1E z n+1 − x * 2 ≤ R 2 + n i=1 1 2 √ i R 2 + n i=0 1 √ i + 1 η 2 G 2 − 2(n + 1)ηE [f (x n ) − f (x * )] .
Now using the integral bounds
n i=1 1 √ i ≤ 2( √ n − 1), n i=0 1 √ i + 1 ≤ 2 √ n + 1,
and re-arranging gives
2(n + 1)ηE [f (x n ) − f (x * )] ≤ √ nR 2 + 2 √ n + 1η 2 G 2 − √ n + 1E z n+1 − x * 2 ≤ √ nR 2 + 2 √ n + 1η 2 G 2 .
Dividing through by 2(n + 1)η gives
2(n + 1)ηE [f (x n ) − f (x * )] ≤ √ n 2(n + 1)η R 2 + 1 √ n + 1 ηG 2 ≤ 1 √ n + 1 1 2η R 2 + ηG 2 .
Minimizing the above in η gives η = R/( √ 2G) which gives (43) and concludes the proof.
C Strongly Convex Convergence
Consider again the SGDM method with a projection step given by
z k+1 = Π C (z k − η k ∇f (x k , ξ k )) , x k+1 = (1 − c k+1 ) x k + c k+1 z k+1 .
Lemma 20. For $\lambda_{k+1} = \frac{k+2}{2}$ and $c_{k+1} = \frac{4}{k+4}$ we have that
$$A_{k+1} := \|x_{k+1} - x^* + \lambda_{k+1}(x_{k+1} - x_k)\|^2 = \|2z_{k+1} - x_k - x^*\|^2.$$
Proof. The relation follows from substitution of the known relations:
A k+1 = x k+1 − x * + λ k+1 (x k+1 − x k ) 2 = (λ k+1 + 1) x k+1 − λ k+1 x k − x * 2 = (λ k+1 + 1) ((1 − c k+1 ) x k + c k+1 z k+1 ) − λ k+1 x k − x * 2 = (λ k+1 + 1) ((1 − c k+1 ) x k + c k+1 z k+1 ) + [(λ k+1 + 1) (1 − c k+1 ) − λ k+1 ] x k − x * 2 = (λ k+1 + 1) c k+1 z k+1 + [(λ k+1 − λ k+1 c k+1 + 1 − c k+1 ) x k − λ k+1 x k ] − x * 2 = (λ k+1 + 1) c k+1 z k+1 + [(1 − (λ k+1 + 1) c k+1 ) x k ] − x * 2 .
Now using
(λ k+1 + 1) c k+1 = k + 2 2 + 1 4 k + 4 = k + 4 2 4 k + 4 = 2, gives A k+1 = 2z k+1 − x k − x * 2 .
C.1 Proof of Theorem 4
Theorem 21. Let f(x, ξ) be G-Lipschitz and µ-strongly convex in x for every ξ. The projected SGDM method (18) with $\eta_k = \frac{1}{\mu(k+1)}$ and $c_{k+1} = \frac{4}{k+4}$ satisfies
$$\mathbb{E}[f(x_n) - f(x^*)] \le \frac{2G^2}{\mu(n+1)}.$$
Proof. We will define a few constants to reduce notational clutter. Let ρ k = k − 1 k + 1 , and λ k+1 = k + 2 2 .
We will first apply the contraction property of the projection operator (using the fact that x k and x * are always within the constraint set) so that
A k+1 = 2z k+1 − x k − x * 2 = 4 Π C (z k − η k ∇f (x k , ξ k )) − 1 2 x k + 1 2 x * 2 = 4 Π C (z k − η k ∇f (x k , ξ k )) − Π C 1 2 x k + 1 2 x * 2 ≤ 2z k − 2η k ∇f (x k , ξ k ) − x k − x * 2 . Now we use z k = 1 c k x k − 1 c k − 1 x k−1 : A k+1 ≤ 2 c k x k − 2 1 c k − 1 x k−1 − 2η k ∇f (x k , ξ k ) − x k − x * 2 = 2 1 c k − 1 x k − 2 1 c k − 1 x k−1 + x k − 2η k ∇f (x k , ξ k ) − x * 2 = x k − 2η k ∇f (x k , ξ k ) − x * 2 + 4 1 c k − 1 2 x k − x k−1 2 + 4 1 c k − 1 x k − x k−1 , x k − x * − 4η k 1 c k − 1 ∇f (x k , ξ k ), x k − x * .
Now from Lemma 20 we have
A k = x k − x * + λ k (x k − x k−1 ) 2 thus 4 1 c k − 1 x k − x k−1 , x k − x * = 2 λ k 1 c k − 1 A k − 2 λ k 1 c k − 1 x k − x * 2 − 2λ k 1 c k − 1 (x k − x k−1 ) 2 . (47)
Notice that:
2 1 λ k 1 c k − 1 = 2 2 k + 1 k + 3 4 − 1 = 1 k + 1 (k + 3 − 4) = k − 1 k + 1 = ρ k .
So we have:
A k+1 = x k − 2η k ∇f (x k , ξ k ) − x * 2 + 4 1 c k − 1 − 2λ k 1 c k − 1 x k − x k−1 2 = ρ k A k − ρ k x k − x * 2 − 8η k 1 c k − 1 ∇f (x k , ξ k ), x k − x k−1 .
Now note that:
4 1 c k − 1 − 2λ k = 4 k + 3 4 − 1 − 2 k + 1 2 = (k − 1) − 2 k + 1 2 ≤ 0.
Further expanding x k − 2η k ∇f (x k , ξ k ) − x * 2 and rearranging then gives
A k+1 = ρ k A k + (1 − ρ k ) x k − x * 2 + 4η 2 k ∇f (x k , ξ k ) 2 = −4η k ∇f (x k , ξ k ), x k − x * − 8η k 1 c k − 1 ∇f (x k , ξ k ), x k − x k−1 .
We now apply the two inequalities:
− ∇f (x k , ξ k ) , x k − x * ≤ − [f (x k , ξ k ) − f (x * , ξ k )] − µ 2 x k − x * 2 , − ∇f (x k , ξ k ) , x k − x k−1 ≤ f (x k−1 , ξ k ) − f (x k , ξ k ),
which gives:
A k+1 = ρ k A k + (1 − ρ k − 2µη k ) x k − x * 2 + 4η 2 k ∇f (x k , ξ k ) 2 = −4η k [f (x k , ξ k ) − f (x * , ξ k )] + 8η k 1 c k − 1 [f (x k−1 , ξ k ) − f (x k , ξ k )] .
Taking expectations and using E ξ k ∇f (x k , ξ k ) 2 ≤ G 2 gives:
EA k+1 = ρ k A k + (1 − ρ k − 2µη k ) x k − x * 2 + 4η 2 k G 2 = −4η k [f (x k ) − f (x * )] + 8η k 1 c k − 1 [f (x k−1 ) − f (x k )] .
Further grouping of function value terms gives:
EA k+1 = ρ k A k + (1 − ρ k − 2µη k ) x k − x * 2 + 4η 2 k G 2 = − 8η k 1 c k − 1 + 4η k [f (x k ) − f (x * )] + 8η k 1 c k − 1 [f (x k−1 ) − f (x * )] .
Now we simplify constants, recalling that ρ k = k−1 k+1 and c k = 4 k+3 :
8η k 1 c k − 1 = 2 4 µ(k + 1) k + 3 4 − 1 = 2 µ 1 k + 1 (k − 1) = ρ k 2 µ ,
using this we have:
8η k 1 c k − 1 + 4η k = 2 µ k − 1 k + 1 + 4 1 µ(k + 1) = 2 µ k − 1 + 2 k + 1 = 2 µ .
Also note that:
1 − ρ k − 2µη k = 1 − k − 1 k + 1 − 2µ µ(k + 1) = 1 − k + 1 − 2 k + 1 − 2 k + 1 = 0.
So we have:
EA k+1 + 2 µ [f (x k ) − f (x * )] = ρ k A k + 2 µ f (x k−1 ) − f (x * ) + 4η 2 k G 2 .
Based on the form of this equation, we have a Laypunov function
B k+1 = A k+1 + 2 µ [f (x k ) − f (x * )] ,
then:
EB k+1 ≤ ρ k B k + 4η 2 k G 2 ,
with ρ k descent plus noise. To finish the proof, we multiply by k(k + 1) and simplify the last term:
(k + 1) kE[B k+1 ] ≤ k (k − 1) B k + 4 µ 2 G 2 .
We now telescope from k = 1 to n, using the law of total expectation:
(n + 1) nE[B n+1 ] ≤ 4n µ 2 G 2 , ∴ E [f (x n ) − f (x * )] ≤ 2G 2 µ(n + 1) .
D Accelerated Method
Consider the following iterate averaging form of Nesterov's method
y k = (1 − c k+1 ) x k + c k+1 z k z k+1 = z k − ρ k ∇f (y k ) x k+1 = (1 − c k+1 ) x k + c k+1 z k+1 .(48)
with z 0 = x 0 . Note the following two key relations, that can be derived by rearranging the above relations
z k = y k − 1 c k+1 − 1 (x k − y k ) ,(49)
and
x k+1 − y k = c k+1 (z k+1 − z k ) .(50)
Lemma 22. Let f (x, ξ) be L-smooth and convex. If we set c k+1 = 2/(k + 2) and ρ k = (k + 1)/(γL) then the iterates of iterate averaging form of Nesterov's method (48) satisfy
−f (y k ) ≤ −f (x k+1 ) − 2L γ (k + 1) 2 − 1 (k + 2) 2 z k+1 − z k 2 .
Proof. We start with the Lipschitz smoothness upper bound:
f (x k+1 ) ≤ f (y k ) + ∇f (y k ), x k+1 − y k + L 2 x k+1 − y k 2 , ∴ −f (y k ) ≤ −f (x k+1 ) + ∇f (y k ), x k+1 − y k + L 2 x k+1 − y k 2 .
Using (50) and ∇f (y k ) = −Lγ/(k + 1) (z k+1 − z k ) in the above gives
f (y k ) ≤ −f (x k+1 ) − Lγ α (k + 1) (z k+1 − z k ) , c k+1 (z k+1 − z k ) + L 2 c k+1 (z k+1 − z k ) 2 .
Note that c 2 k+1 = 4 (k+2) 2 so:
−f (y k ) ≤ −f (x k+1 ) − L k + 1 2 (k + 2) z k+1 − z k 2 + L 2 4 (k + 2) 2 z k+1 − z k 2 .
Grouping terms gives the lemma.
Proof of Theorem 5
Theorem 23. Let f (x, ξ) be L-smooth and convex. Let x k be given by the iterate averaging form of Nesterov's method (48). If we set c k+1 = 2/(k + 2) and ρ k = (k + 1)/(γL) with γ = 2 then
f (x n ) − f (x * ) ≤ 2L n 2 x 0 − x * 2 .(51)
Proof. We start by expanding a distance to solution term:
z k+1 − x * 2 = z k − x * − (z k − z k+1 ) 2 = z k − x * 2 − 2(k + 1) 1 γL ∇f (y k ), z k − x * + z k+1 − z k 2 .
Simplifying the inner product term, applying the inequalities, rearranging the function value terms using that $c_{k+1} = \frac{2}{k+2}$, and applying Lemma 14 gives a telescopable sum; telescoping then yields the bound of Theorem 23.

E SVRGM

Lemma 24 ([Johnson and Zhang, 2013]). The following bound holds for $g^s_t$ at each step.

E.1 Proof of Theorem 6 (Convex Case)

Theorem 25. At the end of epoch S, when using r = 1/2 factorial power momentum given by $c_t = \frac{1/2+1}{t+1/2+1}$ and step size $\eta = \frac{1}{6L}$, the expected function value is bounded as in Theorem 6.

Proof. We start in the same fashion as for the non-variance-reduced momentum methods, apply the two convexity inequalities, rearrange, and for the purposes of telescoping define $\lambda_t = p(t+1)$, where we want to choose p > 0 such that the terms telescope. These equations are satisfied for $p = 1 - 2L\eta = \frac{2}{3}$, when $\eta = \frac{1}{6L}$ and
$$c_t = \frac{1}{pt+1} = \frac{1/2+1}{t+1/2+1}.$$

E.2 Proof of Theorem 7 (Strongly Convex Case)
Proof. We can use the same proof technique as we applied in the non-variance reduced case to deduce the following 1-step bound:
EA s t+1 ≤ (1 − ρ − µν) x s t − x * 2 + ρA t + 4Lν 2 f (x s−1 ) − f (x * ) − 2ν (1 + ρλ − 2Lν) [f (x s t ) − f (x * )] + 2ρλν f (x s t−1 ) − f (x * )
Where ρ = (λ + 1) β λ and ν = (λ + 1) α.
We need 1 − ρ − µν ≤ 0, which suggests for step sizes of the form ν = 1/ (qL) , ρ = 1 − µν = 1 − 1 qκ .
Now in order to see a ρ decrease in function value each step, we will require:
−2ν (1 + ρλ − 2Lν) ≤ −2λν, so solving at equality gives 1 + ρλ − 2Lν = λ,
∴ 1 − 2Lν = (1 − ρ) λ, λ = 1 − 2/q 1/qκ = (q − 2) κ.
This gives:
2λν = 2 (q − 2) κ 1 qL = 2 µ 1 − 2 q
Making these substitutions, our one-step bound can be written as:
EA s t+1 + 2 µ 1 − 2 q [f (x s t ) − f (x * )] ≤ ρA s t + ρ 2 µ 1 − 2 q f (x s t−1 ) − f (x * ) + 4 q 2 L f (x s−1 ) − f (x * ) .
We can now telescope using the sum of a geometric series k−1 i=0 ρ i = 1−ρ k 1−ρ and the law of total expectation to give:
EA s m+1 + 2 µ 1 − 2 q [f (x s m ) − f (x * )] ≤ ρ m A s 0 + ρ m 2 µ 1 − 2 q f (x s−1 ) − f (x * ) + 1 − ρ m 1 − ρ 4 q 2 L f (x s−1 ) − f (x * ) .
These expectations are now unconditional. Now multiplying by µ/2, simplifying with 1 − ρ = 1 qκ gives:
µ 2 EA s m+1 + 1 − 2 q [f (x s m ) − f (x * )] ≤ ρ m µ 2 A s 0 + ρ m 1 − 2 q + 2 q (1 − ρ m ) f (x s−1 ) − f (x * ) .
Dividing by 1 − 2 q :
µ 2 q q − 2 EA s m+1 + [f (x s m ) − f (x * )] ≤ ρ m µ 2 q q − 2 A s 0 + ρ m + 2 q − 2 (1 − ρ m ) f (x s−1 ) − f (x * )
Now we can try q = 6 for instance, giving
ρ m + 2 q − 2 (1 − ρ m ) = ρ m + 1 2 (1 − ρ m ) = 1 2 ρ m + 1 2
Then if we use m = 6κ we get ρ m ≤ exp(−1) ≤ 2/5 for m = 6 to give:
3 4 µEA s m+1 + [f (x s m ) − f (x * )] ≤ 6 10 3 4 µA s 0 + f (x s−1 ) − f (x * )
Then we may determine the momentum and step size constants α, β :
$$\beta = \frac{\lambda}{\lambda+1}\rho = \frac{(6-2)\kappa}{(6-2)\kappa+1}\Big(1 - \frac{1}{6\kappa}\Big) = \frac{4\kappa}{4\kappa+1}\cdot\frac{6\kappa-1}{6\kappa} = \frac{2}{3}\cdot\frac{6\kappa-1}{4\kappa+1} = \frac{4\kappa - 2/3}{4\kappa+1} = 1 - \frac{5/3}{4\kappa+1},$$
and
$$\alpha = \frac{\nu}{\lambda+1} = \frac{1}{6L}\cdot\frac{1}{4\kappa+1}.$$
To write in iterate averaging form, we have β = 1 − c, and from $\alpha_k = \eta c$ we get for η that
$$c = \frac{5}{3}\cdot\frac{1}{4\kappa+1}, \qquad \eta = \frac{\frac{1}{6L}\cdot\frac{1}{4\kappa+1}}{\frac{5}{3}\cdot\frac{1}{4\kappa+1}} = \frac{1}{10L}.$$
F Dual averaging
First we provide a convergence theorem for the dual averaging method that does not use factorial powers to set theβ k parameters.
Theorem 27. Let
$$\delta_n = \max_{\|x\| \le R} \sum_{i=0}^{n} \langle \nabla f(x_i), x_i - x \rangle.$$
Consider the Dual Averaging method
$$s_{k+1} = s_k + \nabla f(x_k), \qquad x_{k+1} = \arg\min_x \Big\{\langle s_{k+1}, x\rangle + \hat{\beta}_{k+1}\frac{\gamma}{2}\|x - x_0\|^2\Big\}, \qquad (52)$$
where the sequence $\hat{\beta}_k$ is defined recursively by $\hat{\beta}_0 = \hat{\beta}_1 = 1$ and $\hat{\beta}_{k+1} = \hat{\beta}_k + 1/\hat{\beta}_k$. If $\gamma = \frac{G}{\sqrt{2}R}$ then
$$\frac{1}{k+1}\delta_{k+1} \le \Big[\frac{\sqrt{2}}{1+\sqrt{3}}\cdot\frac{1}{k+1} + \frac{2}{\sqrt{k+1}}\Big] RG.$$

Proof. Nesterov [2009] establishes the following bound:
$$\delta_k \le \gamma \hat{\beta}_{k+1} R^2 + \frac{1}{2}\frac{G^2}{\gamma}\sum_{i=0}^{k}\frac{1}{\hat{\beta}_i}.$$
The optimal step size is $\gamma = \frac{G}{\sqrt{2}R}$; using the concavity of the square-root function and normalizing by 1/(k+1) gives the stated bound.

Next we show how using factorial powers to set the $\hat{\beta}_k$ parameters can result in a tighter analysis and a simple proof.

Theorem 28 (restating Theorem 8). After n steps of the dual averaging method with $\hat{\beta}_k = 1/(k+1)^{\overline{-1/2}}$ and γ = G/R, we have $\frac{1}{n+1}\delta_n \le 2RG\,(n+2)^{\overline{-1/2}} < \frac{2RG}{\sqrt{n+1}}$.

Proof. Recall the bound above. We use $\hat{\beta}_i = 1/(i+1)^{\overline{-1/2}}$; the sum is
$$\sum_{i=0}^{k}\frac{1}{\hat{\beta}_i} = \frac{1}{1-1/2}(k+1)^{\overline{1/2}} - \frac{1}{1-1/2}\,(1)^{\overline{1/2}}.$$
Recall also that $\hat{\beta}_{k+1} = 1/(k+2)^{\overline{-1/2}} = (k+3/2)^{\overline{1/2}}$. So:
$$\delta_k \le \gamma R^2 (k+3/2)^{\overline{1/2}} + \frac{G^2}{\gamma}\Big[(k+1)^{\overline{1/2}} - 2\,(1)^{\overline{1/2}}\Big].$$
Using step size γ = G/R:
$$\delta_k \le RG\,(k+3/2)^{\overline{1/2}} + RG\Big[(k+1)^{\overline{1/2}} - 2\,(1)^{\overline{1/2}}\Big] = RG\Big[(k+3/2)^{\overline{1/2}} + (k+1)^{\overline{1/2}} - 2\,(1)^{\overline{1/2}}\Big] \le 2RG\,(k+1)^{\overline{1/2}}.$$
Now to normalize by 1/(k+1) we use the ratio property $\frac{(k+1)^{\overline{r+q}}}{(k+1)^{\overline{r}}} = (k+1+r)^{\overline{q}}$ with r = 1 and q = −1/2, so that $\frac{(k+1)^{\overline{1/2}}}{k+1} = (k+2)^{\overline{-1/2}}$. We further use $(k+2)^{\overline{-1/2}} < 1/\sqrt{k+1}$, giving:
$$\frac{1}{k+1}\delta_k < \frac{2RG}{\sqrt{k+1}}.$$

[Figure 3: SVRGM training loss convergence.]
G SVRGM Experiments
We compared the SVRGM method against SVRG both with the r = 1/2 momentum suggested by the theory as well as equal weighted momentum. We used the same test setup as for our SGDM experiments, except without the addition of weight decay in order to test the non-strongly convex convergence. Since the selection of step-size is less clear in the non-strongly convex case, here we used a step-size sweep on a power-of-2 grid, and we reported the results of the best step-size for each method. As shown in Figure 3, SVRGM is faster on two of the test problems and slower on two. The flat momentum variant is a little slower than r = 1/2 momentum, however not significantly so.
Figure 1: (left) Contour plots of the simple powers and the factorial powers. (right) The half-factorial power and associated upper and lower bounds.
Table 1: List of convergence results together with previously known results. We say that the function is smooth if (7) holds with constant L; otherwise we assume that the function is G-Lipschitz (6). Finally, when assuming the function is µ-strongly convex, we use κ := L/µ. The SVRGM is in fact a new method which is closely related to the SVRG++ [Allen Zhu and Yuan, 2016] method.

Method   | Alg #   | Smooth | Str. Conv | Polytopic Rate            | Std. Rate          | Reference
SGDM     | Eq (18) | No     | No        | $(n+2)^{\overline{-1/2}}$ | $(n+1)^{-1/2}$     | Tao et al. [2020]
SGDM     | Eq (18) | No     | Yes       | $(n+2)^{\overline{-1}}$   | $(n+1)^{-1}$       | Tao et al. [2020]
SVRGM    | Alg 1   | Yes    | No        | $1/n$                     | $1/n$              | Allen Zhu & Yuan [2016]
SVRGM    | Alg 1   | Yes    | Yes       | $(3/5)^{n/\kappa}$        | $(3/4)^{n/\kappa}$ | Allen Zhu & Yuan [2016]
Nesterov | Eq (26) | Yes    | No        | $1/n^2$                   | $1/n^2$            | Nesterov [2013]
¹ This is also a feature of the variant known as free-SVRG [Sebbouh et al., 2019].
MLP4Rec: A Pure MLP Architecture for Sequential Recommendations. M Li, X Zhao, IJCAI. 2022M.Li, X.Zhao, et.al., "MLP4Rec: A Pure MLP Architecture for Sequential Recommendations", IJCAI, 2022
Enlister: baidu's recommender system for the biggest chinese Q&A website. Q Liu, T Chen, RecSysQ.Liu, T.Chen, et.al., "Enlister: baidu's recommender system for the biggest chinese Q&A website", RecSys, 2012
Instant expert hunting: building an answerer recommender system for a large scale Q&A website. T Chen, J Cai, ACM SACT.Chen, J.Cai, et.al., "Instant expert hunting: building an answerer recommender system for a large scale Q&A website", ACM SAC, 2014
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. H Guo, R Tang, 2017H.Guo, R.Tang, et.al. , "DeepFM: A Factorization-Machine based Neural Network for CTR Prediction", IJCAI, 2017
Deep Learning Recommendation Model for Personalization and Recommendation Systems. M Naumov, D Mudigere, CoRRM.Naumov, D.Mudigere, et.al., "Deep Learning Recommendation Model for Personalization and Recommendation Systems", CoRR, 2019
Deep Neural Networks for YouTube Recommendations. P Covington, J Adams, E Sargin, RecSys. P.Covington, J.Adams, E.Sargin, "Deep Neural Networks for YouTube Recommendations", RecSys, 2016
Personalized Bundle Recommendation in Online Games. Q Deng, K Wang, CIKM. 2020Q.Deng, K.Wang, et.al., "Personalized Bundle Recommendation in Online Games", CIKM, 2020
AutoML for Deep Recommender Systems: A Survey. R Zheng, L Qu, ACM Transactions on Information Systems. 2021R.Zheng, L.Qu, et.al. , "AutoML for Deep Recommender Systems: A Survey", ACM Transactions on Information Systems, 2021
AutoField: Automating Feature Selection in Deep Recommender Systems. Y Wang, X Zhao, WWW2022Y.Wang, X.Zhao, et.al., "AutoField: Automating Feature Selection in Deep Recommender Systems", WWW, 2022
AutoLoss: Automated Loss Function Search in Recommendations. X Zhao, H Liu, KDD. 2021X.Zhao, H.Liu, et.al., "AutoLoss: Automated Loss Function Search in Recommendations", KDD, 2021
The Million Song Dataset. T Bertin-Mahieux, B Whitman, P Lamere, ISMIRT. Bertin-Mahieux, B. Whitman, P. Lamere, The Million Song Dataset, ISMIR, 2011
Visualizing High-Dimensional Data Using t-SNE. L J P Van Der Maaten, G E Hinton, Journal of Machine Learning Research. 9L.J.P. van der Maaten and G.E. Hinton. "Visualizing High-Dimensional Data Using t-SNE", Journal of Machine Learning Research", 9 (Nov) : 2579-2605, 2008
A class of invariant consistent tests for multivariate normality. N Henze, B Zirkler, Communications in statistics-Theory and Methods. 1910N.Henze, B. Zirkler, "A class of invariant consistent tests for multivariate normality", Communications in statistics-Theory and Methods, 19(10), 3595-3617, 1990
| [] |
[
"UC Irvine UC Irvine Previously Published Works Title Noncommutative coordinates invariant under rotations and Lorentz transformations Publication Date License Noncommutative coordinates invariant under rotations and Lorentz transformations",
"UC Irvine UC Irvine Previously Published Works Title Noncommutative coordinates invariant under rotations and Lorentz transformations Publication Date License Noncommutative coordinates invariant under rotations and Lorentz transformations"
] | [
"Myron Bander \nDepartment of Physics and Astronomy\nUniversity of California\n92697-4575IrvineCaliforniaUSA\n"
] | [
"Department of Physics and Astronomy\nUniversity of California\n92697-4575IrvineCaliforniaUSA"
] | [] | Dynamics with noncommutative coordinates invariant under three-dimensional rotations or, if time is included, under Lorentz transformations is developed. These coordinates turn out to be the boost operators in SO1; 3 or in SO2; 3, respectively. The noncommutativity is governed by a mass parameter M. The principal results are: (i) a modification of the Heisenberg algebra for distances smaller than 1=M, (ii) a lower limit, 1=M, on the localizability of wave packets, (iii) discrete eigenvalues of the coordinate operator in timelike directions, and (iv) an upper limit, M, on the mass for which free field equations have solutions. Possible restrictions on small black holes are discussed. | 10.1103/physrevd.75.105010 | [
"https://web.archive.org/web/20200505175905/https:/escholarship.org/content/qt5hw9704d/qt5hw9704d.pdf?t=q10fnc"
] | 17,494,530 | hep-th/0701253 | 1a84becd10a04656fab1aadc29f33a0caabf0d53 |
Noncommutative coordinates invariant under rotations and Lorentz transformations
2007-05-16
Myron Bander
Department of Physics and Astronomy
University of California
92697-4575IrvineCaliforniaUSA
2007-05-16. DOI: 10.1103/PhysRevD.75.105010 (Received 31 March 2007; published 16 May 2007). Physical Review D -- Particles, Fields, Gravitation and Cosmology, 75(10). Author: Bander, M. License: https://creativecommons.org/licenses/by/4.0/. PACS numbers: 11.10.Nx, 02.40.Gh
Dynamics with noncommutative coordinates invariant under three-dimensional rotations or, if time is included, under Lorentz transformations is developed. These coordinates turn out to be the boost operators in SO1; 3 or in SO2; 3, respectively. The noncommutativity is governed by a mass parameter M. The principal results are: (i) a modification of the Heisenberg algebra for distances smaller than 1=M, (ii) a lower limit, 1=M, on the localizability of wave packets, (iii) discrete eigenvalues of the coordinate operator in timelike directions, and (iv) an upper limit, M, on the mass for which free field equations have solutions. Possible restrictions on small black holes are discussed.
I. INTRODUCTION
Recently there has been significant interest in extending quantum mechanics and quantum field theory from ordinary coordinates to noncommutative geometries. Much of the work has implemented such noncommutativity, $[x^a, x^b] = i\theta^{ab}$, through the Groenewold-Moyal [1] star product wherein the ordinary product of two functions is replaced by
$$f(x) \star g(x) = \exp\Big(\frac{i\theta^{ab}}{2}\,\partial_{x^a}\partial_{y^b}\Big)\, f(x)\,g(y)\Big|_{y=x}. \tag{1}$$
As specific directions are singled out, this procedure is not Lorentz invariant and in greater than two dimensions not even rotationally invariant. This product does respect a "twisted" Poincaré invariance [2,3]; implications of such invariance for field theories have been discussed [4] in great detail recently. A different approach, in which the position coordinates are replaced by operators that have nontrivial commutation relations among each other and under rotations transform into each other preserving these commutation relations, has been pursued by this author [5]. In that paper the space coordinates are represented by operators acting on coherent states based (in three space dimensions) on the group SO(1,3). Although in this work we shall again be interested in this group, the approach now is different. Specifically, we will formulate a dynamics with noncommutative coordinates that transform as vectors under rotations or, with the introduction of time, under the Lorentz group; this dynamics, at least at the level of free fields, is invariant under modified translations and a full Poincaré invariance can be implemented. The caveat of restricting the claim of Poincaré invariance to free field theories is that usual formulations of interacting ones involve time ordered products. Invariance of such products can be implemented only if commutators of operators vanish for spacelike separations [6-8]; as in the present formulation time will be associated with an operator, the notion of time ordering is unclear.
The motivation for the procedure used in the present work is as follows. If we assume that spatial coordinates have commutation relations of the form
$$[X_i, X_j] = iF_{ij}, \tag{2}$$
where $F_{ij}$ is nonconstant and antisymmetric, rotation invariance demands that it transforms as a tensor; if we demand that (2) is to be time independent then a possible choice is $F_{ij} \sim M_{ij}$, where $M_{ij}$ is the angular momentum tensor. Requiring further that the $X_i$'s transform appropriately under time reversal and under parity leads us to postulate that $X_i \sim K_i$, where the $K$'s are the boost operators of the group SO(1,3); even though the group SO(1,3) appears, this formulation is invariant only under rotations; a true extension to relativistic invariance will be given in Sec. III. Details of this generalization of position operators to ones with nontrivial commutation relations are presented in Sec. II. The scale of noncommutativity is governed by a mass parameter M and in the limit $M \to \infty$ ordinary three-dimensional quantum mechanics is recovered. The Hilbert space these operators act on is a momentum space which serves as a basis for the simplest representation of the Poincaré group, namely, the one for spinless particles of mass M. For M finite and for momenta larger than M and/or distances smaller than 1/M the Heisenberg commutation relations between position and momentum are modified; this, in turn, forces a modification of the concept of space translations. A procedure for obtaining functions of these noncommuting coordinates and their derivatives is also discussed in this section.
Functions are related to their commuting counterparts by having common Fourier transforms, and commutators of these functions with momentum operators serve as derivatives. The definition of an integral of a product of such functions, consistent with the previously discussed procedure for taking derivatives, is subtle but can be implemented as a specific matrix element of such a product. An interesting consequence of such coordinate noncommutativity is that fluctuations in the measurement of $\sqrt{\vec X^2}$ must exceed 1/M. Relative and center of mass coordinates needed for two body problems are presented. Noncommuting coordinates invariant under SO(3) were identified with boosts of SO(1,3). In Sec. III this procedure is extended to noncommuting space and time coordinates that transform under the SO(1,3) Lorentz group; this leads us to identify these noncommuting space-time coordinates with boosts of the SO(2,3) anti-de Sitter group. Such an identification was suggested by Snyder [9] in the first discussion of this problem. The operators for spatial coordinates have continuous eigenvalues but those of $\hat q\cdot X$, where $\hat q$ is timelike, are discrete with separations of 1/M. As in the previous section, functions, derivatives and integrations are discussed. The definition of integration permits us to introduce in Sec. IV a Lagrangian and an action for a free scalar field theory, where the field is a function of the noncommuting space-time. A significant consequence is that solutions of the equations of motion for such a free field theory exist only for masses $m \le M$.
The above limit on masses of particles combined with the bound on the minimum size of spatial wave packets may put restrictions on the existence of small black holes. This and other results are summarized in Sec. V.
II. EUCLIDEAN NONCOMMUTING COORDINATES
The discussion in this section will focus on noncommuting coordinates in three spatial dimensions invariant under SO(3); these coordinates will be identified with boosts in an SO(1,3) group. An extension to d-dimensional noncommuting coordinates invariant under SO(d) and embedded in SO(1,d) is straightforward.
A. Coordinates
The usual, commuting, position variables, $x_i$, acting on momentum states $|\vec p\rangle$, with $\langle\vec p\,'|\vec p\rangle = \delta(\vec p\,' - \vec p)$, have the form
$$x_i = i\frac{\partial}{\partial p_i}. \tag{3}$$
As previously mentioned, the noncommuting position operators will be related to boost operators of an SO(1,3) Lorentz group. The generators of the full Poincaré group, and in turn the Lorentz group, can be represented as operators acting on the momentum states; we are interested in an irreducible representation of the Poincaré group [10] with a mass M and spin zero. To this end we define
$$p_0 = \sqrt{\vec p\cdot\vec p + M^2}, \tag{4}$$
which we use to construct the boost operators
$$K_i = \sqrt{p_0}\,\Big(i\frac{\partial}{\partial p_i}\Big)\sqrt{p_0}. \tag{5}$$
It is easy to check that these satisfy the requisite commutation relations
$$[K_i, K_j] = -iM_{ij}, \tag{6}$$
with $M_{ij}$ the angular momentum,
$$M_{ij} = i\Big(p_j\frac{\partial}{\partial p_i} - p_i\frac{\partial}{\partial p_j}\Big); \tag{7}$$
in three dimensions $M_{ij} = \epsilon_{ijk} J_k$. (Although we shall not use these, similar realizations of the other representations of the Poincaré group for positive $M^2$, the ones with internal spin, exist. If the total angular momentum is given by the right-hand side of (7) plus the spin part $S_{ij}$, the boost is obtained by adding $-S_{in} p_n/(p_0 + M)$ to the right-hand side of (5).)
The noncommuting coordinates we shall use are
$$X_i = \frac{K_i}{M} = \frac{1}{M}\,(\vec p\cdot\vec p + M^2)^{1/4}\,\Big(i\frac{\partial}{\partial p_i}\Big)\,(\vec p\cdot\vec p + M^2)^{1/4}. \tag{8}$$
Their commutation relations follow from (6),
$$[X_i, X_j] = -\frac{i}{M^2}\,M_{ij}. \tag{9}$$
The mass M plays the role of the noncommutativity parameter. From (8) we see that in the limit $M \to \infty$, $X_i \to x_i$, and for distances greater than 1/M ordinary commuting geometry is recovered. In subsequent parts of this work we shall denote commuting coordinates by lower case letters and noncommuting ones by corresponding upper case letters; to emphasize that in the noncommuting situation momentum operators are as they were in the commuting one, lower case letters will continue to be used for these. The commutation relations of the boost operators with momenta,
$$[K_i, p_j] = i\delta_{ij}\,p_0, \qquad [K_i, p_0] = ip_i, \tag{10}$$
yield a modified position-momentum commutator
$$[p_i, X_j] = -i\delta_{ij}\,\frac{p_0}{M}; \tag{11}$$
again, in the limit $M \to \infty$ we recover the standard Heisenberg algebra. The extra factor of $p_0/M$ in (11) will affect the implementation of coordinate translations. It is interesting to note that expanding (11) to first order in $\vec p\cdot\vec p/M^2$ yields corrections to the Heisenberg commutation relation that have been postulated, as has (9) to the same order, [11] in order to explain a minimum length that appears in string theory [12]. (In the notation of Ref. [11], $\beta M^2 = 1/2$ and $\beta' = 0$.) Some consequences of these modifications of the algebra of quantum mechanics are presented in [12].
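Explicitly, expanding (11) for $|\vec p| \ll M$ (a short check consistent with the expressions above, not a statement beyond them):
$$\frac{p_0}{M} = \sqrt{1 + \frac{\vec p\cdot\vec p}{M^2}} \approx 1 + \frac{\vec p\cdot\vec p}{2M^2},
\qquad
[p_i, X_j] \approx -i\delta_{ij}\Big(1 + \frac{\vec p\cdot\vec p}{2M^2}\Big),$$
which matches a commutator of the form $[x_i, p_j] = i\delta_{ij}(1 + \beta\,\vec p\cdot\vec p)$ with $\beta M^2 = 1/2$ and $\beta' = 0$.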
B. Functions, differentiation, and integration
The relation between functions $f(\vec x)$ of commuting coordinates $x_i$ and corresponding functions of the noncommuting ones is achieved through their Fourier transforms,
$$f(\vec x) = \int d^3q\,\tilde f(\vec q)\,e^{i\vec q\cdot\vec x} \;\rightarrow\; f(\vec X) = \int d^3\tilde q\,\tilde f(\vec q)\,e^{i\vec q\cdot\vec X}. \tag{12}$$
$d^3\tilde q$ is the group invariant measure
$$d^3\tilde q = \Big[\frac{\sinh(q/M)}{q/M}\Big]^2\,d^3q. \tag{13}$$
As $[ip_i, f(\vec x)] = \partial_i f(\vec x)$, we may carry this over to a definition of derivatives of $f(\vec X)$,
$$\partial_i f(\vec X) = [ip_i, f(\vec X)]; \tag{14}$$
this prescription satisfies the Leibniz rule for the differentiation of a product of functions of the noncommuting coordinates.
The integral over all space of a product of functions of ordinary coordinates is a convolution of their Fourier transforms,
$$\int d\vec x\, f_1(\vec x)\cdots f_n(\vec x) = (2\pi)^3 \int d\vec q_1\cdots d\vec q_n\,\tilde f_1(\vec q_1)\cdots\tilde f_n(\vec q_n)\,\delta(\vec q_1 + \cdots + \vec q_n). \tag{15}$$
As $e^{i\vec q\cdot\vec x}|\vec k\rangle = |\vec k - \vec q\rangle$, an expression for the $\delta$-function is
$$\delta(\vec q_1 + \cdots + \vec q_n) = \langle\vec k|e^{i\vec q_1\cdot\vec x}\cdots e^{i\vec q_n\cdot\vec x}|\vec k\rangle, \tag{16}$$
valid for any state $|\vec k\rangle$. We shall use the above as a guide for the definition of integration of functions of the noncommuting coordinates,
$$``\!\int d\vec X\,"\, f_1(\vec X)\cdots f_n(\vec X) = (2\pi)^3 \int d^3\tilde q_1\cdots d^3\tilde q_n\,\tilde f_1(\vec q_1)\cdots\tilde f_n(\vec q_n)\,\langle\vec k|e^{i\vec q_1\cdot\vec X}\cdots e^{i\vec q_n\cdot\vec X}|\vec k\rangle. \tag{17}$$
As any state $|\vec k\rangle$ is obtainable from any other by a unitary transformation and noting that the measure for the q integrations is invariant under such transformations, the above definitions for different $\vec k$'s are equivalent; using $|\vec k = 0\rangle$ simplifies calculations and we shall use this state from now on and drop the quotation marks in (17). Derivatives defined by (14) insure the desired relation $\int d\vec X\,\partial_i f(\vec X) = 0$. Two examples are in order,
$$\langle 0|e^{i\vec q\cdot\vec X}|0\rangle = \delta(\vec q), \tag{18}$$
as for ordinary functions. For the two point function the result is
$$\langle 0|e^{i\vec q\cdot\vec X}\,e^{-i\vec q\,'\cdot\vec X}|0\rangle = \delta(\vec q - \vec q\,')\,\Big[\frac{q/M}{\sinh(q/M)}\Big]^2, \tag{19}$$
with the term multiplying the $\delta$ function canceling a similar one in the invariant measure (13). For $f(\vec X)$ and $g(\vec X)$ related to $f(\vec x)$ and $g(\vec x)$ by (12) we find that
$$\int d\vec X\, f(\vec X)\,g(\vec X) = \int d\vec x\,d\vec y\, f(\vec x)\,g(\vec y)\,\delta(\vec x - \vec y); \tag{20}$$
this is no longer true for integrations of a product of three or more functions.
C. Localization and translations
As the coordinates $X_i$ do not commute with each other, a state with a precise position does not exist. It is still interesting to ask what are the eigenstates of $\hat r\cdot\vec X$ for a specific unit vector $\hat r$ and what is the minimum eigenvalue of $\vec X\cdot\vec X$. The eigenstates of $\hat r\cdot\vec X$ with eigenvalues r are
$$\psi_{\hat r, r}(\vec p) = \sqrt{\frac{M}{2p_0}}\,\Big(\frac{p_0 - \hat r\cdot\vec p}{p_0 + \hat r\cdot\vec p}\Big)^{iMr/2}; \tag{21}$$
the normalization is $\int d(\hat r\cdot\vec p)\,\psi^*_{\hat r, r'}(\vec p)\,\psi_{\hat r, r}(\vec p) = \delta(r - r')$ and, as expected, these approach $e^{-ir\hat r\cdot\vec p}/\sqrt{2\pi}$ as $M \to \infty$. For different directions $\hat r$ these functions are not orthogonal to each other. A complete set of commuting operators that includes $\hat r\cdot\vec X$ contains, in addition to this position operator, the two vector $\vec p_\perp$, where $\vec p_\perp\cdot\hat r = 0$. For a given $\hat r$, the states $|\vec p_\perp, \hat r r\rangle$ are complete and under rotations transform to a similar set with a rotated $\hat r$.

The eigenvalues of $\vec X^2$ control the extent to which a packet can be localized in position. A lower bound on such eigenvalues may be obtained by noting that the SO(1,3) Casimir operator $\vec K^2 - \vec J^2$ equals $\nu^2 - j_0^2 + 1$ [13] for representations labeled by $(\nu, j_0)$, with $\nu$ real, $\nu \ge 0$, and with all angular momenta in the representation having values greater than $j_0$. As $\vec X^2 = (\vec K^2 - \vec J^2 + \vec J^2)/M^2$, its eigenvalues are $[\nu^2 + 1 - j_0^2 + j(j+1)]/M^2$, with $j \ge j_0$; thus we find that $\vec X^2 \ge 1/M^2$ and wave packets cannot be localized to better than 1/M.

This inability to localize reflects itself in the nonexistence of a translation operator taking $\vec X$ to $\vec X + \vec a$, with $\vec a$ a c-number. It is easy to show that a unitary operator $U(\vec a)$ with the property $U^\dagger(\vec a)\,\vec X\,U(\vec a) = \vec X + \vec a$ does not exist. If it did, then (9) would imply that $[U(\vec a), \vec J\,] = 0$, which cannot be, as the angular momentum must rotate the translation vector $\vec a$.

The operator $e^{i\vec a\cdot\vec p}$, which does translate the commuting coordinate vectors $\vec x \to \vec x + \vec a$, has the following effect on the noncommutative vectors $\vec X$:
$$e^{i\vec a\cdot\vec p}\,\vec X\,e^{-i\vec a\cdot\vec p} = \vec X + \vec a\,\frac{p_0}{M}. \tag{22}$$
Thus, for $p \ll M$ or distances larger than 1/M we recover the usual translations, while for distances smaller than 1/M the limit on localization of position wave packets makes translations fuzzy. However, as we shall see in the next section, a form of two body interactions invariant under overall translations does exist. The integration defined in (17) is invariant under translations described by (22).
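As a quick check of the localization bound (a worked step using the eigenvalue formula above, with the minimum taken at $j = j_0$):
$$\vec X^2 \;\ge\; \frac{\nu^2 + 1 - j_0^2 + j_0(j_0 + 1)}{M^2} \;=\; \frac{\nu^2 + 1 + j_0}{M^2} \;\ge\; \frac{1}{M^2},$$
since $\nu^2 \ge 0$ and $j_0 \ge 0$.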
D. Two body interactions
One body dynamics can be reformulated for the noncommuting coordinates. As outlined in Sec. II B, this is achieved by replacing the potential $V(\vec x)$ by $V(\vec X)$. Two body interactions present a problem in that replacing $V(\vec x^{(1)} - \vec x^{(2)})$ by the corresponding $V(\vec X^{(1)} - \vec X^{(2)})$ (superscripts refer to the two particles) does not permit a separation into mutually commuting center of mass and relative coordinates. This is a consequence of the lack of a simple translation operator, as was discussed at the end of Sec. II C. With the usual definitions of relative and center of mass coordinates, $\vec p_{\rm rel} = (m_2\vec p^{(1)} - m_1\vec p^{(2)})/(m_1 + m_2)$, $\vec x_{\rm rel} = \vec x^{(1)} - \vec x^{(2)}$ and $\vec p_{\rm cm} = \vec p^{(1)} + \vec p^{(2)}$, $\vec x_{\rm cm} = (m_1\vec x^{(1)} + m_2\vec x^{(2)})/(m_1 + m_2)$, we use the procedure outlined in (8) for obtaining the noncommuting version of these, namely, multiply the position operators on the left and right by $(\vec p\cdot\vec p + M^2)^{1/4}$, where $\vec p$ is respectively $\vec p_{\rm rel}$ or $\vec p_{\rm cm}$; thus instead of $\vec X^{(1)} - \vec X^{(2)}$ as a relative coordinate, we define
$$X^{\rm rel}_i = \frac{i}{M}\,(\vec p_{\rm rel}\cdot\vec p_{\rm rel} + M^2)^{1/4}\Big(\frac{\partial}{\partial p^{(1)}_i} - \frac{\partial}{\partial p^{(2)}_i}\Big)(\vec p_{\rm rel}\cdot\vec p_{\rm rel} + M^2)^{1/4},$$
$$X^{\rm cm}_i = \frac{i}{M(m_1 + m_2)}\,(\vec p_{\rm cm}\cdot\vec p_{\rm cm} + M^2)^{1/4}\Big(m_1\frac{\partial}{\partial p^{(1)}_i} + m_2\frac{\partial}{\partial p^{(2)}_i}\Big)(\vec p_{\rm cm}\cdot\vec p_{\rm cm} + M^2)^{1/4}. \tag{23}$$
A direct computation shows that these relative and center of mass variables commute and the coordinates within each class obey (9) and have the desired limit for large M. From the start we would formulate a two body problem as
$$H = \frac{(\vec p^{(1)})^2}{2m_1} + \frac{(\vec p^{(2)})^2}{2m_2} + V(\vec X^{\rm rel}). \tag{24}$$
The use of these relative coordinates may be extended to many body situations. A many body Hamiltonian with interactions depending on the $\vec X^{\rm rel}$ is invariant under translations (22) generated by the total momentum.
III. MINKOWSKI NONCOMMUTING COORDINATES
In the previous section, by adding one timelike coordinate to three-dimensional space we were able to identify SO(3) invariant noncommuting coordinates with boost operators of the symmetry group SO(1,3) of this extended geometry. Presently we will apply a similar procedure to SO(1,3) invariant noncommuting coordinates, namely, noncommuting space and time. By adding an extra time coordinate to 3+1 dimensional Minkowski space, the noncommuting space-time operators will be represented by boosts of the anti-de Sitter group SO(2,3), the symmetry group of the extended 2+3 dimensional geometry. Again, as in Sec. II the discussion will be for SO(1,3) coordinates embedded in SO(2,3); an extension to SO(1,d-1) embedded in SO(2,d-1) is straightforward.
A. A representation of Poincaré-anti-de Sitter group
We consider a representation of the Poincaré-anti-de Sitter group acting on "5-momenta" $(p_T, p_\mu)$, $\mu = 0, \ldots, 3$, preserving the metric $p^2 = p_T^2 + p_\mu p^\mu = p_T^2 + p_0^2 - \vec p\cdot\vec p$; $p_T$ and $p_0$ are the two timelike momenta. For $p^2 = M^2 > 0$ the Hilbert space consists of states $|p_0, \vec p\rangle$ with $p_T = \sqrt{\vec p\cdot\vec p + M^2 - p_0^2}$; in terms of the angle $\theta$ between $p_0$ and $p_T$, more specifically, $p_0 = \sqrt{\vec p\cdot\vec p + M^2}\,\sin\theta$ and $p_T = \sqrt{\vec p\cdot\vec p + M^2}\,\cos\theta$. Note that $|p_0| \le \sqrt{\vec p\cdot\vec p + M^2}$, or equivalently $p_\mu p^\mu \le M^2$. This restriction will be responsible for the eigenvalues of $X_0$, the time operator, taking on discrete values.
The group algebra consists of Lorentz transformations $M_{\mu\nu}$ and boosts $K_\mu$ ($K_0$ is really an O(2) rotation in the $p_T$-$p_0$ space). The commutation relations of the $K$'s are
$$[K_\mu, K_\nu] = -iM_{\mu\nu}. \tag{26}$$
Analogous to the choice for the $K_i$'s made in Sec. II, for the representation of interest in the present situation an expression for the boosts that we shall use is
$$K_\mu = \sqrt{p_T}\,\Big({-i}\frac{\partial}{\partial p^\mu}\Big)\sqrt{p_T}. \tag{27}$$
B. Coordinates
The usual commuting space-time coordinates can be related to their conjugate momenta by
$$x_\mu = -i\frac{\partial}{\partial p^\mu}. \tag{28}$$
Noncommuting space-time coordinates are obtained by replacing (28) with
$$X_\mu = \frac{K_\mu}{M} = \frac{1}{M}\,(M^2 + \vec p\cdot\vec p - p_0^2)^{1/4}\Big({-i}\frac{\partial}{\partial p^\mu}\Big)(M^2 + \vec p\cdot\vec p - p_0^2)^{1/4}; \tag{29}$$
as previously, we denote commuting coordinates by lower case letters and their noncommuting counterparts by upper case ones. The space-time coordinates $X_\mu$ satisfy
$$[X_\mu, X_\nu] = -\frac{i}{M^2}\,M_{\mu\nu} \tag{30}$$
and in the limit $M \to \infty$, $X_\mu \to x_\mu$. The momentum-coordinate commutation relations are
$$[p_\mu, X_\nu] = ig_{\mu\nu}\,\frac{p_T}{M}. \tag{31}$$
C. Functions, differentiation, and integration
As in the Euclidean case we make the correspondence between functions of commuting space-time coordinates and the noncommuting ones via the Fourier transform
$$f(x) = \int d^4q\,\tilde f(q)\,e^{iq_\mu x^\mu} \;\rightarrow\; f(X) = \int d^4\tilde q\,\tilde f(q)\,e^{iq_\mu X^\mu}, \tag{32}$$
with the invariant measure depending on whether $q^2$ is less than or greater than 0,
$$d^4\tilde q = \Big[\frac{\sinh(\sqrt{-q^2}/M)}{\sqrt{-q^2}/M}\Big]^3 d^4q, \quad q^2 \le 0; \qquad
d^4\tilde q = \Big[\frac{\sin(q/M)}{q/M}\Big]^3 d^4q, \quad q^2 \ge 0. \tag{33}$$
There is, however, a caveat to this correspondence. For $q^2 \ge 0$, $e^{iq\cdot X}$ is an O(2) rotation of angle $q/M$ between $q_\mu p^\mu/q$ and $p_T$; therefore $q^2$ must satisfy $q^2 \le \pi^2 M^2$. This also is related to the fact that the time variable takes on discrete values. As previously, differentiation can be generated by using the momentum vectors,
$$\partial_\mu f(X) = -i[p_\mu, f(X)]. \tag{34}$$
In analogy with (17), integration of functions of noncommuting variables over all Minkowski space is defined as
$$\int d^4X\, f_1(X)\cdots f_n(X) = (2\pi)^4 \int d^4\tilde q_1\cdots d^4\tilde q_n\,\tilde f_1(q_1)\cdots\tilde f_n(q_n)\,\langle 0, 0|e^{iq_1\cdot X}\cdots e^{iq_n\cdot X}|0, 0\rangle, \tag{35}$$
where the matrix element is taken in the state $|p_0 = 0, \vec p = 0\rangle$. Some specific values of such matrix elements are:
$$\langle 0, 0|e^{iq\cdot X}|0, 0\rangle = \delta(q), \tag{36}$$
as in (18), while for a product of two exponentials the result is
$$\langle 0, 0|e^{iq\cdot X}\,e^{-iq'\cdot X}|0, 0\rangle = \delta(q - q')\times
\begin{cases}
\Big[\dfrac{\sqrt{-q^2}/M}{\sinh(\sqrt{-q^2}/M)}\Big]^3, & q^2 \le 0;\\[2ex]
\Big[\dfrac{q/M}{\sin(q/M)}\Big]^3, & q^2 \ge 0.
\end{cases} \tag{37}$$
For timelike q the observation below (32) implies $-\pi M \le q \le \pi M$ and as in (19) the terms multiplying the $\delta$ functions cancel the corresponding ones in the invariant measure (33).
D. Position and time eigenvectors and eigenvalues
A complete, commuting set of operators consists of $\hat r_\mu X^\mu$, with $|\hat r^2| = 1$, and momenta orthogonal to $\hat r_\mu$. The study of these states has to be done separately for $\hat r$ timelike or spacelike, and in the latter case whether $(p - \hat r(\hat r\cdot p))^2$ is larger or smaller than $M^2$. Lorentz transformations of $\hat r$ and p do not mix these conditions. As there are varying subtleties in the procedure of obtaining the eigenvalues and eigenfunctions of $\hat r\cdot X$ in the three cases, we shall present each one in some detail. Special, simple examples of each of these situations are $\hat r = \hat z$ and $p_x^2 + p_y^2 + M^2 - p_0^2 \ge 0$, $\hat r = \hat z$ and $p_x^2 + p_y^2 + M^2 - p_0^2 \le 0$, or $\hat r$ along the time direction.

1. $\hat r$ spacelike and $-(p - \hat r(\hat r\cdot p))^2 + M^2 \ge 0$

For $\hat r$ spacelike ($\hat r_\mu \hat r^\mu < 0$), $p_T$ can be expressed in terms of $\hat r\cdot p$ and the magnitude of a vector orthogonal to $\hat r$,
$$p_T = \sqrt{-(p - \hat r(\hat r\cdot p))^2 + (\hat r\cdot p)^2 + M^2}. \tag{38}$$
We first consider the case $-(p - \hat r(\hat r\cdot p))^2 + M^2 \ge 0$. It is useful to reexpress $\hat r\cdot X$ in terms of a new variable $\eta$ defined by $\hat r\cdot p = \sqrt{-(p - \hat r(\hat r\cdot p))^2 + M^2}\,\sinh\eta$ and $p_T = \sqrt{-(p - \hat r(\hat r\cdot p))^2 + M^2}\,\cosh\eta$:
$$\hat r\cdot X = \frac{i}{M}\Big(\frac{\partial}{\partial\eta} + \frac{1}{2}\tanh\eta\Big), \tag{39}$$
with $-\infty < \eta < \infty$. The $\delta$ function normalized solutions with eigenvalue r are
$$\psi_r(\eta) = \sqrt{\frac{M}{2\pi\cosh\eta}}\;e^{-iMr\eta} \tag{40}$$
(note: $d(\hat r\cdot p) = \sqrt{-(p - \hat r(\hat r\cdot p))^2 + M^2}\,\cosh\eta\,d\eta$); this translates into
$$\psi_{\hat r, r}(\hat r\cdot p) = \sqrt{\frac{1}{2p_T}}\,\Big(\frac{p_T - \hat r\cdot p}{p_T + \hat r\cdot p}\Big)^{iMr/2}. \tag{41}$$

2. $\hat r$ spacelike and $-(p - \hat r(\hat r\cdot p))^2 + M^2 \le 0$

In this case we find that $p_T^2 \le (\hat r\cdot p)^2$ and we introduce the variable $\eta$ by $p_T = \sqrt{(p - \hat r(\hat r\cdot p))^2 - M^2}\,\sinh\eta$, and have to consider two possibilities for $\hat r\cdot p$, namely $\hat r\cdot p = \pm\sqrt{(p - \hat r(\hat r\cdot p))^2 - M^2}\,\cosh\eta$. The solutions take the same form, with $\hat r\cdot p$ in the two ranges $-\infty < \hat r\cdot p \le -\sqrt{(p - \hat r(\hat r\cdot p))^2 - M^2}$ and $\sqrt{(p - \hat r(\hat r\cdot p))^2 - M^2} \le \hat r\cdot p < \infty$.

3. $\hat r$ timelike

For $\hat r$ timelike, $p_T^2 = M^2 - (p - \hat r(\hat r\cdot p))^2 - (\hat r\cdot p)^2$, with $(p - \hat r(\hat r\cdot p))^2 \le 0$. This time we define an angle $\theta$ by $p_T = \sqrt{M^2 - (p - \hat r(\hat r\cdot p))^2}\,\cos\theta$ and $\hat r\cdot p = \sqrt{M^2 - (p - \hat r(\hat r\cdot p))^2}\,\sin\theta$. For timelike $\hat r$, $\hat r\cdot X$ is a generator of rotations in the $(\hat r\cdot p)$-$p_T$ plane, with solutions corresponding to eigenvalues n/M, where n is any integer. Written as a function of the momenta, the range of this wave function is $-\sqrt{M^2 - (p - \hat r(\hat r\cdot p))^2} \le \hat r\cdot p \le \sqrt{M^2 - (p - \hat r(\hat r\cdot p))^2}$, with $p_T$ both positive and negative. The eigenvalues of coordinates in timelike directions are discrete and spaced by 1/M. In a state $\int d^4p\, f(\vec p)\,\psi_n(p_0)\,|p_0, \vec p\rangle$ (corresponding to $\hat r = \hat t$) with $\int d\vec p\,|f(\vec p)|^2 = 1$, the expectation of the spatial position operator $X_i$ is linear in n. A discrete time has been obtained in formulations of space-time noncommutativity [14,15], where such noncommutativity involves more than the simple noncommutative plane.

IV. FREE SCALAR FIELD THEORY

Following the procedures of Sec. III C, we define a field $\phi(X)$ by its Fourier components $\phi(q)$, making it an operator in both the Hilbert space on which $\phi(q)$ operates and in the $|p_0, \vec p\rangle$ Hilbert space. The latter makes it difficult to vary any proposed actions with respect to $X_\mu$, but it is possible to do a variation of the Fourier component $\phi(q)$. With differentiation defined in (34) and integration in (35) we propose the following action
$$A = \langle 0, 0|\{-[p_\mu, \phi^\dagger(X)][p^\mu, \phi(X)] - m^2\phi^\dagger(X)\phi(X)\}|0, 0\rangle = \int d^4\tilde q\,'\,d^4\tilde q\;\phi^\dagger(q')\,\phi(q)\,\langle 0, 0|\{[p_\mu, e^{iq'\cdot X}][p^\mu, e^{-iq\cdot X}] - m^2 e^{iq'\cdot X} e^{-iq\cdot X}\}|0, 0\rangle. \tag{47}$$
For q timelike (as expected, there are no solutions for q spacelike) $e^{-iq\cdot X}|0, 0\rangle = |Mq_0\sin(q/M)/q;\, M\vec q\sin(q/M)/q\rangle$, resulting in
$$\langle 0, 0|[p_\mu, e^{iq'\cdot X}][p^\mu, e^{-iq\cdot X}]|0, 0\rangle = M^2\sin^2\!\Big(\frac{q}{M}\Big)\,\langle 0, 0|e^{iq'\cdot X}\,e^{-iq\cdot X}|0, 0\rangle;$$
the matrix element in the above can be found in (37). Setting the variation of (47) with respect to $\phi(q)$ to zero yields the dispersion relation. As $0 \le q^2 \le \pi^2 M^2$, we obtain, for each $m^2$, two solutions $q^2 = m_l^2$ and $q^2 = m_h^2$, with $m_h > m_l$. For m/M small we find, in addition to the usual solution $m_l \approx m$, the solution $m_h \approx \pi M - m$; the mechanism responsible for having two solutions is reminiscent of fermion doubling [16] in lattice field theories, except that in the present situation the mass of the additional state is comparable to M and the energy is always large; as M goes to infinity this extra mass decouples. Importantly, we note that solutions of the field equations exist only for $m \le M$.

Aside from the doubling question, we may relate $\phi(q)$ for $q_0 = +\sqrt{\vec q\cdot\vec q + m_{l,h}^2}$ to annihilation operators and for $q_0 = -\sqrt{\vec q\cdot\vec q + m_{l,h}^2}$ to creation ones and express the field operators as in commuting field theories with $x_\mu$ replaced by $X_\mu$.
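Solving the dispersion relation makes the doubling explicit; the short computation below is a consistency check based on the matrix elements quoted above, with $q \equiv \sqrt{q^2}$:
$$M^2\sin^2\!\Big(\frac{q}{M}\Big) = m^2 \;\Rightarrow\; q_l = M\arcsin\frac{m}{M} \approx m, \qquad q_h = \pi M - M\arcsin\frac{m}{M} \approx \pi M - m,$$
with real solutions only for $m \le M$, in agreement with the bound quoted below.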
V. REMARKS AND CONCLUSIONS

Identifying coordinates as boost operators of the symmetry group of a space with one timelike coordinate added to the space of interest allows us to replace the usual commuting coordinates, or coordinates and time, by noncommuting ones. These dynamics are rotationally, respectively Lorentz, invariant. As discussed in the introduction, the latter claim could be valid for noninteracting theories, as usual formulations of interactions using time ordered products require that commutators of operators vanish for spacelike separations. Many body interactions can be formulated in a way to make them invariant under rotations and translations. Upon the introduction of time the resulting field theories are invariant under the full Poincaré group. The noncommutativity is governed by a parameter M and in the large M limit we recover ordinary quantum mechanics or field theory. Noteworthy results are:

(i) The Heisenberg commutation relation between position and momentum is modified for distances smaller than 1/M and/or momenta larger than M.

(ii) Eigenvalues of the coordinate operator in the time or any timelike direction are discrete and spaced by 1/M.

(iii) Wave packets cannot be localized to $\sqrt{\langle\vec X^2\rangle}$ less than 1/M.

(iv) A free field theory with mass m has a solution only for m less than M.

The last two results prevent us from packing a mass greater than M into a space of radius less than 1/M; if the noncommutativity parameter M is smaller than $M_{\rm Planck}$, this would preclude the existence of pointlike black holes, that is, black holes with radius $1/M_{\rm Planck}$ and mass $M_{\rm Planck}$.

Two topics that have not been addressed are the UV-IR connection [17] and the violation of causality when time is included. Such violations occur for noncommuting spacetime [18] or even when time is a continuous variable commuting with the space coordinates [19].
. H Groenewold, Physica (Amsterdam). 12405H. Groenewold, Physica (Amsterdam) 12, 405 (1946);
. J. E. Moyal, Proc. Cambridge Philos. Soc. 4599J. E. Moyal, Proc. Cambridge Philos. Soc. 45, 99 (1949).
. M Chaichian, P P Kulish, K Nishijima, A Tureanu, Phys. Lett. B. 60498M. Chaichian, P. P. Kulish, K. Nishijima, and A. Tureanu, Phys. Lett. B 604, 98 (2004).
J Wess, Proceedings of the BW2003 Workshop on Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard Model. the BW2003 Workshop on Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard ModelVrnjacka Banja, Serbia122J. Wess, in Proceedings of the BW2003 Workshop on Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard Model, Vrnjacka Banja, Serbia, 2003, p. 122.
. G Fiore, J Wess, arXiv:hep-th/0701078G. Fiore and J. Wess, arXiv:hep-th/0701078.
. M Bander, J. High Energy Phys. 0340M. Bander, J. High Energy Phys. 03 (2006) 040.
. L Alvarez-Gaume, J L F Barbon, R Zwicky, J. High Energy Phys. 0557L. Alvarez-Gaume, J. L. F. Barbon, and R. Zwicky, J. High Energy Phys. 05 (2001) 057.
. S Doplicher, J. Phys.: Conf. Ser. 53793S. Doplicher, J. Phys.: Conf. Ser. 53, 793 (2006).
. A P Balachandran, A Pinzul, B A Qureshi, S Vaidya, I I S Bangalore, arXiv:hep-th/0608138A. P. Balachandran, A. Pinzul, B. A. Qureshi, S. Vaidya, and I. I. S. Bangalore, arXiv:hep-th/0608138.
. H S Snyder, Phys. Rev. 7138H. S. Snyder, Phys. Rev. 71, 38 (1947).
. E Wigner, Ann. Math. 40149E. Wigner, Ann. Math. 40, 149 (1939).
. A Kempf, J. Phys. A. 302093A. Kempf, J. Phys. A 30, 2093 (1997).
. A Kempf, G Mangano, Phys. Rev. D. 557909A. Kempf and G. Mangano, Phys. Rev. D 55, 7909 (1997);
. L N Chang, arXiv:hep-th/0405059and references cited thereinL. N. Chang, arXiv:hep-th/0405059 and references cited therein.
I M Gel'fand, R A Minlos, Z Ya, Shapiro, Representations of the Rotation and Lorentz Groups and Their Applications. LondonMacMillanI. M. Gel'fand, R. A. Minlos, and Z. Ya. Shapiro, Representations of the Rotation and Lorentz Groups and Their Applications (MacMillan, London, 1963).
. M Chaichian, A Demichev, P Presnajder, A Tureanu, Eur. Phys. J. C. 20767M. Chaichian, A. Demichev, P. Presnajder, and A. Tureanu, Eur. Phys. J. C 20, 767 (2001).
. A P Balachandran, T R Govindarajan, A G Martins, P Teotonio-Sobrinho, J. High Energy Phys. 1168A. P. Balachandran, T. R. Govindarajan, A. G. Martins, and P. Teotonio-Sobrinho, J. High Energy Phys. 11 (2004) 068.
. J B Kogut, L Susskind, Phys. Rev. D. 11395J. B. Kogut and L. Susskind, Phys. Rev. D 11, 395 (1975);
. H B Nielsen, M Ninomiya, Phys. Lett. B. 105219H. B. Nielsen and M. Ninomiya, Phys. Lett. B 105, 219 (1981).
. S Minwalla, M Van Raamsdonk, N Seiberg, J. High Energy Phys. 0220S. Minwalla, M. Van Raamsdonk, and N. Seiberg, J. High Energy Phys. 02 (2000) 020.
. N Seiberg, L Susskind, N Toumbas, J. High Energy Phys. 0644N. Seiberg, L. Susskind, and N. Toumbas, J. High Energy Phys. 06 (2000) 044.
. O W Greenberg, Phys. Rev. D. 7345014O. W. Greenberg, Phys. Rev. D 73, 045014 (2006).
| [] |
[
"Network Intrusion Detection System in a Light Bulb",
"Network Intrusion Detection System in a Light Bulb",
"Network Intrusion Detection System in a Light Bulb",
"Network Intrusion Detection System in a Light Bulb"
] | [
"Liam Daly Manocchio \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n",
"Siamak Layeghy †[email protected] \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n",
"Marius Portmann \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n",
"Liam Daly Manocchio \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n",
"Siamak Layeghy †[email protected] \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n",
"Marius Portmann \nSchool of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia\n"
] | [
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia",
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia",
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia",
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia",
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia",
"School of Information Technology and Electrical Engineering\nUniversity of Queensland Brisbane\n4072QLDAustralia"
] | [] | Internet of Things (IoT) devices are progressively being utilised in a variety of edge applications to monitor and control home and industry infrastructure. Due to the limited compute and energy resources, active security protections are usually minimal in many IoT devices. This has created a critical security challenge that has attracted researchers' attention in the field of network security. Despite a large number of proposed Network Intrusion Detection Systems (NIDSs), there is limited research into practical IoT implementations, and to the best of our knowledge, no edge-based NIDS has been demonstrated to operate on common low-power chipsets found in the majority of IoT devices, such as the ESP8266. This research aims to address this gap by pushing the boundaries on low-power Machine Learning (ML) based NIDSs. We propose and develop an efficient and low-power ML-based NIDS, and demonstrate its applicability for IoT edge applications by running it on a typical smart light bulb. We also evaluate our system against other proposed edge-based NIDSs and show that our model has a higher detection performance, and is significantly faster and smaller, and therefore more applicable to a wider range of IoT edge devices. | 10.1109/itnac55475.2022.9998371 | [
"https://export.arxiv.org/pdf/2210.03254v1.pdf"
] | 252,762,554 | 2210.03254 | 08425674a5dbb121efdedca5680937b654feb945 |
Network Intrusion Detection System in a Light Bulb
Liam Daly Manocchio
School of Information Technology and Electrical Engineering
University of Queensland Brisbane
4072QLDAustralia
Siamak Layeghy †[email protected]
School of Information Technology and Electrical Engineering
University of Queensland Brisbane
4072QLDAustralia
Marius Portmann
School of Information Technology and Electrical Engineering
University of Queensland Brisbane
4072QLDAustralia
Network Intrusion Detection System in a Light Bulb
Index Terms-Network Intrusion Detection System (NIDS)Machine Learning (ML)Internet of Things (IoT)Edge Com- putingESP32 WROOM
Internet of Things (IoT) devices are progressively being utilised in a variety of edge applications to monitor and control home and industry infrastructure. Due to the limited compute and energy resources, active security protections are usually minimal in many IoT devices. This has created a critical security challenge that has attracted researchers' attention in the field of network security. Despite a large number of proposed Network Intrusion Detection Systems (NIDSs), there is limited research into practical IoT implementations, and to the best of our knowledge, no edge-based NIDS has been demonstrated to operate on common low-power chipsets found in the majority of IoT devices, such as the ESP8266. This research aims to address this gap by pushing the boundaries on low-power Machine Learning (ML) based NIDSs. We propose and develop an efficient and low-power ML-based NIDS, and demonstrate its applicability for IoT edge applications by running it on a typical smart light bulb. We also evaluate our system against other proposed edge-based NIDSs and show that our model has a higher detection performance, and is significantly faster and smaller, and therefore more applicable to a wider range of IoT edge devices.
I. INTRODUCTION
Internet of Things (IoT) edge devices are finding increasing use and prevalence in powerful device ecosystems, ranging from smart homes to remote sensor networks. There are also industrial scale IoT systems (IIoT) which have significantly higher levels of complexity than ordinary IoT networks. It is estimated that there are over 14 billion IoT endpoints in 2022 [1]. Because of their widespread usage, and their applications in commercial industrial infrastructure, they have become the target of various cyberattacks.
Despite the fact that IoT devices are used to monitor and control 'things' from home security systems, through to medical monitoring devices and industrial infrastructure, they often do not have the same level of protection that can be achieved on servers and workstations. This is to a large extent due to their limited compute and energy resources, and their application in a diverse range of networks that make it more difficult to implement cybersecurity controls.
Figure 1. A photo of a typical consumer grade 'Tuya' compatible smart light bulb that we use for our NIDS light bulb demonstration 2

While there are many documented cases of compromise of IoT edge devices, including the incredibly damaging Mirai botnet in 2016 that compromised over 600K edge and embedded devices [2], the average consumer does not have access to high grade network intrusion detection systems (NIDSs) that could be used to detect and protect against these types of attacks.
This security risk has not gone unnoticed in the research community, and there are works proposing several IoT compatible NIDSs, which we discuss in this paper. However, many of these proposals are theoretical and do not evaluate their models on actual edge hardware. Some works were also tested on relatively high-power edge devices, such as Google's Edge TPU or Raspberry Pi [3], which are unlikely to be widely deployed in typical IoT devices and smart home networks. A number of frameworks exist with the aim of bringing machine learning to the edge, such as TensorFlow Lite [4]. However, NIDS models built using these frameworks are often still inaccessible to low-power microcontrollers [5]. The models in the literature we surveyed, based on TensorFlow Lite, have a large memory footprint, leaving little room for other functionality on typical low-end IoT devices.
To address this gap, this research aims to push the bounds on what is possible in a low compute power environment, and to demonstrate that high accuracy network intrusion detection can be brought to the wider IoT domain. We propose, develop and evaluate a high performance NIDS capable of running on the lowest power conventional edge microprocessors. We show that the performance of this NIDS is comparable to the existing approaches proposed in the literature, while being significantly faster and more lightweight. To further demonstrate the applicability of the proposed NIDS at the IoT edge, we deploy it on a typical smart light bulb, and demonstrate the world's first NIDS in a light bulb. A similar smart light bulb is shown in Figure 1. To do this, we replace the ESP8266 microcontroller on a consumer smart light bulb, with a transplant ESP8266 microcontroller featuring our modified NIDS firmware. Then as a fun way to demonstrate our NIDS running on the smart bulb, we control the colour of the light emitted by the bulb, i.e. green during normal operation, and red when an attack is detected, shown in Figure 2.
The key contribution of this paper is the proposal, implementation, and evaluation of an extremely lightweight NIDS, capable of functioning on the lowest-power IoT devices. We have made the code publicly available here 3 . Based on our experimental evaluation, our system outperforms the state-of-the-art IoT NIDS proposals on IoT hardware in terms of detection accuracy, detection speed and resource requirements.
II. RELATED WORKS
There have been several efforts to develop lightweight and scalable machine learning systems for use in IoT devices and networks. There are many previous works that adopt a signature-based approach for intrusion detection. However, signature based detection suffers from the limitation that signatures must be manually updated. Since this paper focuses on ML-based NIDSs, these works have not been included in our discussion.
In terms of machine learning based NIDSs, these can be grouped into shallow learning and deep learning. The authors in [6] and [7] surveyed several approaches to NIDSs in IoT devices, and found a variety of works that used shallow learning to great success. For example, [8] evaluated five classifiers following feature selection: PCA-based anomaly detection, a local deep SVM, a logistic regression and a boosted decision tree. Across three benchmark datasets, the authors showed 100% accuracy for all approaches other than PCA. There are also several proposed approaches in the literature that utilise deep learning models. For instance, [9] uses a multi-layer perceptron (MLP) model, which is a fully connected dense artificial neural network, to achieve 99.4% accuracy. There are also several deep unsupervised approaches, such as [10], which showed that using an autoencoder, a sufficiently low reconstruction loss could be achieved for networking data to facilitate an IoT compatible anomaly detection system. More advanced forms of neural networks, such as graph neural networks, have also been proposed for IoT devices [11], and these have achieved F1 scores of 0.81 on two IoT benchmark datasets. However, the systems discussed here that have achieved 99%+ accuracy were not tested on real IoT hardware.
There has been a limited number of works that have tested proposed NIDSs on real IoT hardware. The use of Google's Edge TPU platform has been explored for use with NIDS models [3]. Here, the authors compared the performance of a convolutional neural network (CNN) running on a Google Edge TPU with that of a Raspberry Pi (Cortex-A53), and demonstrated fast performance as well as 0.98+ F1 scores. However, both Edge TPU and Raspberry Pi have significantly more processing power than the average IoT smart device. [5] is the most relevant to our work, since it implements a deep learning based NIDS on several lower power hardware platforms [5], including ESP32-WROOM-32, ESP8266 and ATmega328p. The authors used TensorFlow Lite for their approach, which allowed them to bring a pre-trained neural network model to various microcontrollers [4]. They were able to achieve 96.7% detection accuracy on the ESP32-WROOM-32. However, the proposed model was too large to be deployed on low-end devices such as ATMega328p. Furthermore, the authors' ESP8266 implementation used nearly 100% of the device's memory, making it impractical for parallel deployment to an existing low-end IoT device, where a significant amount of memory is likely required for the code and data of the devices' core functionality.
There exist several solutions, outside of TensorFlow Lite, that allow machine learning models to be brought to microcontrollers. Of particular interest here are solutions that can convert models developed in scikit-learn [12], another widely used machine learning framework, to microcontroller compatible code. These tools, which include sklearn-porter, and EmbML [13], are capable of porting pre-trained scikitlearn models to microcontrollers. However, unlike Tensorflow Lite which is primarily focused on deep learning models, scikit-learn features many shallow learning approaches. To the best of our knowledge, no previous work has used these techniques to take pre-trained NIDS models and run them on IoT hardware.
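To illustrate the porting route described above, the sketch below exports a trained scikit-learn decision tree to plain C with sklearn-porter. It is a hedged example rather than the exact toolchain used later in this paper: the Porter calls shown follow the commonly documented API, and the training data placeholders (X_train, y_train) are assumed to exist.

```python
# Export a trained decision tree to C source that can be compiled into
# microcontroller firmware. Assumes `pip install sklearn-porter`.
from sklearn.tree import DecisionTreeClassifier
from sklearn_porter import Porter

clf = DecisionTreeClassifier(max_depth=6)
clf.fit(X_train, y_train)            # X_train, y_train: NetFlow features/labels

porter = Porter(clf, language='c')   # other targets, e.g. 'java', 'js', also exist
c_source = porter.export(embed_data=True)
with open("nids_tree.c", "w") as f:  # compile this into the device firmware
    f.write(c_source)
```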
In summary, although there are many works that demonstrate high accuracy for IoT NIDSs when tested on benchmark data, there has been relatively limited experimental research into the practical implementation and deployment of NIDSs on IoT edge hardware. The research that has been conducted to date delivers NIDSs that either require too much processing power to operate, or would utilise too much of device resources to be applicable as part of a smart device with inbuilt intrusion detection functionality. Our work presented in this paper aims to address this gap by proposing, implementing and evaluating a high-performance ML-based NIDS, capable of running on resource-constrained edge IoT devices.
III. LIGHTBULB NIDS

A. Datasets
For training and evaluating the performance of ML-based NIDSs, data is required. For this, there are publicly available and highly cited benchmark datasets. These datasets are usually captures of network data generated synthetically or on test beds, designed for NIDS research. Since the main approaches to obtain network traffic include packet-capture (pcap) and flow-based traffic collection, publicly available NIDS datasets are often represented in one or both of these formats. In the packet-based approach, the full packet headers and payloads are captured as they are sent across the network. In the flowbased approach, only aggregate information about the network traffic is collected, based on the sequence of packets between two endpoints. There are many formats of flow based data.
As a first step in the design of our proposed NIDS for edge IoT, it is important to decide on the format of the input data. Continuous packet-based network monitoring is very resource intensive, and is typically not feasible for large scale networks, particularly large-scale IoT networks, where thousands of devices may communicate their status in a short time period. Flow-based network traffic monitoring, on the other hand, is more scalable and includes less information and thus has fewer security and privacy issues. Accordingly, we chose the NetFlow as the data format in this study, which is also consistent with prior works in this space [5] [3].
This paper considers five different widely used and highly cited NIDS datasets.
1) ToN-IoT, an IoT and industrial IoT dataset featuring 'various attacking techniques, such as DoS, DDoS and ransomware, against web applications, IoT gateways and computer systems across the IoT/IIoT network' [14]

2) BoT-IoT, an IoT dataset featuring various botnet traffic, including 'DDoS, DoS, OS and Service Scan, Keylogging and Data exfiltration attacks' [15]

3) MQTT-IoT-IDS2020 (MQTT), an IoT dataset where 'five scenarios are recorded: normal operation, aggressive scan, UDP scan, Sparta SSH brute-force, and MQTT brute-force attack.' [16]

4) UNSW-NB15, an NIDS dataset for traditional networks featuring 'a hybrid of real modern normal activities and synthetic contemporary attack behaviours' with 'nine types of attacks, namely, Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode and Worms' [17]

5) CSE-CIC-IDS2018, an NIDS dataset for traditional networks including 'seven different attack scenarios: Brute-force, Heartbleed, Botnet, DoS, DDoS, Web attacks, and infiltration of the network from inside' [18]

These datasets, which form the basis of our evaluation, represent a range of several classes of network traffic and attack types. These datasets are available in different feature sets. For the BoT-IoT, ToN-IoT, UNSW-NB15 and CSE-CIC-IDS2018 datasets, we used the versions converted to a standardised flow format, as proposed by [19]. Works using these derivative datasets have shown comparable performance to works using the datasets in their original packet capture format. For the MQTT dataset, we use the published bidirectional flow format data released with the original capture.
B. Model Choice
Finding a suitable machine learning model that can satisfy the requirements of an efficient and lightweight NIDS at the IoT edge is critical. While Deep Learning (DL) models have been shown to be very successful in the implementation of ML-based NIDSs in general networks [5], they usually need a large amount of memory and compute resources. As such, we considered several Shallow Learning approaches that are known to be less resource intensive, and we decided to use a decision tree model based on its generally low complexity and high classification performance.
This model choice is supported by the literature; several related works demonstrate that tree-based models can be effectively used for NIDSs, including [20], [8] and [21]. In [20], a 95.25% detection accuracy is achieved with a random forest on the Kyoto 2016 dataset. In [8], an accuracy of 100% is achieved on three benchmark datasets using a boosted tree. Finally, in [21], a decision tree with sensitive pruning is applied to the GureKDDCup dataset to achieve 99%+ accuracy. In addition to these previous works, three other factors motivated our use of decision trees:
1) The results of our early experimentation showed that the accuracy of tree-based models is comparable to that of larger machine learning models, such as random forests [20], and even neural networks [5].
2) It can be expected that the number of instructions required to implement a decision tree would be far fewer than other types of shallow machine learning models such as random forests, while also balancing predictive power versus 'too simple' models such as linear support vector classifiers.
3) Decision trees have decision paths that can be easily analysed for model interpretability. They also translate logically to code that can be easily read and interpreted by programmers. This property was noted in [21] as an advantage.
Despite the use of tree based models in previous works, we were unable to find an implementation of these models on actual IoT hardware. We therefore aimed to investigate the implementation of decision trees, with the goal of balancing low complexity and resource requirements with high detection performance, both in terms of accuracy and speed.
C. Pre-processing

Figure 3 shows the basic stages of our approach. This involves training a machine learning model using benchmark datasets offline, and then testing this pre-trained model on real IoT edge hardware. As can be seen, the first stage in this pipeline is pre-processing, which involves transforming data into a format that is suitable for machine learning models. Decisions regarding pre-processing must take into account the data format, as well as the type of model being used.
As previously discussed, for the purposes of this work, we ingest data in flow-based format. Despite this format being a very common and useful representation for network administrators, it typically requires some processing to make it suitable for use in a machine learning model. Network data in flow format has several fields that are categorical. For example, TCP and UDP port numbers. Although ports 22 (SSH) and 25 (SMTP) are close numerically, they represent vastly different protocols. Whereas port 8080 and port 443 are both commonly used for HTTPS despite the larger numerical difference. Certain machine learning models, such as neural networks, typically function best when the distances between values represent some contextual distance. There are several standard approaches to pre-processing that can be applied to categorical data to solve these issues, such as ordinal, frequency, or target encoding. In addition to the categorical fields, there are several numerical fields in flow based data, such as the number of bytes or packets. Numerical data is often standardised prior to use in machine learning, to ensures that the inter-feature variance is equal. This can help in the training of certain classes of machine learning models.
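As a concrete illustration of two of the standard steps named above, the snippet below frequency-encodes a categorical port field and standardises a numeric byte counter. It is illustrative only; the column names are made up for the example, and, as explained next, our final pipeline skips these steps.

```python
# Frequency encoding of a categorical field plus standardisation of a
# numeric field, two common NIDS pre-processing steps.
import pandas as pd
from sklearn.preprocessing import StandardScaler

flows = pd.DataFrame({"dst_port": [443, 22, 443, 8080],
                      "in_bytes": [512, 90, 1400, 300]})

freq = flows["dst_port"].value_counts(normalize=True)
flows["dst_port_freq"] = flows["dst_port"].map(freq)   # frequency encoding

flows[["in_bytes"]] = StandardScaler().fit_transform(flows[["in_bytes"]])
print(flows)
```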
However, as discussed, pre-processing must also be done with consideration of the model being trained. Because tree-based models work by splitting data, and since this can be done at an arbitrary point regardless of scale, they are somewhat robust to data that has not been pre-processed, both in terms of categorical data and unscaled numerical data. This is in contrast to neural networks, which typically require significant pre-processing. We tested tree-based models with and without pre-processing and found that the performance was comparable; these results are shown in Table I. In later investigations, we found that pre-processing also increased the inference time. Because of this, we opted not to perform pre-processing. As the flow format data we are using expresses all features as numeric data types, these can be directly handled by decision tree models.
In order to ensure stable model training offline, we ensured that the training data had a balanced representation of the benign and attack classes. Training on imbalanced data is a significant challenge for many models, and we address this by using random under-sampling to collect a balanced training dataset that is smaller than the overall dataset. However, we perform a separate cross-validated evaluation step to ensure that all samples are considered when performing evaluation, using metrics that are resistant to imbalance such as balanced accuracy or F1 score.
Figure 4. Averaged performance of various decision tree depths when trained on the BoT-IoT training split, versus their balanced accuracy on the holdout data split. We can see that models with a depth greater than 6 all converge to an accuracy of near 100% performance. The line at 98.5% shows our initial acceptance criteria, so we can rule out depths below 4.
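A minimal sketch of the random under-sampling step described above, assuming labelled arrays X and y are already loaded; this is an illustration rather than the exact routine used in this work:

    import numpy as np

    def random_undersample(X, y, seed=0):
        """Randomly under-sample every class down to the size of the rarest class."""
        rng = np.random.default_rng(seed)
        classes, counts = np.unique(y, return_counts=True)
        n = counts.min()
        idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n, replace=False)
                              for c in classes])
        rng.shuffle(idx)
        return X[idx], y[idx]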
D. Training and Model Hyperparameters
To train the models, we used the Python language and the scikit-learn library for machine learning models. Scikit-learn is a commonly used machine learning library in scientific research, and is an open source tool accessible to machine learning researchers [12].
Within scikit-learn, we used the Decision Tree Classifier. The hyperparameters of our decision tree are shown in Table II; they represent the library's default values, except for the ones specifically discussed below. There are several parameters that can be varied when using decision trees, but the most important one is the max depth of the tree. We performed an initial systematic search of the max-depth parameter space between 2 and 12, on the BoT-IoT dataset, to determine which candidate depths had an acceptable balanced accuracy. Here we used traditional holdout evaluation rather than cross validation. The results are shown in Figure 4. This allowed us to pick 6 and 10 as max-depth targets for investigation: 6 being the lowest depth with a reasonable accuracy of >98.5%, and 10 at the point of diminishing returns where all greater depths were near 100% accuracy. We also tested depths 5 and 12, which lie slightly outside this range.
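The depth sweep can be reproduced with a few lines of scikit-learn, as in this sketch; synthetic data stands in for the BoT-IoT flows, and the 0.0001 pruning value anticipates the choice discussed next:

    from sklearn.datasets import make_classification
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the balanced BoT-IoT training split.
    X, y = make_classification(n_samples=20000, n_features=12, random_state=0)
    X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)

    for depth in range(2, 13):  # systematic search between max-depth 2 and 12
        clf = DecisionTreeClassifier(max_depth=depth, ccp_alpha=0.0001,
                                     random_state=0).fit(X_tr, y_tr)
        bacc = balanced_accuracy_score(y_ho, clf.predict(X_ho))
        print(f"depth={depth:2d}  balanced accuracy={bacc:.4f}")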
Our second non-standard parameter choice is the Cost Complexity Pruning (CCP) alpha, which is only relevant if cost complexity pruning is used. Cost complexity pruning is a common method used to address overfitting, and has also been shown experimentally to produce significantly smaller decision trees with similar detection accuracy [22]. This was confirmed during our initial investigation, in which we found that using a CCP alpha value of 0.0001 reduced the number of features and the model size without significantly diminishing the accuracy.
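The effect of pruning on tree size can be checked directly, as in the following sketch (using the same synthetic stand-in data as above):

    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=20000, n_features=12, random_state=0)

    unpruned = DecisionTreeClassifier(max_depth=12, random_state=0).fit(X, y)
    pruned = DecisionTreeClassifier(max_depth=12, ccp_alpha=0.0001,
                                    random_state=0).fit(X, y)
    # Cost complexity pruning typically removes a large fraction of the nodes.
    print(unpruned.tree_.node_count, "nodes ->", pruned.tree_.node_count, "nodes")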
One of the other common parameter choices is the Max Features parameter, which is used during the initial tree fitting to limit the number of features to split on. However, our tests indicated that leaving this uncapped yielded the best results. We also used the Gini index, with a splitting strategy of best, which is the default for the scikit-learn library. The Gini index can be thought of as measuring the impurity of a given set of observations, with 0 indicating that all observations belong to a single class, and values approaching 1 indicating an even spread across many classes. It is used to determine how to split a tree as part of the Classification And Regression Trees (CART) algorithm, which at each node chooses the feature to split on so as to achieve the highest purity. Earlier decision tree algorithms such as ID3 or C4.5 achieved this by maximising the information gain at each node. Scikit-learn [12] utilises an optimised version of the CART algorithm.
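For reference, the Gini impurity of a label set is one minus the sum of squared class proportions; a tiny sketch:

    import numpy as np

    def gini(labels):
        """Gini impurity: 0 for a pure node; 1 - 1/k for a uniform spread over k classes."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    print(gini([0, 0, 0, 0]))  # 0.0, a pure node
    print(gini([0, 1, 0, 1]))  # 0.5, the maximum for two classes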
E. Transferring a Trained Model to a Microcontroller
To convert our models to C code for use by a microcontroller, we developed a source-to-source converter in Python that accepts a scikit-learn tree model and converts the tree into C code. This C code is then injected into a template that can be compiled for a target microcontroller. This template includes the functionality to read the flow data from the serial port, as well as the ability to time the model. An example of this conversion is shown in Figure 5. Our approach is different to those used in other scikit-learn frameworks, as we utilise nested if statements to express decision trees. Other approaches utilise recursion for each depth as well as a global array of thresholds, but we expect that our approach utilising nesting will compile to a smaller number of instructions when used for smaller decision trees such as those in our proposed NIDS.
Figure 5. Our source-to-source translator accepts a scikit-learn decision tree, and uses the output from the scikit-learn export_text function to convert this into a C decision tree.
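A condensed sketch of this kind of converter follows. Unlike our actual tool, which parses the export_text output, this version walks the fitted tree_ structure directly; the function name and argument convention are illustrative:

    def tree_to_c(clf, feature_names, fn_name="predict"):
        """Emit a C function of nested if/else statements from a fitted decision tree."""
        t = clf.tree_
        lines = [f"int {fn_name}(const float *f) {{"]

        def recurse(node, depth):
            pad = "    " * depth
            if t.children_left[node] == -1:  # leaf: return the majority class
                lines.append(f"{pad}return {int(t.value[node].argmax())};")
            else:
                i, thr = t.feature[node], t.threshold[node]
                lines.append(f"{pad}if (f[{i}] <= {thr:.6f}f) {{ /* {feature_names[i]} */")
                recurse(t.children_left[node], depth + 1)
                lines.append(f"{pad}}} else {{")
                recurse(t.children_right[node], depth + 1)
                lines.append(f"{pad}}}")

        recurse(0, 1)
        lines.append("}")
        return "\n".join(lines)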
IV. EXPERIMENTAL METHODOLOGY
In this section, we discuss how we evaluate our proposed approach.
A. Microcontroller
We evaluate our work against three low-power microcontrollers, shown in Table III. We refer to the ESP32 Node MCU as ESP32, the NodeMCU Lua Lolin V3 as ESP8266, and the Arduino Uno as ATMega328p, based on their chipsets. These represent three of the lowest-power IoT devices in widespread use, with the ESP8266 having found application in all manner of smart devices, from smart light bulbs to entry control systems. The ESP32 is a slightly more powerful device, but still finds use in higher-end IoT devices. The ATMega328p, on the other hand, is the chip powering several Arduino products, which have found widespread use in the hobbyist space. Despite its low power, there exist several shields that can enable the Arduino Uno to connect to the internet via Ethernet or Wi-Fi. We chose these devices because they allow us to compare our system's performance on typical IoT devices.
Figure 6. Our setup uses the onboard device clock to measure the inference time. To do this, we send flow records via serial and convert this into numerical features on the device. We then call the predict method on these features 100,000 times, timing how long this takes, before sending this timing data to the computer and proceeding to the next record.
B. Measuring Inference Time
Measuring inference time is relatively challenging, given that the onboard timer cannot necessarily be expected to have a reliable microsecond resolution. In addition, the 'micros()' clock returns an integer, which does not allow us to measure fractions of microseconds. To adjust for this inaccuracy, as can be seen in Figure 6, we repeated calls to the predict function 100,000 times for each flow record, and then divided the total time taken to get the average inference time per record. Here, caution needs to be taken to ensure that the inference method is not optimised out during compilation, and that the results from repeated function calls are not cached at the CPU level during execution.
In our experiment, we send flow records to the device via the serial port for convenience. In order to not distort the inference time measurement, we do not consider the transfer time.
In a practical deployment of our NIDS in an IoT device, flow records would be generated in real-time from the network traffic observed on the Wi-Fi interface, and exported locally on the IoT device before being considered by the ML-based traffic classifier. However, because we are using benchmark datasets for testing, we instead send these records directly to the device.
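On the desktop side, the measurement driver can be as simple as the following sketch. The serial port name, baud rate, and record format are assumptions, and the device is assumed to reply with the total time taken for 100,000 predict calls:

    import serial  # pyserial

    flow_records = ["6,443,52100,1500,10"]  # hypothetical comma-separated flow features

    ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=120)
    for record in flow_records:
        ser.write((record + "\n").encode())          # send one flow record to the device
        total_us = float(ser.readline().decode())    # total time for 100,000 predictions
        print(f"{total_us / 100_000:.3f} us per inference")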
C. Measuring Accuracy
For accuracy measurement, we used the same model parameters as when testing inference time; however, we apply repeated stratified cross validation with 5 splits and 5 repeats, measuring the balanced accuracy and F1 score of each of these splits. This splits the dataset into 5 different balanced groups, computes the validation accuracy holding out each of these groups in turn, averages this, and then averages this across 5 repeats of other random groupings. This provides the most robust and reliable accuracy data, which is more likely to detect model overfitting. Because cross validation does not by default yield a single trained model that can be deployed, for timing results we instead split data into a training and testing dataset, using standard holdout evaluation for accuracy. However, these scores are all in the 99.5%+ range even for model depths of 5, as can be seen from the performances found in our initial investigation in Figure 4.
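This evaluation protocol maps directly onto scikit-learn's model selection utilities, as in this sketch (synthetic stand-in data again):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=20000, n_features=12, random_state=0)

    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
    clf = DecisionTreeClassifier(max_depth=12, ccp_alpha=0.0001, random_state=0)
    scores = cross_validate(clf, X, y, cv=cv,
                            scoring=["balanced_accuracy", "f1"])
    print(scores["test_balanced_accuracy"].mean(), scores["test_f1"].mean())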
V. RESULTS
A. Comparison to other Source-to-source Converters
We begin by evaluating our choice to write a custom source-to-source converter for transforming the scikit-learn decision tree to a format that can be used on a microcontroller. As discussed in the literature review, other tools do exist for the conversion of scikit-learn machine learning models to C code. However, we hypothesised that a lean approach focused on small decision trees could yield more optimised code compared to a general library. To test this, we compared our code transformer to sklearn-porter in terms of program size as well as inference time. The results are shown in Table IV.
Here we compared the same depth 12 decision tree model, with the same bare-bones boilerplate code, substituting only the predict method with either the predict method generated by sklearn-porter or by our converter. We used the GCC compiler with the '-O0' option to compare un-optimised program sizes, as well as '-O3' to compare sizes after optimisation. Timing data is from the deployment of this version of the model to the ESP32 with the ToN-IoT dataset at depth 12, although Table IV shows a different trained model than Table V. We can see from Table IV that our program has a significantly smaller memory footprint than sklearn-porter, as well as a lower inference time. We believe this is due to the logic used for the source-to-source conversion. For our solution, with a relatively low decision tree depth, our nested if-then logic produces relatively simple and inexpensive chains of comparison instructions. In contrast, sklearn-porter uses four integer arrays to store child nodes, as well as a large double array for thresholds, which contributes to a larger memory footprint. In addition, it uses recursive iteration for each layer of the tree, and at each layer iterates over the entire class array. This produces more condensed code in terms of number of lines, but may require additional operations for smaller trees compared to our approach. However, our approach did have a higher Block Starting Symbol (BSS) size, which indicates that we have more global variables. This is because we initialise a set of global variables for each feature in a flow record, and process the incoming flows directly into these variables to remove the need for parameter passing. This is still well under 10% of the dynamic memory of the ATMega328p, which is one of the lowest-power practical microcontrollers, so this is not a significant factor.
B. Detection Accuracy and Performance
Table V shows the inference time per flow for our model, for each of the three considered benchmark datasets. We can see that CPU speed is reflected in inference time, as is the depth, although the impact of depth is minor. Finally, the results are roughly consistent between datasets, with ToN-IoT having the longest inference times. Table VI shows the cross-validated detection accuracy in terms of balanced accuracy and F1 scores for our model, for each of the five datasets. We can see that for most datasets the accuracy improves with model depth, but this is only a small improvement, and that for a depth 12 model, 99%+ accuracy is achieved for all three IoT datasets, and 98%+ for the traditional network NIDS datasets. Based on the accuracy and inference time results, we decided to compare our model at depth 12 to models in the related works. Table VII shows these key metrics compared with two related works. We give the average inference time of our depth 12 model across all three IoT datasets as per Table V. We also use accuracy results from our depth 12 model as per Table VI. We only consider the IoT-specific datasets for comparison. Our model achieved a higher accuracy than other approaches, even those on more powerful hardware. Additionally, the results of our experiments show that our decision tree was able to function at a significantly faster speed than the related works. For example, our model on the ESP32 had an inference time of 0.89 microseconds, whereas [5] took almost twice as long. Also, the accuracy of our model was 99.92% on the MQTT dataset, compared to 97.21% for [5] on the same dataset. Our solution on the ESP32 is even faster than the models from [5] and [3] which ran on the significantly more powerful Raspberry Pi. In addition, [5] was unable to fit a machine learning model on the ATMega328p, and their ESP8266 model had a lower accuracy due to the significant model compression used. Our uncompressed model was able to fit on both these devices with 99%+ accuracy, and a sub-millisecond inference time on the ATMega328p.
VI. CONCLUSION
In this paper, we present a highly efficient machine learning based NIDS that is deployable on extremely resource constrained IoT devices, such as a typical smart light bulb. Our experimental evaluation shows that our system outperforms the relevant state-of-the-art IoT NIDS solutions in terms of both detection accuracy and speed. Another key benefit of our solution is its extremely low memory footprint. For example, the size of our model at export occupies only around 10% of the program space of the highly popular ESP8266 microcontroller, compared to the near 90% utilisation of a comparable TensorFlow Lite model [5]. This means that our model can be deployed on low-end IoT devices as an add-on service via a software upgrade. This opens up new possibilities to provide enhanced low-cost security services for IoT networks, which are under increasing threat of cyberattack.
Figure 2. An example of our light bulb NIDS running on an ESP8266 microcontroller on a modified consumer smart light bulb. Green indicates normal traffic, whereas red indicates an attack has been detected.
Figure 3. Our approach begins with the offline pre-processing of data and training the machine learning model, followed by the deployment of this pre-trained model to an IoT device which can ingest flow data in real-time.
Table I
PERFORMANCE OF A LOW MAX-DEPTH DECISION TREE MODEL, WITH AND WITHOUT PRE-PROCESSING

Dataset (Balanced Accuracy)   With Pre-processing   No Pre-processing
ToN-IoT                       98.75%                98.70%
BoT-IoT                       98.70%                98.70%
Table II
HYPERPARAMETERS CHOSEN FOR OUR DECISION TREE MODEL

Hyperparameter   Value
Max Depth        6-10
Max Features     Uncapped
Criterion        Gini
Splitter         Best
CCP Alpha        0.0001
Table III
CHARACTERISTICS OF THE THREE DEVICES USED IN THIS WORK.

IoT Device             Chipset            Processor   SRAM
ESP32 NodeMCU          ESP32-WROOM-32     240MHz      520KB
NodeMCU Lua Lolin V3   ESP8266MOD 12-F    80MHz       64KB
Arduino Uno            ATmega328P         16MHz       2KB
[Figure 6 diagram: the desktop sends a flow record over serial; the microcontroller reads the next record from serial, starts a timer, calls the model predict method on the loaded record (repeated 100,000x), stops the timer, and writes the timing data back over serial.]
Table IV
OUR SOLUTION WHEN COMPARED TO SKLEARN-PORTER FOR TRANSLATING THE SAME DEPTH 12 DECISION TREE TO C CODE

Approach         Inference Time / Flow   Size -O0 (-O3)    BSS -O0 (-O3)
Our System       1.53μs                  4174B (3414B)     96B (96B)
sklearn-porter   3.40μs                  8902B (8670B)     8B (4B)

Size is measured with the Linux tool 'size' after compilation with GCC at optimisation level 0 (-O0) and 3 (-O3), respectively. Here, BSS (block starting symbol) size can be considered the size of global variables.
Table V
THE AVERAGE INFERENCE TIME PER FLOW FOR VARYING MODEL DEPTHS

IoT Device    Depth   MQTT      BoT-IoT    ToN-IoT
ESP32         5       0.54μs    0.75μs     0.91μs
ESP32         6       0.62μs    0.76μs     0.99μs
ESP32         10      0.64μs    0.76μs     1.27μs
ESP32         12      0.64μs    0.76μs     1.33μs
ESP8266       5       0.99μs    1.32μs     1.48μs
ESP8266       6       1.14μs    1.33μs     1.47μs
ESP8266       10      1.13μs    1.34μs     2.02μs
ESP8266       12      1.13μs    1.37μs     2.77μs
ATMega328p    5       8.76μs    12.40μs    15.89μs
ATMega328p    6       12.19μs   12.33μs    17.71μs
ATMega328p    10      12.22μs   12.30μs    22.87μs
ATMega328p    12      12.22μs   12.26μs    24.40μs
Table VI
THE BALANCED ACCURACY (BACC.) AND F1 SCORES (F1 IS EXPRESSED AS A PERCENT FOR CONVENIENCE) OF THE CROSS VALIDATED RESULTS ON BOTH TRADITIONAL AND IOT DATASETS FOR VARIOUS MODEL MAX DEPTHS. BEST RESULT IS HIGHLIGHTED IN BOLD.

               Traditional IDS Datasets                 IoT IDS Datasets
Model    CSE-CIC-IDS2018      UNSW-NB15          ToN-IoT            BoT-IoT            MQTT-IoT-IDS2020
Depth    BAcc.     F1         BAcc.    F1        BAcc.    F1        BAcc.    F1        BAcc.    F1
5        97.07%    96.98%     99.76%   99.76%    95.73%   95.77%    99.90%   99.90%    99.74%   99.74%
6        97.78%    97.73%     99.76%   99.76%    96.30%   96.40%    99.94%   99.94%    99.75%   99.75%
10       98.09%    98.05%     99.76%   99.76%    98.79%   98.80%    99.96%   99.96%    99.93%   99.93%
12       98.13%    98.09%     99.76%   99.76%    99.03%   99.04%    99.36%   99.36%    99.92%   99.92%
Best     98.13%               99.76%             99.03%             99.96%             99.92%
Acc.
Table VII
THE INFERENCE TIME AND PERFORMANCE OF OUR DEPTH 12 MODEL VERSUS OTHER MODELS IN THE LITERATURE

Solution                                   Device           Clock Speed      Model           Inference Time          Accuracy                                    Datasets
Lightbulb NIDS                             ATMega328p       16MHz            Decision Tree   15.80μs                 99.36% (BoT), 99.03% (ToN), 99.92% (MQTT)   BoT-IoT, ToN-IoT, MQTT
Lightbulb NIDS                             ESP8266          80MHz            Decision Tree   1.50μs                  99.36% (BoT), 99.03% (ToN), 99.92% (MQTT)   BoT-IoT, ToN-IoT, MQTT
Lightbulb NIDS                             ESP32-WROOM-32   240MHz           Decision Tree   0.89μs                  99.36% (BoT), 99.03% (ToN), 99.92% (MQTT)   BoT-IoT, ToN-IoT, MQTT
A Lightweight Optimized Deep Learning [5]  ATMega328p       16MHz            Neural Net.     Did not fit on device   -                                           MQTT
A Lightweight Optimized Deep Learning [5]  ESP8266          80MHz            Neural Net.     ~2μs                    96.69%                                      MQTT
A Lightweight Optimized Deep Learning [5]  ESP32-WROOM-32   240MHz           Neural Net.     ~2μs                    97.21%                                      MQTT
A Lightweight Optimized Deep Learning [5]  Cortex-A53       [email protected]         Neural Net.     ~1μs                    99.74%                                      MQTT
Exploring Edge TPU for ... [3]             Cortex-A53       [email protected]         CNN (6000kb)    25-200ms                98.4%                                       ToN-IoT
Exploring Edge TPU for ... [3]             Edge TPU         ASIC 4T op/sec   CNN (6000kb)    1-10ms                  98.4%                                       ToN-IoT
This is a picture of a newer model that uses the Tuya WB2L SoC; we use an earlier version with the ESP8266 microcontroller.
Code available at https://rft.io/lightbulb
[1] M. Hasan, "Number of connected IoT devices growing 18% to 14.4 billion globally," 1 2022.
[2] M. Antonakakis et al., "Understanding the Mirai Botnet," USENIX Association, 2017.
[3] S. Hosseininoorbin, S. Layeghy, M. Sarhan, R. Jurdak, and M. Portmann, "Exploring Edge TPU for Network Intrusion Detection in IoT," 3 2021.
[4] M. Abadi et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems," 2016.
[5] I. Idrissi, M. Azizi, and O. Moussaoui, "A Lightweight Optimized Deep Learning-based Host-Intrusion Detection System Deployed on the Edge for IoT," International Journal of Computing and Digital Systems, vol. 11, no. 1, 1 2022.
[6] N. Chaabouni, M. Mosbah, A. Zemmari, C. Sauvignac, and P. Faruki, "Network Intrusion Detection for IoT Security Based on Learning Techniques," IEEE Communications Surveys and Tutorials, vol. 21, no. 3, pp. 2671-2701, 7 2019.
[7] A. Tabassum, A. Erbad, and M. Guizani, "A survey on recent approaches in intrusion detection system in IoTs," 2019 15th International Wireless Communications and Mobile Computing Conference (IWCMC 2019), pp. 1190-1197, 6 2019.
[8] A. Alhowaide, I. Alsmadi, and J. Tang, "Towards the design of real-time autonomous IoT NIDS," Cluster Computing, 2021.
[9] E. Hodo et al., "Threat analysis of IoT networks Using Artificial Neural Network Intrusion Detection System," 2016 International Symposium on Networks, Computers and Communications (ISNCC 2016), 4 2017.
[10] M. Lopez-Martin, B. Carro, A. Sanchez-Esguevillas, and J. Lloret, "Conditional Variational Autoencoder for Prediction and Feature Recovery Applied to Intrusion Detection in IoT," Sensors (Basel, Switzerland), vol. 17, no. 9, 9 2017.
[11] W. W. Lo, S. Layeghy, M. Sarhan, M. Gallagher, and M. Portmann, "E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT," Proceedings of the IEEE/IFIP Network Operations and Management Symposium 2022 (NOMS 2022), 2022.
[12] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 1 2011.
[13] L. Tsutsui Da Silva, V. M. Souza, and G. E. Batista, "EmbML Tool: Supporting the use of supervised learning algorithms in low-cost embedded systems," Proceedings - International Conference on Tools with Artificial Intelligence (ICTAI), vol. 2019-November, pp. 1633-1637, 11 2019.
[14] N. Moustafa, "A new distributed architecture for evaluating AI-based security systems at the edge: Network TON IoT datasets," Sustainable Cities and Society, vol. 72, p. 102994, 9 2021.
[15] N. Koroniotis, N. Moustafa, E. Sitnikova, and B. Turnbull, "Towards the Development of Realistic Botnet Dataset in the Internet of Things for Network Forensic Analytics: Bot-IoT Dataset," Future Generation Computer Systems, vol. 100, pp. 779-796, 11 2018.
[16] H. Hindy, E. Bayne, M. Bures, R. Atkinson, C. Tachtatzis, and X. Bellekens, "Machine Learning Based IoT Intrusion Detection System: An MQTT Case Study (MQTT-IoT-IDS2020 Dataset)," Lecture Notes in Networks and Systems, vol. 180, pp. 73-84, 2021.
[17] N. Moustafa and J. Slay, "UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set)," 2015 Military Communications and Information Systems Conference (MilCIS 2015) - Proceedings, 12 2015.
[18] I. Sharafaldin, A. H. Lashkari, and A. A. Ghorbani, "Toward generating a new intrusion detection dataset and intrusion traffic characterization," ICISSP 2018 - Proceedings of the 4th International Conference on Information Systems Security and Privacy, vol. 2018-January, pp. 108-116, 2018.
[19] M. Sarhan, S. Layeghy, and M. Portmann, "Towards a Standard Feature Set for Network Intrusion Detection System Datasets," Mobile Networks and Applications, vol. 27, no. 1, pp. 357-370, 11 2021.
[20] Y. Uhm and W. Pak, "Service-Aware Two-Level Partitioning for Machine Learning-Based Network Intrusion Detection with High Performance and High Scalability," IEEE Access, vol. 9, pp. 6608-6622, 2021.
[21] Y. J. Chew, S. Y. Ooi, K. S. Wong, and Y. H. Pang, "Decision Tree with Sensitive Pruning in Network-based Intrusion Detection System," in Lecture Notes in Electrical Engineering, vol. 603, Springer Verlag, 2020, pp. 1-10.
[22] J. P. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C. E. Brodley, "Pruning decision trees with misclassification costs," Lecture Notes in Computer Science, vol. 1398, pp. 131-136, 1998.
| [] |
[
"Planet formation in the PDS 70 system Constraining the atmospheric chemistry of PDS 70b and c",
"Planet formation in the PDS 70 system Constraining the atmospheric chemistry of PDS 70b and c"
] | [
"Alex J Cridland \nLaboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\n96 Bd. de l'Obseravtoire06300NiceFrance\n\nUniv. Grenoble Alpes\nCNRS\nIPAG\n414 Rue de la Piscine38000GrenobleFrance\n\nMax-Planck-Institut für Extraterrestrishe Physik\nGießenbachstrasse 185748GarchingGermany\n",
"Stefano Facchini \nEuropean Southern Observatory\nKarl-Schwarzschild-Str. 285748GarchingGermany\n\nDipartimento di Fisica\nUniversitá degli Studi di Milano\nvia Celoria 16MilanoItaly\n",
"Ewine F Van Dishoeck \nMax-Planck-Institut für Extraterrestrishe Physik\nGießenbachstrasse 185748GarchingGermany\n\nLeiden Observatory\nLeiden University\nNiels Bohrweg 22300 RALeidenthe Netherlands\n",
"Myriam Benisty \nLaboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\n96 Bd. de l'Obseravtoire06300NiceFrance\n\nUniv. Grenoble Alpes\nCNRS\nIPAG\n414 Rue de la Piscine38000GrenobleFrance\n"
] | [
"Laboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\n96 Bd. de l'Obseravtoire06300NiceFrance",
"Univ. Grenoble Alpes\nCNRS\nIPAG\n414 Rue de la Piscine38000GrenobleFrance",
"Max-Planck-Institut für Extraterrestrishe Physik\nGießenbachstrasse 185748GarchingGermany",
"European Southern Observatory\nKarl-Schwarzschild-Str. 285748GarchingGermany",
"Dipartimento di Fisica\nUniversitá degli Studi di Milano\nvia Celoria 16MilanoItaly",
"Max-Planck-Institut für Extraterrestrishe Physik\nGießenbachstrasse 185748GarchingGermany",
"Leiden Observatory\nLeiden University\nNiels Bohrweg 22300 RALeidenthe Netherlands",
"Laboratoire Lagrange\nUniversité Côte d'Azur\nObservatoire de la Côte d'Azur\nCNRS\n96 Bd. de l'Obseravtoire06300NiceFrance",
"Univ. Grenoble Alpes\nCNRS\nIPAG\n414 Rue de la Piscine38000GrenobleFrance"
] | [] | Understanding the chemical link between protoplanetary disks and planetary atmospheres is complicated by the fact that the popular targets in the study of disks and planets are widely separated both in space and time. The 5 Myr PDS 70 systems offers a unique opportunity to directly compare the chemistry of a giant planet's atmosphere to the chemistry of its natal disk. To that end, we derive our current best physical and chemical model for the PDS 70 disk through forward modelling of the 12 CO, C 18 O, and C 2 H emission radial profiles with the thermochemical code DALI and find a volatile C/O ratio above unity in the outer disk. Using what we know of the PDS 70 disk today, we analytically estimate the properties of the disk as it was 4 Myr in the past when we assume that the giant planets started their formation, and compute a chemical model of the disk at that time. We compute the formation of PDS 70b and PDS 70c using the standard core accretion paradigm and account for the accretion of volatile and refractory sources of carbon and oxygen to estimate the resulting atmospheric carbon-to-oxygen number ratio (C/O) for these planets. Our inferred C/O ratio of the gas in the PDS 70 disk indicates that it is marginally carbon rich relative to the stellar C/O = 0.44 which we derive from an empirical relation between stellar metallicity and C/O. Under the assumption that the disk has been carbon rich for most of its lifetime, we find that the planets acquire a super-stellar C/O in their atmospheres. If the carbon-rich disk is a relatively recent phenomenon (i.e. developed after the formation of the planets at ∼ 1 Myr) then the planets should have close to the stellar C/O in their atmospheres. This work lays the groundwork to better understand the disk in the PDS 70 system as well as the planet formation scenario that produce its planets. | 10.1051/0004-6361/202245619 | [
"https://export.arxiv.org/pdf/2303.17899v1.pdf"
] | 257,900,734 | 2303.17899 | cd9abded3de06a96b5a510f427aebe00fbc7abea |
Planet formation in the PDS 70 system Constraining the atmospheric chemistry of PDS 70b and c
April 3, 2023
Alex J Cridland
Laboratoire Lagrange
Université Côte d'Azur
Observatoire de la Côte d'Azur
CNRS
96 Bd. de l'Obseravtoire06300NiceFrance
Univ. Grenoble Alpes
CNRS
IPAG
414 Rue de la Piscine38000GrenobleFrance
Max-Planck-Institut für Extraterrestrishe Physik
Gießenbachstrasse 185748GarchingGermany
Stefano Facchini
European Southern Observatory
Karl-Schwarzschild-Str. 285748GarchingGermany
Dipartimento di Fisica
Universitá degli Studi di Milano
via Celoria 16MilanoItaly
Ewine F Van Dishoeck
Max-Planck-Institut für Extraterrestrishe Physik
Gießenbachstrasse 185748GarchingGermany
Leiden Observatory
Leiden University
Niels Bohrweg 22300 RALeidenthe Netherlands
Myriam Benisty
Laboratoire Lagrange
Université Côte d'Azur
Observatoire de la Côte d'Azur
CNRS
96 Bd. de l'Obseravtoire06300NiceFrance
Univ. Grenoble Alpes
CNRS
IPAG
414 Rue de la Piscine38000GrenobleFrance
Planet formation in the PDS 70 system Constraining the atmospheric chemistry of PDS 70b and c
Astronomy & Astrophysics manuscript no. main. Received April 3, 2023.
Understanding the chemical link between protoplanetary disks and planetary atmospheres is complicated by the fact that the popular targets in the study of disks and planets are widely separated both in space and time. The 5 Myr PDS 70 systems offers a unique opportunity to directly compare the chemistry of a giant planet's atmosphere to the chemistry of its natal disk. To that end, we derive our current best physical and chemical model for the PDS 70 disk through forward modelling of the 12 CO, C 18 O, and C 2 H emission radial profiles with the thermochemical code DALI and find a volatile C/O ratio above unity in the outer disk. Using what we know of the PDS 70 disk today, we analytically estimate the properties of the disk as it was 4 Myr in the past when we assume that the giant planets started their formation, and compute a chemical model of the disk at that time. We compute the formation of PDS 70b and PDS 70c using the standard core accretion paradigm and account for the accretion of volatile and refractory sources of carbon and oxygen to estimate the resulting atmospheric carbon-to-oxygen number ratio (C/O) for these planets. Our inferred C/O ratio of the gas in the PDS 70 disk indicates that it is marginally carbon rich relative to the stellar C/O = 0.44 which we derive from an empirical relation between stellar metallicity and C/O. Under the assumption that the disk has been carbon rich for most of its lifetime, we find that the planets acquire a super-stellar C/O in their atmospheres. If the carbon-rich disk is a relatively recent phenomenon (i.e. developed after the formation of the planets at ∼ 1 Myr) then the planets should have close to the stellar C/O in their atmospheres. This work lays the groundwork to better understand the disk in the PDS 70 system as well as the planet formation scenario that produce its planets.
Introduction
The link between the gas and ice chemistry in protoplanetary disks and the resulting chemical structure in the atmospheres of (giant) exoplanets has become an important tool in probing the underlying physics of planet formation. The argument follows that chemical gradients in the disk gas and ice, particularly in the carbon and oxygen tracers, are 'encoded' into the planet's atmosphere as it accretes its atmosphere-forming gas. As a result, if accurate measurements of the atmospheric number ratio of carbon and oxygen atoms can be made, then one can infer from where in the disk the planet has accreted its material. Restricting from where known planets form helps to constrain planet formation models both in terms of their rates (growth vs. migration) as well as their initial conditions. Young planets, with ages less than ∼ 10 Myr, offer a unique view on the connection between planet formation and the chemistry of planetary atmospheres. First, they are still self-luminous because of their latent heat of formation, which makes direct spectroscopy (and imaging) more feasible than for Gyr-old planets. Second, and in particular for the ideal case of PDS 70, their natal protoplanetary disk remains sufficiently massive that it can be detected and chemically characterised. In this way we can directly measure chemical properties of the gas that is feeding the planetary atmospheres and use this knowledge to infer the chemical properties of the gas that had fed the planet during its growth phase. Unfortunately, the presence of the disk makes the first point more complex, since it contaminates attempts at direct spectroscopy (discussed more below). We focus more on the second point in this work. Finally, these young, warm planets have not had sufficient time for internal processes like mixing and sedimentation to drastically alter the observable chemical properties of their atmospheres. We discuss this point in more detail in Section 5 below.
★ [email protected]
The carbon-to-oxygen number ratio, or more commonly the carbon-to-oxygen ratio (C/O), is the total number of carbon atoms divided by the total number of oxygen atoms in either the refractory (rock) and/or volatile (gas and ice) components. It has become the most popular elemental ratio for tracing the formation of planets since its introduction by Öberg et al. (2011). The reasons for its popularity are three-fold: first, carbon and oxygen are the most abundant elements heavier than hydrogen and helium in the Universe, and thus their molecular forms H2O, CO, and CO2 tend to be the most abundant molecules after H2 in astrophysical settings. Second, these same molecular species have strong molecular features in the near- and mid-infrared that can be observed in spectroscopic studies through either direct observations or during a stellar occultation (transit; see the review by Madhusudhan 2019). Finally, due to their differences in condensation temperature there is the aforementioned C/O ratio gradient as a function of radius through the protoplanetary disk. As such, different formation models, which predict different starting locations for a giant planet, should predict different (observable) chemical differences in the atmospheres of gas giant planets.
Measuring the planet's atmospheric C/O ratio offers a link to the local C/O ratio in the protoplanetary disk where the giant planet drew down the bulk of its gas. Given that the later stages of gas accretion are typically very rapid, with timescales that are much less than the migration timescales (Pollack 1984; Pollack et al. 1996; Cridland et al. 2016; Emsenhuber et al. 2021), the atmospheric C/O ratio is determined by a narrow range of radii in the protoplanetary disk. To zeroth order, the local C/O ratio in the gas is determined by the freeze-out of the most abundant volatiles at their 'ice lines'. The most famous ice lines are the water ice line near a gas temperature of ∼ 150 K and the CO ice line near a temperature of ∼ 20 K. The former is famous for its importance in setting the water abundance of planetesimals in the Solar system (Ciesla & Cuzzi 2006, and references therein), while the latter is relevant due to its indirect detection in protoplanetary disks (for ex. in TW Hya, Qi et al. 2013b). However other ice lines, like the CO2 ice line, will also contribute to the overall evolution of carbon- and oxygen-bearing species in planet forming regions (see for ex. Eistrup et al. 2018).
The local C/O ratio ultimately depends on chemical processes impacting the arrangement of carbon and oxygen atoms between the gas and ice phases as well as the total number of carbon and oxygen atoms in the system. The latter can be represented by a global C/O ratio and be thought of as the C/O ratio of the gas and ice of the stellar system's primordial molecular cloud. Given that the disk and star accreted from the same molecular cloud material it is often assumed that the stellar C/O ratio represents this global C/O ratio. Having a measure of the global C/O ratio via the carbon and oxygen abundances in the host star is crucial to understanding a planet's atmospheric C/O ratio in the context of its planetary system and formation processes.
The paper is organised as follows: in section 2 we outline the important features of the PDS 70 system and in section 3 we outline our numerical and analytic methods. The results are summarised in section 4 and are discussed, along with the model and assumption caveats, in section 5. We concluded our results and offer direction for future study in section 6.
The PDS 70 system
The star
PDS 70 was first identified as a T Tauri star in the Pico dos Dias Survey (PDS) by Gregorio-Hetem et al. (1992). It was included in the Radial Velocity Experiment (RAVE) fourth (Kordopatis et al. 2013) and sixth (Steinmetz et al. 2020) data releases, where it was measured at medium resolution (R ∼ 7500) to spectroscopically derive its stellar atmospheric properties such as effective temperature, gravitational acceleration, and metallicity. It has an effective temperature of 4237 ± 134 K, a log g = 4.82 ± 0.2, and [Fe/H] = −0.11 ± 0.1. It is at a distance of 113 pc (Gaia Collaboration 2020).
For the purpose of this paper, it would be useful to have an estimate of the photospheric abundance of carbon and oxygen of the star. Barring this measurement, we can estimate the stellar C/O based on its inferred metallicity and the relation between these values in stars found by Suárez-Andrés et al. (2018). In their sample, they found a linear relation between a star's metallicity and its C/O, particularly for stars that host giant planets (as PDS 70 does). Using their fitted slope (0.41 ± 0.02) and a normalisation of C/O = 0.48 ± 0.15 at [Fe/H] = 0 (slightly lower than the commonly accepted solar value of 0.54), we find a stellar C/O = 0.44 ± 0.19 for PDS 70.
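Written out, the adopted relation evaluated at the RAVE metallicity gives:

\mathrm{C/O} = 0.48 + 0.41\,[\mathrm{Fe/H}] = 0.48 + 0.41 \times (-0.11) \approx 0.44 .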
The disk
The infrared excess identified in the IRAS catalogue by Gregorio-Hetem et al. (1992) classified PDS 70 as a T Tauri system and implies, given our current knowledge of T Tauri systems, the presence of a dusty disk. The excess was confirmed by Metchev et al. (2004) and the disk was soon after detected in coronagraphic images of its scattered light by Riaud et al. (2006). The inner gap and transition-disk nature of the PDS 70 disk was confirmed through near-infrared polarimetric and direct imaging by Hashimoto et al. (2012), and follow-up observations at 1.3 mm with the Sub-Millimeter Array (SMA) additionally found evidence for a compact disk inside the dust gap (Hashimoto et al. 2015). The disk was studied at higher spatial resolution with the Atacama Large (sub)Millimetre Array (ALMA) in 0.87 mm continuum and CO J = 3−2 and HCO+ J = 4−3 line emission by Long et al. (2018). Similar to the dust, the gas emission shows a gap near the location of the two planets (Keppler et al. 2019, and see below); it also extends much farther (∼ 200 AU) from the host star than the dust (∼ 80 AU).
High-resolution observations of the system with VLT/SPHERE and VLT/NaCo have mapped the small-dust distribution around the host star in more detail and have confirmed the presence of the inner dust disk (Keppler et al. 2018). The size of the inner dust disk was constrained to less than 17 AU based on the polarized intensity of the dust emission, but could not be further constrained because of a degeneracy between the outer radius and the dust depletion factor in their inner disk model. Along with the more detailed look at the physical properties of the PDS 70 disk, Keppler et al. (2018) also report the first detection of an embedded planetary companion to the PDS 70 star. The direct detection of this embedded planet, now called PDS 70b, is a first of its kind. This detection was later confirmed by Haffert et al. (2019), who also detect a second embedded planet, PDS 70c, using Hα emission.
The planets
The detection of PDS 70b (Keppler et al. 2018) and PDS 70c (Haffert et al. 2019) represents a unique opportunity in the study of planet formation. At time of writing, they are the two only confirmed embedded planets that have been directly detected by the emission of their young atmospheres (Wang et al. 2021), through the H emission of currently accreting gas (Haffert et al. 2019), and through continuum emission coming from the circumplanetary disk (Isella et al. 2019;Benisty et al. 2021). Because of their coexistence with their natal protoplanetary disk and the inferred age of the system (∼ 5 Myr) they are likely the youngest exoplanets ever detected, in a stage of evolution consistent with the final stages of planet formation. At this final state, the planets have opened a gap in the disk because its gravitational influence has exceeded the viscous and gas pressure force otherwise governing hydrodynamics. The planets have thus largely accreted the bulk Some recent observations have suggested the detection of point-like feature in AB Aur (Currie et al. 2022), however this is debated (Zhou et al. 2022 of their gas, with any remaining accretion being fed by gas that approaches and then surpasses the gap edge through meridional flows Teague et al. 2019). We quote the planetary properties of the two embedded planets as derived by Wang et al. (2021) who inferred the orbital parameters based on the assumption that the system is dynamically stable over the lifetime of the system (∼ 5 Myr). Both planets were observed in the near infrared at high spatial and spectral resolution using the GRAVITY instrument on the Very Large Telescope Interferometer (VLTI) by Wang et al. (2021). Their Spectral Energy Distribution (SED) fitting included the Exo-REM models of Charnay et al. (2018) which allows an inference of the atmospheric C/O of both planets. Unfortunately their 'adequate' models -which have Bayes factors within a factor of 100 of the best fit -could not provide stringent restrictions on the atmospheric C/O other than a constraint of C/O> 0.4 in PDS 70b. For PDS 70c, none of the Exo-REM models were considered 'adequate', however the best fit Exo-REM model -with a Bayes factor of 114× smaller than the best fit model -constrained the atmospheric C/O< 0.7. This constraint, however, lies very close to the upper limit in their model's C/O prior and thus should be taken with a grain of salt. Cugno et al. (2021) similarly struggled to place constraints on the chemical properties of the planets in the PDS 70 system. They used medium resolution data from VLT/SINFONI and the 'molecular mapping' method (Hoeijmakers et al. 2018) to try and find molecular emission from the planets. They predict relatively high upper limits on the molecular abundances of CO and H 2 O (10 −4.1 and 10 −4.0 relative to hydrogen respectively) which is in contention with the results of Wang et al. (2021). A possible reason for deviations between the two observations could be extinction either by clouds or surrounding dust which impacts the overall efficiency of the molecular mapping technique.
Summary
For the sake of chemical characterisation there is sufficient data to model the global chemical structure in the PDS 70 system. The stellar C/O = 0.44 and the detection of carbon-rich molecular species like C2H imply that the disk has likely undergone some form of chemical evolution to remove gaseous oxygen from the disk's emitting layer. This chemical processing of oxygen (mainly in CO and H2O) out of the gas phase is known to result in unexpectedly low CO emission in protoplanetary disks (Bruderer et al. 2012; Favre et al. 2013; Du et al. 2015; Kama et al. 2016; Ansdell et al. 2016; Bosman et al. 2018b; Schwarz et al. 2018; Krijt et al. 2020; Bosman et al. 2021; Miotello et al. 2022). We thus interpret the chemical difference between the stellar C/O and the expected high C/O of the disk (as discussed by Facchini et al. 2021) as the chemical processing of gaseous CO into the ice phase and the locking up of carbon- and oxygen-rich ices into pebbles that settle to the disk midplane.
These ices are thus invisible to detection via line emission observations but could, in principle, be available for accretion into the atmospheres of the young planets. Given that CO is also an abundant carrier of carbon, one would expect a similar depletion of carbon as of oxygen. As a result, many works that explore disk chemistry in situations where C/O > 1 find that a depletion of both carbon and oxygen is needed to explain observed fluxes (Bergin et al. 2016; Miotello et al. 2019; Öberg et al. 2021; Bosman et al. 2021). We find here that such a carbon depletion is not consistent with the disk line emission observations of Facchini et al. (2021) and the models presented here, as we will discuss below.
In order to constrain the chemical properties of the disk we use the line emission survey of the PDS 70 system by Facchini et al. (2021), which included the three most abundant CO isotopologues and many bright hydrocarbon lines. In fact, the detection of bright lines from C2H J = 7/2−5/2, c-C3H2 3_21−2_12, and H13CN J = 3−2 is suggestive of a high carbon abundance, relative to oxygen, in the outer disk. In this work, we will model the volatile chemistry and line emission for a subset of the detected species, 12CO, C18O, and C2H, which broadly constrain the temperature, density, and chemical properties of the PDS 70 disk, respectively.
Our strategy will proceed as follows: we model the line emission observations of Facchini et al. (2021) to estimate the current physical and chemical structure of the PDS 70 disk. We will stay close to the derived physical structure of Keppler et al. (2019) who studied the dust continuum and CO line emission of the disk. We will use the derived physical structure to estimate the physical structure of the disk back when the planets presumably began forming and compute the chemical structure of this younger disk. Finally we will compute the formation of the two planets over the lifetime of the PDS 70 system (∼ 5 Myr) to estimate their current-day atmospheric C/O. We will expand on the above steps in more detail below.
Methods
This work combines a number of numerical and theoretical tools to understand and model the physical and chemical properties of the PDS 70 system. The chemical modelling is done using the Dust And LInes (DALI) code (Bruderer et al. 2012; Bruderer 2013), which computes the chemical and thermal evolution of the protoplanetary disk gas and ice self-consistently. The DALI code contains a radiative transfer module for computing the radiative heating and cooling (for the thermal evolution), and is also used to produce synthetic observations of the disk models.
'Current' disk model
Our first step is to determine the 'current' physical and chemical structure of the PDS 70 disk at its age of 5 Myr, based on the line observations of Facchini et al. (2021). The gas disk model is based on a simple parametric description of a transition disk proposed by Andrews et al. (2011) and is discussed in detail in Bruderer (2013). The model follows the standard self-similar solution of a viscously evolving protoplanetary disk, assuming that the gas viscosity is constant in time and varies radially as a power law (Lynden-Bell & Pringle 1974). Solutions of this form follow:
\Sigma_{\rm gas}(R) = \Sigma_c \left(\frac{R}{R_c}\right)^{-\gamma} \exp\left[-\left(\frac{R}{R_c}\right)^{2-\gamma}\right] , \qquad (1)
where Σ_c is the 'critical surface density' at the 'critical radius' R_c. The capital R denotes the midplane radius. We allow the gas and dust density to extend out to a maximum radius of R_disk.
Transition disks are characterised by the presence of a large dust gap caused by (at least in the case of PDS 70) the presence of giant planets. This gap can extend all the way to the host star (in which case it is colloquially called a cavity), but transition disks can also contain a smaller inner disk of gas and dust. Thus, rather than a radially constant dust-to-gas ratio Δ_dtg, the model proposed by Andrews et al. (2011) and used in DALI reduces Δ_dtg to arbitrarily low values between the outer radius of the inner disk (R_gap) and the inner radius of the outer disk (R_cav).
Inward of R_cav the gas and dust can also be further depleted relative to their expected density from Equation 1. For the gas this is justified by the assumption that gas accretion is slightly less efficient inside the dust cavity due to the gravitational influence of the embedded planets; hence Σ_gas(R < R_cav) = δ_gas Σ_gas(R). The dust density has the form:
\Sigma_{\rm dust} = \begin{cases} \delta_{\rm dust}\,\Delta_{\rm dtg}\,\Sigma_{\rm gas}(R) & R < R_{\rm gap} \\ 10^{-15} & R_{\rm gap} < R < R_{\rm cav} \\ \Delta_{\rm dtg}\,\Sigma_{\rm gas}(R) & R > R_{\rm cav} \end{cases} \qquad (2)
The final two important radii for the model are the sublimation radius R_subl ∼ 0.07 √(L_*/L_⊙) AU, inward of which the disk is sufficiently warm for the dust to evaporate (Bruderer 2013), and the outer radius R_out, which describes the largest radius in the simulation. For the stellar spectrum we use a blackbody with the effective temperature noted above. The star has a bolometric luminosity of 0.35 L_⊙.
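For orientation, a short numerical sketch of the radial structure of Equations 1 and 2 follows. All parameter values here are placeholders rather than the fitted PDS 70 values, and whether the planet-gap depletion multiplies the inner-disk depletion is an assumption of this sketch:

    import numpy as np

    def sigma_gas(R, Sigma_c=1.0, R_c=40.0, gamma=1.0, delta_gas=1e-2,
                  delta_planet=10**-2.5, R_gap=16.0, R_cav=34.0):
        """Eq. 1 with the inner-disk and planet-gap depletions applied."""
        s = Sigma_c * (R / R_c) ** (-gamma) * np.exp(-(R / R_c) ** (2.0 - gamma))
        s = np.where(R < R_cav, delta_gas * s, s)                        # depleted inner region
        return np.where((R > R_gap) & (R < R_cav), delta_planet * s, s)  # planet-carved gap

    def sigma_dust(R, sig_gas_smooth, delta_dust=1e-2, dtg=1e-2,
                   R_gap=16.0, R_cav=34.0):
        """Eq. 2: inner dust disk, (numerically) empty dust gap, outer dust disk."""
        return np.where(R < R_gap, delta_dust * dtg * sig_gas_smooth,
                        np.where(R < R_cav, 1e-15, dtg * sig_gas_smooth))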
The vertical distribution of the gas follows a Gaussian profile with a physical scale height following H = H_c (R/R_c)^ψ. The so-called scale height angle h ≡ H/R is thus h = h_c (R/R_c)^(ψ−1), where the subscript 'c' continues to denote the value of any variable at the critical radius R_c. The vertical distribution of the dust is more complex than that of the gas and is split into two populations. The 'small' population represents grains with sizes ranging between 0.005 and 1 µm, while the 'large' population represents grains of 1 to 1000 µm. While the small grains tend to be well coupled to the gas and thus follow its vertical distribution more closely, the larger grains will tend to settle to the midplane. The vertical volume density of the dust follows, in cylindrical coordinates:
\rho_{\rm dust,small} = \frac{(1-f)\,\Sigma_{\rm dust}}{\sqrt{2\pi}\,R\,h} \exp\left[-\frac{1}{2}\left(\frac{0.5\pi-\theta}{h}\right)^2\right] \quad {\rm and} \quad \rho_{\rm dust,large} = \frac{f\,\Sigma_{\rm dust}}{\sqrt{2\pi}\,R\,\chi h} \exp\left[-\frac{1}{2}\left(\frac{0.5\pi-\theta}{\chi h}\right)^2\right] , \qquad (3)
where θ = arctan(R/z) is the polar angle, χ < 1 parameterises the amount of settling that occurs for the large dust grains, and f sets their mass fraction. DALI runs in two modes: the first takes Equations 1-3 with a list of prescribed parameters to produce the disk gas and dust distributions, while the second allows users to prescribe their own choice of dust and gas radial and vertical distributions (discussed more below). We show the adopted values for the discussed parameters in Table 2. In the second column of the table, if a single value is shown it coincides with the derived disk model of Keppler et al. (2018) based on the dust distribution.
Notes (Table 2). Important parameters for the construction of the transition disk model of PDS 70, including the range of values used to determine the current disk model. The parameters without ranges in the second column coincide with the derived properties of the disk by Keppler et al. (2018). Bolded values show the parameters of our preferred model.
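A matching sketch of the vertical structure of Equation 3 at a cylindrical point (R, z); the settling fraction f and the χ value here are placeholders:

    import numpy as np

    def rho_dust(R, z, Sigma_dust, h, f=0.85, chi=0.2):
        """Two-population dust density of Eq. 3; 'ang' plays the role of 0.5*pi - theta."""
        ang = np.arctan2(z, R)  # angle above the midplane
        norm = Sigma_dust / (np.sqrt(2.0 * np.pi) * R)
        small = (1.0 - f) * norm / h * np.exp(-0.5 * (ang / h) ** 2)
        large = f * norm / (chi * h) * np.exp(-0.5 * (ang / (chi * h)) ** 2)
        return small, large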
The quantity of gas within the orbital range of the planets and the dust cavity is still an open question. Numerical simulations (e.g. Lubow & D'Angelo 2006) have shown that gas accretion can occur across the planet-induced gap, but results in a lower accretion rate onto the host star than the accretion rate outward of the gap. Theoretical studies have shown (for ex. see Crida et al. 2006) that a giant planet modifies the local gas density surrounding it as its gravitational torque competes against the viscous torques of the surrounding protoplanetary disk. It is thus expected that the gas surface density should be lower around the embedded planets of the PDS 70 system. Keppler et al. (2019) used 12CO J = 3−2 line emission data to constrain the gas properties within the dust cavity. They found evidence of a gas gap around PDS 70b consistent with a planet mass of 5 M_Jup. No gas gap was inferred for PDS 70c; however, the numerical simulations of Bae et al. (2019) showed that the two planets would share the gas gap. In this work we can include the impact of the presence of a planet-driven gap on the line emission of CO and C2H. In order to add the impact of the gas gap we further deplete the gas density within the dust cavity by a factor of δ_planet (see Figure 1).
In Figure 1 we show an example of how the different forms of depletion impact the structure of the midplane volume (number) densities. While the blue and green solid lines represent the fiducial distribution of the gas and dust (respectively) in a transition disk model, their dashed lines show the effect of reducing the inner disk densities by δ_gas = δ_dust = 0.01. Meanwhile the orange line shows the impact of imposing a reduction of the gas density by a factor of δ_planet = 10^−2.5 within the dust cavity, which approximately equates to the 5 M_Jup model of Keppler et al. (2019).
In Figure 2 we show an example of the vertical distribution of the gas and dust in a fiducial model at a radius of ∼ 80 AU, outward of the cavity's outer edge. The gas vertical distribution (blue curve) is described by a single Gaussian profile, while the dust vertical profile (green line) is described by a pair of Gaussian profiles (i.e. Equation 3). The orange dashed lines show each of the Gaussian profiles from Equation 3, converted to number density rather than mass density, to demonstrate that the dust profile is effectively the sum of the two Gaussian profiles. The large dust grains are distributed closer to the midplane because they tend to settle with respect to the smaller grains. As a result the dust-to-gas ratio is slightly higher near the midplane when compared to far above the midplane.
Disk chemistry
As previously mentioned the chemistry is handled using the DALI (Bruderer et al. 2012; Bruderer 2013) code. DALI simultaneously computes the gas and dust temperature via radiative transfer (both continuum and line) as well as the chemical evolution of the volatile molecular species. As a result the code computes an accurate picture of the temperature structure and molecular abundances in the upper atmosphere of the disk while simultaneously predicting the relative molecular line strengths and line emission from a large range of species.
In particular we focus on three molecular lines: 12CO J = 2−1, C18O J = 2−1, and C2H J = 7/2−5/2. These molecular lines are sensitive (mainly) to the temperature, density structure, and the chemistry of the disk, respectively. These lines, among others, were observed by Facchini et al. (2021) using the Atacama Large Millimeter/submillimeter Array (ALMA) at a resolution of ∼ 0.4", and are shown in Figure 3. One of the main features found in this observational work is that at this resolution the integrated intensity is not centrally peaked, instead peaking within the dust cavity. In addition the strong C2H emission is indicative of a large C/O - likely above unity - in order for C2H to have a significant abundance in the disk.
We focus on constraining the global C/O ratio in the disk as a first step in understanding the chemical environment in which the planets formed. Our chemical model, discussed below, is sensitive to chemistry-driven local changes to the volatile (gas/ice) C/O ratio such as the freeze-out; however we neglect local changes to the C/O ratio that may be driven by the radial transport of volatiles (as seen in for ex. Booth et al. 2017;Bosman et al. 2018a;Krijt et al. 2020). Given the presence of the two planets, the strong dust trap outside of their orbit, and the evidence of tenuous gas in the inner disk (Keppler et al. 2019), an exploration of the radial dependence of C/O would be an interesting focus for future work.
We employ the chemical network used by Miotello et al. (2019) to model the global C 2 H emission from an ALMA survey of protoplanetary disks. The chemical network is based on the network originally developed for DALI (Bruderer et al. 2012) and pulls reactions and rates from the UMIST Database for Astrochemistry (UDfA, Woodall et al. 2007;McElroy et al. 2013). It is optimised for the abundances of CO and related species in the gas phase and includes UV cross-sections and photodissociation rates of Heays et al. (2017). Visser et al. (2018) added to the network to include nitrogen chemistry which often acts to compete with hydrocarbons (like C 2 H) for available carbon.
Table 3. Initial chemical abundances (relative to hydrogen).

Species   Abundance    |   Species   Abundance
…         9.90×10⁻⁷    |   C         1.01×10⁻⁴
H3+       1.00×10⁻⁸    |   O         1.00×10⁻⁴
HCO+      9.00×10⁻⁹    |   N         5.10×10⁻⁶
C+        1.00×10⁻⁹    |   Mg        4.17×10⁻⁷
Mg+       1.00×10⁻¹¹   |   Si        7.94×10⁻⁷
Si+       1.00×10⁻¹¹   |   S         1.91×10⁻⁶
S+        1.00×10⁻¹¹   |   Fe        4.27×10⁻⁷
Fe+       1.00×10⁻¹¹   |   N2        1.00×10⁻⁶
PAH       6.00×10⁻⁷    |
Notes. The ions and PAHs initialise the total charge and act as a sink for free electrons, respectively. A species marked with an ice label denotes the ice phase of the given species.
While chemical networks that include isotope-specific reactions exist (e.g. Miotello et al. 2016), we do not include these reactions here. Instead we assume that species like C18O scale with their more abundant isotopologues by the standard isotopic ratios in the interstellar medium (ISM; Wilson & Rood 1994). By simply scaling the abundance of species like C18O to the abundance of the standard isotopologue (12C16O) we ignore possible chemical effects like isotope-selective photodissociation, which causes the less abundant isotopologues to be further depleted because of their less efficient self-shielding. As a result the less abundant isotopologue becomes optically thick lower in the disk atmosphere than would be assumed from simple scaling models. Miotello et al. (2016) showed that this will slightly weaken the line intensity of these less abundant isotopologues because the emission comes from deeper in the disk atmosphere, from a slightly cooler region of the disk. Furthermore the excess photodissociation results in more free atomic oxygen isotopes that can be incorporated into other species.
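The scaling itself is a one-line operation, sketched below for C18O; the 16O/18O ratio used is the commonly quoted ISM value and should be read as an assumption of this illustration rather than the exact number adopted in the model.

```python
# Isotopologue scaling as assumed in the text: rare isotopologues follow the
# main species divided by an ISM isotopic ratio. The 16O/18O value below is
# the commonly quoted ISM number and is an assumption of this sketch.
RATIO_16O_18O = 557.0

def c18o_abundance(n_12co):
    """C18O number density from the 12C16O number density by simple scaling."""
    return n_12co / RATIO_16O_18O
```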
We begin the chemical simulations with the majority of chemical elements in the atomic phase (i.e. the reset scenario of Eistrup et al. 2016), except for hydrogen, which begins in its molecular form, and a fraction of the nitrogen, which begins as frozen NH3 and gaseous molecular N2. The initial abundances follow those selected by Miotello et al. (2019) except in the cases of carbon and oxygen, which are varied. The initial abundances are summarised in Table 3. Our fiducial model uses the carbon and oxygen abundances shown in the table. These values result in a marginally carbon-rich disk (C/O = 1.01), which is what is expected from the presence of C2H emission. We will also explore a range of C/H and O/H (and thus C/O) in order to confirm this expectation.
Disk model at the time of planet formation
More than likely, the physical properties of the PDS 70 disk were different when the planets b and c started forming ∼ 4−5 Myr ago. The particular 'starting time' for planet formation is a remarkably difficult feature to constrain -particularly for exoplanet systems where we lack the isotopic data that is used in Solar system studies on the subject. Here we will assume that the core of both planets formed within the first 1 Myr of the protoplanetary disk lifetime, and begin our gas accretion simulation from there.
We use analytic models of protoplanetary disk evolution to determine the density structure of the gas at a disk age of 1 Myr. These models, derived by Tabone et al. (2022), assume that the disk evolves through either viscous evolution or magnetohydrodynamic (MHD) disk winds. It is currently unclear which mechanism is the main driver of angular momentum transport, and thus of protoplanetary disk evolution, so modelling both separately accesses the two extremes of disk evolution. The main physical difference between the two mechanisms is the 'direction' of their angular momentum transport. Viscous evolution transports the bulk of the angular momentum radially outward along with a small amount of the disk mass, while the bulk of the disk mass moves inward. In other words viscous disks evolve by spreading outward to larger radii with a small fraction of their mass, while the remainder moves inward toward the host star.
Protoplanetary disks evolving due to MHD disk winds move the bulk of their angular momentum to larger heights, by launching gas along the magnetic field lines emerging from the top of the disk. Unlike in viscously evolving disks, wind-driven disks do not spread their mass outward and are thus expected to remain at roughly the same size throughout their evolution (Tabone et al. 2022). For each of the above cases, Tabone et al. (2022) found that the gas surface density and disk size should evolve as:
Σ_gas(r, t) = Σ_c(t) (r / R_c(t))^(−1+ξ) exp(−r / R_c(t)),

where R_c(t) = R_c(t = 0) [1 + t / ((1 + ψ) t_acc,0)] and Σ_c(t) = Σ_c(t = 0) [1 + t / ((1 + ψ) t_acc,0)]^(−(5/2 + ξ + ψ/2)).   (4)
The relevant timescale, t_acc,0, in these equations is the initial accretion timescale and denotes the time required for a gas parcel to accrete from an initial position of R_c(t = 0)/2 to the inner region of the disk. It has the functional form of:
t_acc,0 = R_c(t = 0) / [3 h_c c_s,c α̃(t = 0)],   (5)
where c_s,c is the sound speed at the (initial) critical radius, h_c is the aspect ratio there, and α̃ is the effective disk-α (Shakura & Sunyaev 1973) due to either viscosity or MHD disk winds. The parameter ψ = α_DW/α_SS is the ratio of the effective α due to disk winds (DW) and viscosity (SS, the standard α from Shakura & Sunyaev 1973). In the limit of a purely viscous disk ψ = 0 while in the opposite limit ψ = ∞. In the limit that ψ → ∞ the time evolution of R_c and Σ_c in Eq. 4 becomes:
R_c(t) = R_c(t = 0)   and   Σ_c(t) = Σ_c(t = 0) exp[−t / (2 t_acc,0)].   (6)
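A compact numerical sketch of this evolution, under the assumption that Equations 4 and 6 take the forms given above, could look as follows; the function names and example parameter values are illustrative.

```python
import numpy as np

def sigma_gas(r, t, sigma_c0, r_c0, t_acc0, psi, xi):
    """Eq. 4 as given above: self-similar Sigma(r, t) for a disk evolving
    under combined viscous and MHD wind torques (r, r_c0 in AU; t in yr)."""
    fac = 1.0 + t / ((1.0 + psi) * t_acc0)
    r_c = r_c0 * fac                                   # expanding critical radius
    sigma_c = sigma_c0 * fac ** (-(2.5 + xi + 0.5 * psi))
    return sigma_c * (r / r_c) ** (-1.0 + xi) * np.exp(-r / r_c)

def sigma_gas_pure_wind(r, t, sigma_c0, r_c0, t_acc0, xi):
    """Pure-wind limit (Eq. 6): R_c stays fixed, Sigma_c decays exponentially."""
    sigma_c = sigma_c0 * np.exp(-t / (2.0 * t_acc0))
    return sigma_c * (r / r_c0) ** (-1.0 + xi) * np.exp(-r / r_c0)

# Example with assumed numbers: a viscous disk (psi = 0, xi = 0) after 1 Myr.
r = np.linspace(1.0, 120.0, 200)
sigma_1myr = sigma_gas(r, t=1.0e6, sigma_c0=100.0, r_c0=10.0, t_acc0=3.0e5,
                       psi=0.0, xi=0.0)
```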
The final parameter, ξ, denotes the mass ejection index and quantifies the effect on the mass surface density profile from the ejection of mass through the disk wind. The change in surface density profile follows from the significant fraction of the gas that can be lost to the disk wind as it extracts angular momentum. For the case of a pure wind (ψ = ∞) it has the form ξ = 1/[2(λ − 1)], where λ is the magnetic lever arm parameter and has a typical value of between 2 and 5.

Table 4. Disk evolution parameters.

Parameter                Viscous (ψ = 0)   Wind (ψ = ∞)
Σ_c(t = 1 Myr)           44.7 g cm⁻²       21.2 g cm⁻²
R_c(t = 1 Myr)           13.3 AU           40.0 AU
ξ                        0                 0.25 (λ = 3)
M_disk,1Myr (M_J)¹       5.8               19.9

Notes. (1) Jupiter masses within 120 AU.
Planet formation
We use a standard approach to the growth and migration of our planetary embryos. In both disk models presented in Section 3.3 we initiate planetary embryos at a range of initial radii between 20-80 AU in steps of 2 AU. For completeness we also compute the growth and evolution of planets originating from 1-20 AU in steps of 1 AU. The initial mass of each embryo is set by either the pebble isolation mass - using the prescription of Bitsch et al. (2018) - or the total dust mass exterior to the embryo's initial orbit, whichever is lower. The gas accretion and planetary migration follow the work of Cridland et al. (2020a), and we summarise the important features of the model in Appendix A. For the purpose of this work, the 'chemical accounting' of the carbon- and oxygen-bearing species is important to determine the atmospheric C/O. We trace the quantity of mass flux onto the atmosphere through Equations A.1, A.2, and/or A.5, as well as the rate at which the growing planets move through their disk using Equation A.6. We follow the method of Cridland et al. (2020a) to compute the acquisition of carbon and oxygen into the atmospheres; we summarise the relevant information below.
Chemical acquisition
During the age of planet formation the disk physical and chemical properties are kept constant to make the problem more easily tractable. As was done in Cridland et al. (2020a), such simplicity ignores the impact that an evolving disk can have on the overall evolution of the gas chemistry, but allows us to include the chemical impact of meridional flow on the chemistry of planetary atmospheres. We view this as an appropriate trade-off for this study since ALMA line emission observations tend to probe the chemistry of the disk from where meridional flow originates.
Before a gap in the disk is opened by the growing planet we compute the average carbon and oxygen abundance within the planet's feeding zone - which we equate to the proto-planet's Hill sphere. Once the gap is open we assume that the gas flow towards the planet follows a meridional flow (Morbidelli et al. 2014; Teague et al. 2019) and that the planet-feeding gas comes from between one and three gas scale heights. In this case we average the carbon and oxygen abundance in the gas and ice at the edge of the gas gap between one and three gas scale heights.
We assume any ice frozen onto the population of small dust grains is entrained with the gas and is delivered into the planetary atmospheres. This process occurs both while the planet is embedded in the disk and after the gap is opened. The large grain population is not as entrained as the small population, but is nevertheless expected to flow with the gas near the planet. Before the gas gap is opened we assume that ices frozen on the large dust grain population can accrete into the planetary atmosphere at a rate that is reduced relative to the rate of gas accretion by a factor of 1/(1 + St²), where St is the average Stokes number of the population of large dust grains.
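As a small worked illustration of this entrainment factor (with the 1/(1 + St²) form as given above), consider:

```python
def ice_delivery_rate(mdot_gas, st):
    """Ice accretion on large grains before gap opening: the gas accretion
    rate reduced by 1/(1 + St^2), with St the average Stokes number of the
    large-grain population (factor as given in the text above)."""
    return mdot_gas / (1.0 + st ** 2)

# Well-coupled grains (St << 1) deliver ices at essentially the gas rate,
# while marginally coupled grains (St = 1) are reduced by a factor of 2.
```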
The refractory component of the disk is known to include a significant fraction of the overall carbon (around 50%, Bergin et al. 2015). During the growth of the atmosphere, we do not include any excess carbon from the dust. The small grains dominate the dust mass above the midplane, in a region that is likely to have been photochemically processed to remove this excess carbon (Anderson et al. 2017;Klarmann et al. 2018;Binkert & Birnstiel 2023). This could have contributed to the high C/O that is observed in the PDS 70 gas disk. The dust near the midplane is dominated by larger grains which could maintain their carbon. While we allow for their accretion, they do not significantly contribute to the carbon abundance of the planetary atmospheres.
Results
Current disk model
Our first goal is to constrain the chemical and physical properties within the PDS 70 disk by comparing DALI forward models to the observed radial profiles of the 12CO, C18O, and C2H emission of Facchini et al. (2021). We begin with our fiducial model using the parameters derived in Keppler et al. (2019) and neglecting any deviations from the standard surface density profile due to depletion in the inner disk or the embedded planets (δ_gas = δ_dust = δ_planet = 1.0). In the following we begin by constraining the density structure using the optically thin C18O line. With the constrained density structure we then investigate the impact on the optically thick 12CO line and the chemical dependence of the optically thin C2H line.
The synthetic images are produced using the radiative transfer package in DALI with the observational parameters: inclination = 52 degrees, distance = 113 pc. The data are then convolved with the same beam as in Facchini et al. (2021), de-projected, and averaged over annuli with the same sizes as was done in Facchini et al. (2021). In addition we use the same channel width and number of channels as in Facchini et al. (2021).

C18O

In Figure 4(a) we show the radial profile of C18O emission in the case of a disk that is nearly void of material inward of the outer dust ring edge, while using the remaining fiducial parameters presented in Table 2. In this case we are exclusively sensitive to emission from the inner edge of the dust ring, and the outer disk. With fiducial parameters r_cav = 45 AU and r_disk = 120 AU we find a good agreement between the peak location of the C18O emission in the observations (orange line) and our model (blue line). Meanwhile we overestimate the flux coming from the outer disk, suggesting that the outer disk is too extended or too diffuse. A smaller disk may better replicate the observed flux; however, since we neglect isotope-selective photodissociation, the outer disk could be less abundant in C18O than is represented in our model (see for ex. Miotello et al. 2014).
In Figure 4(b) we show the C18O radial profile for the preferred model, which includes an inner disk. Adding a small inner disk into the model adds a new source of optical depth, which both cools the inner edge of the dust cavity and protects molecular species from photodissociation. Of course the inner disk may also increase the flux of C18O at small separations, which requires that it is fairly tenuous. The presented model involves a gas depletion δ_gas = 0.01 and dust depletion δ_dust = 0.0001. The inner disk has a radius of 6 AU in this model.
As a final modification we add an additional gas depletion within the planetary gap caused by the presence of the young pair of giant planets. In Figure 4(c) we add an additional depletion δ_planet = 10^-2.5 into the dust cavity (r_gap < r < r_cav). Reducing the gas density inside the gap leads to a slightly warmer dust wall and a slight increase in the peak flux of C18O. In addition the flux coming from the inner regions is lowered back to a similar level as in the model shown in Figure 4(a). Because it reproduces the peak flux as well as its location, the model in Figure 4(c) is our preferred model.
12CO
In Figure 5 we show the resulting 12CO emission radial profile from the disk model presented in Figure 4(c). Surprisingly the peak emission is somewhat shifted with respect to the observed radial profile. This may be linked to the exact gas structure near the inner edge of the outer disk. In our model we assume that the gas density sharply cuts off at the inner edge of the outer disk (r_cav). However it is likely that the disk smoothly transitions from the density in the outer disk down to the lower density in the cavity and inner disk (see for ex. Leemker et al. 2022). The extra width due to this smooth transition could easily contribute to the (optically thick) 12CO emission while only weakly contributing to the optically thin tracers.
The outer disk flux of 12CO also falls off more quickly than is seen in the observations, which could be exaggerated if the outer disk is truncated, as was discussed for C18O. Our model overestimates the 12CO flux between r_cav and r_disk (= 120 AU or ∼ 1"), before dropping below the observed flux. This suggests that there is some material in the PDS 70 disk that is outside r_disk but is sufficiently tenuous that it does not contribute to the C18O flux. In addition, the slope and peak of the 12CO may suggest that the temperature profile of the emitting layer is not consistent between model and observations. This could be caused by ignoring the heating contribution from the embedded planets, or because our dust distribution is not providing the proper amount of opacity. The shifted peak suggests that the tenuous gas in the gap is warmer than we compute in our model. We leave the modelling of these aspects of the data to future work. For the purpose of understanding planet formation within the disk, the tenuous gas in the outer disk likely does not play an important role.
C2H
In Figure 6(a) we show the resulting C 2 H flux from the preferred model presented in Figure 4(c). Our model predicts a similar peak flux -roughly 30% larger than observed -with a similar peak location. Our model peak is shifted slightly inward, towards the inner disk, with respect to the observed peak location. This is due to bright C 2 H emission that is found in the warm inner disk. This inner disk emission contributes to a bright C 2 H line and regardless of our choice of disk parameters we find a similarly C 2 H-enhanced inner disk.
Our modelling goal was to constrain the global carbon-to-oxygen ratio in the PDS 70 disk. Our fiducial model was set such that the carbon abundance is that of the ISM, while the oxygen abundance is set lower so that C/O = 1.01. We expect a marginally carbon-rich disk based on the detection of C2H, as well as a few other carbon-rich molecular species. We test whether this hypothesis is robust by computing the C2H emission for a disk with an ISM-like abundance of carbon and oxygen.

Fig. 6: Radial profiles of C2H emission compared to the work of Facchini et al. (2021). In (a) we show the emission from the preferred disk model of Figure 4(c), which has C/O = 1.01. The peak flux is slightly overestimated, but it best reproduces the peak location compared to models with lower C/O. In (b) we show the same disk model, but with O/H = 2.525 × 10⁻⁴ so that C/O = 0.4. Here we see that the peak flux is suppressed and the peak moves inward; in this model the inner disk dominates the C2H flux. In (c) we set C/H = 1.1 × 10⁻⁴ and O/H = 1.0 × 10⁻⁴ such that C/O = 1.1. Here we find that the predicted flux greatly overestimates the observed C2H flux.
In Figure 6(b) we show the preferred model from Figure 4(c) but with an enhanced oxygen abundance such that the disk C/O = 0.4. Not surprisingly, the total C2H flux is suppressed in this model compared to the marginally carbon-rich disk. Furthermore the flux peak shifts to smaller radii, which suggests that the flux is dominated by the inner disk in this model. For the low C/O model the carbon and oxygen abundances are set to their standard ISM abundances, while in the higher C/O models the oxygen abundance is depleted to O/H = 1.0 × 10⁻⁴, and the carbon abundance is adjusted to set the C/O ratios. Depleting abundances relative to the ISM value is consistent with previous studies of the emission of C2H in protoplanetary disks (Miotello et al. 2019; Bosman et al. 2021); however, there the abundances are depleted further than is done here. We further discuss our choice of carbon and oxygen abundances in the discussion section below.
In Figure 6(c) we show the C2H emission for a model with C/O = 1.1. The peak flux shifts further out and is more consistent with the location of the observed flux peak. However the model emission is about 4× brighter than is observed, which heavily disfavours such carbon-rich models.
In all three panels of Figure 6 we find that the inner disk is bright in C2H. In order to assess the impact of inner disk C2H we artificially remove the C2H from the inner 40 AU of each disk model and recalculate the radiative transfer of the C2H emission, and show the results in Figure 7. We find that in both of the carbon-rich models, panels (a) and (c), the removal of the inner disk C2H emission suppresses the total C2H flux, while in the carbon-poor model (panel b) the profile is unaffected.
Our C2H modelling suggests that the inner disk and outer disk have different chemical properties, with the former showing evidence of a lower elemental abundance than the outer disk, given its dominant C2H flux in the preferred model. Future studies of PDS 70 may need to vary the elemental abundances as a function of radius, and higher spatial resolution studies of the gas emission will help to constrain this variation.
Summary of chemical modelling
In this section we have tested forward DALI models of disks using the derived outer disk parameters of Keppler et al. (2018, 2019) with a grid of models that varied the properties of the inner disk. These models have shown that the outer disk parameters do an adequate job in reproducing the C18O peak flux and peak location. In addition the peak 12CO emission is well reproduced, but is shifted with respect to the observed peak. This shift is consistent with previous findings for transition disks (see for ex. van der Marel et al. 2016) where the gas and dust cavity edges are shifted with respect to each other (although in our model they are the same), and could be linked to a more gradual decline in gas density at r_cav than we model here.
Using C2H emission as a tracer of chemistry, we find that both the outer and inner disk contribute to its flux. For marginally carbon-rich models the flux peak is shifted towards the outer disk, which is more consistent with the observed radial profile. Furthermore we show that for C/O = 1.1 the C2H flux is far too strong to be consistent with the observed emission, suggesting that the disk carbon and oxygen abundances are nearly equal, but slightly favour carbon. In our preferred model, we find that the inner disk is too bright in C2H, which may suggest a difference in the volatile C/O or C/H between the inner and outer disk. Such a chemical difference could be linked to the sequestering of either element into larger, trapped, dust grains as argued in Sturm et al. (2022). Because of the high temperatures in the inner disk, the exact composition of the gas there can be directly probed by the James Webb Space Telescope (JWST).
In comparison to other studies of high-carbon protoplanetary disk chemistry we compute the integrated flux of C2H and 13CO (whose azimuthally averaged distribution is shown in the appendix). We find that log10(L_C2H) = 7.63, in units of mJy km s⁻¹ pc², and log10(F_13CO/F_890μm) = 1.15, which places our preferred model in a similar range as the disks studied by Miotello et al. (2019). For the total gas mass of the preferred model (at an age of 5 Myr), 1.5 × 10⁻³ M_⊙, we find that our predicted C2H flux is consistent with the C/O models of Miotello et al. (2019) that are above unity. The column densities of C2H are on average higher than 10¹⁴ cm⁻² across the whole disk, which are similarly consistent with the models of Bosman et al. (2021). In line with these previous works studying the C2H flux in protoplanetary disks, we favour a marginally carbon-rich model for the current properties of the PDS 70 disk.

Fig. 8: Gas density profiles computed with the parameters of Table 4 and Equation 4 at 1 Myr. The dashed lines show the gas temperature through each of the models. We assume that the disk temperature is dominated by the direct irradiation of the host star.
Planet formation in the PDS 70 disk
Disk model at the time of planet formation
In Figure 8 we show the two gas density profiles (solid lines) determined by computing the evolution of the PDS 70 disk through viscous (blue) and disk wind (orange) evolution. Viscous disks must transport material outward to carry angular momentum away from the host star. As a result there is actually less gas in the outer region (r > 45 AU) of the disk at a younger age than there is now (green). Wind-driven disks, on the other hand, transport angular momentum vertically and do not spread (Tabone et al. 2022). As a result the outer disk of the wind model is more massive than both the outer gas disk today and the viscous model.

The dashed lines in Figure 8 show the midplane gas temperature through each of the models. The temperature profiles are similar between each of the disk models with the exception of the model representing the disk 'now'. While disk models generally cool as they age, we are neglecting viscous heating in our chemical models, so the disk 'now' is slightly warmer because the gas density has dropped, allowing for stellar irradiation deeper into the disk. By 1 Myr - our assumed age of the viscous and wind models - most evolving analytic models find that heating is dominated by direct stellar irradiation outside of a few AU (Chambers 2009).

Figures 9(a) and (b) show the results of planet formation in the wind-driven and viscous-driven disk models respectively. In each figure the faded points denote synthetic planet formation tracks that fail to generate PDS 70-like planets, while the regular points are the successful simulations. The general evolution of synthetic planets is from the bottom and right of the figure towards the top left (shown with the black arrows - outward migration is not possible in our model), and thus the PDS 70-like planets begin their evolution farther from PDS 70 than their current orbits.
Growth of the b and c planets
The PDS 70-like synthetic planets have masses and orbital radii that are within twice the error of the measured values of PDS 70 b and c at a simulation age of 4 Myr. This simulation age corresponds to a system age of 5 Myr because we assume that planet formation begins at a system age of 1 Myr, and is consistent with the system properties inferred by Keppler et al. (2019). Because of our choice of proximity within twice the uncertainty of the planets' currently known properties, in the viscous model (Figure 9(b)) the PDS 70b-like planet actually starts near PDS 70b's current orbit and migrates slightly inward. This synthetic planet, however, is the only planet that has a similar mass and orbital radius to PDS 70's current planets.
The drop in planet mass at all time steps in the outer regions of the disk is related to our choice of initial mass for the embryos. As the initial radius of the embryos moves outward there is less dust mass (because there is less disk exterior to the initial radius) and thus an embryo starts at a lower mass than embryos beginning closer to the host star. The most extreme example of this is that planets forming inward of ∼ 30 AU in the wind model, and ∼ 10 AU in the viscous model, started sufficiently massive that their gas accretion has already undergone runaway by the first time step (60 kyr) shown. Meanwhile in the outer disk the embryos are not sufficiently heavy to undergo runaway gas accretion by 60 kyr, but do so later in the simulation.
Given the choice of system age we find that the wind-driven model can better reproduce the masses and orbital radii of both planets than the viscous model. This is mainly linked to the fact that the viscous model requires that the density structure at 1 Myr is more centrally concentrated than in the wind model. As a result there is less material (both gas and dust) available in the outer disk for the construction of both the core and the gas envelope in the viscous model. Regardless, in both models we can identify a few synthetic planets that contain enough mass and orbit sufficiently close to the PDS 70 planets' current orbits to investigate their resulting atmospheric chemistry.
In the wind-driven case we overestimate the mass of PDS 70b by a factor of about 2. We note here that, as in previous works, the planet formation of each synthetic planet is handled separately, and thus the formation of the c-planet does not have any bearing on the formation of the b-planet. In reality, the growth of the c-planet likely impacted the gas flow from the disk to the b-planet, unless it formed much earlier than the c-planet. Our model has no sensitivity to differences in the formation start time; however this hints at other interesting lines of research for future studies of this system.
Chemistry in the atmospheres of PDS 70b and c
We compute the number of carbon and oxygen-carrying molecules that are available to both the b and c planets. We investigate two possible scenarios which are involved in setting the global C/O in the disk. The first is that the processes that lead to a high C/O (discussed above) happened very early in the lifetime of the PDS 70 system, and thus the global C/O is the same during the era of planet formation (∼ 1 Myr). On the other hand the processes that deplete oxygen and some carbon in the disk could have happened later in the disk lifetime, possibly after the planets had accreted most of their gas. In this scenario the global C/O would be the same as the stellar value ∼ 0.4.
In Table 5 we show the average C/O in all synthetic planets whose mass and orbital radii correspond to those of the PDS 70 pair of planets (within twice the uncertainties). Where multiple synthetic planets contributed to the average (i.e. in the Wind model) we show the range of C/O from each individual planet to an extra decimal place. The spread is small among separate planets. Not surprisingly, the planets resulting from the higher C/O models have higher C/O in their atmospheres. In the wind-driven model, where a satisfactory planet c can be found, the PDS 70b planet has a slightly lower C/O than the c-planet. Meanwhile planet b results in a slightly different atmospheric C/O between the wind and viscous models. In both cases small differences in the formation history and the underlying chemical model lead to their varied atmospheric C/O.

Fig. 10: The midplane distribution of molecular abundances for CO and its tracer, N2H+, for both the viscous and wind models at 1 Myr. Each sees a dip in CO abundance at the same position as a steep increase in N2H+, indicative of the CO ice line. The three grey bands show the range of initial embryo radii that result in the synthetic b and c planets in both the Viscous and Wind models.
In Figure 10 we show a representation of the CO ice line in our chemical models along the disk midplane. While the two-dimensional structure of the chemistry is important (see for example Cridland et al. 2020a), for the purpose of the following discussion the midplane abundances can be illustrative.
The traditional definition of an ice line is the point in the disk where the temperature is such that the gas and ice phases of a species have equal abundances. Observationally, this is difficult to constrain and hence molecular tracers are used to infer the location of ice lines. For CO, N2H+ is a popular molecular tracer because its main destruction pathway involves reactions with CO (Qi et al. 2013a; van 't Hoff et al. 2017). In TW Hya, N2H+ has been used to infer the location of the CO ice line (Qi et al. 2013b).
The CO ice line is located between 15-30 AU along the midplane. Additionally on the figure the grey bars show the initial location of the embryos that led to the best PDS 70-like planets. They are, from left to right, PDS 70b in the viscous model, PDS 70b and PDS 70c in the wind model. By coincidence, we find that the b-planet in the viscous model begins its formation in the range of radii where the abundance of CO is near its minimum. Meanwhile the planets forming in the wind model form in a region where the CO abundances is higher. This higher abundance follows from higher interstellar radiation in the outer disk, which results in higher dust temperatures and thus less efficient freeze-out.
As argued in Cridland et al. (2020b) the gas tends to be more carbon rich than the ices (also see Öberg et al. 2011) and thus the relative amount of gas and solid abundance impacts the atmospheric C/O. In the outer disk, when CO begins to freeze out either as pure CO ice, or more likely as other oxygen-rich ice species (see for example Eistrup et al. 2018), the bulk of the carbon and oxygen can become trapped in the ices. This is particularly important for the PDS 70b planet forming in the viscous model and can help explain its lower C/O compared to the wind model planets.
For the two planets forming in wind-driven model we find that their final masses are similar, and thus they acquire very similar atmospheric C/O. The (very) small difference in atmospheric C/O (at the level of 1%) may be due to the slight difference in their starting location and the resulting differences in the available CO in the gas phase. More available gaseous CO will lead to slightly higher atmospheric C/O.
In the low C/O disk model we find that both PDS 70b and c show nearly equal atmospheric C/O in both disk models. Unlike in the high C/O models, planet formation in the outer disk appears to be less constrained by chemical gradients in the disk than by the formation history of the planets.
Discussion
Why study young systems?
There are several billion years between the point that a typical exoplanet's protoplanetary disk evaporates and the measurement of its atmospheric chemistry. Several dynamical processes, such as gravitational scattering between other planetary bodies (i.e. the Nice model, Gomes et al. 2005; or for hot Jupiters, Beaugé & Nesvorný 2012) and/or scattering by passing stars (Shara et al. 2016; Hamers & Tremaine 2017; Wang et al. 2020, 2022), could change the planet's orbital radius. These processes, however, should have no impact on the atmospheric chemistry if the planet has already fully formed, and thus the aforementioned framework can still be used to interpret observations.
Over billion-year timescales, the mass loss driven by photoevaporation can potentially change the chemistry of the upper atmosphere (Yelle 2004; García Muñoz 2007; Murray-Clay et al. 2009; Owen & Jackson 2012). In the PDS 70 system, however, these processes have a limited effect on the chemistry of the two embedded planets. Firstly, they orbit very far from their host star compared to planets found in the well known hot Neptune desert (Owen & Lai 2018), at orbital periods less than a few days, which are the main focus of photoevaporation studies. Furthermore, while heavy elements have been observed in the evaporating winds of exoplanets (Fossati et al. 2010; Sing et al. 2019), they usually remain coupled to the outflow (Koskinen et al. 2013), which would maintain their relative abundances in the remaining atmosphere (Hunten et al. 1987). Finally, the effect of photoevaporation on the chemical structure of giant planets likely does not play an important role for planets more massive than Saturn (Mordasini et al. 2016; Fossati et al. 2018).
A final source of long-timescale evolution that has the potential to change the observable chemical abundances of the atmospheres away from their primordial values is chemical reactions between the atmosphere and the planet's core. One possible direction for chemical evolution is through envelope-induced core erosion that could transfer heavy elements from the core into the atmosphere via convection (Stevenson 1982, 1985; Guillot et al. 2004; Soubiran et al. 2017). The study of this topic represents a rapidly developing field; however many studies have argued that the cores of gas giant planets consist of a diffuse outer and inner core with steep chemical gradients that suppress adiabatic convection in favour of less efficient heat and compositional transport from the core (Stevenson 1985; Chabrier & Baraffe 2007; Leconte & Chabrier 2012; Vazan et al. 2015, 2016; Wahl et al. 2017; Moll et al. 2017; Vazan et al. 2018).
In the other direction, differences in the average condensation temperatures of refractory (silicon, magnesium) and volatile (oxygen, carbon) bearing species cause these species to rain out at different altitudes. This can thus cause the chemical gradients in the planetary atmosphere - and its observable chemical abundances - to change as the planet loses its accretion energy (Stevenson et al. 2022). These internal processes can lead to difficulties in interpreting the chemical structure of old planets in the context of planet formation - depending on which molecular tracers are used. Young planets, on the other hand, still have the majority of their accretion heat and have not had enough time for many of these mixing processes to proceed. They thus provide an excellent test bed for studies linking the chemical properties of exoplanet atmospheres to planet formation physics.
Overview of the PDS 70 system
The PDS 70 system offers a unique look at the planet formation process. While the star's metallicity is similar to that of the Sun, so that one would expect a stellar C/O that is near solar, the detection of many carbon-rich molecular features suggests that the current disk C/O is higher - perhaps above unity. The chemical modelling presented here seems to support this; however at low C/O the inner disk contributes strongly to the C2H flux. This could be caused by two separate assumptions regarding the chemistry in the disk, which we discuss in Section 5.3.
Computing the classic core accretion scenario in the younger version of the PDS 70 disk has shown that building super-Jupiter-mass planets at the orbital radii of the known planets is possible; however it is very sensitive to the accretion history of the disk. Viscously accreting disks - having had to begin more centrally compact - struggle to accumulate enough material in the outer disk to build both giant planets. Here we have assumed that the initial mass of the planetary embryos is the smaller of the pebble isolation mass given by Bitsch et al. (2018) and the total quantity of solid mass exterior to the planet's starting location - assuming a gas-to-dust ratio of 100. As a result the initial core masses of our synthetic planets struggle to be large enough to draw down significant quantities of gas - particularly in the viscous model.
We have considered two scenarios responsible for driving the disk's C/O away from the assumed stellar value. The first is that the responsible physical/chemical process occurs earlier in the disk life than the era of planet formation and thus the planets form in the same chemical environment that we see today (ie. high C/O). The second scenario imagines that the process leading to high C/O occurs late in the disk life, at least after the bulk of planet formation has already occurred and thus the planets form in a low C/O environment. Not surprisingly the resulting atmospheric C/O ratios are sensitive to this choice, with synthetic planets growing in the low C/O environment having themselves a low atmospheric C/O (effectively stellar) while in the opposite case the resulting C/O is super-stellar.
Currently the measured C/O of PDS 70b and c are not well constrained, however the models of Wang et al. (2021) predict an atmospheric C/O for PDS 70b of between 0.54-0.7 depending on their choice of model. The sensitivity on the PDS 70c planet is worse than for PDS 70b, but using the same models Wang et al. (2021) find a range of C/O = 0.49-0.65. Clearly the current observations of the PDS 70 planets are not sufficiently sensitive to confidently differentiate between different models and future, more sensitive observations are required.
ISM abundances of carbon
In our chemical model we assume an ISM abundance of carbon, and modify the abundance of oxygen to set our disk C/O. We note that previous works exploring the link between C2H flux and volatile C/O, particularly Miotello et al. (2019) and Bosman et al. (2021), have found that they must deplete the carbon and oxygen abundances by a factor of 100 relative to the ISM in order to better match observed and modelled column densities/fluxes. Here we find that if we deplete the carbon and oxygen abundances we completely lose the C18O flux coming from the inner edge of the outer dust ring (see Appendix B). We can therefore say that a global depletion of carbon and oxygen is not consistent with the observations of Facchini et al. (2021).
In Figure B.3 we show the most abundant carbon carriers for the model in Figure 4(a) and a model with a similar setup, but with carbon and oxygen abundances reduced by a factor of 100 (ie. consistent with Miotello et al. 2019) and a gas mass enhanced by a factor of 100. There we see that when the carbon and oxygen abundance is reduced the primary carbon carrier becomes CH4, which freezes out onto the dust grains. This can be understood from a chemical kinetic perspective as follows: the production of CO scales with the abundance of both carbon and oxygen, and thus its production rate scales as n_C n_O ∝ n_X², where n_X denotes the gas 'metallicity'. Thus when the abundances of carbon and oxygen are each reduced by a factor of 100, the production of CO is slowed by a factor of 10,000. CH4, on the other hand, is produced at a rate that scales with n_X, since it contains only one 'heavy' element. It thus can become the dominant carbon carrier along the midplane of the disk at 5 Myr.
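A back-of-the-envelope check of this kinetic argument:

```python
# Order-of-magnitude kinetics of the argument above. CO formation is
# bimolecular in the heavy elements (rate ~ n_C * n_O ~ n_X^2), while CH4
# carries one heavy atom (rate ~ n_X).
depletion = 1e-2                  # C and O each reduced by a factor of 100
co_slowdown = depletion ** 2      # CO production slowed by 10,000x
ch4_slowdown = depletion          # CH4 production slowed by only 100x
print(co_slowdown, ch4_slowdown)  # 0.0001 0.01
```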
With that said, it is possible that local depletions (or enhancements) of the carbon and oxygen are occurring in the PDS 70 disk. For example, there is a very obvious dust trap outwards of the orbital position of PDS 70c which could be locally enhanced in carbon and oxygen ices because of the ongoing flux of volatile rich ices there. Meanwhile the rest of the outer disk could be subsequently depleted in heavy elements. Furthermore the lack of many optically thin lines in the inner disk could suggest that it too is depleted in carbon and oxygen -particularly given that our chemical models consistently predict a high C 2 H flux there, even at lower C/O. Further radially-dependent studies of the chemistry in the PDS 70 disk will be needed, along with upcoming high resolution ALMA data (Facchini et al. in prep) and new data from JWST.
Choice of chemical network
We followed the work of Miotello et al. (2019), using their chemical network that was first developed by Visser et al. (2018). The network contains 64 construction and destruction reactions for C2H with a variety of reaction partners. The carbon chains have a maximum of two carbon atoms (ie. C2H, C2H2, C2H3), which can result in erroneously large abundances of these species.
As an illustration, Wei et al. (2019) computed the chemistry in protoplanetary disks for C/O greater than unity. They generally found that the carbon that was not incorporated into CO tended to inhabit long-chain hydrocarbons and/or cyanides like HCN in their inner disk (r < 5 AU). If these longer chain hydrocarbons were available in our chemical model it is possible that the inner disk chemistry may have shifted away from C2H, but the emission in the outer disk would have been largely unaffected. We leave this investigation to future work.

Core accretion vs. gravitational instability

In this work we have used the standard picture of core accretion to build our synthetic planets. As discussed above this planet formation scheme struggled at times to build planets similar to PDS 70c - at least in the case of a disk that evolved via viscous evolution. We have neglected the other popular planet formation mechanism that often leads to the generation of massive planets - gravitational instability (GI, ie. Boss 1997).
Very few astrochemical studies of GI currently exist; however, Ilee et al. (2017) explored the chemical evolution of a pair of gravitationally unstable clumps in hydrodynamic simulations. They found that the clumps that survived throughout their whole simulation evolved to have the same atmospheric C/O as was initialised in their disk model. As such, we might expect that if PDS 70b and c formed through GI then they would have the same C/O as the disk.
If this is true then it would be difficult to differentiate between the different formation models for PDS 70c with C/O alone - because its C/O seems to replicate the disk C/O here. In that case we may need to include an additional tracer of planet formation such as the solid-to-volatile ratio (Schneider & Bitsch 2021), or the nitrogen abundance (Bosman et al. 2019; Bitsch et al. 2022). For PDS 70b there may be enough information encoded in the atmospheric C/O to differentiate the two formation mechanisms - at least if the disk C/O is high at the time of its formation.
In the future, it would be useful to compute a similar experiment as was done in Ilee et al. (2017) to reproduce the PDS 70 pair of planets. Such a numerical simulation would also help to understand how the two planets impact their mutual growth and chemical evolution. In this work we ignore any mutual interaction between the two planets in their growth, which is no doubt an over-simplification of the physical system.
Conclusion
We have modelled the chemical and physical structure of the PDS 70 disk in order to understand the environment in which the PDS 70b and PDS 70c planets formed. We find that the physical model of Keppler et al. (2018, 2019) does an adequate job reproducing the C18O and 12CO flux, and the evidence of a tenuous inner disk. The carbon-to-oxygen ratio in the volatiles of the PDS 70 disk is likely marginally above unity, based on the flux of C2H emission. The inner disk (r < 10 AU), however, is too bright in C2H, which may suggest that it is depleted in carbon and oxygen relative to the ISM-like abundances that we assume in this work. The current outer disk (at 5 Myr) is consistent with high abundances of carbon and oxygen, and we showed that if we deplete these elemental species we cannot reproduce the flux of C18O that comes from the inner edge of the outer dust ring.
To understand how the disk would have looked at the stage where planet formation began (at 1 Myr), we used analytic prescriptions for the evolution of the surface density and critical radius under the assumption that the disk evolved through viscosity or MHD disk winds. The disk models resulting from the different driving mechanisms have slightly different surface density profiles at an age of 1 Myr. The main difference between the two disk models is the quantity of material in the outer disk, which affects the formation of the synthetic planets around where the PDS 70 planets currently orbit.
We use a simple prescription for the growth of the PDS 70 planets, initialising their mass with the smaller of the pebble isolation mass and the total solid mass exterior to the embryo's initial orbital radius. We accrete the gas and account for the collection of carbon and oxygen from both the gas and the volatile-rich ices. Because it is unclear when the chemical processing of the volatiles occurred during the evolution of the disk, we test two scenarios. In the case that the disk during the early era of planet formation is chemically similar to how it is now, the planets turn out to have a high C/O ratio, while in the opposite case - when the initial disk composition resembled the host star - the planets tended to have stellar C/O. The lower C/O models may be a better match to our current understanding of the planetary C/O; however the planetary C/O is only weakly constrained by observations, which makes comparisons difficult. The PDS 70 system represents a fantastic environment to study the link between planet formation and the planet's natal disk. In particular, understanding the chemical properties of the disk offers a unique opportunity to understand how giant planets acquire the higher mass elements like carbon and oxygen. This study will benefit greatly from improved observational programs of the disk both at sub-mm wavelengths with ALMA and in the infrared with JWST, along with better constraints on the atmospheric C/O within the planets. Fortunately all of these studies are well under way, and so PDS 70 will continue to be an excellent system to study planet formation for years to come.
Appendix A: Planet formation details

Appendix A.1: Gas accretion

For this work we focus primarily on the overall chemistry of the atmospheres of the PDS 70b and c planets. As a result we ignore the initial build-up of the planetary core under the assumption that the core does not contribute significantly to the bulk chemistry of the atmosphere. The gas accretion is limited by a number of different mechanisms depending on the planet's current evolutionary stage. While the planet is still embedded within the protoplanetary disk (ie. has not yet opened a gap) we assume that the gas accretion rate is limited by either the Kelvin-Helmholtz (KH) timescale or the Bondi timescale - whichever is longer.
The KH timescale is related to the rate that the collapsing envelope can release its gravitational potential energy as heat. Its timescale has the functional form of (Alessi & Pudritz 2018):
τ_KH = 10⁷ yr (M_plnt / M_⊕)^(−2),   (A.1)
where the exponents are determined by comparing population synthesis models of planetesimal core formation to populations of known exoplanets. In principle the KH timescale depends directly on the opacity of the collapsing envelope; however this is not well constrained. The best fit exponents of Equation A.1 include variations over envelope opacity when comparing to the known exoplanet population. The Bondi radius describes the region of the disk where the gas is likely to be captured by the planet: within it, the kinetic energy of a gas parcel is less than its gravitational potential energy relative to the planet. The planet can thus not accrete more gas than is available within the Bondi radius or can be resupplied by viscous processes in the disk. The accretion timescale associated with accreting gas through the Bondi radius is (D'Angelo et al. 2010):
τ_B = (b Ω)^(−1) M_*² / (Σ r² M_plnt),   (A.2)
where r is the orbital radius of the planet, Ω is the Kepler orbital frequency, and b ≃ 2.6 is a constant meant to match this simple prescription to full 3D hydrodynamic simulations. The growth of the planet during the embedded phase is thus Ṁ_plnt = M_plnt / max(τ_KH, τ_B). When the planet becomes sufficiently massive its gravitational influence begins to dominate the surrounding gas over the viscous torques and gas pressure forces, at which point it opens a gap locally in the protoplanetary disk. The criterion for opening a gap in the disk is a planet of mass M_plnt such that (Crida et al. 2006):
(3/4) H / R_H + 50 / (q R) ≤ 1,   (A.3)
where R_H = r (M_plnt / (3 M_*))^(1/3) is the Hill radius of the planet, q = M_plnt/M_* is the planet-to-star mass ratio, R = r² Ω / ν is the Reynolds number, and ν = α c_s H is the gas viscosity under the standard α-prescription of Shakura & Sunyaev (1973).
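The embedded-phase growth and the gap-opening check can be combined into a short sketch, shown below with Equations A.1-A.3 in the forms given above; the constants and unit conventions noted in the comments are choices of this illustration.

```python
M_EARTH = 5.97e27        # g
YEAR = 3.156e7           # s

def tau_kh(m_plnt):
    """Kelvin-Helmholtz timescale of Eq. A.1 [yr]; m_plnt in g."""
    return 1.0e7 * (m_plnt / M_EARTH) ** (-2.0)

def tau_bondi(m_plnt, m_star, sigma, r, omega, b=2.6):
    """Bondi-limited timescale, Eq. A.2 in the form given above [yr].
    Inputs in cgs: masses [g], sigma [g cm^-2], r [cm], omega [s^-1]."""
    return m_star ** 2 / (b * omega * sigma * r ** 2 * m_plnt) / YEAR

def gap_is_open(m_plnt, m_star, H, r, nu, omega):
    """Gap-opening criterion of Eq. A.3 (Crida et al. 2006)."""
    q = m_plnt / m_star
    r_hill = r * (q / 3.0) ** (1.0 / 3.0)
    reynolds = r ** 2 * omega / nu
    return 0.75 * H / r_hill + 50.0 / (q * reynolds) <= 1.0

def embedded_growth_rate(m_plnt, m_star, sigma, r, omega):
    """Embedded-phase growth: dM/dt = M / max(tau_KH, tau_B) [g yr^-1]."""
    return m_plnt / max(tau_kh(m_plnt),
                        tau_bondi(m_plnt, m_star, sigma, r, omega))
```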
Once the gap opens the geometry of the accretion flow changes. This is mainly due to the fact that while the gravitational influence on the gas at the disk midplane is strong, straight above the midplane the gravitational force is necessarily weaker. As a result the gas can 'leak' across the edge of the gap and lose its pressure support - effectively free falling towards the midplane of the disk, and the planet. This gas motion, often called 'meridional flow' (for ex. in Morbidelli et al. 2014), has been found in high resolution studies of CO gas velocities in HD 163296 to coincide with the expected location of an embedded giant planet (Teague et al. 2019). Furthermore in numerical studies of disk gas hydrodynamics around giant planets, meridional flows have been found to contribute a large fraction (up to ∼ 90%, Szulágyi et al. 2014) of the gas flux into the planet's region of gravitational influence. Morbidelli et al. (2014) outlined the rate of gas accretion onto a growing planet that had recently opened a gap. Broadly speaking it is limited by the delivery of material to the outer edge of the gap - the disk accretion rate - however they describe a cycling of material driven by gas falling through a meridional flow into a decreting circumplanetary disk and back to the outer edge of the planet-induced gap. This recycled gas returns to hydrostatic equilibrium with the rest of the protoplanetary disk gas and can return to the meridional flow and the growing planet. This process evolves on a dynamic timescale rather than the viscous timescale, and thus their mass accretion rate into the gap follows:
Ṁ_gap = 8π (r/H) ν Σ_gas.   (A.4)
This accretion rate is faster than the disk equilibrium mass accretion rate (3πνΣ) by a factor of (8/3)(r/H), which, depending on the disk aspect ratio, can be between one and two orders of magnitude higher.
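A one-function sketch of Eq. A.4, with the aspect ratio H/r as the only structural input:

```python
import numpy as np

def mdot_gap(sigma_gas, nu, h_over_r):
    """Mass flux into the planet-opened gap, Eq. A.4 as given above:
    the equilibrium disk rate (3*pi*nu*Sigma) boosted by (8/3)(r/H)."""
    return 8.0 * np.pi * nu * sigma_gas / h_over_r

# For h_over_r between 0.03 and 0.1 the boost (8/3)(r/H) is ~27-89, i.e.
# one to two orders of magnitude above the equilibrium accretion rate.
```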
The efficiency with which the rate of mass accretion into the gap is transferred to an accretion rate onto the planet depends on the local flow around the planet. Batygin (2018) proposed that the magnetic field of the young planet acts to deflect incoming gas into the circumplanetary disk, reducing the accretion and growth efficiency. Cridland (2018) derived the connection between the magnetic field strength and the mass accretion efficiency, resulting in a scaling (Eq. A.5) of the mass accretion efficiency with the strength of the (assumed) magnetic dipole of the planet. We assume that the young planet has a (constant) magnetic field strength of 500 Gauss, which is two orders of magnitude above that of Jupiter but less than the typical magnetic field strength of ∼1000 K brown dwarfs. Interpolating the magnetic field strength in this way is consistent with our general understanding of the geo-dynamo and solar-dynamo (Christensen et al. 2009). We assume a constant planetary radius of R_plnt = 2 R_Jupiter during this phase as it is a nominal size of young, self-luminous planets with masses greater than Saturn as reported by Mordasini et al. (2015). Cridland (2018) explored the impact of the planetary size on the final planet mass due to gas accretion through the above mechanism and found that final masses only varied by a factor of a few depending on whether the planet was a 'cold-start' - planetary radius equal to Jupiter's current radius - or a 'hot-start' planet with a radius three times larger than Jupiter's current radius.
We employ standard planet formation methods in computing the growth of the synthetic PDS 70 planets. For simplicity, however, we keep the disk physical and chemical properties constant throughout the planet formation process. This ignores the fact that the disk generally cools and becomes less dense as a function of time. At later times, as the gas is accreted onto the host star, the surface density of the gas drops and the gas accretion rate onto the planet is reduced through Equation A.2. The drop in mass accretion rate will be a particular problem in the viscous model because it already predicts low-mass giant planets. The wind model, on the other hand, may better predict the masses of the b and c planets if the accretion rate is lower at later times.
Appendix A.2: Planet migration
Our gas accretion simulations begin with a planetary core mass that is set to the smaller of the pebble isolation mass (Bitsch et al. 2018) and the total dust mass exterior to the initial radius. These masses are a factor of a few lower than the gas gap opening mass (Crida et al. 2006) and thus Type-I migration will not have a large impact on the overall results of our work. We keep the initial core stationary until it reaches the gas gap opening mass, at which point we allow it to migrate via Type-II migration (Lin & Papaloizou 1986). Our choice of ignoring the Type-I phase (ie. Ward 1991) of planet migration is related to our ignoring the chemical impact that the planet core might have on the chemical properties of the atmosphere. As such, where the core is built is less important to the overall chemical properties of the atmosphere, and thus we ignore Type-I migration and the early build-up of the core for simplicity.
Once the young planet has reached a mass that satisfies Equation A.3 we begin to evolve its orbital radius through standard Type-II migration. In Type-II migration, the planet has opened a gap in the gas, which effectively changes the way in which the protoplanetary disk can transport its angular momentum. Because of its connection to the angular momentum transport - typically assumed to be due to viscous evolution in the gas - we assume that Type-II migration evolves on the viscous timescale:
$\dot{a}_{\rm plnt} = a_{\rm plnt} / \tau_\nu$, (A.6)
where the viscous time $\tau_\nu = a_{\rm plnt}^2/\nu$ and $\nu = \alpha c_s H$ follows the standard $\alpha$-prescription of Shakura & Sunyaev (1973). The gas viscosity depends on the gas scale height $H$ and the gas sound speed $c_s$. At a certain point Type-II migration stalls, when the planet exceeds a critical mass $M_{\rm crit}$, the total mass of the disk gas inward of the planet's current orbital radius. After the planet passes this mass, its migration time becomes $\tau_\nu \rightarrow \tau_\nu (1 + M_{\rm plnt}/M_{\rm crit})$.
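The following Python sketch collects these quantities into a single migration-timescale estimate. It only illustrates equation A.6 and the stalling factor as written here; the parameter names, the SI units, and the simple ideal-gas sound speed (the expression reconstructed below) are our assumptions, not the paper's actual implementation.

```python
import numpy as np

R_GAS = 8.314  # molar gas constant [J mol^-1 K^-1]

def sound_speed(T, mu=2.3e-3, gamma=1.0):
    """Ideal-gas sound speed c_s = sqrt(gamma * R * T / mu).
    mu is the mean molecular weight in kg/mol (~2.3e-3 for H2/He gas)."""
    return np.sqrt(gamma * R_GAS * T / mu)

def type2_migration_time(a, T, omega, alpha, m_planet, m_crit):
    """Type-II migration timescale tau_nu = a**2 / nu with nu = alpha*c_s*H,
    including the slow-down factor (1 + M_plnt/M_crit) once the planet
    outweighs the disk gas inward of its orbit. All inputs in SI units."""
    cs = sound_speed(T)
    H = cs / omega          # vertical scale height of the gas
    nu = alpha * cs * H     # alpha-viscosity
    tau_nu = a**2 / nu
    return tau_nu * (1.0 + m_planet / m_crit)

# Inward migration rate, magnitude as in eq. A.6: da/dt = -a / tau.
def migration_rate(a, tau):
    return -a / tau
```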
Updated hydrodynamic models concerning Type-II migration have shown that the inner and outer disks are not perfectly separated by the planet-induced gap and some gas crosses the gap (Dürmann & Kley 2015). The gas crossing the gap can also be accreted by the growing planet (Dürmann & Kley 2017) which reduces the efficiency of the gap crossing. Robert et al. (2018) showed that even given these complications the rate of Type-II migration continues to be proportional to the gas viscosity, which justifies the use of Equation A.6 in the face of more complex hydrodynamic processes. The proportionality is related to the fact that if the planet migrates on a shorter timescale than the viscous timescale, the gas 'ahead' of the migrating planet will be compressed by the planetary torques while the space 'behind' the planet becomes evacuated. As a result the inner/outer torque would be strengthened/weakened, slowing the migration rate.
The DALI model outputs the gas sound speed throughout the disk; for an ideal gas it has the form $c_s = \sqrt{\gamma R T/\mu}$, for an adiabatic constant $\gamma = 1$, gas constant $R$, gas temperature $T$, and mean molecular weight $\mu$ of the gas.

The observations are not consistent with a global depletion of the carbon and oxygen abundances by a factor of 100, which is the depletion adopted by Miotello et al. (2019). Here we find a drop in C2H flux by nearly an order of magnitude (panel c), and more so when the gas (and dust) density is enhanced (panel d). Given that we have focused on a model with effectively no inner disk (we use the disk model with δ_gas = δ_dust = 10^−15; Figure B.1(a)), we can say that the outer disk is in general inconsistent with a global depletion of carbon and oxygen.
A global depletion of the carbon and oxygen abundance is not consistent with the observed radial profile of Facchini et al. (2021); however, it could help to explain the unusually high C2H flux from the inner disk of our model (< 6 AU here). If the inner disk is locally depleted in carbon and oxygen, then it would not emit in C2H as brightly as we are predicting in this work. This depletion could be caused by the inefficient transport of material from the outer disk, due to the elements being sequestered in the dust and trapped in the outer disk, where observations seem to prefer ISM-like abundances of carbon and oxygen. A future study on the radial distribution of gas and dust, along with their volatile abundances, would be a useful way of furthering our understanding of the PDS 70 system.
To understand the sudden drop in C18O flux when the elemental abundances of carbon and oxygen are reduced, we show the primary carbon carriers for a pair of models in figure B.3. The figure shows the relative number of the most abundant carbon carriers in the volatile and ice (labelled with a 'J') phases in both models. There we see that when the abundances of carbon and oxygen are reduced by a factor of 100 compared to the ISM (right panel), the most dominant carbon carrier in the disk near the dust ring (50-70 AU) is CH4, which remains frozen on the dust grains near the midplane. We understand this shift in the context of the rate of formation of CO, which depends on the number density of both carbon and oxygen. CH4, meanwhile, only depends on the number density of carbon alone, and thus when the abundances of carbon and oxygen are reduced, the reduction in production rate that follows is less drastic for CH4 than it is for CO.
Fig. 1: Examples of possible models of the gas and dust distributions to be fed into DALI. Shown is the number volume density defined as n_gas = Σ_gas/(√(2π) H) for the gas. For the dust, n_dust is defined to ensure the correct dust-to-gas mass ratio. The solid and dashed curves show the effect of depleting the inner disk by constant factors δ_gas and δ_dust, while the orange curve shows the impact of adding a planet-induced gas gap of a prescribed depth δ_planet.
Fig. 2: The vertical distribution of the gas and dust at a radius of ∼80 AU, outward of the dust cavity outer edge. The orange dashed line represents the different distributions of the large and small dust (eq. 3).
Fig. 3: The radial profile observed by Facchini et al. (2021) using ALMA. The grey band shows the approximate location of the (mm) dust ring according to Keppler et al. (2019). The line fluxes peak in slightly different positions, with the optically thin tracers peaking near the edge of the dust cavity and the optically thick tracers peaking inside the cavity.
Fig. 4: Radial profiles of C18O emission for three different disk models compared to the observational data of Facchini et al. (2021). The grey band shows the width of the major axis of the beam. In each case the relevant parameters are listed in the top right corner of the figure. From left to right: in model (a) the inner disk is almost completely removed, and there is effectively no gas or dust from R_cav inward. In model (b) the inner disk is replaced, with the noted depletions of gas and dust. Finally, in model (c) the same inner disk parameters are used; however, between R_gap and R_cav the gas is depleted by a further factor.
Fig. 5: The radial profile of the 12CO emission due to the model presented in Figure 4(c).
that have C/O above unity. Given what we find in comparing the radial distribution of the line emission and the comparisons to other
Fig. 7: The same as figure 6 but for the model where C2H is artificially removed in the inner 40 AU. The C/O = 0.4 model (b) does not show a large change; however, in both of the carbon-rich models we find that the inner disk is strongly contributing to the C2H emission. Both the C/O = 1.01 (a) and C/O = 1.1 (c) models show similar C2H flux profiles.
Fig. 8: The midplane gas density profiles (solid lines) used for planet formation compared to the 'current' (at 5 Myr) density profile (green). The 'Wind' (orange) and 'Viscous' (blue) models are related to the turnback models presented in
Fig. 9: Evolution of synthetic planets in the disk model given by disk-wind evolution (a) and viscous evolution (b). The colour of each point denotes a particular timestep since the beginning of the simulation (t_0 = 1 Myr). The faded circles are synthetic planets that do not result in PDS 70-like planets at a time of 4 Myr after the beginning of the simulation. The square points show the successful PDS 70b synthetic planets, while the diamonds show the successful PDS 70c planets. The black circles and error bars are the values for PDS 70 b and c as shown in Table 1. The grey lines connect the points from individual synthetic planets. The wind model can reproduce both b and c planets, although it overestimates the mass of planet b. The viscous model cannot reproduce the c planet and underestimates the radial location of the b planet. The inset of panel (b) shows the collection of successful synthetic planets for both the viscous and wind models. See the text for more details.
Table 5: The atmospheric carbon-to-oxygen ratios for the synthetic planets. Notes. Each row denotes a different combination of disk model and chemical model. As seen in Figure 9, our viscous disk model fails to produce a satisfactory planet c. Where possible, the range of C/O from individual synthetic planets is shown to the 3rd decimal place.
Fig. B.1: Test of depleting carbon and oxygen abundances in the PDS 70 disk, using C18O emission. Each panel represents a different model, and their relevant parameters are noted. Panels (a) and (c) use the fiducial critical surface density, as in the preferred model in the main text. Panels (b) and (d) have enhanced critical densities, by the same factor as the abundances of carbon and oxygen have been depleted. Clearly a depleted abundance is inconsistent with observations.
Fig. B.2: Same as Figure B.1 but for C2H emission. Unlike in the C18O case, the emission of C2H can remain strong even with depleted abundances. This is consistent with the finding of Miotello et al. (2019).
Fig. B.3: The most abundant species in the model presented in figure 4(a) (left panel) and the model presented in figure B.1(d) (right panel), for a disk age of 5 Myr. Molecules with a 'J' denote species that are frozen onto dust grains. The abundances are integrated between 50 AU and 70 AU (i.e., near the mm dust ring) and from the midplane to z/r = 0.43. It therefore includes the main reservoir of CO that contributes to the C18O flux. It is clear that when the abundances of carbon and oxygen are reduced, the main reservoir of carbon is shifted towards hydrocarbons that are frozen onto the dust.
Fig. C.1: A comparison between the observed 13CO flux and that computed by the preferred model. 13CO appears to be partially optically thick, as its peak flux is shifted outward relative to the observed flux, similarly to the 12CO flux shown in figure 5.
Table 1: Planetary properties derived by Wang et al. (2021).
Notes. The values are based on the requirement of a dynamically stable system over the age of the system. The uncertainties are based on their 95% confidence intervals.

                       PDS 70b            PDS 70c
Mass (M_Jup)           3.2 (+8.4/−2.1)    7.5 (+7.0/−6.1)
Semi-major axis (AU)   20.8 (+1.3/−1.1)   34.3 (+4.6/−3.0)
Table 2: Range of disk parameters used to construct the current disk model.

Parameter    Range of values
Σ            2.87 g cm−3
             40 AU
             0.968
             0.2
             1
             0.25
Table 3: Initial abundances (per number of hydrogen atoms) used in the fiducial DALI chemical calculation.

Species   Abundance
H2        0.5
He        0.14
NH3

The isotopic ratios of carbon and oxygen in the ISM that are relevant to our work are [12C]/[13C] = 77 and [16O]/[18O] = 560.
Table 4: Parameters used in setting the disk properties at a disk age of 1 Myr.
Acknowledgements. Thanks to the anonymous referee for their comments, which greatly improved the clarity of this work. Thanks to the ExoGRAVITY team for stimulating discussion and observations that triggered this study. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA).

Appendix B: Depleting carbon and oxygen abundance

As mentioned in the main text, a common finding in studies focused on C2H flux is that the carbon and oxygen abundances need to be depleted by up to a factor of 100 relative to the ISM value in order to match CO isotopologue and H2O observations. The depletion is expected to occur because of the freeze-out of CO and H2O onto dust grains that subsequently settle to the midplane, 'hiding' the bulk of the volatile abundance of carbon and oxygen. Additional chemical reactions transform CO into other species like CO2, CH3OH, and other hydrocarbons (Bosman et al. 2018b; Krijt et al. 2020). We have tested how this depletion affects the flux of both C18O and C2H in order to better understand our chemical picture.

Figure B.1 shows the C18O flux from a series of models that are meant to test whether depleting the carbon and oxygen abundances is consistent with observations. In panels (a) and (c) the abundances are depleted, but the density is kept the same as in the model presented in Figure 4(a). Here we see that we completely lose the C18O flux at the edge of the dust ring. In order to possibly recover the flux, we enhance the gas density in the disk by the same factor as the carbon and oxygen abundances are depleted. These tests are shown in panels (b) and (d) and generally show that gas enhancements cannot recover the missing flux caused by depleting the carbon and oxygen abundances.

In figure B.2 we show the C2H flux for the same disk models presented in figure B.1. Here we see that the C2H flux is not greatly impacted by a factor of 10 depletion of the carbon abundance compared to the fiducial disk model (panel a), and is even weakened when the gas density is enhanced by a factor of 10 (panel b). The reduced flux is related to the equal increase in the dust density in the models where we enhance the critical surface density, since we keep the dust-to-gas ratio constant when we enhance the gas density. The extra dust acts as an extra source of opacity, particularly at the edge of the dust ring, weakening the C2H flux.

Appendix C: Radial profile of 13CO J = 2−1

In figure C.1 we show the radial profile of the 13CO emission generated by the preferred model. This line is not used in selecting the preferred model; however, we include it here for completeness. We find the peak flux location is slightly shifted outward, similarly to 12CO, while the peak flux is slightly overestimated. This suggests that its emission is slightly optically thick, but not to the same extent as the 12CO flux.
Alessi, M. & Pudritz, R. E. 2018, MNRAS, 478, 2599
Anderson, D. E., Bergin, E. A., Blake, G. A., et al. 2017, ApJ, 845, 13
Andrews, S. M., Wilner, D. J., Espaillat, C., et al. 2011, ApJ, 732, 42
Ansdell, M., Williams, J. P., van der Marel, N., et al. 2016, ApJ, 828, 46
Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
Bae, J., Zhu, Z., Baruteau, C., et al. 2019, ApJL, 884, L41
Batygin, K. 2018, AJ, 155, 178
Beaugé, C. & Nesvorný, D. 2012, ApJ, 751, 119
Benisty, M., Bae, J., Facchini, S., et al. 2021, ApJL, 916, L2
Bergin, E. A., Blake, G. A., Ciesla, F., Hirschmann, M. M., & Li, J. 2015, Proceedings of the National Academy of Science, 112, 8965
Bergin, E. A., Du, F., Cleeves, L. I., et al. 2016, ApJ, 831, 101
Binkert, F. & Birnstiel, T. 2023, MNRAS, 520, 2055
Bitsch, B., Morbidelli, A., Johansen, A., et al. 2018, A&A, 612, A30
Bitsch, B., Schneider, A. D., & Kreidberg, L. 2022, A&A, 665, A138
Booth, R. A., Clarke, C. J., Madhusudhan, N., & Ilee, J. D. 2017, MNRAS, 469, 3994
Bosman, A. D., Alarcón, F., Bergin, E. A., et al. 2021, ApJS, 257, 7
Bosman, A. D., Cridland, A. J., & Miguel, Y. 2019, A&A, 632, L11
Bosman, A. D., Tielens, A. G. G. M., & van Dishoeck, E. F. 2018a, A&A, 611, A80
Bosman, A. D., Walsh, C., & van Dishoeck, E. F. 2018b, A&A, 618, A182
Boss, A. P. 1997, Science, 276, 1836
Bruderer, S. 2013, A&A, 559, A46
Bruderer, S., van Dishoeck, E. F., Doty, S. D., & Herczeg, G. J. 2012, A&A, 541, A91
Chabrier, G. & Baraffe, I. 2007, ApJL, 661, L81
Chambers, J. E. 2009, ApJ, 705, 1206
Charnay, B., Bézard, B., Baudino, J. L., et al. 2018, ApJ, 854, 172
Christensen, U. R., Holzwarth, V., & Reiners, A. 2009, Nature, 457, 167
Ciesla, F. J. & Cuzzi, J. N. 2006, Icarus, 181, 178
Crida, A., Morbidelli, A., & Masset, F. 2006, Icarus, 181, 587
Cridland, A. J. 2018, A&A, 619, A165
Cridland, A. J., Bosman, A. D., & van Dishoeck, E. F. 2020a, A&A, 635, A68
Cridland, A. J., Pudritz, R. E., & Alessi, M. 2016, MNRAS, 461, 3274
Cridland, A. J., van Dishoeck, E. F., Alessi, M., & Pudritz, R. E. 2020b, A&A, 642, A229
Cugno, G., Patapis, P., Stolker, T., et al. 2021, A&A, 653, A12
Currie, T., Lawson, K., Schneider, G., et al. 2022, Nature Astronomy, 6, 751
D'Angelo, G., Durisen, R. H., & Lissauer, J. J. 2010, Giant Planet Formation, ed. S. Seager, 319-346
Du, F., Bergin, E. A., & Hogerheijde, M. R. 2015, ApJL, 807, L32
Dürmann, C. & Kley, W. 2015, A&A, 574, A52
Dürmann, C. & Kley, W. 2017, A&A, 598, A80
Eistrup, C., Walsh, C., & van Dishoeck, E. F. 2016, A&A, 595, A83
Eistrup, C., Walsh, C., & van Dishoeck, E. F. 2018, A&A, 613, A14
Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021, A&A, 656, A69
Facchini, S., Teague, R., Bae, J., et al. 2021, AJ, 162, 99
Favre, C., Cleeves, L. I., Bergin, E. A., Qi, C., & Blake, G. A. 2013, ApJL, 776, L38
Fossati, L., Haswell, C. A., Froning, C. S., et al. 2010, ApJL, 714, L222
Fossati, L., Koskinen, T., Lothringer, J. D., et al. 2018, ApJL, 868, L30
Gaia Collaboration. 2020, VizieR Online Data Catalog, I/350
García Muñoz, A. 2007, Planet. Space Sci., 55, 1426
Gomes, R., Levison, H. F., Tsiganis, K., & Morbidelli, A. 2005, Nature, 435, 466
Gregorio-Hetem, J., Lepine, J. R. D., Quast, G. R., Torres, C. A. O., & de La Reza, R. 1992, AJ, 103, 549
Guillot, T., Stevenson, D. J., Hubbard, W. B., & Saumon, D. 2004, in Jupiter. The Planet, Satellites and Magnetosphere, ed. F. Bagenal, T. E. Dowling, & W. B. McKinnon, Vol. 1, 35-57
Haffert, S. Y., Bohn, A. J., de Boer, J., et al. 2019, Nature Astronomy, 3, 749
Hamers, A. S. & Tremaine, S. 2017, AJ, 154, 272
Hashimoto, J., Dong, R., Kudo, T., et al. 2012, ApJL, 758, L19
Hashimoto, J., Tsukagoshi, T., Brown, J. M., et al. 2015, ApJ, 799, 43
Heays, A. N., Bosman, A. D., & van Dishoeck, E. F. 2017, A&A, 602, A105
Hoeijmakers, H. J., Schwarz, H., Snellen, I. A. G., et al. 2018, A&A, 617, A144
Hunten, D. M., Pepin, R. O., & Walker, J. C. G. 1987, Icarus, 69, 532
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
Ilee, J. D., Forgan, D. H., Evans, M. G., et al. 2017, MNRAS, 472, 189
Isella, A., Benisty, M., Teague, R., et al. 2019, ApJL, 879, L25
Kama, M., Bruderer, S., van Dishoeck, E. F., et al. 2016, A&A, 592, A83
Keppler, M., Benisty, M., Müller, A., et al. 2018, A&A, 617, A44
Keppler, M., Teague, R., Bae, J., et al. 2019, A&A, 625, A118
Klarmann, L., Ormel, C. W., & Dominik, C. 2018, A&A, 618, L1
Kordopatis, G., Gilmore, G., Steinmetz, M., et al. 2013, AJ, 146, 134
Koskinen, T. T., Harris, M. J., Yelle, R. V., & Lavvas, P. 2013, Icarus, 226, 1678
Krijt, S., Bosman, A. D., Zhang, K., et al. 2020, ApJ, 899, 134
Leconte, J. & Chabrier, G. 2012, A&A, 540, A20
Leemker, M., Booth, A. S., van Dishoeck, E. F., et al. 2022, A&A, 663, A23
Lin, D. N. C. & Papaloizou, J. 1986, ApJ, 309, 846
Long, Z. C., Akiyama, E., Sitko, M., et al. 2018, ApJ, 858, 112
Lubow, S. H. & D'Angelo, G. 2006, ApJ, 641, 526
Lynden-Bell, D. & Pringle, J. E. 1974, MNRAS, 168, 603
Madhusudhan, N. 2019, ARA&A, 57, 617
McElroy, D., Walsh, C., Markwick, A. J., et al. 2013, A&A, 550, A36
Metchev, S. A., Hillenbrand, L. A., & Meyer, M. R. 2004, ApJ, 600, 435
Miotello, A., Facchini, S., van Dishoeck, E. F., et al. 2019, A&A, 631, A69
Miotello, A., Kamp, I., Birnstiel, T., Cleeves, L. I., & Kataoka, A. 2022, arXiv e-prints, arXiv:2203.09818
Miotello, A., Testi, L., Lodato, G., et al. 2014, A&A, 567, A32
Miotello, A., van Dishoeck, E. F., Kama, M., & Bruderer, S. 2016, A&A, 594, A85
Moll, R., Garaud, P., Mankovich, C., & Fortney, J. J. 2017, ApJ, 849, 24
Morbidelli, A., Szulágyi, J., Crida, A., et al. 2014, Icarus, 232, 266
Mordasini, C., Klahr, H., Alibert, Y., Miller, N., & Henning, T. 2014, A&A, 566, A141
Mordasini, C., Mollière, P., Dittkrist, K.-M., Jin, S., & Alibert, Y. 2015, International Journal of Astrobiology, 14, 201
Mordasini, C., van Boekel, R., Mollière, P., Henning, T., & Benneke, B. 2016, ApJ, 832, 41
Murray-Clay, R. A., Chiang, E. I., & Murray, N. 2009, ApJ, 693, 23
Öberg, K. I., Guzmán, V. V., Walsh, C., et al. 2021, ApJS, 257, 1
Öberg, K. I., Murray-Clay, R., & Bergin, E. A. 2011, ApJL, 743, L16
Owen, J. E. & Jackson, A. P. 2012, MNRAS, 425, 2931
Owen, J. E. & Lai, D. 2018, MNRAS, 479, 5012
Pollack, J. B. 1984, ARA&A, 22, 389
Pollack, J. B., Hubickyj, O., Bodenheimer, P., et al. 1996, Icarus, 124, 62
Qi, C., Öberg, K. I., & Wilner, D. J. 2013a, ApJ, 765, 34
Qi, C., Öberg, K. I., Wilner, D. J., et al. 2013b, Science, 341, 630
Riaud, P., Mawet, D., Absil, O., et al. 2006, A&A, 458, 317
Robert, C. M. T., Crida, A., Lega, E., Méheut, H., & Morbidelli, A. 2018, A&A, 617, A98
Schneider, A. D. & Bitsch, B. 2021, A&A, 654, A72
Schwarz, K. R., Bergin, E. A., Cleeves, L. I., et al. 2018, ApJ, 856, 85
Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 24, 337
Shara, M. M., Hurley, J. R., & Mardling, R. A. 2016, ApJ, 816, 59
Sing, D. K., Lavvas, P., Ballester, G. E., et al. 2019, AJ, 158, 91
Soubiran, F., Militzer, B., Driver, K. P., & Zhang, S. 2017, Physics of Plasmas, 24, 041401
Steinmetz, M., Guiglion, G., McMillan, P. J., et al. 2020, AJ, 160, 83
Stevenson, D. J. 1982, Planet. Space Sci., 30, 755
Stevenson, D. J. 1985, Icarus, 62, 4
Stevenson, D. J., Bodenheimer, P., Lissauer, J. J., & D'Angelo, G. 2022, PSJ, 3, 74
Sturm, J. A., McClure, M. K., Harsono, D., et al. 2022, A&A, 660, A126
Suárez-Andrés, L., Israelian, G., González Hernández, J. I., et al. 2018, A&A, 614, A84
Szulágyi, J., Morbidelli, A., Crida, A., & Masset, F. 2014, ApJ, 782, 65
Tabone, B., Rosotti, G. P., Cridland, A. J., Armitage, P. J., & Lodato, G. 2022, MNRAS, 512, 2290
Teague, R., Bae, J., & Bergin, E. A. 2019, Nature, 574, 378
van der Marel, N., van Dishoeck, E. F., Bruderer, S., et al. 2016, A&A, 585, A58
van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22
van 't Hoff, M. L. R., Walsh, C., Kama, M., Facchini, S., & van Dishoeck, E. F. 2017, A&A, 599, A101
Vazan, A., Helled, R., & Guillot, T. 2018, A&A, 610, L14
Vazan, A., Helled, R., Kovetz, A., & Podolak, M. 2015, ApJ, 803, 32
Vazan, A., Helled, R., Podolak, M., & Kovetz, A. 2016, ApJ, 829, 118
Virtanen, P., Gommers, R., Burovski, E., et al. 2020, scipy/scipy: SciPy 1.5.3, Zenodo
Visser, R., Bruderer, S., Cazzoletti, P., et al. 2018, A&A, 615, A75
Wahl, S. M., Hubbard, W. B., Militzer, B., et al. 2017, Geophys. Res. Lett., 44, 4649
Wang, J. J., Vigan, A., Lacour, S., et al. 2021, AJ, 161, 148
Wang, Y.-H., Leigh, N. W. C., Perna, R., & Shara, M. M. 2020, ApJ, 905, 136
Wang, Y.-H., Perna, R., Leigh, N. W. C., & Shara, M. M. 2022, MNRAS, 509, 5253
Ward, W. R. 1991, in Lunar and Planetary Inst. Technical Report, Vol. 22, Lunar and Planetary Science Conference
Wei, C.-E., Nomura, H., Lee, J.-E., et al. 2019, ApJ, 870, 129
Wilson, T. L. & Rood, R. 1994, ARA&A, 32, 191
Woodall, J., Agúndez, M., Markwick-Kemper, A. J., & Millar, T. J. 2007, A&A, 466, 1197
Yelle, R. V. 2004, Icarus, 170, 167
Zhou, Y., Sanghi, A., Bowler, B. P., et al. 2022, ApJL, 934, L13
| [] |
[
"The crime of being poor",
"The crime of being poor"
] | [
"Georgina Curto [email protected] \nUniversity of Notre Dame\nNotre Dame\nUSA\n",
"Svetlana Kiritchenko [email protected] \nNational Research Council Canada\nOttawaCanada\n",
"Isar Nejadgholi [email protected] \nNational Research Council Canada\nOttawaCanada\n",
"Kathleen C Fraser [email protected] \nNational Research Council Canada\nOttawaCanada\n"
] | [
"University of Notre Dame\nNotre Dame\nUSA",
"National Research Council Canada\nOttawaCanada",
"National Research Council Canada\nOttawaCanada",
"National Research Council Canada\nOttawaCanada"
] | [] | The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable. NGOs and international organizations claim that the poor are blamed for their situation, are more often associated with criminal offenses than the wealthy strata of society and even incur criminal offenses simply as a result of being poor. While no evidence has been found in the literature that correlates poverty and overall criminality rates, this paper offers evidence of a collective belief that associates both concepts. This brief report measures the societal bias that correlates criminality with the poor, as compared to the rich, by using Natural Language Processing (NLP) techniques in Twitter. The paper quantifies the level of crime-poverty bias in a panel of eight different English-speaking countries. The regional differences in the association between crime and poverty cannot be justified based on different levels of inequality or unemployment, which the literature correlates to property crimes. The variation in the observed rates of crime-poverty bias for different geographic locations could be influenced by cultural factors and the tendency to overestimate the equality of opportunities and social mobility in specific countries. These results have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the poor but on society as a whole. Acting on the collective bias against the poor would facilitate the approval of poverty reduction policies, as well as the restoration of the dignity of the persons affected. | 10.48550/arxiv.2303.14128 | [
"https://export.arxiv.org/pdf/2303.14128v1.pdf"
] | 257,757,054 | 2303.14128 | 41cb30662c083f06a596f82c24c17818106013e3 |
The crime of being poor
Georgina Curto [email protected]
University of Notre Dame
Notre Dame
USA
Svetlana Kiritchenko [email protected]
National Research Council Canada
OttawaCanada
Isar Nejadgholi [email protected]
National Research Council Canada
OttawaCanada
Kathleen C Fraser [email protected]
National Research Council Canada
OttawaCanada
The crime of being poor
The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable. NGOs and international organizations claim that the poor are blamed for their situation, are more often associated with criminal offenses than the wealthy strata of society and even incur criminal offenses simply as a result of being poor. While no evidence has been found in the literature that correlates poverty and overall criminality rates, this paper offers evidence of a collective belief that associates both concepts. This brief report measures the societal bias that correlates criminality with the poor, as compared to the rich, by using Natural Language Processing (NLP) techniques in Twitter. The paper quantifies the level of crime-poverty bias in a panel of eight different English-speaking countries. The regional differences in the association between crime and poverty cannot be justified based on different levels of inequality or unemployment, which the literature correlates to property crimes. The variation in the observed rates of crime-poverty bias for different geographic locations could be influenced by cultural factors and the tendency to overestimate the equality of opportunities and social mobility in specific countries. These results have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the poor but on society as a whole. Acting on the collective bias against the poor would facilitate the approval of poverty reduction policies, as well as the restoration of the dignity of the persons affected.
Artificial Intelligence (AI) provides insights that can trigger innovative interventions towards the UN Sustainable Development Goals (SDGs) (Vinuesa et al., 2020). The #1 UN SDG is the "end of poverty in all its forms everywhere", and there is an urgent call to find alternative paths to fight against poverty. The pace of global poverty reduction has been slowing over the last decades, and the Covid-19 pandemic has erased the last four years of poverty mitigation. Rising inflation and the impact of the war in Ukraine derail the process of poverty mitigation even further (The World Bank, 2023). Worldwide, the World Bank estimates that 685 M people are living below the US$2.15 a day poverty line (The World Bank, 2023). Poverty is a worldwide problem that affects not only the population in developing regions but also a significant percentage of people living in countries with thriving economies: in the United States, 11.6% of the population (37.9 M people) are in a situation of poverty and 18.5 M live in extreme poverty (United Nations, 2018).
The term undeserving poor describes the difficulty for policy makers to approve and implement poverty reduction policies when the poor are blamed for their situation, since these policies are not popular (Nunn and Biressi, 2009). Therefore, the blamefulness of the poor could have an impact on actual poverty levels. Curto et al. (2022) provided evidence of bias against the poor, or aporophobia (Cortina, 2017), in Google, Twitter and Wikipedia word embeddings. This Natural Language Processing approach in Machine Learning has been widely used to identify biases within AI by representing words as vectors and measuring meaningful semantic relationships among them. It also allows us to reflect on societal biases, since the historical data used in AI has been produced by humans in real-world scenarios.
Caricatured narratives that portray the rich as industrious and entrepreneurial and the poor as wasters and scammers are at the root of this bias, which has been described as particularly prevalent in the United States (United Nations, 2018). The criminalization of poverty refers to a discriminatory phenomenon where the poor are both associated with criminality and, at the same time, punished for being poor, generating a vicious circle.
The criminal offenses devised for sleeping rough in many cities of the so-called developed countries are an example of a legalized punishment for being poor, since they affect the homeless directly.
It must be highlighted that, to date, no evidence has been found that supports an association between poverty and overall criminality rates. This correlation is complex to establish due to the multidimensional nature of poverty, the diversity of crimes (including violent and non-violent), the method used to identify them (self-reports and official reports), the potential police bias in officially reported crime rates, the different indicators of poverty used to study the potential correlation, the ages of the population in the sample, and the geographic scope (Thornberry and Farnworth, 1982). However, recent studies report that factors such as a high poverty headcount ratio, high income inequality, and unemployment could have an impact on specific types of crimes, namely those related to property (Imran et al., 2018; Anser et al., 2020).
While the criminalization of poverty has been widely denounced as an instance of bias against the poor, or aporophobia, empirical evidence of this collective prejudice was missing in the literature. This brief report aims to fill this gap. By investigating how poor people are viewed in society through the analysis of social media texts, namely tweets, we discover that poor and homeless people are often discussed in association with criminality. We devise a metric, the Crime-Poverty Bias (CPB), as the difference between the percentage of utterances mentioning criminality and the poor and the percentage of utterances mentioning criminality and the rich, and compare the CPB on Twitter for eight English-speaking countries. We present the CPB results per country together with the factors which, according to the literature, could influence an increase in the property crime rate, namely poverty, unemployment and inequality rates. The purpose of the study is to provide empirical evidence on the criminalization of poverty and shed some light on the reasons behind it. We aim to answer the following questions: Can the differences in the association between poverty and criminality in different countries be justified based on the respective indicators of poverty, inequality, unemployment and criminality? Or are these differences due to a crime-poverty bias? The results have an impact on policy-making, since the CPB can hinder the acceptance of poverty mitigation policies by public opinion.
Method
We used the Twitter API to collect tweets in English from 25 August 2022 to 23 November 2022, pertaining to two groups: 'poor' and 'rich'. Since tweets can be up to 280 characters and include several sentences, we split each tweet into individual sentences. The corpus 'poor' comprises tweet sentences that contain the terms the poor (used as a noun as opposed to an adjective, as in 'the poor performance'), poor people, poor ppl, poor folks, poor families, homeless, on welfare, welfare recipients, low-income, underprivileged, disadvantaged, lower class. We excluded explicitly offensive terms that tend to be used in personal insults, such as trailer trash or scrounger. We also collected tweets related to the group 'rich', using query terms the rich (used as a noun), rich people, rich ppl, rich kids, rich men, rich folks, rich guys, rich elites, rich families, wealthy, well-off, upper-class, upper class, millionaires, billionaires, elite class, privileged, executives. The single words poor and rich were not included as query terms because of their polysemy (they can apply to people, but can also be used to describe other things, e.g., 'poor results', 'rich dessert'). In total, there are over 1.3 million sentences in the corpus 'poor' and over 1.9 million sentences in the corpus 'rich'.
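The noun-versus-adjective distinction for "the poor" invites a part-of-speech check. The paper does not say how this filtering was implemented; the sketch below is one plausible heuristic using spaCy, and the function name and the exact rule are our assumptions.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def mentions_the_poor_as_noun(sentence):
    """True if 'the poor' appears with 'poor' used nominally, i.e. not
    modifying a following noun ('the poor deserve help' passes,
    'the poor performance' does not)."""
    doc = nlp(sentence)
    for i in range(len(doc) - 1):
        if doc[i].lower_ == "the" and doc[i + 1].lower_ == "poor":
            nxt = doc[i + 2] if i + 2 < len(doc) else None
            if nxt is None or nxt.pos_ not in ("NOUN", "PROPN"):
                return True
    return False
```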
We were also interested in the geographical locations from which tweets originated. Unfortunately, only about 2% of tweets included the exact geographical information. Therefore, in addition to the tweet location, we relied on the user location that tweeters voluntarily provide in their Twitter accounts, which was available for about 60% of tweets. We automatically parsed user location descriptions to extract country information for the most frequently mentioned countries. Table 1 shows the number of sentences in both corpora per geographical location. In the following analysis, we focus on eight geographically diverse English-speaking countries, for which both corpora contain at least 1,000 sentences each: the United States of America, the United Kingdom, Canada, India, Nigeria, Australia, South Africa, and Kenya.
To explore the themes commonly discussed with regard to poor people, we analyzed the content of sentences within the corpus 'poor' using an unsupervised topic modeling algorithm, BERTopic (Grootendorst, 2022). Topic modeling is a Machine Learning technique that aims to group texts semantically. As a first step, BERTopic converts texts to 384-dimensional vector representations so that semantically similar texts have similar representations. This conversion is performed using a sentence transformer, a large language model trained on over one billion sentences scraped from the web. Then, the vectors are clustered together using a density-based clustering technique HDBSCAN (Campello et al., 2013). The clustering algorithm identifies dense groups of semantically similar texts and leaves texts that do not fit any clusters as outliers. For computational efficiency, BERTopic was applied on a random sample of 600,000 sentences from the corpus 'poor'.
Results and Discussion
The topic modeling on sentences related to the group 'poor' resulted in 142 extracted topics.
Among the top topics in terms of frequency we find expected discussions such as the lack of affordable housing and (un)fair distribution of taxes among the socio-economic classes. We also discover themes relating to drug use, alcohol addiction and mental health issues associated with poverty.
In this paper, we focus on the prominent topic related to criminality, which includes about 6,000 tweets. This topic is characterized by the presence of words such as crime, police, cops, criminals, and jail. Some utterances explicitly associate poverty with crime, while others oppose such positions and criticize the systemic discrimination of the poor, including over-policing and disproportionate incarceration of poor and homeless people. However, the negation of stereotypes through counter-speech (e.g., "not all homeless people are criminals") is also proof that these stereotypes exist in society (Beukeboom and Burgers, 2019). Table 2 shows some example tweets blaming the poor or denouncing social bias against the group.
Since topic modelling techniques are typically used for qualitative studies, a quantitative analysis was carried out to complement the results obtained through BERTopic. We quantified the percentage of sentences from the 'poor' corpus (1.3M sentences) that contain the terms related to criminality, per country (Table 3, row 1). The terms related to criminality include crime, crimes, criminal, criminals, criminalizing, jail, prison, arrest, arrested. For comparison, we include the percentage of sentences from the 'rich' corpus (1.9M sentences) that contain terms related to criminality (row 2). We refer to the difference between rows 1 and 2 as the Crime-Poverty-Bias (CPB), since it measures the rate at which the poor are related to criminality as compared to the rich.
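The CPB computation itself reduces to keyword counting and a difference of percentages. Below is a minimal, self-contained Python sketch using the criminality terms listed above; the function names are illustrative, and this is not the authors' released code.

```python
import re

# Criminality terms as listed in the text.
CRIME_TERMS = ["crime", "crimes", "criminal", "criminals",
               "criminalizing", "jail", "prison", "arrest", "arrested"]
CRIME_RE = re.compile(r"\b(" + "|".join(CRIME_TERMS) + r")\b",
                      re.IGNORECASE)

def pct_crime(sentences):
    """Percentage of sentences containing at least one criminality term."""
    hits = sum(1 for s in sentences if CRIME_RE.search(s))
    return 100.0 * hits / len(sentences)

def crime_poverty_bias(poor_sentences, rich_sentences):
    """CPB = %(crime | 'poor' corpus) - %(crime | 'rich' corpus)."""
    return pct_crime(poor_sentences) - pct_crime(rich_sentences)
```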
The results shown in Table 3 indicate that the CPB is highest in the United States, followed by Canada and South Africa. Although the literature finds no correlation in general between overall crime rates and poverty, several factors have been identified that may potentially lead to an increase in property crime, such as income inequality and the unemployment rate (Anser et al., 2020; Imran et al., 2018). If the association between crime and poverty measured by the CPB reflected reality, we would expect higher CPB rates in countries rated higher on these measures. Therefore, to contextualize these outcomes, Figure 1 presents the CPB results together with each country's overall criminality rate, poverty headcount ratio at $2.5 a day (purchasing power adjusted prices), inequality indicators (Gini index and 10% income share) and unemployment rate. However, we do not observe any correlation between these indicators and the CPB for the eight countries. The United States has the highest CPB rate despite having lower poverty, criminality, inequality and unemployment rates than South Africa. Therefore, we must look to other causes for the overestimated correlation between poverty and criminality in public opinion in the United States. The relatively low CPB results obtained for other countries such as Kenya, with a higher poverty headcount and similar levels of inequality and unemployment rates to the United States, would support this hypothesis.
A potential explanation could be found in the narrative shared by the United States and Canada of being the "land of opportunity", where the rich and the poor are thought to have an equal chance of success, and in an illusory emphasis on employment that influences the discussion on public social spending (United Nations, 2018). However, the principle of equal opportunity can be considered an oxymoron, since every person is exposed to different opportunities in life from the moment of birth (Sandel, 2020), and the job market for individuals with low educational qualifications, with a disability, or without assistance in finding employment is very limited. The indicators of social mobility and inequality support the claim from the United Nations that the poor in the United States are overwhelmingly those born into poverty (United Nations, 2018): intergenerational social mobility in the United States from the bottom to the top income quintile is as low as 7.8%, below European countries such as the UK, France, Italy, or Sweden (Alesina et al., 2018). In fact, intergenerational mobility has declined substantially over the last 150 years in the United States (Song et al., 2020), and income inequality has been growing since the 1980s (The World Bank, 2023).
Although the use of Twitter is not representative of the total population within the countries in scope, the data obtained provide a first approach to measuring the phenomenon of the criminalization of the poor, which constitutes an instance of aporophobia. These preliminary results have an impact on poverty reduction policy making, because when the poor are considered "undeserving of help" it is more difficult for governments to approve laws to mitigate poverty. It is also harder for the people in need to overcome their situation when they are blamed for it and lack support from their community.

A Data Collection and Pre-processing
A.1 Tweet Collection
We used the Twitter API to collect tweets in English from 25 August 2022 to 23 November 2022, pertaining to two groups: 'poor' and 'rich'. The initial set of query terms was assembled from the social psychology literature and expanded with synonyms and related terms. Then, a one-week sample of tweets collected using this initial set was manually examined, and terms that resulted in very small numbers of retrieved tweets or in many irrelevant tweets were discarded. The final list of query terms for the group 'poor' included: the poor (used as a noun as opposed to an adjective, as in 'the poor performance'), poor people, poor ppl, poor folks, poor families, homeless, on welfare, welfare recipients, low-income, underprivileged, disadvantaged, lower class. We excluded explicitly offensive terms that tend to be used in personal insults, such as trailer trash or scrounger. For the group 'rich' we used the following query terms: the rich (used as a noun), rich people, rich ppl, rich kids, rich men, rich folks, rich guys, rich elites, rich families, wealthy, well-off, upper-class, upper class, millionaires, billionaires, elite class, privileged, executives. The single words poor and rich were not included as query terms because of their polysemy (they can apply to people, but can also be used to describe other things, e.g., 'poor results', 'rich dessert').
A.2 Data Pre-processing
We filtered out re-tweets, tweets with URLs to external websites, tweets with more than five hashtags, and tweets from user accounts that have the word bot in their user or screen names. This step helped remove advertisements, spam, news headlines, and other non-personal communications. Further, tweets containing query terms from both 'poor' and 'rich' groups were also excluded. The remaining tweets were split into individual sentences and only sentences that included at least one of the query terms were kept. User mentions have been replaced with '@user' and query terms have been masked with '<target>' to reduce the bias from the query terms in the analysis. In total, there are over 1.3 million sentences in the corpus 'poor' and over 1.9 million sentences in the corpus 'rich'.
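A minimal Python sketch of the filtering stage described above. The field names (text, username, screen_name, is_retweet) are hypothetical stand-ins for whatever the collection pipeline actually provides.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def keep_tweet(text, username, screen_name, is_retweet):
    """Apply the filters described above: drop retweets, tweets linking
    to external websites, tweets with more than five hashtags, and
    tweets from accounts with 'bot' in their user or screen names."""
    if is_retweet or URL_RE.search(text):
        return False
    if text.count("#") > 5:
        return False
    if "bot" in username.lower() or "bot" in screen_name.lower():
        return False
    return True

def mask_mentions(sentence):
    """Replace user mentions with '@user'; query-term masking with
    '<target>' would be handled analogously per query term."""
    return re.sub(r"@\w+", "@user", sentence)
```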
A.3 User Location Identification
To identify the location from which a tweet originated, we used both the tweet location and user location fields that Twitter provides. Only about 2% of the tweets included the exact geographical information from which the tweet was sent (the field 'place'). User location was specified in about 60% of the tweets. This information was presented as free-form text, and tweeters were often very creative in describing their location (e.g., "somewhere on Earth"). We automatically parsed user location descriptions to extract country information for the most frequently mentioned countries. In the absence of a country name, we considered mentions of U.S. states, Canadian provinces, and major cities in the U.S., U.K. and Canada, since these were also commonly used by tweeters. (Major cities from other countries were rarely used without the country name.)
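The location parsing can be sketched as a dictionary lookup over a lowercased location string. The alias table below is a small illustrative subset of our own devising; the study's actual tables also covered U.S. states, Canadian provinces, and major cities.

```python
import re

COUNTRY_ALIASES = {
    "united states": "United States", "usa": "United States",
    "united kingdom": "United Kingdom", "uk": "United Kingdom",
    "canada": "Canada", "india": "India", "nigeria": "Nigeria",
    "australia": "Australia", "south africa": "South Africa",
    "kenya": "Kenya",
}

def country_from_location(location):
    """Return a country parsed from a free-form user location string,
    or None when no known country name or alias is found."""
    if not location:
        return None
    loc = location.lower()
    tokens = set(re.split(r"[^\w]+", loc))
    for alias, country in COUNTRY_ALIASES.items():
        # Multi-word aliases are matched as substrings; single-word
        # aliases must match a whole token (avoids 'uk' in 'Milwaukee').
        if (" " in alias and alias in loc) or alias in tokens:
            return country
    return None
```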
B Topic Modeling
BERTopic (Grootendorst, 2022) is a flexible state-of-the-art toolkit for unsupervised, semi-supervised, and supervised topic modeling. The input documents are first converted to a numerical vector space (the embedding space) using techniques such as sentence transformers. Then the dimensionality of the embedding space is reduced with techniques like PCA or UMAP, since clustering methods are usually more effective in low-dimensional spaces. The core component of BERTopic is the density-based clustering technique HDBSCAN (Campello et al., 2013), which can produce clusters of different shapes and leave documents that do not fit any clusters as outliers. This suited our case well, as we wanted to discover the most commonly discussed topics in tweets mentioning poor people. The discovered topics are then represented with topic words, which are identified using class-based TF-IDF (c-TF-IDF). Topic words are the words that tend to appear frequently in the topic of interest and less frequently in the other topics.
We applied BERTopic in the unsupervised mode using the following settings and parameters. For converting text to numerical representations, we used the sentence transformers 1 method based on the all-MiniLM-L6-v2 2 pre-trained embedding model. For the vectorizer model, we used the CountVectorizer method, 3 and removed English stopwords and terms that appear in less than 5% of the sentences (min_df = 0.05). For the HDB-SCAN clustering algorithm, we specified the minimum size of the clusters as min_cluster_size = 500. For all the other parameters, the default settings of the BERTopic package were used.
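Putting the stated settings together, a configuration along the following lines reproduces the described pipeline. This is our reconstruction from the parameters quoted above, not the authors' script; `sentences` stands for the 600,000-sentence sample.

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from hdbscan import HDBSCAN

# Components configured as described in the text.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectorizer = CountVectorizer(stop_words="english", min_df=0.05)
clusterer = HDBSCAN(min_cluster_size=500)

topic_model = BERTopic(
    embedding_model=embedder,
    vectorizer_model=vectorizer,
    hdbscan_model=clusterer,   # all other parameters left at defaults
)

# `sentences`: random sample of 600,000 sentences from the 'poor' corpus.
topics, probs = topic_model.fit_transform(sentences)
print(topic_model.get_topic_info().head(10))  # most frequent topics
```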
Table 1: The number of tweet sentences in the 'poor' and 'rich' corpora per geographical location.

Location                  'Poor' corpus   'Rich' corpus
United States             326,993         460,848
United Kingdom            80,947          135,211
Canada                    32,978          43,686
India                     14,029          16,296
Nigeria                   10,529          26,693
Australia                 9,698           14,654
South Africa              7,729           8,600
Kenya                     3,378           6,478
Other locations           337,252         461,437
No location information   539,365         748,740
Total                     1,362,898       1,922,643
Table 2: Example tweets relating criminality and poor people. The tweet texts were paraphrased to protect the privacy of the users.

Explicit statements associating the poor with criminality:
Put the homeless in jail and make work camps.
More and more homeless people are doing crime.
Homeless people in that area, criminals on the streets!!

Statements opposing the criminalization of poverty, which elicit the underlying stereotype:
It's quite bold of you to claim that all homeless people are criminals.
So if you are in poverty you commit violent crimes and murder because you are disadvantaged?
Law enforcement and prisons are routinely used against poor people not for the reasons of safety, but to protect the wealthy and privileged.
Table 3: Percentage of sentences that include the terms related to criminality in the 'poor' and 'rich' corpora and the difference in frequency, which constitutes the Crime-Poverty-Bias (CPB).

Percentage of sentences       USA   Canada  South Africa  Kenya  UK   Nigeria  Australia  India
1. in the 'poor' corpus       3.4   2.1     2.1           1.6    1.1  0.8      1.0        0.7
2. in the 'rich' corpus       1.2   0.9     1.0           0.9    0.6  0.4      0.7        0.6
3. Crime-Poverty-Bias (CPB)   2.2   1.2     1.1           0.7    0.5  0.4      0.3        0.1
Figure 1: For the countries in scope, overview of: the results obtained for Crime-Poverty-Bias (CPB), poverty headcount at $2.5 a day (purchasing power adjusted prices), overall criminality rate, indicators of inequality (Gini index and 10% income share) and unemployment rates. Sources: the CPB is obtained through the authors' analysis of a corpus of tweets using Natural Language Processing techniques. Poverty headcount ratio, Gini index, and 10% income share rates (World Bank 2017 or nearest year). Unemployment rate (World Bank 2021) and overall criminality rate (worldpopulationreview.com).
G. Curto, M. F. Jojoa Acosta, F. Comim, and B. Garcia-Zapirain. 2022. Are the poor being discriminated against on the Internet? A machine learning analysis using Word2vec and GloVe embeddings to identify aporophobia. AI & Society.
Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure.
Mohammed Imran, Mosharrof Hosen, and Mohammad Ashraful Ferdous Chowdhury. 2018. Does poverty lead to crime? Evidence from the United States of America. International Journal of Social Economics, 45(10):1424-1438.
Heather Nunn and Anita Biressi. 2009. The undeserving poor. Soundings, 41.
Michael J. Sandel. 2020. The tyranny of merit. Penguin Random House.
Xi Song, Catherine G. Massey, Karen A. Rolf, Joseph P. Ferrie, Jonathan L. Rothbaum, and Yu Xie. 2020. Long-term decline in intergenerational mobility in the United States since the 1850s. Proceedings of the National Academy of Sciences of the United States of America, 117(1):251-258.
The World Bank. 2023. Poverty and Inequality Platform (PIP).
Terence P. Thornberry and Margaret Farnworth. 1982. Social correlates of criminal involvement: Further evidence on the relationship between social status and criminal behavior. American Sociological Review, 47(4):505-518.
United Nations. 2018. Report of the Special Rapporteur on extreme poverty and human rights on his mission to the United States of America.
Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1):1-10.
1 https://github.com/UKPLab/sentence-transformers
2 https://www.sbert.net/docs/pretrained_models.html
3 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
This brief report aims to initiate a new path of research for poverty mitigation, where the focus is not only on the redistribution of wealth but also on the mitigation of social bias against the poor. While bias in terms of gender and race has been extensively analysed, bias against the poor has not received the attention it deserves in either the AI or the social sciences literature, despite its potential impact on the first global challenge identified by the United Nations.
"https://github.com/UKPLab/"
] |
[
"Electromagnetic Decays of Excited Hyperons. (II) ‡",
"Electromagnetic Decays of Excited Hyperons. (II) ‡"
] | [
"Yasuo Umino ",
"S A ",
"Fred Myhrer ",
"\nDepartment of Physics State\nDepartment of Physics and Astronomy\nUniversity of New York at Stony Brook Stony Brook\n11794NY\n",
"\nUniversity of South Carolina Columbia\n29208, 41882, 1009 DBPostbus, AmsterdamSCU.S.A., The Netherlands\n"
] | [
"Department of Physics State\nDepartment of Physics and Astronomy\nUniversity of New York at Stony Brook Stony Brook\n11794NY",
"University of South Carolina Columbia\n29208, 41882, 1009 DBPostbus, AmsterdamSCU.S.A., The Netherlands"
] | [] | Excited negative parity hyperon masses are calculated in a chiral bag model in which the pion and the kaon fields are treated as perturbations. We also calculate the hadronic widths of Λ(1520) and Λ(1405) as well as the coupling constants of the lightest I = 0 excited hyperon to the meson-baryon channels, and discuss how the dispersive effects of the hadronic meson-baryon decay channels affect the excited hyperon masses. Meson cloud corrections to the electromagnetic decay widths of the two lightest excited hyperons into ground states Λ 0 and Σ 0 are calculated within the same model and are found to be small. Our results strengthen the argument that predictions of these hyperon radiative decay widths provide an excellent test for various quark models of hadrons. | 10.1016/0375-9474(93)90248-v | [
"https://export.arxiv.org/pdf/nucl-th/9210018v1.pdf"
] | 119,460,382 | nucl-th/9210018 | bbe5402f55ea216104b6ca455a1404fe7c2ac34d |
Electromagnetic Decays of Excited Hyperons. (II) ‡
9210018v1 23 Oct 1992 September 1992
Yasuo Umino
S A
Fred Myhrer
Department of Physics State
Department of Physics and Astronomy
University of New York at Stony Brook Stony Brook
11794NY
University of South Carolina Columbia
29208, 41882, 1009 DBPostbus, AmsterdamSCU.S.A., The Netherlands
Electromagnetic Decays of Excited Hyperons. (II) ‡
arXiv:nucl-th/9210018v1, 23 Oct 1992 (September 1992). ‡Submitted to: Nuclear Physics A
Excited negative parity hyperon masses are calculated in a chiral bag model in which the pion and the kaon fields are treated as perturbations. We also calculate the hadronic widths of Λ(1520) and Λ(1405) as well as the coupling constants of the lightest I = 0 excited hyperon to the meson-baryon channels, and discuss how the dispersive effects of the hadronic meson-baryon decay channels affect the excited hyperon masses. Meson cloud corrections to the electromagnetic decay widths of the two lightest excited hyperons into ground states Λ 0 and Σ 0 are calculated within the same model and are found to be small. Our results strengthen the argument that predictions of these hyperon radiative decay widths provide an excellent test for various quark models of hadrons.
Introduction
With the anticipated completion of CEBAF and the proposed KAON laboratory, there has been a renewed interest in the study of low-lying excited hyperons. [1] Although the existence of these hyperon states has been established for quite some time, [2] their underlying quark structure is not yet well understood. One reason is our poor knowledge of the excited hyperon mass spectrum [3] which introduces uncertainties when building models of hadrons.
Another reason is that the measurements of transition amplitudes involving excited hyperon states, which test model-dependent wavefunctions and transition operators, have not been made with sufficient accuracy. Theoretically, a relatively simple type of such transitions is the electromagnetic one, since the corresponding transition currents, and thus the operators, can in principle be constructed once the model Lagrangian and the assumptions about the hadron structure are specified. Fortunately, calculations of electromagnetic transition amplitudes between excited and ground state hyperons are found to be strongly model dependent, [4,5,6,7] and their measurements, which are being planned at CEBAF, [8] may help to determine the hyperon wavefunction composition and thereby put strong restrictions on possible phenomenological models of hadrons.
Hyperon resonances also play an important role in low-energy $\bar KN$ interactions, which are characterized by the presence of coupled two-particle channels. The πΣ channel can couple to the low-energy $\bar KN$ system with isospin I = 0, whereas both the πΣ and πΛ channels are open when the $\bar KN$ system is coupled to I = 1. The lightest hyperon resonance, Λ(1405), plays a crucial role in understanding the properties of $K^-$ atoms, and determining the structure of this state is of prime importance in $K^-$ atomic studies. It has long been speculated that the Λ(1405), being close to the $\bar KN$ threshold, is a candidate for the $K^-p$ bound state interpretation; [12,13,14] however, a similar analysis has not been made for Λ(1520). A Cloudy Bag Model analysis (to be discussed later) of low-energy S-wave $\bar KN$ scattering shows that Λ(1405) is mostly a meson-baryon bound state. [12] Also, in the bound-state approach to the hyperons in the Skyrme model, [15] Λ(1405) emerges naturally as a bound state of the strangeness-carrying kaon and the SU(2) soliton. This suggests that Λ(1405) might have a dominant molecular $q^4\bar q$ structure instead of being a pure $qqq$ state. In an effective meson-baryon theory the mass and the width of Λ(1405) may be reproduced in a simple coupled-channel K-matrix analysis of πΣ scattering. [2,16] For further developments see e.g. Oades and Rasche [17] and Williams, Ji and Cotanch. [18] In contrast, both the NRQM [9,10] and chiral bag model calculations [7,19] find that the lightest $\Lambda^*(\frac12^-)$ $qqq$ state is almost mass degenerate with the lightest $\Lambda^*(\frac32^-)$ $qqq$ state. In addition, the non-relativistic quark model (NRQM) finds Λ(1405) to be dominantly a three-quark ($qqq$) flavor singlet state, [9,10] whereas in bag models it is about an even mixture of flavor singlet and octet states. [7,11] The chiral bag model, like the Cloudy Bag Model, has a specific coupling to the open hadronic channels through the meson cloud. An improved treatment of this meson cloud (i.e., a proper inclusion of the dispersive effects of the hadronic widths) might explain the observed $L\cdot S$ splitting of the lowest $J^P = \frac12^-$ and $\frac32^-$ Λ* states, [20,21,22,23] an effect we shall estimate within our model in this work. If the Λ(1405) is a molecular (or quasi-bound $\bar KN/\pi\Sigma$) state, then a further, so far unobserved, low-lying three-quark $\Lambda^*(\frac12^-)$ state should exist. Thus, establishing all the low-lying excited hyperons and probing the quark substructure of these hyperon resonances through, for example, their radiative transition rates will contribute towards a better understanding of the nature of the Λ(1405) state and its role in low-energy $\bar KN$ interactions.
In two previous articles [7,19] we calculated the masses and wavefunctions of the low-lying negative parity baryon resonances, as well as some hyperon radiative decay widths relevant to a planned CEBAF experiment, [8] using a perturbative version of the chiral bag model. In calculating the excited baryon masses we found that both the one-gluon exchange interactions and the dispersive effects from the pion-baryon channels play important roles in describing the mass spectrum. (See Ref. [7] for errata to Tables 4 and 5 of Ref. [19]. Errata to the figures and other tables of Ref. [19] are available upon request.)

In this paper we extend our work to include the kaon cloud in our model calculation of the hyperon mass spectrum, and also estimate the effects due to mass differences between the initial/final and intermediate baryons in the evaluation of the baryon self-energy diagrams. Furthermore, in Ref. [7] we calculated only the quark core contributions to the radiative decay widths of Λ(1520) and Λ(1405) decaying into the ground states $\Lambda^0$ and $\Sigma^0$. These decay widths were found to be much smaller than those calculated in the NRQM [4] due to the different spin-flavor content of the excited hyperons in the two models.
For example, the chiral bag model predicts, similar to the early but incomplete MIT bag calculations of these states, [11] a large but non-dominant admixture of a flavor octet component in Λ(1405), a state which has traditionally been treated as a flavor singlet state based on the results of NRQM calculations. [9,10] This octet admixture drastically reduces the radiative decay widths of the lightest $\Lambda^*(\frac12^-)$ state. The small hyperon radiative decay widths found in Ref. [7] imply that meson cloud corrections to the model might affect the calculated widths, and therefore we calculate in this work the meson cloud corrections to the above decay widths.
This paper is organized as follows. In the following section we present the excited hyperon mass spectrum calculated with the kaon fields included in the meson cloud. We discuss how the dispersive effects of our calculated hadronic widths contribute to the hyperon masses and examine some of the difficulties encountered in evaluating the meson cloud contributions to excited baryon masses. The strong decay widths and coupling constants of Λ(1520) and Λ(1405) are then estimated with our model. In Section 3 we present the meson cloud corrections to the excited hyperon radiative decay widths. Meson electromagnetic transition currents constructed to calculate these corrections are, in general, two quark operators acting on spin-flavor wavefunctions of a three-quark system. Finally in Section 4, we conclude with a summary and a discussion of the results of the present work.
2 Low-lying Negative Parity Hyperons
The excited hyperon mass spectrum
In this section we extend our calculation of the low-lying negative parity hyperon mass spectrum by including the kaon cloud contribution while neglecting the effects of the η cloud. These massless meson fields are determined by a boundary condition on the bag surface requiring continuity of the axial current and are excluded from the bag interior so that chiral symmetry is realized in the Wigner-Weyl mode inside the bag and in the Nambu-Goldstone mode outside. As in Ref. [7], we use an approximation of the chiral bag model [24] where the mesonic cloud surrounding the quark core is treated as a perturbation, giving our model Lagrangian to lowest order in the meson fields as
$$\mathcal{L}_{\mathrm{Bag}} = \left[\, i\bar q(x)\,\gamma^\mu\partial_\mu\, q(x) - \frac{1}{4}\sum_a F^a_{\mu\nu}(x)F^{\mu\nu\,a}(x) - B \,\right]\theta_V - \frac{1}{2}\,\bar q(x)\left(1 + \frac{i}{f}\,\vec\lambda\cdot\vec\phi(x)\,\gamma_5\right)q(x)\,\delta_S + \frac{1}{2}\left(\partial_\mu\vec\phi(x)\right)^2\bar\theta_V \qquad (1)$$
where q(x) and φ(x) are the quark and octet meson fields, respectively, and
$$\theta_V = \begin{cases} 1, & \vec x \in V \\ 0, & \vec x \notin V \end{cases} \qquad (2)$$
$\bar\theta_V$ is the complement of $\theta_V$, $f$ is the appropriate meson decay constant (in this work we use $f_K = 1.09 f_\pi$), and $\delta_S = \delta(|\vec x| - R)$ couples the quarks to the meson fields on the bag surface. We will also give masses to the quarks, and thereby to the mesons, in our calculations. We shall refer to the above Lagrangian when constructing effective meson electromagnetic transition operators in Section 3.

The method we use to calculate the excited baryon masses in our approximation of the chiral bag has been described in detail in Refs. [19] and [25]. The extension of the model calculation to include the kaon cloud is straightforward. As discussed in Refs. [19] and [25], when calculating the effective quark one-meson exchange diagrams, only those diagrams where the intermediate baryon state, $|B'\rangle$, is the same as the initial/final baryon state, $|B\rangle$, are included in the diagonalization of the Hamiltonian and the minimization of the energy (see Fig. 1 of Ref. [19]). Contributions to the baryon masses from those diagrams where the intermediate baryon state is different from the initial/final baryon state are included as corrections after the diagonalization and minimization procedure. Consequently, when calculating the Λ* and Σ* masses, all contributions from the kaon cloud are treated as perturbative corrections to the masses and therefore do not affect the hyperon wavefunctions obtained in Ref. [7].

Fig. 1 shows the spectrum of low-lying negative parity hyperon resonances calculated in our model when only the pion field, and when both the pion and kaon fields, are included in the meson cloud. In the present calculation we consider contributions from all quark one-gluon and effective one-pion and one-kaon exchange interactions to the masses. Here, as in Refs. [7,19,25], we do not correct for possible mass differences between the initial/final and intermediate baryon states in the meson cloud correction terms. In the following section and in Appendix A we will discuss this particular mass difference correction and present an application to the lowest Λ* states.

The bag parameters used to fit the spectrum are $B^{1/4}$ = 145 MeV, $Z_0$ = 0.25, $\alpha_s$ = 1.5 and $m_S$ = 250 MeV, which are the same as in Ref. [7], where the meson cloud consists only of pions, except that the zero-point/center-of-mass energy parameter, $Z_0$, has been changed from 0.45 to 0.25. This change in $Z_0$ is necessary because the kaon cloud lowers the masses of all states by about 30 to 60 MeV relative to our results in Ref. [7] for a wide range of input parameters, as exemplified in Fig. 1. However, the wavefunctions of the excited hyperon states shown in Fig. 1 remain the same as those obtained in
Ref. [7], since a change in $Z_0$ only affects the diagonal elements of the bag Hamiltonian. (The above change in $Z_0$ increases the stable bag radii by a few percent.) It is instructive to compare our results with those of the NRQM calculation by Isgur and Karl. [9] Table 1 is a summary of the masses of the low-lying negative parity hyperons predicted in the NRQM and in our model, together with the observed states. With the exception of Λ(1405), to be discussed below, both models can reproduce the established hyperon resonances reasonably well and give similar predictions for states that have not been seen. In particular, they are successful in describing the ordering of the $J^P = \frac52^-$ Λ* and Σ* states correctly, as well as the mass splitting between the $J^P = \frac32^-$ Λ(1520) and Λ(1690). In the Σ* sector the states are not yet well determined experimentally, but both models predict states near 1800 MeV and 1650 MeV with quantum numbers $J^P$, respectively. However, from Table 1 alone it is clear that the calculated hyperon states cannot be unambiguously identified with the observed resonances in the I = 1 sector.

Although both the NRQM and the chiral bag model predict similar masses for the negative parity hyperon resonances, their wavefunctions are very different, as shown in Table 2, which lists the model predictions for the spin-flavor contents of the excited hyperons in percent. (Note that in both models the $J^P = \frac52^-$ Λ* and Σ* are pure spin-quartet, flavor-octet states.) Since the chiral bag model predicts an excited hyperon mass spectrum very similar to the NRQM, the differences in the predictions for the spin-flavor contents of these hyperon resonances between the two models become important and should be explored experimentally, as discussed in Refs. [6] and [7].
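The remark above that a change in $Z_0$ alters only the diagonal of the bag Hamiltonian, and hence leaves the wavefunctions unchanged, is a general property of a uniform diagonal shift. A minimal numerical illustration follows; the 3×3 matrix is arbitrary, and the assumption that the $Z_0$ term enters as a common shift of all diagonal elements is ours.

```python
# A uniform diagonal shift of a Hamiltonian moves the eigenvalues but not the
# eigenvectors, so refitting Z0 cannot change the mixing coefficients.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
H = (A + A.T) / 2                 # arbitrary symmetric "bag Hamiltonian"
dZ = 0.2                          # a change in the Z0/R contribution (toy units)

w1, v1 = np.linalg.eigh(H)
w2, v2 = np.linalg.eigh(H - dZ * np.eye(3))

print(np.allclose(w2, w1 - dZ))              # True: masses shift uniformly
print(np.allclose(np.abs(v1), np.abs(v2)))   # True: wavefunctions unchanged
```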
A modified meson cloud correction
The most conspicuous feature that both the NRQM and the above chiral bag model calculation fail to reproduce is the low mass of Λ(1405): the NRQM predicts the lightest $J^P = \frac12^-$ and $\frac32^-$ Λ* states to be mass-degenerate, and our model gives almost the same mass for these two states as well. In our model, the meson cloud contributions to baryon masses correspond to evaluating baryon self-energy diagrams in which the initial baryon state couples to the allowed intermediate meson-baryon channels. As mentioned above, it is necessary to correct these self-energy diagrams for possible mass differences between the initial/final and the intermediate baryon states. Therefore, we present here meson cloud corrections which are modified by the mass-corrected dispersive effects, and explore whether they can explain the observed mass splitting between Λ(1520) and Λ(1405). The mass difference corrections to baryon self-energy diagrams have been calculated in the chiral bag model, within the perturbative meson cloud approximation, for the ground state baryon masses [26] using an effective Yukawa model. [27] In our calculations, the meson cloud contribution to the baryon masses, i.e., the effective quark one-meson exchange interaction matrix element, $\Sigma_B$, is written as [26]
$$\Sigma_B = \sum_{i,j}\langle B|\,O(i)O(j)\,|B\rangle = \sum_{B'}\sum_{i,j}\langle B|O(i)|B'\rangle\langle B'|O(j)|B\rangle. \qquad (3)$$
Here i and j are summed over the three quarks, and the operator product O(i)O(j) is the appropriate two-body (if $i \neq j$) one-meson exchange interaction operator, which acts on quarks i and j in spin-flavor space. Due to mass differences between the states $|B\rangle$ and $|B'\rangle$, each term in the above sum over B′ should be multiplied by the real part of a correction factor $\delta_L(BB')$, which, in the static approximation, is given by
$$\delta_L(BB') = \frac{\displaystyle\int_0^K dq\; \frac{q^{2(L+1)}}{\omega\,(\omega + m_{B'} - m_B - i\epsilon)}}{\displaystyle\int_0^K dq\; \frac{q^{2(L+1)}}{\omega^2}}. \qquad (4)$$

Here $\omega = \sqrt{\mu^2 + q^2}$, where q is the meson momentum and µ is the meson mass.
The momentum K (∼ 500 MeV) [26] is a cut-off reflecting the finite size of the source of the meson fields and is determined by the normalization integral appearing in the denominator of Eq. (4). In the static approximation, K is of the order of 1/R, where R is the bag radius. [26] In this effective Yukawa model, the imaginary part of $\delta_L(BB')$ gives the hadronic decay width of the baryon B. The sum over intermediate baryon states $|B'\rangle$ in Eq. (3) should, in principle, include all the observed baryon states in the allowed meson-baryon channels for a given initial state $|B\rangle$. The ground state baryons are described by restricting the quarks to occupy only the lowest S-state. In this case, almost all contributions to the baryon self-energy diagrams come from octet and decuplet intermediate baryon ground states, because higher excited $|B'\rangle$ state contributions are negligible due to the large mass differences $m_{B'} - m_B$. Therefore, when the $|B\rangle$ are ground state baryons (i.e. when the quarks occupy only S-states), their self-energies may be evaluated by considering only the ground state intermediate states, and in such a case there are no ambiguities in assigning wavefunctions to each intermediate state $|B'\rangle$. Using this prescription, Myhrer, Brown and Xu [26] find that the real parts of the multiplicative correction factors, $\mathrm{Re}\,\delta_L(BB')$, are close to unity, resulting in small corrections to baryon self-energies which help to improve the fit to the ground state baryon mass spectrum.
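To make Eq. (4) concrete, the following sketch evaluates the real part of the static correction factor by direct quadrature. The GeV units, the cut-off value, the illustrative masses, and the crude symmetric exclusion of the pole for an open channel are our assumptions, not the numerical procedure of Refs. [26,27].

```python
# Numerical sketch of Re delta_L(BB') from Eq. (4); units are GeV.
import numpy as np

def re_delta_L(L, mu, dm, K=0.5, n=400_001):
    """L: meson-baryon angular momentum; mu: meson mass;
    dm = m_B' - m_B ("bare" masses); K: momentum cut-off, of order 1/R."""
    q = np.linspace(1e-6, K, n)
    w = np.sqrt(mu**2 + q**2)
    den = q**(2 * (L + 1)) / w**2
    num = q**(2 * (L + 1)) / (w * (w + dm))
    if dm < -mu:                          # open channel: pole at w = -dm
        num[np.abs(w + dm) < 1e-3] = 0.0  # crude principal-value exclusion
    return np.trapz(num, q) / np.trapz(den, q)

# Closed channel (intermediate baryon heavier): a modest suppression.
print(re_delta_L(L=0, mu=0.138, dm=+0.15))
# Open channel (intermediate channel below the initial state): the principal
# value can be strongly enhanced, which is the situation for the excited
# hyperon states considered below.
print(re_delta_L(L=0, mu=0.138, dm=-0.20))
```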
However, if the initial and final baryon state is an excited state, it becomes necessary to include both ground state and excited baryon states in $|B'\rangle$, and this method runs into difficulties of a practical nature. Even when we limit $|B'\rangle$ to the quark space of either all three quarks in the lowest S-state, or two in the lowest S-state and one in the lowest P-state ($P_{1/2}$ or $P_{3/2}$), difficulties arise in the choice of the calculated wavefunctions to be used for $|B'\rangle$. For example, consider contributions from the one-pion exchange interaction to the mass of Λ*. In this case the possible intermediate baryon states are the isospin-one Σ and its excited states, and the Σ* wavefunctions are needed to evaluate each term in the one-pion exchange interaction matrix element, $\Sigma_B$. However, as shown in Fig. 1, the Σ* mass spectrum is very poorly known experimentally [3] and many states predicted by quark models are not seen.
This makes it very difficult to assign calculated negative parity Σ* wavefunctions to observed resonances and contributes to the theoretical uncertainty in the resulting Λ* mass. Another example of practical difficulties arises in calculating the mass corrections due to the one-$K^+$ exchange. This requires knowledge of our model wavefunctions for the low-lying negative parity $\Xi^-$ states, which we have not yet calculated. Because of these difficulties, we chose to ignore corrections due to mass differences between the initial/final and intermediate baryon states when we calculate the baryon self-energy contributions to the overall excited hyperon mass spectrum discussed in Section 2.1 and shown in Fig. 1. Again, note that because the mass difference corrections apply only to those one-meson exchange diagrams where the intermediate baryon state is different from the initial/final baryon state, they do not affect our calculated wavefunctions of the excited hyperons, which are used in Section 3.
We shall now make a very restricted application of the above mass corrections to see if a better treatment of the meson-baryon channels in the baryon self-energy diagrams might contribute to an understanding of the mass splitting between Λ(1520) and Λ(1405). The lightest $\Lambda(\frac32^-)$ and $\Lambda(\frac12^-)$ states are identified with Λ(1520) and Λ(1405) of Table 1, respectively. We shall estimate the magnitude of the mass corrections in the spirit of the work by Arima and Yazaki. [21] They investigated the problem of the mass splitting between Λ(1520) and Λ(1405) using a NRQM with mesonic degrees of freedom, assuming that the two states have equal "bare" masses. In this effective meson-baryon model the hyperons acquire their physical masses through the coupling to the meson-baryon channels. They found that in order to generate a low mass for the lightest $J^P = \frac12^-$ Λ* state relative to the $\frac32^-$ state, it was necessary to introduce strong couplings to the intermediate $\bar KN$ and πΣ channels. This is also similar in spirit to the work in the Cloudy Bag Model, [12] which unfortunately was limited to Λ(1405) only. We consider it very important to describe simultaneously both Λ(1520) and Λ(1405) in any model of baryons, since the couplings in the $J^P = \frac12^-$ and $\frac32^-$ channels are linked and both states couple to the πΣ and $\bar KN$ meson-baryon channels. When we consider the pure hyperon three-quark states including one-gluon exchange corrections, we find that the two lightest $\Lambda(\frac32^-)$ and $\Lambda(\frac12^-)$ states are almost mass degenerate, [7] the assumption made by Arima and Yazaki. [21] We find that the masses of these states remain nearly degenerate when the pion cloud is included, [7] but the kaon cloud lifts this mass degeneracy, as seen in Fig. 1 and discussed in Appendix A. In our estimates we shall use $\Sigma_B$ of Eq. (3) with the real part of the correction factor of Eq. (4), and consider only the case where B = Λ* with all the intermediate quarks in the S-state. The ground state N and ∆ have different masses due to the one-gluon exchange interaction, and to evaluate the largest corrections we neglect the heavy $\bar K\Delta$ and πΣ*(1385) intermediate states in our estimate, as in earlier calculations. [12,20,21,22] This means B′ includes only the octet baryons, i.e., B′ = N for the kaon cloud and B′ = Σ for the pion cloud. Since we are calculating the meson cloud contribution to the hyperon masses, one should not use the physical B and B′ masses, but only their "bare" masses, which include the gluonic mass corrections [26] that exist even in the chiral limit.
Additional details are given in Appendix A. Our model estimate confirms the findings of Arima and Yazaki [21] that quark models with a $qqq$ structure for hyperons have difficulties in explaining the Λ(1520) and Λ(1405) mass difference. Further examinations regarding the nature of Λ(1405) are necessary, and we stress again that it is imperative to treat both Λ(1405) and Λ(1520) in the same model and then repeat, for example, Veit et al.'s Cloudy Bag Model calculation, [12] starting with the "bare" masses of Λ(1405) and Λ(1520) about equal and using the coupling to the meson-baryon channels to obtain the observed hyperon masses. We note that the Cloudy Bag Model allows for a quark-meson four-point interaction in the bag volume which generates an S-wave meson-baryon contact interaction. In their model this specific interaction, which is of second order in the meson field (order $f_\pi^{-2}$) and is not included in our model, is responsible for the remarkable lowering of the Λ(1405) mass. It would be interesting to investigate whether this second-order quark-meson interaction affects the mass of the Λ(1520) equally strongly.
Such an investigation might give us some further clues to the question of whether the quark content of Λ(1405) differs from that of Λ(1520) or not. 1 We find that although the "bare masses" (i.e. masses before meson cloud corrections) of our two lightest $J^P = \frac12^-$ and $\frac32^-$ states are about equal (with the bag parameters used in Fig. 1), their flavor contents are different, with $\Lambda(\frac12^-)_3$ being an almost equal mixture of flavor singlet and flavor octet states (see Table 2).
Strong decay widths and coupling constants
In this subsection we estimate the hadronic decay widths of the two lightest excited hyperons, given by the imaginary part of Eq. (4), and the coupling constants of Λ(1405) to the meson-baryon channels. For the hadronic decay widths we use the observed physical mass differences and our "non-static" expression in Eq. (4) to get the correct centrifugal factor, using a bag radius R of 1.2 fm and a strange quark mass $m_S$ of 250 MeV. Also, as discussed in Appendix A, we do not restrict the sum over the quark indices to the case i = j in Eq. (3). Then the hadronic decay width of Λ(1520) into the πΣ channel is given by
$$\Gamma_{\pi\Sigma}[\Lambda(1520)] = 5\left(\frac{a}{\sqrt 2} - \frac{b}{2\sqrt 5} - \frac{c}{2}\right)^2 \times 7.7\ \mathrm{MeV}. \qquad (5)$$
Here the strength of the quadrupole transition operator $k^\pi_{SAAS}$ of Ref. [19], Table II, is used to find the width. A similar quadrupole transition operator for the kaon cloud, $k^K_{SA'A'S}$ (A′ indicates a massive s-quark in the $P_{3/2}$ state), is used to determine the hadronic width of Λ(1520) into the $\bar KN$ channel, which we find to be
$$\Gamma_{\bar KN}[\Lambda(1520)] = \frac{5}{2}\left(a + \sqrt 2\, c\right)^2 \times 2.4\ \mathrm{MeV}. \qquad (6)$$
For Λ(1405) the hadronic width is given by
$$\Gamma_{\pi\Sigma}[\Lambda(1405)] = \left(a' + \frac{b'}{\sqrt 2} - \frac{c'}{\sqrt 2}\right)^2 \times 19.3\ \mathrm{MeV}, \qquad (7)$$
where the magnitude of the width is determined by the monopole transition operator for the pion cloud, $m^\pi_{SPPS}$, of Ref. [19]. Here a, b, c and a′, b′, c′ are the spin-flavor coefficients of the excited hyperon wavefunctions in the j-j coupled basis. For Λ(1520) they are defined by
$$|\Lambda(1520)\rangle \equiv a\,|1; SSA'\rangle + b\,|8; SSA\rangle + c\,|8; SSA'\rangle + d\,|8; SSP\rangle, \qquad (8)$$
where $S \equiv S_{1/2}$, $P \equiv P_{1/2}$ and $A \equiv P_{3/2}$, and the corresponding coefficients for Λ(1405) are defined by
$$|\Lambda(1405)\rangle \equiv a'\,|1; SSP'\rangle + b'\,|8; SSP\rangle + c'\,|8; SSP'\rangle + d'\,|8; SSA\rangle. \qquad (9)$$
Using the calculated wavefunctions for Λ(1520) and Λ(1405) given in Eqs. (5) and (6) of Ref. [7], 2 together with its appendix, one finds the following values for these coefficients:
$$a = +0.95, \qquad b = -0.21, \qquad c = +0.21, \qquad d = 0.07 \qquad (10)$$
and
$$a' = +0.77, \qquad b' = +0.15, \qquad c' = +0.45, \qquad d' = -0.43 \qquad (11)$$
With these values for the spin-flavor coefficients we find the total Λ(1520) decay width to be 23.6 MeV, compared to the measured width of 15.6 MeV. [3] Note that if Λ(1520) is a pure flavor singlet state (a = 1, b = 0, c = 0 and d = 0) the relative branching ratio $\Gamma_{\pi\Sigma}[\Lambda(1520)]/\Gamma_{\bar KN}[\Lambda(1520)]$ increases from 14.4/9.2 ≈ 1.6 to approximately 7.7/2.4 ≈ 3.2. For Λ(1405) we find $\Gamma_{\pi\Sigma}[\Lambda(1405)] \approx 6$ MeV, whereas if Λ(1405) is a pure flavor singlet then Eq. (7) gives a width of about 19.3 MeV. We note that the values of these widths depend somewhat on the value of the bag radius R. For example, if R = 1.175 fm then the total width of the Λ(1520) decreases to 19 MeV, a value closer to the measured width, but $\Gamma_{\pi\Sigma}[\Lambda(1405)]$ changes only by 1 MeV, to 5 MeV. The value of the Λ(1405) width is difficult to extract from experimental data, since this hyperon, a resonance only 27 MeV below the $K^-p$ threshold, can only be seen in the πΣ channel. However, our calculated hadronic width for Λ(1405) is clearly too small, a result which will be reflected in a too small hadronic coupling constant, as discussed in the following paragraph. The latest analysis of the experimental $\pi^-\Sigma^+$ mass spectrum by Dalitz and Deloff [28] gives a value of 50±2 MeV for this width.
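The numbers quoted above follow from Eqs. (5)-(7) with the coefficients of Eqs. (10) and (11); the short script below reproduces them, and also the SU(3)-limit value of the coupling-constant ratio of Eq. (12) given in the next paragraphs, as an arithmetic cross-check (small deviations come from the rounded coefficients).

```python
# Arithmetic cross-check of the widths of Eqs. (5)-(7) with coefficients (10)-(11).
from math import sqrt

a, b, c = 0.95, -0.21, 0.21        # Lambda(1520), Eq. (10)
ap, bp, cp = 0.77, 0.15, 0.45      # Lambda(1405), Eq. (11)

w_piSigma_1520 = 5 * (a / sqrt(2) - b / (2 * sqrt(5)) - c / 2) ** 2 * 7.7
w_KN_1520 = 2.5 * (a + sqrt(2) * c) ** 2 * 2.4
w_piSigma_1405 = (ap + bp / sqrt(2) - cp / sqrt(2)) ** 2 * 19.3

print(w_piSigma_1520, w_KN_1520)    # ~14.4 and ~9.2 MeV
print(w_piSigma_1520 + w_KN_1520)   # ~23.6 MeV total for Lambda(1520)
print(w_piSigma_1405)               # ~6 MeV

# SU(3)-limit ratio of Eq. (12), i.e. with the kaon/pion operator ratio set to 1:
ratio = (2 / 3) * (ap + sqrt(2) * cp) ** 2 / (ap + bp / sqrt(2) - cp / sqrt(2)) ** 2
print(ratio)                        # ~4.3; multiplied by 0.11 gives ~0.47
```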
The magnitude of the coupling constants for a hyperon Y* coupling to the $\bar KN$ and πΣ channels can also be calculated in the chiral bag model. For Λ(1405) we find the ratio of these coupling constants to be
$$\frac{G^2_{\Lambda(1405)\bar KN}}{G^2_{\Lambda(1405)\pi\Sigma}} = \frac{2}{3}\;\frac{m^K_{SP'P'S}}{m^\pi_{SPPS}}\;\frac{\left(a' + \sqrt 2\, c'\right)^2}{\left(a' + b'/\sqrt 2 - c'/\sqrt 2\right)^2}, \qquad (12)$$
where $m^K_{SP'P'S}/m^\pi_{SPPS}$ is the ratio of the strength of the monopole transition operator for the kaon cloud to that of the pion cloud, which depends on R and $m_S$. (Similar to above, P′ denotes a massive s-quark in the $P_{1/2}$ state.) In Eq. (12) we use the same convention as in Table 6.6 of the compilation by Dumbrajs et al., [29] where $G^2_{\Lambda(1405)\pi\Sigma}$ is defined in their Eq. (6.4). 3 Note that from Eqs. (5), (6), (7) and (12) one sees that neither the |8; SSP⟩ component of Λ(1520) nor the |8; SSA⟩ component of Λ(1405) couples to the $\bar KN$ or the πΣ channels. From Eq. (12) it is clear that the ratio of the two Λ(1405) coupling constants equals 2/3 only if: (i) Λ(1405) is a flavor singlet (a′ = 1; b′ = c′ = d′ = 0) and (ii) we are in the SU(3) limit where the pion and kaon masses, as well as the u-, d- and s-quark masses, are equal, so that the ratio $m^K_{SP'P'S}/m^\pi_{SPPS} = 1$. Otherwise, the ratio of the coupling constants will be different from 2/3. The flavor octet part of Λ(1405) will increase the value of the ratio from 2/3, whereas the meson cloud corrections to this "bare" quark model value tend to make the ratio much smaller than 2/3 due to the very different kaon and pion masses. With our values for the spin-flavor coefficients a′, b′ and c′ given in Eq. (11) we find the ratio of coupling constants in Eq. (12) to be ≈ 4.3 in the SU(3) limit. Including the physical meson masses, which corresponds to $m^K_{SP'P'S}/m^\pi_{SPPS} \approx 0.11$ at R = 1.2 fm and $m_S$ = 250 MeV, we find this ratio reduced to approximately 0.47. It would be very useful to reanalyze the $\bar KN$ scattering data to see whether this ratio of coupling constants for Λ(1405) is much smaller or larger than 2/3. If one uses both Table 6.6 in Dumbrajs et al. [29] and Table 6.14 of an earlier data analysis compilation by Nagels et al., [30] it is seen that even a small value for the ratio of the coupling constants is not ruled out. However, in our model the quark-meson coupling is at the bag surface, and the meson fields have a sharp cut-off, which is unphysical. Our meson cloud contribution to the ratio $m^K_{SP'P'S}/m^\pi_{SPPS} \approx 0.11$ might therefore be unrealistic. Using Eq. (6.4) of Dumbrajs et al. [29] and the value we found above for the hadronic width of Λ(1405), the magnitude of the coupling constant $|G_{\Lambda(1405)\pi\Sigma}|$ is found to be 0.698 with R = 1.2 fm and $m_S$ = 250 MeV.
Here $G_{\Lambda(1405)\pi\Sigma}$ is the charge-independent coupling to the πΣ channel. The ratio $G^2_{\Lambda(1405)\bar KN}/G^2_{\Lambda(1405)\pi\Sigma} = 0.43$ then determines the coupling of Λ(1405) to the $\bar KN$ channel, $|G_{\Lambda(1405)\bar KN}| = 0.46$. If Λ(1405) were a pure flavor singlet state, the magnitude of the Λ(1405)πΣ coupling constant increases to $|G_{\Lambda(1405)\pi\Sigma}|_1 = 1.211$, while the corresponding coupling to the $\bar KN$ channel decreases to $|G_{\Lambda(1405)\bar KN}|_1 = 0.327$. In addition to these coupling constants for Λ(1405), we have also calculated the magnitude of the strong coupling constants $G_{\Lambda^0\bar KN}$ and $G_{\Sigma^0\bar KN}$ within our model, as shown in Appendix B. These hyperon coupling constants are defined for each charge state to conform to the notation of Table 6.3 and Section 2.3 of Dumbrajs et al. [29] With R = 1.125 fm for the Λ* and Σ* and $m_S$ = 250 MeV, we find that the chiral bag model gives $|G_{\Lambda^0\bar KN}| = 9.68$ and $|G_{\Sigma^0\bar KN}| = 3.23$, values which are somewhat smaller than those quoted in Table 6.3 of Dumbrajs et al. [29] However, these values for the $\Lambda^0\bar KN$ and $\Sigma^0\bar KN$ coupling constants are well within the range reported in a more recent compilation by Adelseck and Saghai. [31] We note that a recent calculation in the bound state approach to the Skyrme model by Gobbi et al. gives $G_{\Lambda^0\bar KN} = -9.93$ and $G_{\Sigma^0\bar KN} = +3.43$ when evaluated with a pion mass term in the model Lagrangian. [32]

3 Hyperon Radiative Decay Widths: Meson Cloud Contribution
As stated above, it is not possible to distinguish the NRQM from the chiral bag model using the results presented in Tables 1 and 2. Instead, one needs to compare calculated observables which are sensitive to the predicted hyperon wavefunctions of Table 2. One such set of observables is the radiative decay widths of excited hyperons decaying into their ground states. In this section we use our simple two-phase model of baryons, described by the Lagrangian in Eq. (1), to extract meson electromagnetic transition currents and evaluate the meson cloud corrections to the total hyperon radiative decay widths.
Before presenting the details of the calculation, it should be emphasized that in our model the quark-meson coupling at the bag surface, determined by the requirement of a continuous axial current in the chiral symmetry limit, is pseudoscalar, implying that the photon couples to the mesons only through the kinetic energy term. Therefore, contributions to the meson electromagnetic current come only from those diagrams where the photon couples to the mesons in flight, as shown in Fig. 2c. The Cloudy Bag Model [12] uses, in addition to the four-point quark-meson interaction, a derivative quark-meson coupling in the bag volume. This means that the photon can couple to the mesons inside the quark core and allows for a quark-meson-photon contact interaction (Figs. 2a and 2b). We ignore contributions to the decay widths from those diagrams shown in Fig. 2d, where the intermediate baryon radiates a photon while the meson is in flight. In principle, this diagram should be included, but its effect is small compared with the process shown in Fig. 2c, roughly by the ratio of meson to baryon masses. As we shall show, the contribution from the diagram of Fig. 2c to the total hyperon radiative width is small, and therefore ignoring Fig. 2d has no numerical consequence in this calculation. This approximation is also invoked by Zhong et al., [33] who used a chiral SU(3) version of the Cloudy Bag Model to calculate the branching ratios of the $K^-p$ atom. In their calculation the Λ(1405), which they assume to be a purely flavor singlet state, gives a relatively small contribution to the radiative decay width of the $K^-p$ atom. The reason is that the $\bar KN$ atomic system lies close to the $\bar KN$ threshold, which is at the upper tail of the Λ(1405) width, and the Λ(1405) coupling strength is therefore considerably reduced in the atomic branching ratios.
Having defined the model Lagrangian, the meson electromagnetic transition currents can be extracted in the usual manner by introducing the minimal coupling prescription $\partial_\mu \to \partial_\mu \pm ieA_\mu$. The total decay width receives contributions both from the quark core and the meson cloud and, using the notation of Ref. [6], is given by
$$\Gamma_{J_i J_f} = \frac{2k}{2J_i + 1}\sum_{m_i m_f}\sum_{\lambda=\pm 1}\left|\langle J_f m_f|\,\hat\epsilon^{\,*}_\lambda(k)\cdot\left(\vec I_q + \vec I_m\right)|J_i m_i\rangle\right|^2. \qquad (13)$$
Here $\hat\epsilon$ is the photon polarization vector, and $\vec I_q$ and $\vec I_m$ are defined as
$$\vec I_q \equiv \int_0^R d^3r\; \vec J_q(\vec r\,)\, e^{-i\vec k\cdot\vec r}, \qquad (14)$$
$$\vec I_m \equiv \int_R^\infty d^3r\; \vec J_m(\vec r\,)\, e^{-i\vec k\cdot\vec r}, \qquad (15)$$
where R is the bag radius. In Eqs. (14) and (15), $\vec J_q$ and $\vec J_m$ are the quark and meson electromagnetic current operators, respectively, and, as in Refs. [6,7], we take the photon momentum k to be given by the observed mass difference between the initial and final hyperon states. The current $\vec J_m$ is generally a two-body operator, such that the transition operator given in Eq. (15) has the structure $\vec I_m = \sum_{i,j} O(i)O(j)$, which is similar to the effective quark one-meson exchange interaction operator in Eq. (3). Thus, for example, the meson cloud contribution to the matrix element for the radiative decay of Λ(1405) into $\Sigma^0$ is written as

$$\langle\Sigma^0|\,\vec I_m\,|\Lambda(1405)\rangle = \sum_{B'}\sum_{i,j}\langle\Sigma^0|O(i)|B'\rangle\langle B'|O(j)|\Lambda(1405)\rangle. \qquad (16)$$

As a result, we are faced with the problem of the choice of wavefunctions for the intermediate baryon states, a problem similar to the one already addressed in the previous section. In this work we use only the ground state octet and decuplet baryons with all three quarks in the S-state for the intermediate $|B'\rangle$ states and, as a first approximation, neglect baryon mass differences as well as recoil corrections. This means the possible intermediate baryon states included in Eq. (16) are B′ = p, $\Xi^-$, $\Sigma^+$, $\Sigma^-$, $\Delta^+$, $\Xi^{-*}$, $\Sigma^{+*}$ and $\Sigma^{-*}$. In Appendix C we present some explicit examples of the meson electromagnetic currents, $\vec J_m$, and give expressions for the corresponding transition operators $\vec I_m$.

We find that contributions from $\pi^+\Sigma^-$ and $\pi^-\Sigma^+$, as well as $\pi^+\Sigma^{-*}$ and $\pi^-\Sigma^{+*}$, intermediate states cancel each other when evaluating the matrix element $\langle\Lambda^0|\vec I_m|\Lambda^*\rangle$ for the radiative decay of Λ* into $\Lambda^0\gamma$. The reason is that the strong coupling constants for the processes $\pi^-\Sigma^+ \to \Lambda$ and $\pi^+\Sigma^- \to \Lambda$ (as well as $\pi^\pm\Sigma^\mp \to \Lambda^*$) have the same sign, whereas the $\pi^+$ and $\pi^-$ transition electromagnetic currents connecting the states Λ* and Λ have opposite signs, thus cancelling the contributions from the $\pi^+\Sigma^-$ and $\pi^-\Sigma^+$ intermediate states in the matrix element for the radiative widths. Consequently, in this model, the pions are effectively spectators for $\Lambda^* \to \Lambda^0\gamma$ decays, in contrast to $\Lambda^* \to \Sigma^0\gamma$ decays, where both the kaon and pion clouds contribute to the total decay widths. Furthermore, there are no contributions from the $K^-\Delta^+$ intermediate state to the radiative decays of Λ(1520) and Λ(1405), for the following reason. If the initial excited hyperon is to radiate through the $K^-$ cloud, a strange quark must initially be in an excited state, implying that both the u- and d-quarks are in S-states and form an antisymmetric spin-flavor state. Therefore, the initial hyperon state cannot couple to the $\Delta^+$ intermediate state, which has a totally symmetric spin-flavor wavefunction.
The results of our calculations are summarized in Table 3, where we present the separate incoherent quark core and meson cloud contributions, as well as the total hyperon radiative decay widths. As in Ref. [7], we use $m_S$ = 250 MeV and R = 1.125 fm to calculate these widths. It is clear that the meson cloud corrections to the decay widths are negligible, except in the case of the decay $\Lambda(1405) \to \Sigma^0 + \gamma$, where the width decreases from 2.22 keV to 1.85 keV when the meson cloud corrections are included. The meson cloud contributions to the widths of the decay process $\Lambda^* \to \Lambda + \gamma$ are much smaller than those for $\Lambda^* \to \Sigma^0 + \gamma$ decays, as expected due to the heavy kaon mass; i.e., the meson cloud contributions to the radiative decay widths, though small, come from the pion cloud.
These results are a direct consequence of the weak meson field approximation to the chiral bag model. In this approximation, the meson fields are localized just outside the bag surface and their strength decreases very rapidly as one moves away from the quark source on the surface. Numerically, the smallness of the meson cloud contributions comes from the small values of the integrals $I_1$ to $I_6$ defined in Eqs. (C-17) to (C-22) in Appendix C. These are small due to the damping factor $e^{-2\mu R x}$ appearing in the integrands, which means only the region near the (sharp) bag surface contributes to the integrals. In fact, this can be seen in the radial charge density of the Λ, calculated in the Cloudy Bag Model by Kunz, Mulders and Miller. [34] They find a vanishing charge density at a distance of about 0.5 fm outside the bag cavity (see Fig. 2 of Ref. [34]). This situation is similar to the description of the excited Λ in our model, although the two models will probably disagree in their predictions of the radiative decay widths due to the different meson-photon couplings.
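The size of this damping can be illustrated with a one-line estimate: integrating $e^{-2\mu Rx}$ over the exterior region gives a far smaller weight for the kaon than for the pion, which is why the (already small) meson cloud corrections in Table 3 come essentially from the pion cloud. The conversion of the meson masses to inverse fermis and the schematic weight below are our illustrative assumptions.

```python
# Schematic exterior weight suppressed by the damping factor exp(-2*mu*R*x).
from math import exp

R = 1.125                     # bag radius in fm
mu_pi, mu_K = 0.70, 2.50      # pion and kaon masses in fm^-1 (~138 and ~494 MeV)

def weight(mu, R):
    # integral over x in [1, inf) of exp(-2*mu*R*x) = exp(-2*mu*R) / (2*mu*R)
    return exp(-2 * mu * R) / (2 * mu * R)

print(weight(mu_pi, R) / weight(mu_K, R))  # pion/kaon weight ratio ~ 2x10^2
```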
Discussion and Conclusion
In this work we have calculated the masses of the low-lying negative parity hyperons in a perturbative approximation to the chiral bag model incorporating both pion and kaon fields in the meson cloud, and found that the resulting mass spectrum is very similar to the one predicted by the NRQM calculation of Isgur and Karl. [9] Not only do both models reproduce this mass spectrum to some extent, but they also predict yet unobserved states sharing the same quantum numbers. However, both models fail to reproduce the observed mass splitting between Λ(1520) and Λ(1405). Without the kaon cloud, our model predicts the two lightest Λ* states to be degenerate in mass. A more careful treatment of the baryon self-energy diagrams takes the mass difference between the initial/final and the intermediate baryons into consideration. An estimate of this mass difference correction indicates that it is the large flavor-singlet component in the Λ(1520) which lowers its mass below that of Λ(1405) when the kaon cloud is included in our model.

We have also calculated the hadronic widths of Λ(1520) and Λ(1405), and found that the total width of Λ(1520) agrees well with experiment, whereas the prediction for the Λ(1405) width is too small relative to the currently accepted value. If we include the effects of the kaon and pion clouds, the ratio of the squares of the coupling constants of Λ(1405) to the $\bar KN$ and πΣ channels is also found to be small compared to most data analyses. [29,30] We find in this paper that the chiral bag model gives values for the $\Lambda^0\bar KN$ and $\Sigma^0\bar KN$ coupling constants consistent with the latest compilation by Adelseck and Saghai [31] and the Skyrme model calculation by Gobbi et al. [32] In view of this, it seems that our dominant $qqq$ assumption regarding Λ(1405) has some difficulties. However, we note that the only reliable quantity in an analysis of Λ(1405) is the strength of the residue at the Λ(1405) pole, which can only be reached via a dispersion analysis of the experimental data. Current $\bar KN$ and πΣ scattering data are too crude to allow for a reliable dispersion analysis.
In addition to the hadronic widths of the lightest $J^P = \frac32^-$ and $\frac12^-$ hyperons, we evaluated the meson cloud corrections to the radiative decay widths of Λ(1520) and Λ(1405), which we claim to be a very good observable to test various models of hadrons. In our model the radiative decay of an excited hyperon proceeds mainly through the radiation of a photon by the excited valence quarks, and almost no radiation occurs from the meson cloud. To examine this further, one should calculate the excited baryon mass spectra with the non-perturbative (topological) chiral bag model, [35] which is technically very challenging. The model we use in this work is the weak meson field approximation to the topological bag model, which enhances the role of the quarks in the bag cavity while ignoring the non-perturbative effects of the meson cloud. Another extreme approximation to the topological bag model is the soliton description of baryons without any explicit quark degrees of freedom. The Skyrme model is a prototype of such soliton models of baryons, and hyperon resonances have been studied using this model. [36] A calculation of the radiative decay widths of excited hyperons in the Skyrme model, for comparison with predictions from models of baryons containing explicit quark degrees of freedom, is currently in progress by one of the authors. [37] In Ref. [38], Burkhardt and Lowe extract the radiative decay widths of Λ(1405) decaying into the ground states $\Sigma^0$ and $\Lambda^0$ using the measured branching ratios for the radiative decay of the $K^-p$ atom. [39] They use a pole model to calculate the $K^-p$ atom radiative decay branching ratios in order to determine the Λ(1405) radiative decay widths. The Λ(1405) coupling constant to the $\bar KN$ channel and the $\Lambda^0\bar KN$ and $\Sigma^0\bar KN$ coupling constants are among the input parameters of this analysis. We find a small value of the Λ(1405)$\bar KN$ coupling constant, a reflection of the small Λ(1405) hadronic width obtained through Eq. (7). A pole model analysis with this small coupling gives Λ(1405) radiative decay widths larger than the predictions reported in the literature. [40] This, together with our unsuccessful attempt at obtaining a Λ(1520) and Λ(1405) mass splitting and with the Cloudy Bag Model results of going to the next order in the meson field coupling, indicates that the Λ(1405) structure is more than a simple three-quark state. In fact, it has been known for some time that the real part of the $K^-p$ scattering length extracted from the 1S level shift of the kaonic hydrogen atom and from $\bar KN$ scattering have opposite signs. A recent examination of this "kaonic hydrogen puzzle" by Tanaka and Suzuki [41] using two different models seems to favor the one assuming a two-body composite system for the Λ(1405).
In summary, with the exception of Λ(1405), the chiral bag model describes the mass spectrum of the negative parity hyperons reasonably well, which, together with the earlier success in describing the N* and ∆* negative parity mass spectra, [25,19] makes this model a strong competitor to the NRQM. We have also calculated the partial and total hadronic widths of Λ(1520) and find them to be close to the experimentally observed ones. Furthermore, we find that the electromagnetic decay widths of the two lightest excited hyperons in this model are much smaller than the ones calculated in the NRQM. [4] We stress that to understand the difference between the structures of the Λ(1520) and Λ(1405) hyperons and the nature of their observed $L\cdot S$ splitting, both states, which are close to the $\bar KN$ threshold, should be examined within the same model.
A Hyperon Mass Corrections
Here we discuss details regarding the estimates of the mass correction factor (i.e. the real part of $\delta_L(BB')$) in Eq. (4), which multiplies the spin-flavor matrix elements of Eq. (3). We apply this mass correction to $\Lambda(\frac32^-)_3$ and $\Lambda(\frac12^-)_3$ and discuss why our model gives the unexpected mass ordering of these two lightest excited hyperons shown in Fig. 1. In Eq. (4) we use the "bare" mass for the initial/final baryon state, $m_B$; the "bare" masses of $\Lambda(\frac32^-)_3$ and $\Lambda(\frac12^-)_3$ are about equal and are approximately 1695 MeV. This "bare" mass value is not too different from what is needed in the calculations of Veit et al., [12] Siegel and Weise [13] and Arima and Yazaki, [21] and is a result of our bag model calculation including all one-gluon exchange interaction terms but excluding all meson cloud contributions. [7] The "bare" ground state baryon octet masses taken from Ref. [26] are used for the intermediate baryon masses, $m_{B'}$. In general, $m_{B'} - m_B = -\Delta < 0$, which gives our "static" estimate for the mass corrections. When $B = \Lambda(\frac12^-)_3$ we find $\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3 N) \approx 9$ for the $\bar KN$ L = 0 intermediate state, together with a corresponding correction factor $\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3\Sigma)$ for the πΣ intermediate state. For $B = \Lambda(\frac32^-)_3$ we use the "non-static" estimate, since the baryon recoil will be particularly important for the $\bar KN$ intermediate state with L = 2; in this case the corresponding correction factors are found to include $\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3\Sigma) \approx 4$ for the L = 2 meson-baryon intermediate states. The value of the correction factor $\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3 N)$ is very sensitive to the "bare" baryon mass difference $m_{B'} - m_B$, due to the closeness of the $\bar KN$ threshold and the L = 2 centrifugal factor. Using these correction factors in Eq. (3), we still find $\Lambda(\frac32^-)_3$ below $\Lambda(\frac12^-)_3$. The reasons are inherent in our model and are as follows. For $\Lambda(\frac32^-)_3$, the $\bar KN$ and πΣ intermediate state couplings in Eq. (3) operate only through the $k^K_{SA'A'S}$ and $k^\pi_{SAAS}$ quadrupole transition operators (see Table II of Ref. [19]). Here, as in Section 2.2, A′ denotes a massive s-quark in the $P_{3/2}$ state. In our model calculations, the wave function for the lightest $J^P = \frac32^-$ Λ* state was found to be [7]
$$|\Lambda(3/2^-)\rangle_3 = -0.95\,|{}^2 1_{3/2}\rangle - 0.09\,|{}^4 8_{3/2}\rangle + 0.29\,|{}^2 8_{3/2}\rangle, \qquad (\mathrm{A\text{-}1})$$
which is clearly dominated by the flavor singlet component. The pionic cloud contribution to the $J^P = \frac32^-$ states, $H_\pi$, is given by a 3×3 matrix defined by
$$H_\pi \equiv \begin{pmatrix} \langle{}^2 1_{3/2}|O_\pi|{}^2 1_{3/2}\rangle & \langle{}^2 1_{3/2}|O_\pi|{}^4 8_{3/2}\rangle & \langle{}^2 1_{3/2}|O_\pi|{}^2 8_{3/2}\rangle \\ \langle{}^4 8_{3/2}|O_\pi|{}^2 1_{3/2}\rangle & \langle{}^4 8_{3/2}|O_\pi|{}^4 8_{3/2}\rangle & \langle{}^4 8_{3/2}|O_\pi|{}^2 8_{3/2}\rangle \\ \langle{}^2 8_{3/2}|O_\pi|{}^2 1_{3/2}\rangle & \langle{}^2 8_{3/2}|O_\pi|{}^4 8_{3/2}\rangle & \langle{}^2 8_{3/2}|O_\pi|{}^2 8_{3/2}\rangle \end{pmatrix} \qquad (\mathrm{A\text{-}2})$$
where $O_\pi$ is the effective quark one-pion exchange interaction operator. The matrix in Eq. (A-2) is similar to Eqs. (2) in Ref. [7]. With a bag radius of 1.2 fm and a strange quark mass of 250 MeV, the matrix elements in $H_\pi$ are evaluated in MeV; only the baryon octet ground states (with all three quarks in the S state) contribute to the matrix elements in Eqs. (A-3) and (A-4). The meson cloud contributions to the octet components leave the two lightest Λ* states approximately mass degenerate. Therefore, in this discussion we shall concentrate on the contribution from the pure flavor singlet component, which is responsible for the unexpected mass ordering of the lowest $J^P$ states. The flavor singlet mass contribution from the pion cloud, denoted as $\langle{}^2 1_{3/2}|O_\pi|{}^2 1_{3/2}\rangle$ in Eq. (A-3), is −104 MeV. This contains a contribution from the πΣ intermediate state of $5\,\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3\Sigma)\,k^\pi_{SAAS} = -16.6$ MeV when $\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3\Sigma) = 1$. Similarly, the kaon cloud contribution to the flavor singlet is −34 MeV, as shown in Eq. (A-4), and this includes a $\bar KN$ intermediate state contribution of $\frac{10}{3}\,\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3 N)\,k^K_{SA'A'S} = -3.73$ MeV when $\mathrm{Re}\,\delta_2(\Lambda(\frac32^-)_3 N) = 1$. The corresponding contributions for $\Lambda(\frac12^-)_3$ are as follows (see e.g. Eq. (2c) in Ref. [7]). For the pion cloud we have a contribution of −62 MeV, which includes an intermediate πΣ state contribution of $2\,\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3\Sigma)\,m^\pi_{SPPS}$ evaluated at $\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3\Sigma) = 1$, and the kaon cloud mass contribution is −13 MeV, of which the $\bar KN$ intermediate state contribution is $4\,\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3 N)\,m^K_{SP'P'S} = -1.85$ MeV when $\mathrm{Re}\,\delta_0(\Lambda(\frac12^-)_3 N) = 1$. We note that when we calculate the spin-flavor matrix elements between the various baryon states, we use the prescription of Ref. [25] (see Fig. 4 of this reference), where the quark remains in its initial state when i = j in the sum over the quark indices i and j in Eq. (3). The consequence of this prescription is that only the terms i = j in Eq. (3) contribute to the πΣ and $\bar KN$ intermediate states. (We relax this restriction when we calculate the hadronic widths of the hyperons in Section 2.3 of this paper.) Assuming now that $\Lambda(\frac32^-)_3$ and $\Lambda(\frac12^-)_3$ are pure flavor singlets, we find, with no mass correction factors (i.e., with all $\mathrm{Re}\,\delta_L(BB') = 1$), the following masses:

$$M_{\Lambda(3/2^-)_3} = (1695 - 154 - 49)\ \mathrm{MeV}$$

Here the three terms in the parenthesis are the "bare" Λ* mass, the pion- and the kaon-cloud contributions, respectively. When we include the estimated values of the real part of the mass correction factors quoted above, we find the following changes to the above masses: although the mass correction factors for the $\bar KN$ intermediate states are all very large, the kaon cloud contributions are too small to invert the calculated mass ordering. The reason is the small values of the kaon cloud transition matrix elements, $k^K_{SA'A'S}$ and $m^K_{SP'P'S}$, a result of the heavy kaon mass. It seems to be necessary to go beyond the lowest order of this model, perhaps by including the non-perturbative effects of the meson cloud, to explain the mass splitting of the Λ(1520) and Λ(1405). Note that in the Cloudy Bag Model of Veit et al. [12] an additional, very attractive, four-point quark-meson interaction of the type $\bar q\gamma_\mu\vec\lambda\, q\cdot(\vec\phi\times\partial^\mu\vec\phi)$, which is second order in the meson field, is included, leading to an L = 0 meson-baryon contact interaction. The effect of this interaction term on the Λ(1520) state should be examined.
B $Y\bar KN$ Coupling Constants
In this Appendix we briefly present the calculation of the hyperon coupling constants $G_{\Lambda^0\bar KN}$ and $G_{\Sigma^0\bar KN}$ in the chiral bag model, where the meson cloud is treated as a perturbation. The method employed in determining $G_{Y\bar KN}$ is a straightforward generalization of the calculation of the πNN coupling constant described in Ref. [24]. The basic assumption made here is the identification of the hyperon coupling constant $G_{Y\bar KN}$ with the usual πNN coupling constant in the SU(3) limit (i.e. the s-quark mass equals the u- and d-quark masses and the kaon mass becomes equal to the pion mass).
If the quark-meson coupling at the bag surface is linearized as in Eq. (1), then the $K^-$ field generated by a strange quark in the S state can be written as
$$K^-(\vec r\,) = -\frac{A(-1)N(-1)}{4\pi f_K}\,\mu^2\,\frac{e^{\mu R}}{(\mu R + 1)^2 + 1}\;\frac{1+\mu r}{(\mu r)^2}\,e^{-\mu r}\sum_i V_+(i)\,\vec\sigma(i)\cdot\hat r, \qquad (\mathrm{B\text{-}1})$$
where r ≥ R. Here µ is the kaon mass, $f_K$ is the kaon decay constant, and the quark index i runs from 1 to 3. $V_+$ is the V-spin raising operator acting on the flavor wavefunction. The coefficients A(−1) and N(−1) are proportional to the normalization constants of the s-quark and the u- or d-quark wavefunctions in the bag cavity, respectively, so that in the SU(3) limit A(−1) → N(−1).
In close analogy with the calculation of the πNN coupling constant, we define the quark-kaon coupling constant $g_{qK}$ through
$$\frac{g_{qK}}{M} \equiv \frac{A(-1)N(-1)}{f_K}\;\frac{e^{\mu R}}{(\mu R + 1)^2 + 1}, \qquad (\mathrm{B\text{-}2})$$
so that the $K^-$ field outside the bag becomes

$$K^-(\vec r\,) = -\frac{\mu^2}{4\pi}\,\frac{g_{qK}}{M}\;\frac{1+\mu r}{(\mu r)^2}\,e^{-\mu r}\sum_i V_+(i)\,\vec\sigma(i)\cdot\hat r. \qquad (\mathrm{B\text{-}3})$$
In Eq. (B-2), M is the nucleon mass, and in the SU(3) limit $g_{qK}$ reproduces the quark-pion coupling constant discussed in Ref. [24].
The coupling constants for the $\Lambda^0$ and $\Sigma^0$ hyperons are determined by relating the expectation value of the quark operator $\sum_i \vec\sigma(i)V_+(i)$ between the relevant baryon states to that evaluated at the meson-baryon level. For example, let

$$\langle p\uparrow|\sum_i \vec\sigma(i)V_+(i)|\Lambda^0\uparrow\rangle = a\, U^\dagger_N\,\vec\sigma\, U_{\Lambda^0}, \qquad (\mathrm{B\text{-}4})$$
where ↑ indicates "spin up", and $U_{\Lambda^0}$ and $U_N$ are the Pauli spinors for $\Lambda^0$ and the nucleon, respectively. Then the magnitude of the physical $\Lambda^0\bar KN$ coupling constant, denoted $G_{\Lambda^0\bar KN}$ in Section 2.3, is given by
$$|G_{\Lambda^0\bar KN}| = a\,|g_{qK}|. \qquad (\mathrm{B\text{-}5})$$
A standard calculation gives
$$|g_{qK}| = \frac{1}{\sqrt 2}\,|G_{\Lambda^0\bar KN}| = \frac{3}{\sqrt 2}\,|G_{\Sigma^0\bar KN}|. \qquad (\mathrm{B\text{-}6})$$
The value of the quark-kaon coupling constant is a function of the strange quark mass and the bag radius. With $m_S$ = 250 MeV and R = 1.125 fm, $|g_{qK}|$ is found to be 6.84, and the resulting magnitudes of the physical $\Lambda^0\bar KN$ and $\Sigma^0\bar KN$ coupling constants are 9.68 and 3.23, respectively.
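Relation (B-6) ties together the three numbers just quoted; a two-line check:

```python
# Consistency check of Eq. (B-6) with the quoted values of the couplings.
from math import sqrt, isclose

g_qK, G_Lambda, G_Sigma = 6.84, 9.68, 3.23
print(isclose(G_Lambda / sqrt(2), g_qK, rel_tol=1e-2))      # True
print(isclose(3 * G_Sigma / sqrt(2), g_qK, rel_tol=1e-2))   # True
```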
C Meson Electromagnetic Currents
In this Appendix we outline the derivation of the pion electromagnetic transition currents and the corresponding radial integrals used to evaluate the meson cloud contributions to the hyperon radiative decay widths. Let $\vec\pi_{XY}(\vec r\,)$ be the effective pion field emitted by a quark at the bag surface in an initial state X and final state Y, where X and Y can be any of the quark states $S \equiv S_{1/2}$, $P \equiv P_{1/2}$ or $A \equiv P_{3/2}$. As discussed above, these pion fields are determined by requiring a continuous axial current across the bag surface in the chiral limit; their derivation was presented explicitly in Appendix A of Ref. [19].
The general expression for the pion field is
$$\vec\pi_{XY} \equiv \vec\pi_{XY}(\vec r\,) = \frac{i}{2f_\pi}\sum_{l,m} f_l(i\mu r)\int d^2r'\,\left(\bar q_X(\vec r\,')\gamma_5\vec\tau\, q_Y(\vec r\,')\right)\Big|_{r'=R}\, Y^*_{lm}(\hat r)\,Y_{lm}(\hat r'), \qquad (\mathrm{C\text{-}1})$$

where

$$f_l(i\mu r) \equiv \frac{h_l(i\mu r)}{\partial h_l(i\mu R)/\partial R}, \qquad (\mathrm{C\text{-}2})$$

and $h_l(x)$ are the spherical Hankel functions of the first kind,

$$h_l(x) = -i(-1)^l\, x^l\left(\frac{1}{x}\frac{d}{dx}\right)^l \frac{e^{ix}}{x}. \qquad (\mathrm{C\text{-}3})$$
Here $q_Y(\vec r\,)$ is the field of a quark in state Y, µ is the pion mass, and $Y_{lm}(\hat r)$ is a spherical harmonic.
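For numerical work it is convenient to generate the $h_l$ of Eq. (C-3) from the closed forms for l = 0, 1 and the standard recurrence $h_{l+1}(x) = \frac{2l+1}{x}h_l(x) - h_{l-1}(x)$, which holds for complex argument and can therefore be evaluated at $x = i\mu r$ as required by Eqs. (C-1) and (C-2). The following utility is our own sketch, not code from the paper:

```python
# Spherical Hankel functions of the first kind via upward recurrence,
# valid for complex argument, e.g. x = i*mu*r as in Eqs. (C-1)-(C-2).
import cmath

def spherical_hankel1(lmax, x):
    """Return [h_0(x), ..., h_lmax(x)] for complex x != 0."""
    h = [-1j * cmath.exp(1j * x) / x,                 # h_0(x)
         -(cmath.exp(1j * x) / x) * (1 + 1j / x)]     # h_1(x)
    for l in range(1, lmax):
        h.append((2 * l + 1) / x * h[l] - h[l - 1])
    return h[:lmax + 1]

mu_r = 1.5                                  # mu*r, dimensionless
print(spherical_hankel1(2, 1j * mu_r)[0])   # h_0(i*mu*r)
print(-cmath.exp(-mu_r) / mu_r)             # closed form -exp(-mu*r)/(mu*r)
```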
The pion electromagnetic transition current can easily be constructed from the model Lagrangian given in Eq. (1). Let $\vec J^{ij}_\pi(XYZW)$ be the pion transition current emitted by a quark i in an initial state X and final state Y, and absorbed by a quark j in an initial state Z and final state W (see Fig. 3a). Then $\vec J^{ij}_\pi(XYZW)$, which in Section 3 is denoted generically by $\vec J_m$, can be written as
$$\vec J^{ij}_\pi(XYZW) = -\frac{ie}{2}\left[\left(\vec\nabla\pi^{i\,\dagger}_{XY}\right)\pi^j_{ZW} - \pi^{i\,\dagger}_{XY}\,\vec\nabla\pi^j_{ZW}\right]. \qquad (\mathrm{C\text{-}4})$$
Since these pion fields are solutions of the free Klein-Gordon equation, the pion electromagnetic current $\vec J^{ij}_\pi$ is automatically conserved. For example, using the notation of Ref. [25], the transition currents corresponding to the diagrams in Figs. 3b and 3c are
$$\vec J^{ij}_\pi(PSSS) = -\frac{ie}{2}\left[\left(\vec\nabla\pi^{i\,\dagger}_{PS}\right)\pi^j_{SS} - \pi^{i\,\dagger}_{PS}\,\vec\nabla\pi^j_{SS}\right] = +\frac{ie}{2}\,T_+(i)T_-(j)\,\frac{N(S)^3 N(P)}{16\pi^2 f_\pi^2}\,P_i[P\to S]\;\frac{1}{r}\left[g_1(i\mu r)\,(\vec\sigma(j)\cdot\hat r)\,\hat r - f_0(i\mu r)f_1(i\mu r)\,\vec\sigma(j)\right] \qquad (\mathrm{C\text{-}5})$$

and

$$\vec J^{ij}_\pi(ASSS) = -\frac{ie}{2}\left[\left(\vec\nabla\pi^{i\,\dagger}_{AS}\right)\pi^j_{SS} - \pi^{i\,\dagger}_{AS}\,\vec\nabla\pi^j_{SS}\right] = +\frac{ie}{2}\,T_+(i)T_-(j)\,\frac{N(S)^3 N(A)}{8\sqrt 6\,\pi^2 f_\pi^2}\,K^{[3/2,1/2]}_{ab}(i)\;\frac{1}{r}\left[f_1(i\mu r)f_2(i\mu r)\left(\vec\sigma(j)\,\hat r_a\hat r_b - \sigma_c(j)\,\hat r_a\hat r_c\,\hat e_b\right) + g_2(i\mu r)\,\sigma_c(j)\,\hat r_a\hat r_b\hat r_c\,\hat r\right] \qquad (\mathrm{C\text{-}6})$$
Here the isospin operators are $T_\pm \equiv \mp\frac{1}{\sqrt 2}(\lambda_1 \pm i\lambda_2)$, where the $\lambda_i$ are the 3×3 Gell-Mann matrices in flavor space. The subscripts a, b, c = 1, 2, 3 indicate the Cartesian components of the vectors or tensors, and $\hat e$ is a spatial unit vector. The permutation operator $P_i[X\to Y]$ permutes the states X and Y of quark i, the components of the vector $\vec\sigma$ are the usual Pauli spin matrices, and $K^{[3/2,1/2]}_{ab}$ is a quadrupole transition operator defined in Ref. [25]. The function $g_n(i\mu r)$ is given by

$$g_n(i\mu r) = f_n(i\mu r)\left[\frac{df_{n-1}(i\mu r)}{dr} - (n-1)\,\frac{f_{n-1}(i\mu r)}{r}\right] - f_{n-1}(i\mu r)\left[\frac{df_n(i\mu r)}{dr} - n\,\frac{f_n(i\mu r)}{r}\right]. \qquad (\mathrm{C\text{-}7})$$
It is evident that the transition operators resulting from these two currents are two-body operators in quark space and no other currents need to be constructed if one allows only ground state baryons as intermediate baryon states.
The volume integrals needed to evaluate $\vec I_m$ in Eq. (15) are straightforward but lengthy. In the following equations, the photon polarization vector $\hat\epsilon$ is expressed in Cartesian coordinates, i.e. $\hat\epsilon_1 \equiv (1,0,0)$ and $\hat\epsilon_2 \equiv (0,1,0)$. For P → S transitions, the volume integral using the pion electromagnetic current of Eq. (C-5) is
$$\hat\epsilon_l\cdot\int_R^\infty d^3r\; \vec J^{ij}_\pi(PSSS)\,e^{-i\vec k\cdot\vec r} = +\frac{ie}{2}\,T_+(i)T_-(j)\,\frac{N(S)^3 N(P)}{16\pi^2 f_\pi^2}\,P_i[P\to S]\;i\,\frac{4\pi}{3}\,\frac{R}{\mu}\,H_0(i\mu R)H_1(i\mu R)\,(I_1 - I_2)\,\sigma_l(j). \qquad (\mathrm{C\text{-}8})$$
Here the subscript l on $\hat\epsilon$ and σ is either 1 or 2. The volume integrals for A → S transitions involving the current in Eq. (C-6) have a more complicated operator structure. Corresponding expressions for the kaon electromagnetic transition currents may be derived in a similar manner with an appropriate substitution of the isospin operator $T_\pm$ by the V-spin operator $V_\pm \equiv \mp\frac{1}{\sqrt 2}(\lambda_4 \pm i\lambda_5)$.

Table 3: Hyperon radiative decay widths in keV in the linearized approximation to the chiral bag model. Columns two and three show the separated incoherent parts of the quark core ($\Gamma_q$) and meson cloud ($\Gamma_m$) contributions to the decay width. The total decay width defined in Eq. (13) is shown in the fourth column. As in Ref. [7] we use $m_S$ = 250 MeV and R = 1.125 fm to calculate the widths.

Transition                     Γ_q     Γ_m       Γ_Total
Γ(Λ(1520) → Λ0 + γ)           31.60   3×10⁻³    31.46
Γ(Λ(1520) → Σ0 + γ)           48.83   0.33      50.85
Γ(Λ(1405) → Λ0 + γ)           75.15   7×10⁻⁴    74.98
Γ(Λ(1405) → Σ0 + γ)            2.22   —          1.85
For the non-static case, the expression $\omega + m_{B'} - m_B - i\epsilon$ in Eq. (4) should be replaced by $\omega + \sqrt{m_{B'}^2 + q^2} - m_B - i\epsilon$, and in our estimate we use the same normalization as in the static case. The relative intermediate meson-baryon angular momentum L takes the values L = 0 for $B = \Lambda(\frac12^-)_3$ and L = 2 for $B = \Lambda(\frac32^-)_3$.
1 (iµR)H 2 (iµR) D(µR)K
60 3 × 10 −3 31.46 Γ(Λ(1520) → Σ 0 + γ) 48.83 0.33 50.85 Γ(Λ(1405) → Λ 0 + γ) 75.15 7 × 10 −4 74.98 Γ(Λ(1405) → Σ 0 + γ)
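For readability, the additive structure of the mass estimate quoted above can be written out explicitly. This is a minimal reconstruction: the identification of the two subtracted terms with the pion- and kaon-cloud self-energies is our assumption about the garbled source, while the arithmetic itself is unambiguous.

```latex
% Pure flavor-singlet assignment, all Re\,\delta_L(BB') = 1; the split of the
% two corrections between the pion and kaon clouds is an assumption.
M_\Lambda = M_{\mathrm{bare}} + \delta M_\pi + \delta M_K,
\qquad
M_{\Lambda(3/2^-)_3} = (1695 - 154 - 49)~\mathrm{MeV} = 1492~\mathrm{MeV}.
```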
Figure 1: The mass spectrum of the negative parity Λ* and Σ* hyperons in the chiral bag model where the meson cloud is treated as a perturbation. In the figure we show the observed hyperon resonances with their mass uncertainties taken from Ref. [3] and the calculated masses when (i) only the pion field (solid line) and (ii) both the pion and the kaon fields (dashed line) are included in the meson cloud. The vertical lines connect the same states before and after the kaon field has been included in the calculation. Note that when both the pion and kaon fields are included in the meson cloud, the mass of Σ(...)

Figure 2: (a) and (b): The quark-meson-photon contact interaction which is absent in a chiral bag model described by the Lagrangian given in Eq. (1), due to the pseudoscalar quark-meson coupling at the bag surface. (c): The only type of contribution to the meson electromagnetic transition current considered in this work; here the photon (wiggly line) couples to the meson (dashed line) in flight. (d): Radiation of a photon from an intermediate baryon while the meson is in flight. Because the baryons are heavy compared to mesons, this diagram has been omitted in the present work as well as in Ref. [33].

Figure 3: (a): A typical diagram involving the meson electromagnetic transition current. Here the quark i (solid line) emits a meson (dashed line) and changes its state from Z to W. (b): The diagram involving the pion electromagnetic transition current J^ij_π(PSSS) of Eq. (C-5). Here the dashed line represents a pion in flight. (c): The corresponding diagram involving the current J^ij_π(ASSS) of Eq. (C-6). See Appendix C for details of their derivations.
The relative branching ratio Γ_πΣ[Λ(1520)]/Γ_K̄N[Λ(1520)] increases from 14.4/9.2 ≈ 1.6 to approximately 7.7/2.4 ≈ 3.2. For Λ(1405) we find Γ_πΣ[Λ(1405)] ≈ 6 MeV, whereas if Λ(1405) is a pure flavor singlet then Eq. (7) gives a width of about 19.3 MeV. We note that the values of these widths depend somewhat on the value of the bag radius R. For example, if R = 1.175 fm then the total width of the Λ(1520) decreases to 19 MeV, a value closer to the measured width, but Γ_πΣ[Λ(1405)] changes only by 1 MeV, to 5 MeV.
In all of the Cloudy Bag Model calculations, the Λ(1405) is assumed to be a flavor singlet state.
Eq. (6) in Ref. [7] contains a misprint: 0.03 should be 0.003.
The definitions of strong coupling constants used in Ref. [29] are not always consistent. Also, the footnote (a) to Table 6.6 in this reference is incorrect. The ratio of coupling constants should increase if g² were used, as correctly stated in footnote (a) of Table 6.14 of Nagels et al. [30].
Acknowledgements

We thank G.E. Brown for encouraging and inspiring this collaboration and Jim Lowe for numerous correspondences. One of us (YU) would like to thank the University of South Carolina at Columbia for kind hospitality.

[Appendix residue: the function H_n(iµR) is defined in an appendix equation, and the coefficients A(µR) through E(µR) are given in Eqs. (C-17)-(C-21); the equations themselves are not recoverable from the source.]

Table 2: Relative percentages of spin-flavor contents of low-lying negative parity hyperons in the NRQM of Ref. [9] and in the chiral bag model of this work. The composition for the Σ(3/2−)_2 state in NRQM was not given in Ref. [9]. In both models the J^P = 5/2− hyperons are pure spin-quartet, flavor-octet states. (Columns: spin-flavor state → NRQM (in %) and chiral bag (in %); rows: state ↓. The numerical entries are not recoverable.)
References

[1] P.D. Barnes et al., "Electromagnetic Production of Hyperons", CEBAF Letter of Intent (unpublished)
[2] H. Pilkuhn, in The Interactions of Hadrons (North-Holland, Amsterdam, 1967)
[3] Particle Data Group, Phys. Rev. D45 (1992) Part II
[4] J.W. Darewych, M. Horbatsch and R. Koniuk, Phys. Rev. D28 (1983) 1125
[5] M. Warns, W. Pfeil and H. Rollnik, Phys. Lett. B258 (1991) 431
[6] E. Kaxiras, E.J. Moniz and M. Soyeur, Phys. Rev. D32 (1985) 695
[7] Y. Umino and F. Myhrer, Nucl. Phys. A529 (1991) 713
[8] D.L. Adams et al., "Radiative Decays of Low-Lying Hyperons", CEBAF Letter of Intent (unpublished)
[9] N. Isgur and G. Karl, Phys. Rev. D18 (1978) 4187
[10] F. Close, in An Introduction to Quarks and Partons (Academic Press, New York, 1979)
[11] T. DeGrand and R.L. Jaffe, Ann. of Phys. (NY) 100 (1976) 425; T. DeGrand, Ann. of Phys. (NY) 101 (1976) 496
[12] E.A. Veit, B.K. Jennings, R.C. Barrett and A.W. Thomas, Phys. Lett. B137 (1984) 415; E.A. Veit, B.K. Jennings, A.W. Thomas and R.C. Barrett, Phys. Rev. D31 (1985) 1033
[13] P.B. Siegel and W. Weise, Phys. Rev. C38 (1988) 2221
[14] P.J. Fink, G. He, R.H. Landau and J.W. Schnick, Phys. Rev. C41 (1990) 2720
[15] C.G. Callen, K. Hornbostel and I. Klebanov, Phys. Lett. B202 (1988) 269; C.G. Callen and I. Klebanov, Nucl. Phys. B262 (1985) 365
[16] R.H. Dalitz, Rev. Mod. Phys. 33 (1961) 471
[17] G.C. Oades and G. Rasche, Nuovo Cimento 42A (1977) 462; Phys. Scr. 26 (1982) 15
[18] R.A. Williams, C.-R. Ji and S.R. Cotanch, Phys. Rev. C43 (1991) 452
[19] Y. Umino and F. Myhrer, Phys. Rev. D39 (1989) 3391
[20] N.A. Törnqvist and P. Zenczykowski, Zeit. Phys. C30 (1986) 83
[21] M. Arima and K. Yazaki, Nucl. Phys. A506 (1990) 553
[22] B. Silvestre-Brac and C. Gignoux, Phys. Rev. D43 (1991) 3699
[23] S.K. Sharma et al., Phys. Rev. Lett. 62 (1989) 2913; W. Blask et al., Z. Phys. A326 (1987) 413
[24] F. Myhrer, in International Review of Nuclear Physics, ed. W. Weise, vol. 1 (World Scientific, Singapore, 1984) p. 326
[25] J. Wroldsen and F. Myhrer, Zeit. Phys. C25 (1984) 59; F. Myhrer and J. Wroldsen, Zeit. Phys. C25 (1984) 281
[26] F. Myhrer, G.E. Brown and Z. Xu, Nucl. Phys. A362 (1981) 317; F. Myhrer and Z. Xu, Phys. Lett. B108 (1982) 372
[27] R.L. Jaffe, Phys. Rev. D21 (1980) 3215
[28] R.H. Dalitz and A. Deloff, J. Phys. G17 (1991) 289
[29] O. Dumbrajs et al., Nucl. Phys. B216 (1983) 277
[30] M.M. Nagels et al., Nucl. Phys. B109 (1976) 1
[31] R.A. Adelseck and B. Saghai, Phys. Rev. C42 (1990) 108
[32] C. Gobbi, D.O. Riska and N.N. Scoccola, "Strong and Electromagnetic Meson Hyperon Couplings in the Bound State Soliton Model", to appear in Nucl. Phys. A
[33] Y.S. Zhong, A.W. Thomas, B.K. Jennings and R.C. Barrett, Phys. Lett. B171 (1986) 471; Phys. Rev. D38 (1988) 837
[34] J. Kunz, P.J. Mulders and G.A. Miller, Phys. Lett. B255 (1991) 11
[35] G.E. Brown and M. Rho, Phys. Lett. B82 (1979) 177
[36] U. Blom, K. Dannbom and D.O. Riska, Nucl. Phys. A493 (1989) 384; K. Dannbom, E. Nyman and D.O. Riska, Phys. Lett. B227 (1989) 291; N.N. Scoccola, Phys. Lett. B236 (1990) 245
[37] Y. Umino, in preparation
[38] H. Burkhardt and J. Lowe, Phys. Rev. C44 (1991) 607
[39] D.A. Whitehouse et al., Phys. Rev. Lett. 63 (1989) 1352
[40] J. Lowe, private communication
[41] K. Tanaka and A. Suzuki, Phys. Rev. C45 (1992) 2068
| [] |
[
"Chapter 1. Introducing Practicable Learning Analytics",
"Chapter 1. Introducing Practicable Learning Analytics"
] | [
"Olga Viberg [email protected] \nKTH Royal Institute of Technology Lindstedsvägen 3\n10044StockholmSweden\n",
"Åke Grönlund [email protected] \nSchool of Business\nÖrebro University\nSweden Fakultetsgatan 170182Örebro\n"
] | [
"KTH Royal Institute of Technology Lindstedsvägen 3\n10044StockholmSweden",
"School of Business\nÖrebro University\nSweden Fakultetsgatan 170182Örebro"
] | [] | Learning analytics have been argued as a key enabler to improving student learning at scale. Yet, despite considerable efforts by the learning analytics community across the world over the past decade, the evidence to support that claim is hitherto scarce, as is the demand from educators to adopt it into their practice. We introduce the concept of practicable learning analytics to illuminate what learning analytics may look like from the perspective of practice, and how this practice can be incorporated in learning analytics designs so as to make them more attractive for practitioners. As a framework for systematic analysis of the practice in which learning analytics tools and methods are to be employed, we use the concept of Information Systems Artifact (ISA) which comprises three interrelated subsystems: the informational, the social and the technological artefacts. The ISA approach entails systemic thinking which is necessary for discussing data-driven decision making in the context of educational systems, practices, and situations. The ten chapters in this book are presented and reflected upon from the ISA perspective, clarifying that detailed attention to the social artefact is critical to the design of practicable learning analytics. | 10.48550/arxiv.2301.13043 | [
"https://export.arxiv.org/pdf/2301.13043v1.pdf"
] | 256,389,944 | 2301.13043 | 6156fd3b4d9313232ef4444108e61d594d08daaa |
Chapter 1. Introducing Practicable Learning Analytics
Olga Viberg [email protected]
KTH Royal Institute of Technology Lindstedsvägen 3
10044StockholmSweden
Åke Grönlund [email protected]
School of Business
Örebro University
Sweden Fakultetsgatan 170182Örebro
Chapter 1. Introducing Practicable Learning Analytics
To be cited: Viberg, O., & Grönlund, Å. (a pre-print). Introducing Practicable Learning Analytics. In Viberg, O., & Grönlund, Å., Practicable Learning Analytics. Springer.

Keywords: Learning analytics; Practicable; Information systems artefact; Impact
Learning analytics have been argued as a key enabler to improving student learning at scale. Yet, despite considerable efforts by the learning analytics community across the world over the past decade, the evidence to support that claim is hitherto scarce, as is the demand from educators to adopt it into their practice. We introduce the concept of practicable learning analytics to illuminate what learning analytics may look like from the perspective of practice, and how this practice can be incorporated in learning analytics designs so as to make them more attractive for practitioners. As a framework for systematic analysis of the practice in which learning analytics tools and methods are to be employed, we use the concept of Information Systems Artifact (ISA) which comprises three interrelated subsystems: the informational, the social and the technological artefacts. The ISA approach entails systemic thinking which is necessary for discussing data-driven decision making in the context of educational systems, practices, and situations. The ten chapters in this book are presented and reflected upon from the ISA perspective, clarifying that detailed attention to the social artefact is critical to the design of practicable learning analytics.
Introduction
This book is about practicable learning analytics. So, let us begin by defining what we mean by learning analytics and by practicable. Learning analytics has over the last ten years become an established field of inquiry with a growing community of researchers and practitioners (Lang et al. 2022). It has been suggested as one of the learning technologies and practices that will significantly impact the future of teaching and learning (Pelletier et al. 2021). It is argued to be able to improve learning practice by transforming the ways we support learning and teaching (Viberg et al. 2018). Learning analytics has been defined in several ways (Drachsler and Kalz 2016; Rubel and Jones 2016; Xing et al. 2015). A widely employed and accepted definition explains it as the "measurement, collection, analysis and reporting of data about learners and their contexts, for the purposes of understanding and optimizing learning and the environments in which it occurs" (Long and Siemens 2011, p. 34).
In order to recognise the complex nature of the learning analytics field, its related opportunities and corresponding challenges, researchers have stressed a need to further define and clarify what "kinds of improvement [in education] we seek to make, the most productive paths towards them, and to start to generate compelling evidence of the positive changes possible through learning analytics" (Lang et al. 2022, p. 14). Such evidence has so far been scarce and, to the extent it exists, it is often limited in scale (e.g., Ferguson and Clow 2017; Ifenthaler et al. 2021; Gašević et al. 2022). What does exist is predominantly found in higher education settings (e.g., Viberg et al. 2018; Wong and Li 2020; Ifenthaler et al. 2021); in K-12 settings, learning analytics research efforts have hitherto been limited (see e.g., de Sousa et al. 2021). If learning analytics can deliver on its promises, K-12 is arguably an even more important practice to improve, as it concerns many more students and is more critical to society as it serves to educate the whole population, which makes it an even more complex field of activity.
In all educational contexts, there is a need to deliver on the promises of learning analytics and translate the unrealised potential into practice for improved learning at scale. But clearly learning analytics cannot simplistically be "put into practice"; it has to be adopted into practice by practitioners who see a need for it and practical ways of using it. It has to be practicable.
Practicable suggests that something is "able to be done" or "put into action" or practised "successfully" (Cambridge Dictionary 2022; Oxford Learner's Dictionary 2022). This raises some questions: What exactly is that 'something' in learning analytics? Who is going to put it into practice? What practices are learning analytics aiming to improve? And how can we distinguish between what is more or less practicable? Would it not be good to have a theory for that, rather than just focusing on different aspects of learning analytics examinations, such as self-regulated learning (e.g., Montgomery et al. 2019; Viberg et al. 2020), collaborative learning, or social learning (e.g., Kaliisa et al. 2022)? While these diverse learning analytics efforts are both interesting and meaningful to support, it is worthwhile to look at learning and teaching in a more systemic way, looking beyond isolated activities and considering them as a whole system orchestrated for students' learning. Education is composed of many activities conducted by both students and teachers, and affected by environmental factors. The latter include many factors ranging from physical, like light and noise in the classroom, to social, like class sizes and composition and attitudes to learning in the home. Changes in one of those activities or factors may affect the others and may hence have consequences for the learning outcomes. It is not necessarily the case that focusing specifically on improving one factor leads to overall improvement of the system as a whole.
For example, Zhu, analysing data from the Programme for International Student Assessment (PISA), showed that reading literacy was significantly more important than mathematics for achievements in science (Zhu 2021); reading literacy also directly influenced students' mathematics achievements. Similarly, in a quasi-experimental study, Agélii Genlott and Grönlund (2016) introduced an ICT-supported method for improving literacy training in primary school and found that not only students' literacy achievements but also those in mathematics improved significantly, as measured by the national standard tests.
Such findings suggest that there are complex relations involved in learning; if you want to improve students' skills in mathematics and science, improving literacy training may be a good way to go. It certainly appears to be a bad idea to reduce literacy training to increase the time spent on mathematics training. So let us consider education practices from a systemic perspective.
A systemic perspective on education practices
Making learning analytics come into use in everyday teaching and learning activities at scale requires the tools and methods used to fit with the educational environments in which they are to be employed. However, educational systems and activities are manifold and diverse, and even a brief analysis shows a great variety of situations and undertakings, as well as several stakeholders who may have different interests in learning analytics.
Stakeholders. Students and teachers are the most frequently focused stakeholders in the learning analytics literature (e.g., Drachsler and Greller 2012; Gašević et al. 2022; Gray et al. 2022), but educational leaders and school administrations are also involved and, in particular for younger students, parents have an interest and take some part. These stakeholders play different roles and do not necessarily share the same view of what should be done in an educational institution and how to do that. While teachers and students take the keenest interest in the actual learning and teaching activities, parents, institutional leaders and school administrations are typically more interested in the results, often in the form of grades. Stakeholders can also include educational technology companies (e.g., learning management systems providers) bringing a commercial interest, and also researchers acting in the field. In sum, there are many stakeholders who may have quite different needs and interests in learning analytics (e.g., Sun et al. 2019), and this needs to be carefully considered when planning any learning analytics undertaking. It is easy to see that several conflicts between the interests of different stakeholders may come up. For example, Wise, Sarmiento and Boothe (2021) note that student and teacher stakeholders often fear that learning analytics systems are less about improving education and more about serving surveillance needs of the administration. They use the concept of "subversive learning analytics" to discuss the need to take a critical stance in order to disclose hidden assumptions built into technology designs.
Situations. Teaching and learning situations are quite different in school (especially primary and secondary) than at the university. Furthermore, learning frequently takes place with no teacher present and outside of school or scheduled classes at the university. The amount of individual student work and the responsibility of students to study independently increases as students get older, but it is also influenced by the number of teachers available, goals of educational programs, pedagogical approaches as well as educational and cultural contexts. Different study subjects require or entail certain activities, which may involve practical operations, movement, communication, testing, group work, and more. Some involve learning specific concepts, some involve understanding of systems, structures, logical reasoning, causes and effects in physical, social, or psychological matters, or all of these in combination.
In an average week, a student meets several teachers, several topics, and several situations. But common to them all is that there is some information to be handled, and this takes place in a social context. As for the information, it is not only content, it also has a form. It is typically written, audio or visual, but it may also be haptic or even tacit, such as when, for example, social behavioural norms are communicated by actions or non-actions. In an educational context, information must be presented in a form that is conducive to learning.
Introducing new technology, such as a novel learning analytics system, into an educational setting means changing both the situations and the information, and one cannot be changed without changing the other. For example, changing from reading a textbook to listening to the teacher means you have to stop listening to music on your headphones. This means that technology can also be seen as an actor in the social situation as it affects the conditions for student learning in several ways: in some situations, leading to improved learning but in others resulting in negative learning outcomes. That is, we cannot expect any new learning analytics tool introduced in a selected educational context to influence student learning directly and positively (as anticipated by designers); it changes the conditions in which learning activities occur, but the actual effect depends both on the technology and the situation, and it can be positive or negative. Often it is both; some of the anticipated positive effects may occur but also some "unintended consequence" that may be negative. The better we understand the situation before we intervene, the more likely we will design technology that has positive effects and no, or minimal, negative ones.
For at least fifty years, the discipline of information systems has been concerned with the introduction of information technology into people's work situations, that is, changing the social and informational situation of work. Pioneering in this regard was the Tavistock Institute in London, where the concept of sociotechnical systems was coined (Emery and Trist 1960). Sociotechnical systems analysis and design was developed in the field of information systems design in the 1970s and onwards, pioneered by the Manchester Business School, where Enid Mumford was a leading figure in the field of information systems, for example by developing the human-centred systems design method ETHICS (Effective Technical and Human Implementation of Computer Systems) (Mumford and Weir 1979).
The sociotechnical approach has since seen many developments, with many new models and methods for analysis and design. The areas of work affected by digitalisation of tools and processes have multiplied -and education is among the most recent to be explored, decades after office work. An increasing number of theories have also come to use for analysing the relations between people and technology -and between people, organisations and technology. Examples include critical learning analytics, critical race theory, speculative design and -still going strong! -sociotechnical systems.
The "Information System Artefact" in learning analytics
The research field of Learning Analytics is situated at the intersection of Learning, Analytics and Human-Centred Design (SOLAR 2021). "Learning" includes (at least) educational research, learning and assessment sciences, and educational technology; "analytics" comprises, e.g., statistics, visualisation, computer/data sciences and artificial intelligence (but also qualitative analyses, such as critical analysis); and "human-centred design" is concerned with issues like usability, participatory design and sociotechnical systems thinking (SOLAR 2021). All these aspects are critical to the successful implementation of learning analytics and require a carefully considered approach to not only measure, but also better explain, the targeted learning or teaching activities or processes.
The disciplines of informatics (often named information systems) and computer science both share the interest in information technology artefacts, but informatics is distinguished by its focus on the user, which is in line with recent efforts on human-centred learning analytics (e.g., Buckingham Shum et al. 2019; Ochoa and Wise 2021). Who are the users of these technologies? What do they do? And how can technology help them do better? The object of study is people and technology together, and the concept of "information system" is typically defined as "a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information" (Piccoli et al. 2018, p. 28).
A theoretical expression of that interest in users and use contexts is the notion of the Information System Artefact (ISA), as distinct from the information technology artefact (Lee et al. 2015). The ISA is "a system, itself comprising of three subsystems that are (1) a technology artefact, (2) an information artefact and (3) a social artefact, where the whole (the ISA) is greater than the sum of its parts (the three constituent artefacts as subsystems), where the information technology artefact (if one exists at all) does not necessarily predominate in considerations of design and where the ISA itself is something that people create" (i.e. an 'artefact'; Lee et al. 2015, p.6). The three sub-artefacts are interrelated and interdependent, which means that 'improving' one of the artefacts (in the literature, typically the technical, e.g., a learning analytics service) may in fact lead to a deterioration of the ISA. What is considered an improvement in any subsystem is only that which contributes to improving the whole, the ISA.
To make an LA system 'practicable' in our terms means understanding how it enhances the ISA as a whole in the targeted educational setting. The ISA should be understood as an object to be designed. Creating and implementing a learning analytics system means designing a technical, a social and an information artefact in such a way that they interact well to improve the overall ISA, ultimately leading to improved student learning. This argument echoes the earlier call for a more systemic approach to learning analytics (Ferguson et al. 2014; Gašević et al. 2019). Lee et al. (2015) define the components of the ISA, the three sub-artefacts, in the following way:
The technology artefact: "a human-created tool whose raison d'être is to be used to solve a problem, achieve a goal or serve a purpose that is human defined, human perceived or human felt" (p. 8). In the learning analytics setting, it could be different tools such as learning dashboards (see e.g., Susnjak et al. 2022) or other tools aimed at, for example, supporting students' self-regulated learning (for an overview, see Perez-Alvarez et al. 2022), formative feedback on academic writing (e.g., Knight et al. 2020) or collaborative peer feedback (e.g., Er et al. 2021).
The information artefact: "an instantiation of information, where the instantiation occurs through a human act either directly (as could happen through a person's verbal or written statement of a fact) or indirectly (as could happen through a person's running of a computer program to produce a quarterly report)" (p.8). The role of the information artefact in an educational setting can be to "form meaning", i.e., learn something, but it can also be other things, such as process information (like a calculator) or serve as a structure for information exchange (e.g., the alphabet).
The information artefact, hence, includes all the information that is present in a learning situation (in the case of learning analytics). Some of this information is subject to learning (the subject content), some is contextual (e.g., what concerns work methods). Introducing a technology artefact in an existing learning situation changes the information artefact insomuch as some new information may be added and some already existing information may appear in a different form (e.g., digital instead of physical or presented in a different digital format) or become available to students by different methods. This means any new learning analytics tool (a technology artefact) will in some way affect the information artefact of an educational context.
The social artefact "consists of, or incorporates, relationships or interactions between or among individuals through which an individual attempts to solve one of his or her problems, achieve one of his or her goals or serve one of his or her purposes" (p. 9). Social here means not just specific situations, like when a number of people meet and communicate, but also established, persistent relations such as institutions, roles, cultures, laws, policies and kinship.
In a simple way, the social artefact can be thought of as 'the classroom'. In a physical classroom, there are people with relations: professional and social. Professional relations concern the formal and technical part of teacher-student interaction (the teaching and learning activities), which is partly a function of the way it is organised as concerns rules of conduct, time allocation, physical environment, class size, examination forms, and more. Social relations concern students' relations to each other, but also students' relation to schoolwork -which ranges from very positive and uncomplicated to very negative and complicated -and the nature of the student-teacher communication, which is very much dependent on the personalities of the people involved.
The social artefact is much affected by changes in both the technology and the information ones. For example, when a new technology artefact is introduced in the classroom (the social artefact), it may mean that information that previously was physically available (e.g., a paper textbook or a teacher writing on a whiteboard) becomes part of the technology artefact and accessed and manipulable in new ways, the teacher-student communication changes. Teachers may have to spend time explaining to students how to handle the new tool, or students have to explain to teachers how they use them. Teachers may be less able to inspect students' work as it no longer is visible in the same way as previously when they could overview the work of an entire class in a moment. A 'social inspection' available by physical means -looking around in the classroom and then observing both individual work and social contacts -is to some extent replaced by an individual one available only through technical means (to the extent that the learning analytics application allows for that). Taken together, this means a change in the social artefact reducing the amount of physical communication and increasing the amount of technology-mediated communication. To what extent the quality of the social artefact is increased or reduced is subject to analysis, which is often not straight-forward. Using the ISA model, different stakeholders' views of, and relation to, learning analytics systems, the information they use and produce, and the role they could play in teaching and learning environments can be more clearly identified and analysed. Teaching and learning are complex phenomena taking place in (different) social contexts, and the ISA model provides an analytical framework that includes those contexts.
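Since what counts as an 'improvement' in any one subsystem is only that which improves the whole, it can help to make the model's structure explicit. The following Python sketch is our illustration only (Lee et al. 2015 propose no such formalisation): it represents the three sub-artefacts and, as a deliberately crude modelling assumption, scores the whole ISA by its weakest sub-artefact, so that a polished technology artefact cannot compensate for a neglected social one. All names and the scoring rule are assumptions made for illustration.

```python
# Illustrative model of the Information System Artefact (ISA) with its three
# interdependent sub-artefacts. The 0..1 "quality" scores and the min() rule
# are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class SubArtefact:
    name: str         # "technology", "information" or "social"
    description: str
    quality: float    # hypothetical assessment of fit to practice, 0..1


@dataclass
class InformationSystemArtefact:
    technology: SubArtefact
    information: SubArtefact
    social: SubArtefact

    def overall_quality(self) -> float:
        # The whole is more than the sum of its parts: a weak sub-artefact
        # drags the whole ISA down, so we score by the weakest link rather
        # than by the average of the parts.
        return min(self.technology.quality,
                   self.information.quality,
                   self.social.quality)


dashboard_isa = InformationSystemArtefact(
    technology=SubArtefact("technology", "learning analytics dashboard", 0.9),
    information=SubArtefact("information", "progress indicators for students", 0.8),
    social=SubArtefact("social", "teachers distrust how data will be used", 0.3),
)
print(dashboard_isa.overall_quality())  # 0.3: impracticable despite good tech
```

Whatever scoring rule one prefers, the point the sketch carries is the one made above: designing a learning analytics system means designing all three artefacts together.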
Overview of the chapters
This book includes ten chapters (in addition to this introductory chapter) that illustrate examples and aspects of practicable learning analytics efforts, and related opportunities and challenges, across three continents. Most concern higher education contexts. Whereas the first five chapters explicitly demonstrate institutional efforts to put learning analytics into practice at scale, the other five illustrate relevant efforts focusing on various aspects that are important to putting learning analytics into teaching and learning practice effectively.
In Chapter 2, Buckingham Shum (2023) presents and critically reflects on the efforts of an Australian public university to design, pilot and evaluate learning analytics tools over the last decade. These efforts are summarised as conversations in the Boardroom, the Staff Room, the Server Room and the Classroom, reflecting the different levels of influence, partnership and adaptation necessary to introduce and sustain novel technologies in the complex system that constitutes any educational institution.
In Chapter 3, Rienties et al. (2023) demonstrate how the (UK) Open University's Learning Design Initiative (OULDI) has been adopted and refined in a range of institutions to fit local and specific needs across three European projects, involving practitioners from nine countries. This chapter stresses that applying and translating the OULDI and learning analytics to other institutions and across borders "is not a merely copy-paste job", since it requires a number of adaptations at different implementation levels, highlighting the importance of considering the targeted context. These required adaptations have been 'translated' into and presented as the Balanced Design Planning approach in the context of the University of Zagreb (Croatia).
In Chapter 4, De Laet (2023) illustrates two cases of learning analytics implementations at the institutional level in the context of Belgian higher education. The first case reflects an institutional path of bringing learning analytics to advising practice, and the second one presents the ongoing institutional efforts of bringing predictive analytics to advising practice, an approach building on explainable artificial intelligence to uncover the existing black-box predictions.
Chapter 5 presents the project "Learning Analytics -Students in Focus" in the context of another European university, Graz University of Technology (TU Graz). Through the lens of the human-centred learning analytics approach, Barreiros et al. (2023) illustrate the iterative design, analysis, implementation and evaluation processes of the three learning analytics tools (i.e., the planner, the activity graph, and the learning diary), all contained in the student-facing dashboard.
In Chapter 6, Hilliger and Pérez-Sanagustín (2023) introduce the LALA CANVAS: a conceptual model to support a participatory approach to learning analytics adoption in higher education. The model has been employed across four Latin American universities affiliated with the LALA (Building Capacity to Use Learning Analytics to Improve Higher Education in Latin America) project. The LALA CANVAS model is argued to be a useful model to formulate change strategies in higher education settings where the adoption of learning analytics is still at an early stage.
In Chapter 7, Järvelä et al. (2023) present their recent empirical progress on metacognitive awareness and participation in cognitive and socioemotional interaction to support the adaptive collaborative learning process. In particular, the authors present how learning process data and multimodal learning analytics can be used to uncover the regulation in computer-supported collaborative learning settings. They also provide a set of practical implications to assist students in collaborative learning activities.
In Chapter 8, Kizilcec and Davis (2023) introduce the current state of learning analytics education across the globe. This chapter contributes to practicable learning analytics by providing evidence on the status quo of teaching and learning analytics with a comprehensive review of current learning analytics programs, topics and pedagogies focused. This is followed by an in-depth case study of a learning analytics course offered to the students at Cornell University. Finally, a set of actionable guidelines for the community to consider when designing learning analytics courses is offered.
In Chapter 9, Glassey and Bälter (2023) present novel student data produced by learnersourcing. The aim is to marry learnersourcing efforts with learning analytics in terms of the types of novel learning data that are produced. The chapter provides a background to the emergence of learnersourcing as a topic, and a taxonomy of the types of learnersourcing data and their supporting systems that increasingly make learnersourcing practicable for learning analytics. They also discuss challenges for using such data for learning analytics, for example as concerns data quality.
In Chapter 10, Viberg et al. (2023) argue for the importance of addressing cultural values when designing and implementing learning analytics services across countries. Viewing culture from a value-sensitive perspective, this chapter exemplifies two selected values (privacy and autonomy) that might play an important role in the design of learning analytics systems and discusses opportunities for culture-and valuesensitive design methods that can guide the design of culturally aware learning analytics systems. A set of design implications for culturally aware and value-sensitive learning analytics services is offered at the end.
Finally, in Chapter 11, Mavroudi (2023) reflects on the challenges associated with the ethical use of learning analytics in higher education, and how different selected policy frameworks address these challenges. It concludes with a list of practical recommendations on how to counteract specific challenges that might originate in the nature of learning analytics.
The chapters in context
Looking at the chapters in the book from the perspective of the ISA model, we find that most of them concern changes in the social artefact. In plain words, that means changes in the way education is conducted. Education is somebody's work -teachers' and students'. Changing somebody's work from the outside -such as when introducing a learning analytics tool or system -will inevitably meet resistance unless it is clear to the people working in education that there is not something negative in it for them. The starting point is often a suspicion that there is -most professions tend to believe that they are the ones who best understand how to do their job, so if someone from outside demands a change, professionals tend to suspect that there is another agenda at play.
For changes to be positively received, there should also be something positive in it for them. Even if positive effects for teachers and students can be expected, they can be very hard to argue for in a convincing way, as they may be difficult to measure and often appear only later, while there is always more work upfront when new systems are introduced.
The changes presented in the chapters in this book always concern the social artefact: changes in teachers' and students' daily work environment. Sometimes those changes are effects of changes in the other artefacts, the technical or the informational. Other times, changes in the social artefact motivate changes in one or both of the others. In all cases, changes in one artefact entail changes in another, and these changes are not always foreseen or planned for. In plain words, intended changes often lead to unintended consequences.
In the highly pragmatic Chapter 2, Buckingham Shum (2023) describes the entire setting in which learning analytics is to be implemented in terms of different "rooms". These rooms, which contain different stakeholders, correspond quite directly to the different artefacts within the ISA, and the chapter clearly describes the differences, and potential conflicts, between the different rooms. The Staff Room, the Classroom, and the Boardroom concern the social and information artefacts and focus on the required engagement of the different stakeholders involved -university senior leadership, tutors, academics, students and teachers -with learning analytics. But the social artefacts in the different rooms are different, representing different stakeholders' views and needs. In the Staff Room and the Classroom, there are teachers who engage with students and their work, and with the knowledge content of their courses, and who want information that can help them with that. In the Boardroom, university leadership is working in a business environment where the interest is in information about performance on university strategic priorities and how to improve return on investment in production. The learning analytics entrepreneur must engage both these audiences, but the way to do it differs, as each room has different requirements on the information artefact. Information about teaching and learning, pedagogical issues and students' learning processes is of interest to teachers, but in the Boardroom there are rather requirements for information about production costs and results, including, for example, process effectiveness and efficiency, and performance of teachers. Not only is such information not interesting to teachers, it may even be discouraging to find that their own performance is monitored through the new system. The Server Room concerns the engagement with the information technology services, that is, the technology artefact, which is also critical to the success of any learning analytics implementation. Here, one important interest is how a new learning analytics system fits in with the existing ecosystem of applications with which it needs to interact. This is not just a technical issue; the degree of integration among technical systems directly affects students' and teachers' work in the classroom.
Chapter 3, in presenting the new approach to learning analytics to fit local institutional needs across several European institutions, stresses the importance of the information and social artefacts, but also their situational nature. Both the information handling and the social setup in which the system was to be used were the areas where most adaptations had to be made, to fit the way education and administration were organised and conducted in different places due to regulation and practices at both national and local (university) level. These regulations were implemented in work instructions and practices of administrators, managers, and professionals, and in technical systems, which together formed a very firm social infrastructure to which any new work process must adapt. While less adaptation was needed for the technology artefact in the case presented, this, too, needs attention, as there has to be a sufficiently mature technical infrastructure in an organisation to be able to implement any learning analytics system.
Chapter 4 reflects on the interrelations between the three ISA sub-artefacts when presenting the scale-up process of the advising dashboard. The impetus to change came from an improved technology artefact aimed at improving the information artefact, that is, leading to better information handling and hence more effective work processes, specifically by supporting the dialogue between academic advisors and students. The change process involved several challenges related to the social artefact, including "overcoming resistance to change, alignment with educational values of the higher education institute, and tailoring to the particular context". Interestingly -and in contrast to similar efforts previously reported in the literature -the project was successful in terms of improving the social artefact: it resulted in the academic advisors (the key system users) feeling that the system made them better equipped to conduct a constructive and "more personal" dialogue with students. The author attributes this success to two main factors. First, the system did not include any prescriptive or predictive components, which are often found to be sources of resistance because they interfere uninvitedly in people's work (negatively affecting the social artefact, in ISA terms). Second, the implementation project took a bottom-up approach with the goal of supporting the advising dialogue, and the professionals were included in an iterative user-centred design process, hence giving them an element of ownership and control of the new system.
Chapter 5 discusses a human-centred approach to LA design, which means the point of departure is the social artefact; the aim of a human-centred approach is to design work processes, work organisation, and technical systems to fit people. The chapter describes a project where use cases were first constructed. This was done by defining students' personas and descriptions of several scenarios illustrating when and with what intent the students may use the learning analytics dashboard to acquire or develop self-regulated skills, and how they might act to achieve a goal using the dashboard. Based on a selection of these scenarios, the project went on to produce design solutions, which were then moved forward to prototypes for testing with the intended users. In terms of the ISA, this means designing the entire ISA artefact using the social artefact (the scenarios) as the reference and as a test for the quality of the other two sub-artefacts. The prototypes represent the information and the technology artefacts. They were based on the scenarios; the information artefact concerned selecting which information to include and how to organise it to meet user needs, and the technology artefact concerned implementing the user interface to that information in such a way that it provides adequate support to their use processes. This shows a mutual dependence among the sub-artefacts. The social artefact informed the design of the information and technology artefacts, but the latter two also informed the design of the social artefact; during the design process, the prototypes were used to make the scenarios more concrete to users.
Similar to the previous chapter, Chapter 6 also starts from an interest in the social artefact. The contribution here is a conceptual model to support a participatory approach to learning analytics adoption in higher education; that is, a way to understand the social environment in which learning analytics is to work by means of direct user participation. The challenge is to discuss learning analytics at an early stage of development, which means it is still a rather hypothetical concept to participants as there is little in the form of examples of proven practice to guide prospective users' expectations. The method for discussion is group discussions, and the aim is to understand what needs there might be in educational practice that learning analytics could draw upon so as to be useful to practitioners. The model proposed and tested is built on factors known to be important for successful implementation: political context, influential actors, desired behaviours, internal capabilities, change strategy, and indicators and instruments for assessment and evaluations.
In Chapter 7, again the social artefact is in focus, this time from a basic research perspective. The chapter studies group collaboration with the aim of supporting its regulation. Effective collaborative learning requires group members to ensure that they work toward the shared goals, and in order to be able to regulate their work they need to reveal to each other when they become aware that their collaboration is not heading toward the shared goals. This regulation takes place not only by using words but also by social, visual cues of different kinds. The research studies multimodal data from group processes to identify "socially shared regulation episodes" (Järvelä et al., 2023, p.X).
Chapter 8 notes that learning analytics education in higher education is conducted in different schools, including not only Education but also Computer Science, Information Science and Media Studies. This means that both students and teachers come from a wide variety of disciplinary backgrounds, and many will not have a background in educational environments. The authors caution against overly focusing on numbers and -in the spirit of the ISA, if not in the words -encourage educators not to forget the educational (social) environments where learning analytics are to be used (that is, the social artefact): "Before students are asked to conduct any analyses or learn a new programming language for data processing, it is critical that they first develop a strong foundational understanding of the field" (Kizilcec and Davis 2023, p.X). This understanding will help them select what (educational) problems to engage with.
Chapter 9 concerns "learnersourcing", where the basic idea is to have students do part of the grading of each other's work by means of a (teacher-organised) peer-review process. This constitutes a major change in the way education is set up, that is, the social artefact. It means the students must, in part, assume a role as evaluator, which is quite contrary to the traditional role where they (individually or in cooperation) submit work for evaluation to another stakeholder in the setting, the teachers. It also means the teachers back off a little from the evaluation process by delegating parts of it to students. The main driver behind the change is to save teachers' time by letting students do some of the information processing required for assessment of student work. In terms of the ISA, this means rearranging the information artefacts, and this change has considerable effects on the social artefact. This change redistributes some workload/information processing, but also changes the roles of stakeholders. It forces students to view their assignments from the perspective of teachers and stated quality criteria. It also changes the role of the teacher, who becomes less of a direct actor and more of a "learning manager" overviewing a learning system (of students working in a digital tool) and intervening only as necessary.
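As a concrete illustration of the information processing being redistributed, here is a minimal, hypothetical sketch of a teacher-organised peer-review assignment (our example; the chapter prescribes no particular scheme). Each student's submission is routed to k classmates in round-robin fashion, so nobody reviews their own work and every submission receives exactly k reviews; all names and parameters are made up.

```python
# Hypothetical round-robin peer-review assignment for learnersourced grading.
def assign_peer_reviews(students, k=2):
    """Return a dict mapping each reviewer to the k authors whose work they
    review: student i reviews students i+1 .. i+k (mod n), so self-review is
    impossible and each submission gets exactly k reviews."""
    n = len(students)
    if not 0 < k < n:
        raise ValueError("k must be at least 1 and smaller than the class size")
    return {
        students[i]: [students[(i + j) % n] for j in range(1, k + 1)]
        for i in range(n)
    }


assignments = assign_peer_reviews(["Ada", "Ben", "Cara", "Dev"], k=2)
for reviewer, authors in assignments.items():
    print(reviewer, "reviews:", ", ".join(authors))
# Ada reviews: Ben, Cara
# Ben reviews: Cara, Dev
# Cara reviews: Dev, Ada
# Dev reviews: Ada, Ben
```

In a scheme like this, the teacher's grading load is replaced by moderation of the students' reviews, which is exactly the role shift from direct actor to "learning manager" described above.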
Chapter 10 discusses how cultural values can be critical to learning analytics use, and how to make learning analytics design and related examinations "culturally aware" and "value-sensitive". While culture is a concept that eludes strict definition, there are several cultural values that may strongly influence the social environment (the social artefact) that can be more clearly defined and that are valued differently in different countries. The chapter discusses two out of a set of such culturally significant values -privacy and autonomy -and discusses how design methods can take values into consideration.
Chapter 11 seeks to contribute to the discussion on the ethical usage (e.g., as concerns transparency, privacy, access) of learning analytics in higher education by examining the main theoretical concepts in the field against respective policies or codes of LA ethics at several selected universities in three countries.
This discussion directly concerns the information and the technology artefacts (how data about individuals is handled in a digital environment), but it more fundamentally concerns the social artefact, as ethics is basically a social contract. The key to using data on individuals is consent by the individuals themselves. The legal regulation provides a -very strict -framework, but as many situations require data that is more or less personal and sensitive, consent is the method used to be able to retrieve and manipulate such data. In online shopping and social media, explicit consent is needed -"I agree to allow cookies" -but in education, there is a social contract between teachers and students that teachers can use some student data for the purpose of being able to teach them. Some of that data may be sensitive, like students' medical diagnoses and other personal characteristics, personal background, and views, which may affect learning and require special teaching methods. The condition to use such data is discretion; it is only for use in teaching situations. This condition is typically implicit: it is not expressed in personal social contracts but comes with the definitions and practices of the educational environments and professions (that is, social contracts at national level). Hence, it differs across countries. Physical educational environments make it easy to meet the contract terms, as each teacher is in control of the data.
LA changes this, as much data that may be sensitive is handled digitally, and the ways in which this is done are not only beyond the control of teachers and students, but also often opaque and difficult for them to learn about.
This means that the policies of educational institutions become important. This chapter discusses higher education, but the issues discussed are arguably even more important to K-12 education as it concerns more students, younger students (and therefore also involves their parents) and generally a more diverse population.
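The purpose-bound nature of this implicit contract can be made concrete in code. The sketch below is a hypothetical illustration only (it describes no institution's actual policy or system): sensitive records are released solely to a teaching role and solely for a teaching purpose, which is precisely the discretion condition that digital handling makes harder to guarantee. Roles, purposes and the policy rule are all illustrative assumptions.

```python
# Hypothetical sketch of purpose-bound access to student data, encoding the
# "discretion" condition discussed above. Not a real policy or API.
from dataclasses import dataclass


@dataclass
class StudentRecord:
    student_id: str
    sensitive: bool  # e.g. medical diagnoses, personal background, views


def may_access(record: StudentRecord, role: str, purpose: str) -> bool:
    """Non-sensitive data follows ordinary institutional rules; sensitive
    data is released only to teachers, and only for the purpose of teaching."""
    if not record.sensitive:
        return role in {"teacher", "advisor", "administrator"}
    return role == "teacher" and purpose == "teaching"


diagnosis = StudentRecord("s-042", sensitive=True)
print(may_access(diagnosis, role="teacher", purpose="teaching"))         # True
print(may_access(diagnosis, role="administrator", purpose="reporting"))  # False
```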
Conclusion
The chapters in this book together bring up many issues pertinent to making learning analytics more practicable. They all focus on specific issues or practices and use different theoretical perspectives, but for the purpose of discussing the overall perspective of 'practicability', we have provided an overview of the problem of making learning analytics practicable by using the concept of the Information System Artefact (ISA). The ISA consists of three integrated and mutually dependent sub-artefacts: social, technical, and informational. In the brief analysis of the chapters in the previous section, we provide glimpses of how the three sub-artefacts relate to each other in the different educational situations or aspects of learning that each chapter discusses. Throughout the chapters, it is clear that the social artefact is the most fundamental for practicability. Any substantive changes in information handling -content, process, format, technology used -will affect the social educational situations, and to be effective -or at all used -they will have to be understood and accepted by the practitioners involved. This is not to say that the social artefact -the way in which education is conducted -cannot or should not change. Quite to the contrary, practitioners -students as well as teachers -experience many problems or deficiencies in the way education is conducted and are likely to welcome changes, just as they have already done concerning the use of various other technologies. But the welcoming is contingent on them anticipating, and ultimately experiencing, benefits to their teaching and students' learning. Therefore, an important key to successful large-scale implementation of learning analytics is the way teachers and students are approached. What is not practicable is not likely to be used.
Acknowledgements

Many thanks to Simon Buckingham Shum, who offered constructive feedback on the draft of this chapter, and to all the authors and reviewers who have contributed to this book.
| [] |
[
"III-V quantum light source and cavity-QED on Silicon",
"III-V quantum light source and cavity-QED on Silicon"
] | [
"I J Luxmoore \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n\nCollege of Engineering, Mathematics and Physical Sciences\nUniversity of Exeter\nEX4 4QFExeterUK\n",
"R Toro \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"O Del Pozo-Zamudio \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"N A Wasley \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"E A Chekhovich \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"A M Sanchez \nDepartment of Physics\nUniversity of Warwick\nCV4 7ALCoventryUK\n",
"R Beanland \nDepartment of Physics\nUniversity of Warwick\nCV4 7ALCoventryUK\n",
"A M Fox \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"M. SSkolnick \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n",
"H Y Liu \nDepartment of Electronic and Electrical Engineering\nUniversity College London\nWC1E 7JELondonUK\n",
"A I Tartakovskii \nDepartment of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK\n"
] | [
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"College of Engineering, Mathematics and Physical Sciences\nUniversity of Exeter\nEX4 4QFExeterUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Physics\nUniversity of Warwick\nCV4 7ALCoventryUK",
"Department of Physics\nUniversity of Warwick\nCV4 7ALCoventryUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK",
"Department of Electronic and Electrical Engineering\nUniversity College London\nWC1E 7JELondonUK",
"Department of Physics and Astronomy\nUniversity of Sheffield\nS3 7RHSheffieldUK"
] | [] | Non-classical light sources offer a myriad of possibilities in both fundamental science and commercial applications. Single photons are the most robust carriers of quantum information and can be exploited for linear optics quantum information processing. Scale-up requires miniaturisation of the waveguide circuit and multiple single photon sources. Silicon photonics, driven by the incentive of optical interconnects is a highly promising platform for the passive optical components, but integrated light sources are limited by silicon's indirect band-gap. III-V semiconductor quantum-dots, on the other hand, are proven quantum emitters. Here we demonstrate single-photon emission from quantum-dots coupled to photonic crystal nanocavities fabricated from III-V material grown directly on silicon substrates. The high quality of the III-V material and photonic structures is emphasized by observation of the strong-coupling regime. This work opens-up the advantages of silicon photonics to the integration and scale-up of solid-state quantum optical systems. SUBJECT AREAS: QUANTUM INFORMATION QUANTUM OPTICS QUANTUM DOTS PHOTONIC CRYSTALS | 10.1038/srep01239 | null | 14,288,977 | 1211.5254 | 3fe505e372ffc5d6743a2cbc6d2928a1fc657407 |
III-V quantum light source and cavity-QED on Silicon
Published 7 February 2013
I J Luxmoore
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
College of Engineering, Mathematics and Physical Sciences
University of Exeter
EX4 4QF, Exeter, UK
R Toro
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
O Del Pozo-Zamudio
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
N A Wasley
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
E A Chekhovich
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
A M Sanchez
Department of Physics
University of Warwick
CV4 7AL, Coventry, UK
R Beanland
Department of Physics
University of Warwick
CV4 7AL, Coventry, UK
A M Fox
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
M. S. Skolnick
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
H Y Liu
Department of Electronic and Electrical Engineering
University College London
WC1E 7JE, London, UK
A I Tartakovskii
Department of Physics and Astronomy
University of Sheffield
S3 7RH, Sheffield, UK
III-V quantum light source and cavity-QED on Silicon
Published 7 February 2013. DOI: 10.1038/srep01239. Received 14 December 2012; accepted 9 January 2013. Correspondence and requests for materials should be addressed to I.J.L. ([email protected]) or A.I.T. ([email protected]). * These authors contributed equally to this work.
Non-classical light sources offer a myriad of possibilities in both fundamental science and commercial applications. Single photons are the most robust carriers of quantum information and can be exploited for linear optics quantum information processing. Scale-up requires miniaturisation of the waveguide circuit and multiple single photon sources. Silicon photonics, driven by the incentive of optical interconnects, is a highly promising platform for the passive optical components, but integrated light sources are limited by silicon's indirect band-gap. III-V semiconductor quantum-dots, on the other hand, are proven quantum emitters. Here we demonstrate single-photon emission from quantum-dots coupled to photonic crystal nanocavities fabricated from III-V material grown directly on silicon substrates. The high quality of the III-V material and photonic structures is emphasized by observation of the strong-coupling regime. This work opens up the advantages of silicon photonics to the integration and scale-up of solid-state quantum optical systems.

SUBJECT AREAS: QUANTUM INFORMATION; QUANTUM OPTICS; QUANTUM DOTS; PHOTONIC CRYSTALS
Quantum states of light offer an enticing array of possibilities. For example, commercially available Quantum Key Distribution (QKD) relies on the transfer of quantum states to provide secure communication 1, and quantum lithography exploits highly-entangled photon states to define features below the Rayleigh diffraction limit 2. In addition, single photons can be used to encode quantum information, and sophisticated multi-qubit gates fabricated from silica waveguide circuits 3 have been successfully used to implement linear optical quantum computing 4. Increasing the complexity and computing power of these devices requires miniaturisation of the waveguides and multiple integrated single photon sources (SPS). In order to achieve this, Si photonics is a highly promising technological platform 5.
Incorporating photonic components onto a Si platform has been a powerful driver behind the development of Si photonics for the last twenty years, with a key motivation being the development of on-chip and chip-to-chip optical interconnects 6. However, the indirect band-gap of Si has severely restricted the development of light sources integrated directly with Si components. III-V semiconductor materials provide a mature technology with light emitting devices across a wide spectrum from the UV to the mid infra-red, but the direct growth on Si is difficult because of lattice mismatch. Despite these difficulties, there have been several demonstrations of light emitting devices integrated with Si substrates in recent years. One approach is to use a wafer bonding technique, whereby the III-V material is grown on a lattice matched III-V substrate in the conventional manner, before removal of the host substrate and transferral to a Si wafer (see, for example, the recent review by Roelkens et al. 7 and references therein). A second approach is the direct growth of III-V material on the Si substrate, where a strain relaxation layer is employed to overcome the lattice mismatch. Examples include GaN quantum well 8 and InGaAs/GaAs quantum-dot (QD) lasers monolithically grown on Si 9,10 and with Ge virtual substrates 11. A Ge virtual substrate was also used recently for GaAs QDs grown by droplet epitaxy 12. Integrating quantum light sources with Si can realise new circuit functionality as well as, in the long term, reduce the production costs of QD quantum light sources for commercial applications such as quantum key distribution 1.
InGaAs/GaAs QDs have been widely employed in non-classical light sources 13-15. Indistinguishable photon emission from two remote QDs 16,17 has opened up the possibility of transferring quantum information between remote solid-state systems, and QDs can emit polarization-entangled photons with near unity probability via the two-photon cascade of the biexciton state 14,15. QDs are also highly compatible with photonic structures, such as micropillars 15,18-20 and photonic crystal cavities 21-23, which can be exploited to maximise the collection efficiency 15, enhance the spontaneous emission rate 19,24 and enter the regime of strong light-matter coupling 18,20-23.
In this work, we demonstrate that high quality and low density InGaAs QDs can be grown directly on a Si substrate. Photonic crystal cavities are fabricated using this material and employed to enhance the single-photon emission rate and collection efficiency, thus demonstrating the potential for the integration of a high-efficiency, deterministic single-photon source with Si photonics. The high quality of the material has enabled fabrication of photonic crystal cavities with Q-factors exceeding 13,000. Furthermore, the strong-coupling regime of a QD and the optical field of a nanocavity is observed: characteristic anti-crossing behaviour with a Rabi splitting of 212 μeV is measured in photoluminescence (PL) measurements by tuning the sample temperature.
Results
Wafer growth and characterisation. The wafers studied in this work are grown using molecular beam epitaxy with the layer structure shown in Fig. 1(a) (see Methods for further details). A 1 μm layer of GaAs is grown monolithically on a silicon wafer. Due to the high lattice mismatch of 4.2%, threading dislocations nucleate at the interface and propagate through the GaAs layer, leaving surface defects. In order to reduce the density of these dislocations, a strained layer superlattice (SLS) is grown using 5 alternating layers of In0.15Al0.85As/GaAs (10 nm/10 nm), capped with 300 nm of GaAs and repeated 4 times. To obtain the very smooth surface required for low density QD growth and high quality photonic structures, a short period superlattice (SPL) consisting of 50 alternating layers of Al0.4Ga0.6As/GaAs (2 nm/2 nm) is then grown and capped by 300 nm GaAs 10. Following this, the 1 μm Al0.6Ga0.4As sacrificial layer, required for photonic crystal fabrication, is grown, and a 140 nm GaAs layer containing the InGaAs QDs at its centre completes the structure. The nominally InAs QDs are grown at 500 °C using a growth rate of 0.016 ML/s, with the QD emission energy controlled using the In-flush technique 25. The cross-sectional transmission electron microscope image shown in Fig. 1(d) highlights the effectiveness of the dislocation filter layers in reducing the defect density, which is measured using etch-pit density measurements to be ~6 × 10^6 cm^-2 in the GaAs layer directly above the SPL.
To assess the quality of the material we use atomic force microscopy (AFM) and PL spectroscopy (see Methods). Fig. 1(b) shows PL spectra recorded from different positions on the wafer and corresponding AFM images of an uncapped sample. The spectra are as expected for InGaAs/GaAs QDs, with a broad peak at 1.43 eV corresponding to emission from the QD wetting layer. Narrow spectral lines originating from charged and neutral exciton complexes within individual QDs are observed in the range of 1.3-1.4 eV and have resolution-limited linewidths of ~30 μeV. Due to a small variation in temperature across the 3 inch diameter wafer during growth, the QD density varies between 1 × 10^8 and 1.5 × 10^10 cm^-2, as illustrated by the PL measurements and AFM images shown in Fig. 1(b). QD-like emission is also observed at higher energy, in the range 1.75-1.85 eV, an example of which is shown in Fig. 1(c). PL measurements of samples etched to different depths reveal the origin of this emission to be the AlGaAs/GaAs superlattice. The formation and optical properties of QDs in this superlattice are currently the subject of further investigations.
Photonic crystal single photon source. To demonstrate the potential of the InGaAs QDs grown on Si as a single-photon source, photonic crystal cavities are fabricated using an area of the sample with a moderate QD density of 5-10 × 10^9 cm^-2 (see Methods). Fig. 2(a) shows a scanning electron microscope image of an L3-defect photonic crystal cavity. The maximum Q-factor measured in these devices is ~13,000. This is similar to the maximum value measured for devices fabricated with the same process but on GaAs substrates, suggesting that the optical quality of the GaAs is high, despite the Si substrate.
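As an illustration of how such a Q-factor is typically extracted, the following minimal sketch fits a Lorentzian line shape to a cavity-mode PL trace and takes Q = E0/FWHM. This is the standard procedure rather than the specific analysis code of this work, and the arrays energy and counts are synthetic placeholders, not measured data.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, A, E0, fwhm, bg):
    # Lorentzian line shape on a flat background
    return A * (fwhm / 2)**2 / ((E - E0)**2 + (fwhm / 2)**2) + bg

energy = np.linspace(1.3860, 1.3890, 300)               # placeholder energy axis (eV)
counts = lorentzian(energy, 1e3, 1.3875, 1.0e-4, 20.0)  # synthetic "spectrum"
counts += np.random.normal(0.0, 5.0, energy.size)

popt, _ = curve_fit(lorentzian, energy, counts, p0=(1e3, 1.3875, 1e-4, 0.0))
A, E0, fwhm, bg = popt
print(f"Q = E0/FWHM = {E0 / fwhm:.0f}")  # a FWHM of ~107 μeV at 1.39 eV corresponds to Q ~ 13,000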
To demonstrate single-photon emission, the photonic crystal cavity is employed to enhance the spontaneous emission rate and increase the extraction efficiency of the QD. In this case, we employ not the fundamental cavity mode, but one of the higher order modes which has greater out-of-plane leakage 26. We use the third lowest energy mode, M3, as indicated in Fig. 2(b), which has a Q-factor of ~250. The motivation for using this mode is two-fold: firstly, the large out-of-plane leakage means that the emission can be more efficiently collected into the microscope objective, and secondly, the low Q-factor greatly increases the likelihood of finding spectral overlap between a QD and the cavity resonance. The main drawback is that the low Q-factor restricts the maximum Purcell factor, F_p, that can be achieved; however, the small mode volume of the cavity means that a reasonably large F_p of up to ~30 can still be expected. At low excitation power, the cavity mode spectrum, shown in Fig. 2(c), reveals several bright lines corresponding to individual QDs in, and close to, resonance with the cavity mode.
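The quoted upper bound of F_p ~ 30 is consistent with the textbook Purcell expression F_p = (3/4π²)(λ/n)³(Q/V). The short estimate below assumes a mode volume of roughly 0.7 (λ/n)³, a value typical of L3-type cavities that is used here only as an illustrative assumption, not a number taken from this work.

import math

Q = 250        # Q-factor of mode M3, from the text
V_rel = 0.7    # mode volume in units of (lambda/n)^3 (assumption)
F_p = 3.0 / (4.0 * math.pi**2) * Q / V_rel
print(f"F_p ~ {F_p:.0f}")   # ~27, of the same order as the quoted ~30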
To investigate the coupling between the QDs and cavity mode, time-resolved PL measurements are performed as a function of emission energy, as presented in Fig. 2(d). From exponential fits to these data the QD lifetime is extracted and plotted in Fig. 2(e). Although the exact spatial position of each QD within the cavity is different, there is a clear reduction in the lifetime as the spectral detuning between the QD and mode decreases, with a minimum measured lifetime of 0.54 ns for the QD at 1.3875 eV, where the resolution of the measurement system is ~350 ps.
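The lifetimes in Fig. 2(e) come from exponential fits of this kind. A minimal sketch is given below; the trace is synthetic, and deconvolution of the ~350 ps instrument response, which matters for the shortest lifetimes, is deliberately omitted here.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, bg):
    # single-exponential decay on a flat background
    return A * np.exp(-t / tau) + bg

t_ns = np.linspace(0.0, 10.0, 200)         # placeholder APD time axis (ns)
counts = decay(t_ns, 1000.0, 0.54, 5.0)    # synthetic trace
counts += np.random.normal(0.0, 5.0, t_ns.size)

(A, tau, bg), _ = curve_fit(decay, t_ns, counts, p0=(800.0, 1.0, 0.0))
print(f"lifetime = {tau:.2f} ns")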
To characterise the performance of the QD single-photon source, we compare the QD line at ~1.388 eV, labelled QD_PC in Fig. 2(c), with a typical QD in the bulk material, away from the photonic crystal, QD_bulk. Fig. 3(a) shows the power dependent PL intensity of QD_PC, compared with the exciton (X) and biexciton (XX) power dependence of QD_bulk. For QD_bulk the intensity follows the linear (quadratic) behavior consistent with that of the X (XX) in InGaAs QDs and saturates at an excitation power of ~1 μW. Similarly, QD_PC displays a linear dependence on excitation power and saturates at a similar excitation power, but at a PL intensity ~54 times brighter, consistent with the enhanced collection efficiency afforded by the photonic crystal cavity 26. Fig. 3(b) compares the time-resolved PL of QD_PC and the X transition of QD_bulk and shows a significantly shorter lifetime of 0.64 ns compared with 1.1 ns for QD_bulk (the average lifetime for QDs in the bulk is 1.22 ± 0.18 ns), which corresponds to a Purcell enhancement of ~2 and confirms the regime of weak-coupling between the QD exciton and the cavity mode. The observed Purcell enhancement of ~2 is considerably less than the calculated value of ~30, which most likely results from a spatial misalignment of the QD and the cavity mode.
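For completeness, the quoted enhancement is simply the ratio of the two fitted lifetimes given above:

tau_bulk, tau_cavity = 1.1, 0.64   # ns, values from the measurements above
print(f"F_p = {tau_bulk / tau_cavity:.1f}")   # ~1.7, i.e. a Purcell enhancement of ~2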
The single-photon emission is investigated with a Hanbury Brown-Twiss measurement. Fig. 3(c) plots the g(2)(t) function recorded from QD_PC at a pulsed excitation power of 100 nW. Clear anti-bunching is observed, with a multi-photon emission probability g(2)(0) = 0.16, demonstrating the single-photon nature of the emission. To determine the maximum single-photon emission rate, g(2)(t) is measured for different excitation powers. Fig. 3(d) plots the multi-photon emission probability, g(2)(0), as a function of the excitation power and reveals a monotonic increase with power, as the single-photon emission from the QD saturates but the background cavity emission continues to increase 20. At the saturation power of 500 nW the single-photon detection rate is ~80 kHz, with g(2)(0) ≈ 0.4.
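Under pulsed excitation, g(2)(0) is conventionally obtained by integrating the coincidence peak at zero delay and normalising by the mean area of the side peaks. The sketch below illustrates this with hypothetical peak areas chosen to reproduce the quoted value of 0.16; it is not the actual histogram data.

import numpy as np

peak_areas = {-2: 980, -1: 1010, 0: 160, 1: 995, 2: 1015}   # hypothetical counts per peak
side = np.mean([a for k, a in peak_areas.items() if k != 0])
g2_0 = peak_areas[0] / side
print(f"g2(0) = {g2_0:.2f}")   # 0.16 for these placeholder numbers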
Strong light-matter coupling regime. In a second structure, we observe the regime of strong-coupling between the optical field of the cavity mode and a single QD. In this case the fundamental mode of the L3 cavity has a Q-factor of ~8,000. At the base temperature of ~10 K, the QD is blue detuned from the cavity mode by 830 μeV. By increasing the sample temperature, the QD can be tuned into resonance with the cavity mode, as shown in Fig. 4(a). As the QD is tuned through the mode resonance, two distinct peaks are observed in the spectra at all temperatures. This anti-crossing is the signature of the strong coupling regime, where there is a reversible exchange of energy between the QD and the cavity mode resulting in the vacuum Rabi splitting (VRS), which has been observed in several QD-based systems 18,20-23.
Discussion
To extract a value of the VRS for the coupled QD-cavity system, the peak energies are extracted from the temperature dependent spectra and plotted in Fig. 4(b). The complex energies of the upper and lower polariton branches, E±, of the strongly coupled system can be described by the equation (ref. 27)

E_{\pm} = \frac{E_m + E_x}{2} - i\,\frac{\gamma_m + \gamma_x}{4} \pm \sqrt{g^2 + \frac{1}{4}\left(E_m - E_x + i\,\frac{\gamma_m - \gamma_x}{2}\right)^2}, \qquad (1)

where E_m and E_x are the energies of the uncoupled mode and QD exciton, respectively, γ_m and γ_x are the linewidths of the cavity mode and QD exciton, respectively, and g is the coupling constant. The upper and lower polariton energies, calculated using Eq. (1), are plotted in Fig. 4(b) for a zero-detuning VRS of 2g = 212 μeV, which shows good agreement with the experimental data. The linewidth of the cavity mode at ~10 K is 174 μeV, giving a ratio g/γ_m = 0.61 > 1/4, thus fulfilling the condition for strong coupling 27. The large VRS, ~75% of the predicted value 28, suggests that the degree of spatial overlap between the QD and cavity mode is high and compares well with values reported for similar systems 21-23.
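Eq. (1) is straightforward to evaluate numerically. The sketch below scans the QD detuning with the fitted parameters quoted above (2g = 212 μeV, γ_m = 174 μeV); the exciton linewidth γ_x is not given in the text and is assumed to be 10 μeV purely for illustration.

import numpy as np

g, gm, gx = 106.0, 174.0, 10.0           # μeV; gx is an assumed value
Em = 0.0                                 # cavity mode energy taken as the reference
Ex = np.linspace(-500.0, 500.0, 1001)    # QD detuning scan (μeV)

root = np.sqrt(g**2 + 0.25 * (Em - Ex + 0.5j * (gm - gx))**2)
mean = (Em + Ex) / 2 - 0.25j * (gm + gx)
E_up, E_lo = mean + root, mean - root

print(f"g/gamma_m = {g / gm:.2f} (strong coupling requires > 1/4)")
# the minimum splitting falls slightly below 2g = 212 μeV when gamma_m != gamma_x
print(f"minimum splitting = {(E_up - E_lo).real.min():.0f} μeV")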
In summary, we have presented the integration of high quality quantum emitters with a Si substrate. Numerous challenges remain, but the demonstration of the strong coupling regime proves that sophisticated III-V optical devices can be integrated with a Si platform. One such challenge is to develop an efficient means of coupling the quantum light into a Si photonic circuit. Whispering gallery sources laterally coupled to Si waveguides 7 provide one possible solution, and epitaxial layer overgrowth 29 another. With the addition of on-chip superconducting single-photon detectors 30, all the elements required for scalable linear optical quantum computing can be combined in a single Si-based platform.
Methods
Molecular beam epitaxy. Phosphorus-doped (100)-orientated 3 inch Si substrates with a 4° offcut towards the [110] planes are used in the experiments. Prior to growth, oxide desorption is performed by holding the Si substrate at a temperature of 900 °C for 10 minutes. The Si substrate is then cooled down for the growth of a 30-nm GaAs nucleation layer with a low growth rate of 0.1 ML/s, followed by the 1 μm GaAs layer grown at high temperature with a higher growth rate.
Photonic crystal fabrication. Photoluminescence measurements are used to identify an area of the wafer with a QD density of ~5 × 10^9 cm^-2, which is then employed for the fabrication of the photonic crystal nanocavities. Electron beam lithography is used to define the photonic crystals, and the GaAs slab layer is etched using a chlorine-based inductively coupled plasma reactive ion etch (ICP-RIE). Finally, hydrofluoric acid is used to selectively remove the Al0.6Ga0.4As layer from beneath the photonic crystals, leaving the free-standing photonic crystal membrane.
Optical measurements. The optical measurements are performed in a liquid helium flow cryostat with a base temperature of ~10 K. The PL is excited using a continuous wave (CW) or pulsed laser tuned to 850 nm, focused to a ~1 μm diameter using a 50× microscope objective (NA = 0.42). For the time-resolved measurements, presented in Figures 2 and 3, a Ti:Sapphire laser with a pulsewidth of ~100 fs is used to excite the PL. The emission from a single QD is filtered using a single grating spectrometer and detected with a charge coupled device (CCD) camera or avalanche photo-diode (APD), which has a time-resolution of ~350 ps. In the case of the g(2) measurements, the light filtered by the spectrometer is split by a fiber beam-splitter and coupled to a pair of APDs. For the strong-coupling measurements, presented in Fig. 4, a CW diode laser tuned to 850 nm is used to excite the PL, which is dispersed using a double grating spectrometer and detected with a CCD camera.
Figure 1 | Layer structure of the QD on Si wafers and material characterisation. (a) Schematic diagram showing the III-V semiconductor layers grown on the Si substrate. (b) Photoluminescence spectra recorded from the InGaAs/GaAs QDs at different locations on the three inch as-grown wafer. The intense, broad peak between 1.41 and 1.43 eV corresponds to emission from the QD wetting layer. Narrow spectral lines originating from charged and neutral exciton complexes within individual QDs are observed in the range of 1.3-1.4 eV. The insets show AFM scans from the corresponding locations on an un-capped wafer. (c) Photoluminescence spectra showing narrow emission lines which originate from the AlGaAs/GaAs short-period superlattice. (d) Cross-sectional bright-field transmission electron microscope image of the as-grown wafer, highlighting the capture of threading dislocations by the strained layer superlattices and the short period superlattice structures.
Figure 2 | Photonic crystal cavity QD single-photon source. (a) Scanning electron microscope image showing a typical L3 defect photonic crystal cavity. (b) High power PL spectrum of the L3 cavity. (c) PL spectra recorded at low excitation power, showing emission of QDs coupled to the photonic crystal cavity mode, M3. The grey shaded area represents the spectral location and linewidth of the cavity mode. (d) Time resolved PL measurements recorded as a function of emission energy for the QD-photonic crystal cavity. (e) Lifetimes extracted from time-resolved PL measurements shown in (d) as a function of emission energy.
Figure 3 | Performance of cavity coupled QD single-photon source. (a) Integrated PL intensity of a cavity-coupled QD compared to that of the exciton (X) and biexciton (XX) of a typical QD in the bulk as a function of pulsed-laser excitation power. (b) Time-resolved measurements of the lifetime measured for the bulk neutral exciton and cavity-coupled QDs shown in (a). (c) Photon coincidence histogram recorded from the cavity-coupled QD. (d) Normalised area of central peak of second order correlation function, g(2)(0), as a function of excitation laser power. The dashed line shows the limit of single-photon emission at g(2)(0) = 0.5.
Figure 4 | Strong light-matter coupling between a quantum dot and the photonic mode of an L3 nanocavity. (a) Temperature dependent photoluminescence spectra showing the anti-crossing of QD and cavity mode peaks. (b) Energies of the upper and lower polaritons, extracted from data shown in (a). The grey line shows a fit to the data of equation (1), and the dashed red and blue lines show estimates of the temperature dependence of the uncoupled QD and cavity mode, respectively.
Acknowledgements

This work was supported by the EPSRC Programme grants (EP/G001642/1 and EP/J007544/1) and ITN Spin-Optronics. O.D.P.Z. was supported by a CONACYT-Mexico doctoral scholarship.

Author contributions
References

1. Tittel, W., Brendel, J., Zbinden, H. & Gisin, N. Quantum cryptography. Reviews of Modern Physics 74, 145 (2002).
2. Boto, A. N. et al. Quantum interferometric optical lithography: exploiting entanglement to beat the diffraction limit. Phys. Rev. Lett. 85, 2733 (2000).
3. Politi, A., Matthews, J. C. F. & O'Brien, J. L. Shor's quantum factoring algorithm on a photonic chip. Science 325, 1221 (2009).
4. Knill, E., Laflamme, R. & Milburn, G. J. A scheme for efficient quantum computation with linear optics. Nature 409, 46 (2001).
5. Bonneau, D. et al. Quantum interference and manipulation of entanglement in silicon wire waveguide quantum circuits. New J. of Phys. 14, 045003 (2012).
6. Liang, D. & Bowers, J. E. Recent progress in lasers on silicon. Nat. Photonics 4, 511 (2010).
7. Roelkens, G. et al. III-V/silicon photonics for on-chip and intra-chip optical interconnects. Laser and Photonics Reviews 4, 751 (2010).
8. Krost, A. & Dadgar, A. GaN-based optoelectronics on silicon substrates. Mat. Sci. Eng. B 93, 77 (2002).
9. Mi, Z., Yang, J., Bhattacharya, P. & Huffaker, D. L. Self-organised quantum dots as dislocation filters: the case of GaAs based lasers on Si. Electron. Lett. 42, 121 (2006).
10. Wang, T., Liu, H., Lee, A., Pozzi, F. & Seeds, A. 1.3-μm InAs-GaAs quantum-dot lasers monolithically grown on Si substrates. Opt. Express 19, 11381 (2011).
11. Liu, H. et al. Long-wavelength InAs/GaAs quantum-dot laser diode monolithically grown on Ge substrate. Nat. Photonics 5, 416 (2011).
12. Cavigli, L. et al. High temperature single photon emitter monolithically integrated on silicon. Appl. Phys. Lett. 100, 231112 (2012).
13. Yuan, Z. et al. Electrically driven single-photon source. Science 295, 102 (2002).
14. Salter, C. L. et al. An entangled-light-emitting diode. Nature 465, 594 (2010).
15. Dousse, A. et al. Ultrabright source of entangled photon pairs. Nature 466, 217 (2010).
16. Flagg, E. et al. Interference of single photons from two separate semiconductor quantum dots. Phys. Rev. Lett. 104, 137401 (2010).
17. Patel, R. B. et al. Two-photon interference of the emission from electrically tunable remote quantum dots. Nat. Photon. 4, 632 (2010).
18. Reithmaier, J. P. et al. Strong coupling in a single quantum dot-semiconductor microcavity system. Nature 432, 197 (2004).
19. Santori, C., Fattal, D., Vuckovic, J. & Solomon, G. S. Indistinguishable photons from a single-photon device. Nature 419, 594 (2002).
20. Press, D. et al. Photon antibunching from a single quantum-dot-microcavity system in the strong coupling regime. Phys. Rev. Lett. 98, 117402 (2007).
21. Yoshie, T. et al. Vacuum Rabi splitting with a single quantum dot in a photonic crystal nanocavity. Nature 432, 200 (2004).
22. Englund, D. et al. Controlling cavity reflectivity with a single quantum dot. Nature 450, 857 (2007).
23. Winger, M., Badolato, A., Hennessy, K., Hu, E. & Imamoglu, A. Quantum Dot Spectroscopy Using Cavity Quantum Electrodynamics. Phys. Rev. Lett. 101, 226808 (2008).
24. Englund, D. et al. Controlling the spontaneous emission rate of single quantum dots in a two-dimensional photonic crystal. Phys. Rev. Lett. 95, 013904 (2005).
25. Fafard, S. et al. Manipulating the energy levels of semiconductor quantum dots. Phys. Rev. B 59, 15368 (1999).
26. Oulton, R. et al. Polarized quantum dot emission from photonic crystal nanocavities studied under mode-resonant enhanced excitation. Opt. Express 15, 17221 (2007).
27. Andreani, L., Panzarini, G. & Gérard, J. Strong-coupling regime for quantum boxes in pillar microcavities: theory. Phys. Rev. B 60, 13276 (1999).
28. Andreani, L. C., Gerace, D. & Agio, M. Exciton-polaritons and nanoscale cavities in photonic crystal slabs. Phys. Stat. Sol. 242, 2197 (2005).
29. Wang, Z. et al. III-Vs on Si for photonic applications - a monolithic approach. Mat. Sci. Eng. B 177, 1551 (2012).
30. Pernice, W. H. P. et al. High Speed and High Efficiency Travelling Wave Single-Photon Detectors Embedded in Nanophotonic Circuits. arXiv:1108.5299v2.
| [] |
[
"Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming",
"Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming"
] | [
"Zhao Zhang [email protected] \nComputation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA\n",
"Allan Espinosa [email protected] \nDepartment of Computer Science\nUniversity of Chicago\nILUSA\n",
"Kamil Iskra [email protected] \nMathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA\n",
"# ",
"Ioan Raicu [email protected] \nDepartment of Computer Science\nUniversity of Chicago\nILUSA\n",
"Ian Foster [email protected] \nComputation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA\n\nDepartment of Computer Science\nUniversity of Chicago\nILUSA\n\nMathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA\n",
"Michael Wilde [email protected] \nComputation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA\n\nMathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA\n"
] | [
"Computation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA",
"Department of Computer Science\nUniversity of Chicago\nILUSA",
"Mathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA",
"Department of Computer Science\nUniversity of Chicago\nILUSA",
"Computation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA",
"Department of Computer Science\nUniversity of Chicago\nILUSA",
"Mathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA",
"Computation Institute\nUniversity of Chicago & Argonne National Laboratory\nUSA",
"Mathematics and Computer Science Division\nArgonne National Laboratory\nArgonne ILUSA"
] | [] | Loosely coupled programming is a powerful paradigm for rapidly creating higher-level applications from scientific programs on petascale systems, typically using scripting languages. This paradigm is a form of many-task computing (MTC) which focuses on the passing of data between programs as ordinary files rather than messages. While it has the significant benefits of decoupling producer and consumer and allowing existing application programs to be executed in parallel with no recoding, its typical implementation using shared file systems places a high performance burden on the overall system and on the user who will analyze and consume the downstream data. Previous efforts have achieved great speedups with loosely coupled programs, but have done so with careful manual tuning of all shared file system access. In this work, we evaluate a prototype collective IO model for filebased MTC. The model enables efficient and easy distribution of input data files to computing nodes and gathering of output results from them. It eliminates the need for such manual tuning and makes the programming of large-scale clusters using a loosely coupled model easier. Our approach, inspired by in-memory approaches to collective operations for parallel programming, builds on fast local file systems to provide highspeed local file caches for parallel scripts, uses a broadcast approach to handle distribution of common input data, and uses efficient scatter/gather and caching techniques for input and output. We describe the design of the prototype model, its implementation on the Blue Gene/P supercomputer, and present preliminary measurements of its performance on synthetic benchmarks and on a large-scale molecular dynamics application. | 10.1109/mtags.2008.4777908 | [
"https://arxiv.org/pdf/0901.0134v1.pdf"
] | 87,375 | 0901.0134 | e6d9883310b9b26e23613029e1b4231ea5debcb4 |
Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming
Zhao Zhang [email protected]
Computation Institute
University of Chicago & Argonne National Laboratory
USA
Allan Espinosa [email protected]
Department of Computer Science
University of Chicago
IL, USA
Kamil Iskra [email protected]
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL, USA
Ioan Raicu [email protected]
Department of Computer Science
University of Chicago
IL, USA
Ian Foster [email protected]
Computation Institute
University of Chicago & Argonne National Laboratory
USA
Department of Computer Science
University of Chicago
IL, USA
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL, USA
Michael Wilde [email protected]
Computation Institute
University of Chicago & Argonne National Laboratory
USA
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL, USA
Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming
Loosely coupled programming is a powerful paradigm for rapidly creating higher-level applications from scientific programs on petascale systems, typically using scripting languages. This paradigm is a form of many-task computing (MTC) which focuses on the passing of data between programs as ordinary files rather than messages. While it has the significant benefits of decoupling producer and consumer and allowing existing application programs to be executed in parallel with no recoding, its typical implementation using shared file systems places a high performance burden on the overall system and on the user who will analyze and consume the downstream data. Previous efforts have achieved great speedups with loosely coupled programs, but have done so with careful manual tuning of all shared file system access. In this work, we evaluate a prototype collective IO model for file-based MTC. The model enables efficient and easy distribution of input data files to computing nodes and gathering of output results from them. It eliminates the need for such manual tuning and makes the programming of large-scale clusters using a loosely coupled model easier. Our approach, inspired by in-memory approaches to collective operations for parallel programming, builds on fast local file systems to provide high-speed local file caches for parallel scripts, uses a broadcast approach to handle distribution of common input data, and uses efficient scatter/gather and caching techniques for input and output. We describe the design of the prototype model, its implementation on the Blue Gene/P supercomputer, and present preliminary measurements of its performance on synthetic benchmarks and on a large-scale molecular dynamics application.
1 Overview
We define "loosely coupled applications" as programs that involve the sequenced execution of other programs. In this programming model, programs exchange data via files; the application typically involves a large number of invocations, often of several different programs; and programs typically feature a high degree of inter-task parallelism, enabled by data independence within the flow graph of files. Applications are typically written in scripting languages (Perl, Python, Tcl, and numerous "shells") [Ousterhout1998], which facilitate both the invocation of application programs and the passing and manipulation of files for program inputs and outputs. This style of programming is extensively employed in virtually every domain of science. For example, biologists run PERL scripts of BLAST and PFAM; neuroscientists run shell scripts of AIR, AFNI and FSL; physicists analyze collision data with scripts that execute analysis applications written in ROOT.
It is difficult to efficiently map this common and useful programming model onto computing clusters of rapidly increasing scale. We note that we are mainly concerned here with applications running on what we term "petascale-precursor" systems, where the sheer parallelism of the computing nodes of the system can easily overwhelm a traditional IO subsystem, and in particular, its shared file systems. As clusters have grown larger, to tens or, recently, hundreds of thousands of nodes, the IO strategies of loosely coupled applications have become both a performance bottleneck and a source of complexity. Significant manual effort is needed to scale application performance as cluster size grows.
The specific problem we address here is that as the number of nodes in large-scale clusters contending for shared resources grows large, the IO bandwidth, volume, and/or file management transaction rate exceeds some aggregate capacity limit; bottlenecks arise and the system becomes unbalanced. Thus, CPU cycles are wasted because the IO subsystem cannot service the CPUs fast enough. (We are concerned here with applications with high enough IO-to-compute ratios for IO to become the primary obstacle to parallel speedup. Applications that do relatively little IO while computing for long periods typically perform well in loosely coupled settings without any change to their IO strategy.)
While petascale systems have massive shared IO subsystems, these subsystems often have vulnerabilities in handling file management transactions (e.g., creating and writing huge numbers of files at high rates) that are ill-matched with the needs of loosely coupled programs. Our work remedies this deficiency and makes petascale systems attractive for this important and productive paradigm for knitting existing scientific programs into powerful workflows.
Our strategy of collective IO is inspired by the collective data operations employed by tightly coupled message passing programming models. In these models, data is exchanged, both between in-memory tasks and between tasks and files, using operations such as scatter (often assisted by broadcast) and gather. In our model:
• Input files are broadcast from shared file systems to local file systems.
• Output files are locally batched up from applications and efficiently transferred to shared persistent storage.
• Intermediate file systems are provided within the cluster to aid in efficient input and output staging and to overcome the limitations that large-scale clusters impose on local file system capacity.
In the remainder of this paper we first present an abstract model that maps collective IO concepts, previously applied in message-passing and in-memory programming environments, to the file-based MTC domain. We then review the architecture of the IBM Blue Gene/P (BG/P) system, which we use as an exemplar of large-scale clusters (and as a base for our prototype and measurements), and describe prior work on collective IO. We then describe a new collective model that addresses the challenges described above, detail its implementation, and present preliminary measurements of its performance. We conclude with an outline of our plans to extend and improve the model.
2 Abstract Collective IO Model for File Objects
Our abstract model, which is independent of specific cluster architectures, is based on the following elements.
1) We have applications involving multiple tasks that can run concurrently, each reading zero or more named objects, performing some computation, and writing zero or more named objects. (These objects are typically files, a detail that will become important when we talk about implementation specifics.) The lengths of individual tasks, and of the objects read and written, are typically not known ahead of time.
2) We can distinguish between two principal input patterns: a) read-many, in which many or all tasks read the same object; b) read-few, in which the number of tasks reading a particular object is small -often only one. We assume that each object is written by just one task.
Typically, we know the objects to be read by each application ahead of time, and thus assume that applications will not determine at run time which files to read. (This restriction can be relaxed for some files, which would be considered outside of, or an extension to, the model). We further assume that we know (typically, from dependency information) which objects are read-many.
3) In the simplest form of these applications, the set of objects read and the set of objects written are disjoint. In more complex forms, one task may write an object that is then read by another. In that case, we assume dataflow synchronization between the writer and the reader, meaning that the reader can only execute when the writer completes execution (as below).
4) We assume a computer system architecture in which (a) all processors can access a high capacity persistent shared storage system (shared-store), albeit with modest performance, and (b) each processor has some local object storage (memory or disk) of modest capacity, but offering high performance (local-store). When many processors access the shared file system concurrently, contention leads to degraded and often unpredictable performance.
5) An abstract cluster IO architecture is useful to define terminology. As shown in Figure 4, this model has three levels of file system: Global persistent shared file systems (GFS) are accessible from all compute nodes of a cluster, and are typically the persistent home of all data. Local file systems (LFS) are per-compute-node file systems, and are only directly accessible to tasks running on the processors of that compute node. As cluster size and density increases, the LFS may be implemented in RAM or FLASH memory, and is typically constrained in size between a few hundred megabytes and a few gigabytes. Intermediate file systems (IFS) are found, typically, only on the largest and most complex clusters, such as the IBM BGP. On the BGP, IFSs exist on the "IO node" processors (IONs); systems such as the SiCortex 5832 allow larger IFSs to be constructed by striping RAM-based LFSs. We use the acronyms GFS, LFS, and IFS throughout.
Based on this abstract model, we employ two simple collective methods to improve IO performance: (a) routines to broadcast read-many objects to many processors; and (b) two-stage IO operations to accelerate read-few and write operations, by staging objects between the many local-stores, an intermediate-store (created, for example, on a set of local-stores), and the shared-store, building on existing file services such as MosaStore [Al-Kiswany+2007] and Chirp [Thain+2008]. Our prototype of these methods was implemented on the BG/P and is described in Section 5.
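To fix intuition, here is a minimal, self-contained Python sketch of these two collective methods; the mount points (GFS_ROOT, LFS_ROOT, IFS_ROOT) and function names are invented for illustration and do not correspond to any real system interface.

    import os
    import shutil

    GFS_ROOT = "/gfs"          # hypothetical global shared file system mount
    LFS_ROOT = "/dev/shm/lfs"  # hypothetical per-node RAM-based local store
    IFS_ROOT = "/ifs"          # hypothetical striped intermediate store

    def stage_in(name, read_many):
        # Read-many objects are broadcast once to the intermediate store and
        # then read locally; read-few objects go straight to the local store.
        dest_root = IFS_ROOT if read_many else LFS_ROOT
        dest = os.path.join(dest_root, name)
        if not os.path.exists(dest):  # cached copies are reused across tasks
            shutil.copy(os.path.join(GFS_ROOT, name), dest)
        return dest

    def stage_out(path):
        # Tasks write outputs to the local store; the output is then handed
        # to the intermediate store, and a collector later batches it to GFS.
        shutil.move(path, os.path.join(IFS_ROOT, os.path.basename(path)))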
3 Blue Gene/P System Architecture
The 163,840-processor IBM BG/P computer at the Argonne Leadership Computing Facility [ALCF] is, at the time of writing, the world's largest open-science computing system [TOP500]. We view it as an exemplar of the coming wave of "petascale" systems, and we base the work described here on this system.
We present here a brief overview of the characteristics of the GPFS distributed parallel file system that serves as the GFS for the ALCF BG/P. We then describe the ZeptoOS operating system environment that we employ for MTC programming of the BG/P, as this environment is critical to enabling the MTC model to be used on this machine, and because its BG/P implementation -some of which was influenced by the work described here -has not to date been published elsewhere.
3.1 Characteristics of GPFS as a Global File System
GPFS, the General Parallel File System [Schmuck+2002], is configured on the ALCF BG/P with 24 IO servers, each with 20Gb/s network connectivity, and can sustain an aggregate IO rate of ~8GB/sec. GPFS is in general proficient at reading and writing large units, can handle vast numbers of files, and can maintain huge directories. It also excels at parallel IO operations from multiple client hosts, for which it maintains a sophisticated lock resolution protocol and heuristics. It has, however, two areas of weakness: it is relatively slow at creating new files, and can perform very poorly when multiple clients attempt to create files within the same parent directory (due to lock contention and its approach for maintaining global file system integrity in the face of metadata updates). These characteristics are typical for distributed parallel file systems which maintain local file system semantics in a distributed environment. However, they pose a challenge to MTC workloads, which can, if not carefully planned to avoid GFS weaknesses, perform exceedingly poorly.
3.2 BG/P OS and IO Architecture to support MTC
The ZeptoOS project [ZeptoOS] provides an open-source alternative to the proprietary software stacks available on contemporary massively parallel architectures. Its aim is to make petascale architectures more productive for the scientific user community, to enhance community collaboration and to enable computer science research on these architectures. ZeptoOS uses the Linux kernel to create an alternative, fully open software stack on large-scale parallel systems.
The project currently focuses on the IBM BG/P architecture. These machines normally run a limited microkernel on the compute nodes. While the default compute node kernel is highly scalable, it lacks many capabilities that MTC jobs expect, such as the ability to execute sub-processes or run shell scripts. ZeptoOS replaces that kernel with a Linux-based ZeptoOS compute node kernel, which lifts those limitations.
The default IBM BG/P microkernel forwards all file and socket IO calls to the IO nodes, which run Linux. IO nodes run a daemon that receives IO requests from the compute nodes and replays them against the Linux kernel. IO nodes also run file system clients for remote file systems such as NFS, GPFS, or PVFS, which handle the actual file IO.
ZeptoOS also uses a similar, but more general, forwarding architecture for IO requests. ZOID, the ZeptoOS IO Daemon [Iskra+2008], is a replacement IO daemon running on the IO nodes, used to communicate with the compute nodes when they are running Linux. ZOID provides a generic, high-performance function-forwarding infrastructure for compute nodes. This infrastructure is extensible through the use of plug-ins: users can define their own API and have data efficiently forwarded between the applications running on the compute nodes and the implementation code running on IO nodes. Generic plug-ins for POSIX file and socket IO are available which standard applications can take advantage of. ZOID also performs job management and IP packet forwarding between IO nodes and compute nodes (allowing users to, e.g., perform interactive debugging sessions on the compute nodes over telnet). Figure 5 and Figure 6 present in more detail the hardware and software components of the ZeptoOS environment on the BG/P. The ratio of compute nodes to IO nodes for a given BG/P installation can vary from 16:1 to 128:1 depending on the machine configuration; the ratio on the Argonne machine is fixed at 64:1. Compute nodes communicate with the IO nodes over a custom "collective" (also known as "tree") network, with a bandwidth of 6.8 Gb/s (850 MB/s). Once protocol overheads are considered, the maximum throughput that ZOID can achieve over this network is around 760 MB/s. However, such throughput is only achievable when processes on the compute nodes communicate with ZOID directly. A modified GNU libc library that enables this direct communication is in progress but is currently incomplete.
A solution available to processes on the compute nodes through standard kernel interfaces would be far more desirable. Since our communication stack is in user space, we need mechanisms to forward data between the user and kernel space. The Linux kernel does offer easy to use interfaces for such purposes, in the form of FUSE and TUN. FUSE [FUSE] is a pseudo-file system that performs callbacks from the kernel VFS layer to a user-space daemon, which provides the implementation of file IO operations. TUN [TUN] simulates a network-layer device, allowing one to forward IP packets between a user-space process and the kernel's TCP/IP stack.
The problem is that neither of these solutions is particularly fast. Their designs (particularly that of FUSE) are simple and focused on flexibility, not high performance.
The overheads they introduce are considerable. FUSE can read data in chunks of 128 KB, but writes are performed in chunks no larger than a single memory page. With a page size of 64 KB on the compute nodes we get at most 230 MB/s on input and 180 MB/s on output. (These are raw transfer speeds; if we include file system overhead, then even in the case of local RAM disk on the IO nodes, the read speed is reduced to 180 MB/s and the write speed to 130 MB/s).
The situation with TUN is even worse, because the data is transferred in individual IP packets of no more than 1500 bytes. As a result, we only achieve ~180 Mb/s (22 MB/s) between compute nodes and IO nodes. IP communication works between compute nodes as well, but for simplicity this is implemented in ZeptoOS by sending the packets to the IO node and letting it forward the data to the intended destination. Consequently, as the number of communicating compute node processes increases, the fraction of throughput available to each goes down.
The collective network is not the only one available on BG/P: the primary network for point-to-point communication between compute nodes is the 3-D torus. Every compute node has torus links to six neighbors, each with a bandwidth of 3.4 Gb/s (425 MB/s). Until recently, the torus network was not accessible when running under ZeptoOS, because the torus network's DMA engine lacks scatter/gather capability and thus requires large, continuous areas of physical memory, normally unavailable under Linux.
To enable use of the torus network under ZeptoOS, we modified the Linux kernel to reserve a considerable "flat" segment of memory at boot time. A process wishing to communicate over the torus is mapped into this memory region, so that the DMA engine can operate on its memory buffers. While this capability is still under development, we have implemented IP forwarding over MPI (which uses the torus), again using the TUN device. We measured peak torus point-to-point throughput of around 1.15 Gb/s (140 MB/s). This throughput is an order of magnitude higher than over the collective network, for several reasons, the most significant being that we have increased the maximum transmission unit (MTU) of the TUN network device to 65535 bytes (the maximum value allowed with IPv4). While we would have liked to do the same with the TUN device operating over the collective network, the older version of the Linux kernel used on the IO nodes does not allow an increase in the MTU of the TUN device. We are currently prevented from upgrading that kernel version because the GPFS kernel module depends on it.
4 Prior work
There has been much research on collective operations in the context of the message passing programming paradigm. These operations allow a group of processes to perform a common, pre-defined operation "collectively" on a set of data. For example, the MPI standard [MPI] offers a large number of such operations, from a basic broadcast (delivering an identical copy of data from one source to many destinations), through scatter (delivering a different part of input data from one source to each destination) and its opposite, gather (assembling the result at one destination from its parts available on multiple sources), to reduction operations (like gather, but instead of assembling, the parts of the result are combined). These operations are considered so crucial for the performance of message passing programs that the BG/P provides the separate collective tree network to perform them efficiently in hardware [BGP].
Similarly, collective IO is not a new concept in parallel computing. It is employed, e.g., by ROMIO [Thakur+1999], the most popular MPI-IO implementation, in its generalized two-phase IO implementation. When compute tasks want to perform IO, they first exchange information about their intentions, in an attempt to coalesce many small requests into fewer larger ones (an assumption being that the processes access the same file). When reading, in the first phase the processes issue large read requests, and in the second phase, they exchange parts of their read buffers with one another, using efficient MPI communication primitives so that each process ends up with the data it was interested it. For writing, the two phases are reversed.
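As a concrete illustration of the two-phase strategy, the single-process Python toy below coalesces several small, possibly interleaved read requests into one large contiguous read and then scatters the buffer; real ROMIO performs the redistribution with MPI messages, so this is a sketch of the idea rather than of ROMIO's implementation.

    def two_phase_read(path, requests):
        # requests: one (offset, length) pair per participating "process"
        lo = min(off for off, _ in requests)
        hi = max(off + n for off, n in requests)
        with open(path, "rb") as f:  # phase 1: one large contiguous read
            f.seek(lo)
            block = f.read(hi - lo)
        # phase 2: scatter the pieces (an in-memory stand-in for MPI exchange)
        return [block[off - lo : off - lo + n] for off, n in requests]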
MPI collective communication and IO operations require applications to be at least loosely synchronous, in that progress must be made in globally synchronized phases, and all processes must participate in a collective operation. These conditions restrict the use of standard collective operations in loosely coupled, uncoordinated scenarios, limiting them to initialization time (before any individual tasks start running) and possibly termination time (once all individual tasks have completed).
Until recently, such uncoordinated jobs were primarily run on moderate scale clusters or on distributed ("grid") resources. Most clusters were not large enough to encounter IO contention problems such as those described here. Furthermore, cluster nodes generally have considerable local disks suitable for storing large input and output data. The primary problem on such systems has thus been mainly to efficiently stage data and schedule jobs so that they can best benefit from the staged data [Khanna+2006; Khanna+2007].
File IO is a more significant problem with distributed resources. Condor provides a remote IO library that forwards system calls to a shadow process running on the "home" machine where the files actually reside. Global Access to Secondary Storage (GASS [Bester+1999]) available in Globus takes a different approach, transparently providing a temporary replica cache for input and output files. Our collective IO goes beyond these approaches to intelligently utilize local filesystems, and to provide intermediate file systems, broadcasting of input files, and batching of output files. Unlike Condor remote IO, our approach does not require relinking. Our approach makes it practical for tens to hundreds of thousands of processor cores now (and in a few years, a million cores) to perform concurrent, asynchronous IO operations. These numbers are easily an order of magnitude greater than what has been addressed in any previous implementation.
5 Design and Implementation
The requirements described to this point translate into a straightforward design for handling collective IO, which consists of three main components: 1) one or more intermediate file systems (IFSs) enabling data to be placed and cached closer to the computation (from an access-latency and bandwidth perspective) while overcoming the size limitation of the typical RAM-based local file systems that are prevalent in petascale-precursor systems; 2) a data distributor, which replicates sufficiently large common input datasets to intermediate file systems; and 3) a data collector mechanism, which collects output datasets on IFSs and efficiently writes the collected data to large archive files on the GFS.
Our implementation of this design, which we have prototyped for performance evaluation, uses simple scripts to coordinate "off the shelf" data management components. All of our prototypes and measurements to date have been done on the Argonne BG/P systems (Surveyor, 4096 processors, and Intrepid, 163,840 processors). Not all of the design aspects described below exist yet in the prototype. These are indicated in the description. We executed all of our compute tasks under the Falkon lightweight task scheduler [Raicu+2007; Raicu+2008] running under ZeptoOS [ZeptoOS].
The structure of the system is shown in overview in Figure 7, and in more detail in Figure 9, which depicts the flow of input and output data in our BG/P-based prototype. Within the BG/P testbed, the RAM-based file system of the local node, which contains about 1GB of free space, is used as the LFS. For input staging, the LFS of one or more compute nodes is set aside as a "file server" and is dedicated as an IFS for a set of compute nodes.
We create large IFSs from fast LFSs by striping IFS contents over several LFS file systems, using the MosaStore file IO service [Al-Kiswany+2007]. Compute nodes access the IFS using the BG/P torus network [BGP]. The creation of the IFS and the partitioning of compute nodes between IFS functions and computing can be done on a per-workload basis, and can vary from workload to workload. In the same manner that compute node and IO node operating systems are booted when a BG/P job is started, the creation of the IFSs and the CN-to-IFS mapping can be performed as a per-workload setup task when compute nodes are provisioned by Falkon [Raicu+2008]. This enables the CN-to-IFS ratio to be tailored to the disk space and bandwidth needs of the workflow (Figure 8).
5.1 Input Distribution
The input distributor stages common input data efficiently to LFS or IFS. This mechanism is used to cache files that will be frequently re-read, or that will be read in inefficient buffer lengths, closer to the compute nodes. The key to this operation is to use broadcast or multicast methods, where available, to move common data from global to local or intermediate file systems. For accessing input data, we stage input datasets as follows:
• Small input datasets are staged from GFS to the LFS of the compute nodes which read them.
• Datasets read by only one task but that are too large to be staged to an LFS are staged to an IFS of sufficient size.
• All large datasets that are read by multiple tasks are replicated to all IFSs that serve the set of compute nodes involved in a computation.
In our prototype implementation, data is replicated from GFS to multiple IFSs by the Chirp replicate command [Thain+2008] (Steps 1 and 2 in Figure 7). We employ two functions (sketched below): the first identifies whether a given compute node is a data-serving or an application-executing node; the second maps each executor compute node to its IFS data server. The decision of whether to place an input file on LFS or IFS is made explicitly (i.e., hard-coded in our prototype). Each IFS is mounted on all associated compute nodes, and accessed via FUSE.
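A plausible realization of these two mapping functions is sketched below in Python; the fixed RATIO constant and the convention that the first rank of each group hosts the IFS are assumptions made for illustration, not the prototype's actual layout.

    RATIO = 64  # compute nodes served per IFS data server (per-workload choice)

    def is_ifs_server(rank):
        # Reserve the first node of every group of RATIO + 1 ranks as an IFS host.
        return rank % (RATIO + 1) == 0

    def ifs_server_for(rank):
        # Map an executor node to the IFS host at the head of its group.
        return rank - rank % (RATIO + 1)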
5.2 Output Collection
The output collector gathers (small) output data files from multiple processors and aggregates them into efficient units for transfer to GFS. In this way, we reduce greatly the number of files created on the GFS (which reduces the number of costly file creation operations) and also increase the size of those files (which permits data to be written to GFS in larger, more efficient block sizes and write buffer lengths). The use of the output collector also enables data to be cached on LFS or IFS for later analysis or reprocessing.
Our goal is that files which can fit on the LFS can be written there by the application program, while larger output files can be written directly to IFS, and output files too large to fit on the LFS or IFS are written directly to GFS. (This differentiation is not implemented in the prototype). In this way, we can optimize the performance of output operations such as file and directory creation and small write operations.
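The size-based placement rule just described reduces to a few lines of Python; the capacity thresholds below are illustrative stand-ins (roughly the ~1 GB free LFS and the 64 GB striped IFS mentioned elsewhere in this paper), and, as noted, the prototype does not yet implement this differentiation.

    LFS_CAPACITY = 1 * 2**30    # ~1 GB of free RAM-based local storage
    IFS_CAPACITY = 64 * 2**30   # e.g., 32 LFSs striped into one 64 GB IFS

    def output_target(size_bytes):
        if size_bytes < LFS_CAPACITY:
            return "LFS"  # write locally; the collector moves it later
        if size_bytes < IFS_CAPACITY:
            return "IFS"  # too large for the RAM-based local disk
        return "GFS"      # written directly to the shared file system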
The collector operates as follows. When application programs complete, any output data on the LFS is copied to an IFS (Figure 7, Step 3). When the copy is complete, the data is atomically moved to a staging directory, where the following algorithm (Step 4) is used:

    while workload is running:
        if (time since last write > maxDelay)
           or (data buffered > maxData)
           or (free space on IFS < minFreeSpace):
            write archive to GFS from staging dir

One consequence of this design is that short tasks can complete quicker, without having each task remain on a compute node waiting for its data to be written to GFS, as the staging of data from IFS to GFS is handled asynchronously by the collector, as shown in Figure 10.
In our prototype, the IO node (ION) file system serves as the IFS, and data moving relies on POSIX atomicity semantics for data integrity. Files are moved from LFS to IFS via tar, and are then transferred to GFS using dd with a large efficient blocksize.
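Putting the algorithm and the prototype's tar/dd mechanics together, a minimal collector daemon might look like the Python sketch below. The paths, thresholds, polling interval, and the assumption of a flat staging directory are all invented for illustration; only the tar-then-dd pipeline mirrors the prototype.

    import os
    import time
    import shutil
    import subprocess

    STAGING = "/ifs/staging"   # outputs are atomically moved here
    GFS_DIR = "/gfs/output"    # destination on the shared file system
    MAX_DELAY = 60.0           # seconds allowed since the last write
    MAX_DATA = 256 * 2**20     # bytes allowed to accumulate on the IFS
    MIN_FREE = 128 * 2**20     # free-space low-water mark on the IFS

    def flush(batch_id):
        archive = f"/ifs/batch-{batch_id}.tar"
        subprocess.run(["tar", "-cf", archive, "-C", STAGING, "."], check=True)
        subprocess.run(["dd", f"if={archive}",
                        f"of={os.path.join(GFS_DIR, os.path.basename(archive))}",
                        "bs=8M"], check=True)  # one large, efficient write stream
        for name in os.listdir(STAGING):       # assumes a flat staging directory
            os.remove(os.path.join(STAGING, name))
        os.remove(archive)

    def collector(workload_running):
        # workload_running: callable returning False once the workload ends
        last_write, batch = time.time(), 0
        while workload_running():
            buffered = sum(e.stat().st_size for e in os.scandir(STAGING))
            free = shutil.disk_usage(STAGING).free
            if buffered and (time.time() - last_write > MAX_DELAY
                             or buffered > MAX_DATA or free < MIN_FREE):
                flush(batch)
                last_write, batch = time.time(), batch + 1
            time.sleep(1)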
In our prototype, the LFS and IFS file systems are both RAM-based, and behave somewhat like an in-memory message exchange system, in which messages are moved by read() and write() from one namespace (file server) to another. While these "messages" may be more expensive than MPI messages (the difference remains to be measured), this approach lets users integrate existing application programs into larger application workflows without requiring disk IO.
5.3 Downstream data processing
The fact that data managed by the output collector on LFSs or IFSs can be retained for subsequent processing makes it possible to re-process the output data of one stage of a workflow far more efficiently than if the data had to be retrieved from GFS. When previously written output does need to be retrieved from GFS, the ability to access files in parallel from a randomly accessible archive (as described below) further improves performance. And intermediate output data that doesn't need to be retained persistently can be left on LFS or IFS storage without moving it to GFS at all.
To facilitate multi-stage workflows, in which the output of one stage of a parallel computation is consumed by the next, we incorporate two capabilities in our design: 1) the use of an archive format for collective output that can be efficiently reprocessed in parallel, and 2) the ability to cache intermediate results on LFS and/or IFS file systems.
We base our output collector design on the use of a relatively new archive utility, xar [XAR], which unlike traditional tar (and similar) archive formats includes an updateable XML directory containing the byte offset of each archive member. This directory enables files to be extracted via random access, and hence xar (unlike tar) archives can be processed efficiently in parallel in later stages of a workflow. In the future, it is likely that we can implement parallel IO to an xar archive from multiple compute nodes, thus enhancing write performance potential even further. To enable testing of such re-processing of derived data from LFS, we employ a prototype of a new primitive collective execution operation "run task x on all compute nodes" which enables all previous outputs on LFS to be processed. Our prototype does not yet use xar, but rather tar, which has a similar interface.
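The benefit of the byte-offset directory can be seen in a few lines: given the offsets, any member can be extracted with a single seek rather than a linear scan, so disjoint members can be read in parallel. The XML layout assumed below is a stand-in chosen for illustration, not xar's actual schema.

    import xml.etree.ElementTree as ET

    def read_member(archive_path, index_path, member):
        # Index entries are assumed to look like:
        #   <file name="out.17" offset="4096" size="10240"/>
        entry = next(e for e in ET.parse(index_path).getroot()
                     if e.get("name") == member)
        with open(archive_path, "rb") as f:
            f.seek(int(entry.get("offset")))       # jump straight to the member
            return f.read(int(entry.get("size")))  # no scan through the archive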
6 Performance Evaluation
We present measurements from the Argonne ALCF BG/P, running under ZeptoOS and Falkon. We have evaluated various features on up to 98,304 (out of 163,840) processors. Dedicated test time on the entire facility is rare, so all tests below were done with the background noise of activity from other jobs running on other processors. Nonetheless, the trends indicated are fairly clear, and we expect that they will be verifiable in future tests in a controlled, dedicated environment. We have made measurements in both areas of the proposed collective IO primitives (denoted as CIO throughout this section), such as input data distribution, and output data collection. We also applied the collective IO primitives to a molecular dynamics docking application at up to 96K processors.
6.1 Input Data Distribution
Our first set of results investigated how effectively compute nodes can read data from the IFSs (over the torus network), examining various data volumes and various IFS/LFS ratios. We used the lightweight Chirp file system [Thain+2008] and the FUSE interface to read files from IFS to LFS. Figure 11 shows higher aggregate performance with larger files, and with higher ratios, with the best IFS performance reaching 162 MB/s for 100 MB files and a 256:1 ratio. However, as the bandwidth is split between 256 clients, the per-node throughput is only 0.6 MB/s. Computing the per-node throughput for the 64:1 ratio yields 2.3 MB/s, a significant increase. Thus, we conclude that a 64:1 ratio is good when trying to maximize the bandwidth per node. Larger ratios reduce the number of IFSs that need to be managed; however, there are practical limits that prohibit these ratios from being extremely large. In the case of a 512:1 ratio and 100 MB files, our benchmarks failed due to memory exhaustion when 512 compute nodes simultaneously connected to 1 compute node to transfer the 100 MB file. This needs further analysis.
Our next set of experiments used the lightweight MosaStore file system [Al-Kiswany+2007] to explore how effectively we can stripe LFSs to form a larger IFS. Our preliminary results in Figure 12 show that as we increase the degree of striping we get significant increases in aggregate throughput, up from 158 MB/s to 831 MB/s. The best performing configuration was 32 compute nodes aggregating their 2GB-per-node LFSs into a 64 GB IFS. This aggregation not only increases performance, but also allows compute nodes to keep their IO relatively local when working with large files that do not fit in a single compute node 2GB RAM-based LSF.
Our final experiment for the input data distribution section focused on how quickly we can distribute data from GFS to a set of IFSs, or potentially to LFSs. As in our previous experiment, we use Chirp (see Figure 13). Chirp has a native operation that allows a file (or set of files) to be distributed to a set of nodes over a spanning tree of copy operations. The spanning tree has the benefit of requiring fewer data transfers: log(n) instead of n, where n is the number of nodes.
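The round structure of such a spanning tree is easy to sketch: in each round every node that already holds the file forwards it to one new node, so n nodes are covered in ceil(log2(n)) rounds rather than n sequential copies from GFS. The scheduler below is an illustrative Python sketch, not Chirp's implementation.

    import math

    def broadcast_schedule(n):
        # Returns, per round, the list of (source, destination) copy pairs.
        have, rounds = [0], []
        while len(have) < n:
            step = len(have)
            pairs = [(src, src + step) for src in have if src + step < n]
            rounds.append(pairs)
            have += [dst for _, dst in pairs]
        return rounds

    # e.g., reaching 4096 nodes takes ceil(log2(4096)) = 12 rounds
    assert len(broadcast_schedule(4096)) == math.ceil(math.log2(4096))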
In the case of a naïve data distribution in which compute nodes read data directly from GFS (GPFS in our case, as noted in the figure), computing the aggregate throughput is straightforward: throughput = nodes * dataSize / workloadTime. For the spanning tree distribution, computing the actual throughput is problematic since the number of transfers is lower than in the naïve method. To make the comparison fair, we compute throughput for the spanning tree distribution with the same formula as for the naïve data distribution, although the actual network traffic would have been significantly less. We believe this is the correct way to compare the two approaches, as it emphasizes the time to complete the workload. On up to 4K processors, GPFS achieves 2.4 GB/s at the largest scale (2.4 MB/s per node). This is the peak rated performance for the file system we tested (/home). However, the spanning tree approach achieves an equivalent of 12.5 GB/s on 4K processors. We plan to explore the performance of the spanning tree distribution at larger scales to find the torus network saturation point. We expect to achieve at least one order of magnitude better performance (for distributing a set of files to many compute nodes) at large scales when using the spanning tree approach as opposed to the naïve approach, which reads each file from GPFS directly.
6.2 Output Data Collection
Our second goal for the collective IO primitives was to support the aggregation and transfer of many files from multiple LFSs or IFSs to the GFS. When writing from many compute nodes directly to GPFS (the GFS on the BG/P), care must be taken to avoid locking contention on metadata. One way to avoid this problem is to ensure that each compute node writes files to a unique directory. It is desirable to have as few clients as possible writing to GFS concurrently, to limit any locking contention, to allow the largest buffer sizes, and to aggregate potentially many small files into larger ones. It is also desirable to make write operations as asynchronous as possible to allow the overlap of computing and data transfer from the compute node. To achieve all these desirable features, we have implemented an output data collector (CIO, which we previously discussed) that resides on an IFS and acts as an intermediate buffer space for output generated on compute nodes. We use a 64:1 LFS-to-IFS ratio, which significantly reduces the number of clients that write to GFS.
Our measurements (see Figure 14 and Figure 15) show that the CIO collector strategy yields close to the ideal efficiency when compared to compute tasks of the same length with no IO. For example, in Figure 14 we show the efficiency achieved with short tasks (4 seconds) that produce output files with sizes ranging from 1KB to 1MB. We see that CIO (the dotted lines) is able to achieve > 90% efficiency in most cases, and almost 80% in the worst case with larger files. In contrast, the same workload achieved only 10% to < 50% efficiency when using GPFS. We also observed an anomaly: a slight efficiency increase at the largest scale of 32K processors. One possible cause of this is that we reached the limit of Falkon dispatch throughput.
Figure 15 is similar to Figure 14, but uses 32-second tasks. We see a similar pattern, in which CIO achieves 90% efficiency, while GPFS achieves almost 90% efficiency with 256 processors but less than 10% on 96K processors.
We also extract from these experiments the achieved aggregate throughput (shown in Figure 16). We limit this plot to the 1 MB case for readability. Notice the extremely poor GPFS write performance as the number of processors increases, peaking at only 250 MB/s. The CIO throughput is almost an order of magnitude higher, peaking at 2100 MB/s, and is within a few percent of the ideal case (tasks with the same duration, but with only local IO to RAM-based LFS, labeled 4sec+RAM and 32sec+RAM).
6.3 Application Evaluation
We have shown significant performance and scalability improvements for synthetic data-intensive workloads. To determine how these improvements translate into real application performance, we evaluated the utility of collective IO on a molecular dynamics workflow which screens candidate drug compounds against metabolic protein targets using the DOCK6 application [DOCK] to simulate the "docking" of small molecules to the "active sites" of large macromolecules. A compound that interacts strongly with a receptor, such as a protein molecule, associated with a disease, may inhibit its function and thus act as a beneficial drug. In this application run, a database of 15,351 compounds was screened against nine proteins that perform key enzymatic functions in the metabolism of bacteria and humans.
The molecular dynamics docking workflow has 3 stages: 1) read input, compute the docking, and write output; 2) summarize, sort, and select results; and 3) archive results. In our tests, the DOCK6 invocations averaged 10KB of output every 550 seconds.
In the simple case where we use GFS, the input data of stage 1 is read from GFS to LFS, the application reads from LFS and writes its output to LFS, and finally the output is synchronously copied back to GFS. Stage 1 is parallelized to process each DOCK invocation on a separate processor core. Both stage 2 and stage 3 were originally a single process application that would run on a login node and access input data directly from GFS. In the case of using CIO, the stages are a bit different: stage 1 writes the output data from LFS to IFS asynchronously; stage 2 is parallelized across all processors and works on IFS; stage 3 copies the data from IFS to GFS. Figure 17 shows the breakdown of the 3 stages, and where time was being spent, for a total of 1412 seconds for CIO and 2140 seconds for GPFS. The first stage is negligibly faster with CIO (1.06X), and the third stage is 1.5X faster, but the second stage is 11.7X faster with 694 seconds being reduced down to 59 seconds. Stage 2 summarizes, sorts and filters the results, which CIO can handle much better in a distributed fashion (as opposed to the centralized GFS solution) with data accesses localized to IFS instead of GFS. In order to see the effects of CIO at larger scale, we also ran the DOCK6 stage 1 with 135K tasks on 96K processors. The net result was a 1.12X speedup using CIO (1772 seconds) as compared to GPFS (1981 seconds) -a negligible speedup, as we expected for this compute-bound workload.
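For reference, the reported speedups can be recomputed directly from the timings quoted above (a quick arithmetic check using only numbers taken from the text):

    print(round(2140 / 1412, 2))  # overall 8K-processor run: ~1.52x with CIO
    print(round(694 / 59, 1))     # stage 2: ~11.8x (reported as 11.7X)
    print(round(1981 / 1772, 2))  # 96K-processor stage 1: ~1.12x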
7 Future Work
The prototype implementation we describe here, while in its early stages of development, has been sufficient to make a reasonable assessment of the performance and usability potential of a file-based collective IO model that can handle at least O(100K) BG/P processors. Our next major focus will be to integrate the model into the Swift parallel programming environment [Zhao+2007], so that BG/P users can benefit from this higher-level programming model without explicitly programming the collective IO operations.
We intend to investigate algorithmic questions and enhancements, such as determining the optimal ratio of IFS nodes to compute nodes for various workloads; determining when we can effectively use the compute nodes of IFS data hosts for computing in addition to file serving; automatically optimizing input data placement on LFSs vs. IFSs; determining if we can learn from the IO patterns of previous runs where best to locate a given input or output file; finding algorithms for automating output data caching in IFSs and LFSs for re-processing by subsequent workflow stages; and determining when data on IFSs/LFSs can be removed.
Lower-level implementation issues we intend to explore include the use of the tree network to enhance the performance of input broadcast, and comparing the performance and reliability benefits of MosaStore, Chirp, and native Linux approaches to IFS striping. We also intend to explore how the random access capabilities of archive formats such as xar can enable parallel reading and parallel archive creation, and what role compression should play in the output process.
We will continue to drive this work with an expanding measurement effort, on both synthetic and actual applications. We are particularly interested in measuring the behavior of applications (such as BLAST runs on large databases) that will benefit greatly from striped IFS capabilities.
8 Conclusion
We have identified, characterized, and started to address a critical problem for enabling the use of petascale supercomputers by a far larger community of scientific applications and users: how to enable efficient file-based IO by large numbers of independent parallel tasks, as required by many-task computing applications involved in loosely coupled parallel programming.
Our results indicate that it is possible to adapt principles of collective data operations to the world of parallel scripting linked by file interchange. While our results are preliminary, and are based on simple prototypes, they suggest that collective IO primitives, when effectively integrated into parallel scripting programming systems and languages (such as Falkon and Swift) can yield excellent performance on 100,000 processors -and likely well beyond -while greatly enhancing scientific programming productivity.
Figure 2: Abstract application program IO profile.
Figure 5: ZOID and ZeptoOS. Figure 6: ZOID/ZeptoOS and BG/P Torus Network.
Figure 7: Logical Distributor/Collector Design. Figure 8: Allocation and mapping of compute nodes to IFS servers: 2:64 ratio (top) and 4:64 ratio (bottom).
Figure 10: Output staging: synchronous (top, without collector); asynchronous (bottom, with collector).
Figure 9: Data flow on BG/P.
Figure 11: Read performance while varying the ratio of LFS to IFS from 64:1 to 512:1 using the torus network.
Figure 12: Read performance, varying the degree of striping of data across multiple nodes from 1 to 32 using the torus network. Figure 13: CIO distribution via spanning tree over the torus network vs. GPFS over the Ethernet and tree networks.
Figure 15: CIO vs. GPFS efficiency for 32-second tasks, varying data size (1KB to 1MB) for 256 to 96K processors.
Figure 16: CIO collection write performance compared to GPFS write performance on up to 96K processors.
Figure 17: DOCK6 application summary with 15K tasks on 8K processors, comparing CIO with GPFS.
Figure 14: CIO vs. GFS efficiency for 4-second tasks, varying data size (1KB to 1MB) on 256 to 32K processors.
ACKNOWLEDGEMENTS
This work was supported in part by the National Science Foundation. The authors would like to thank Samer Al-Kiswany of the University of British Columbia for assistance with MosaStore, Kazutomo Yoshii of Argonne National Laboratory for assistance with ZeptoOS, the Argonne Leadership Computing Facility team for their tremendous support in our use of the Intrepid BG/P, and Mike Kubal of the Computation Institute for providing and explaining the molecular docking workflow.
[ALCF] Argonne Leadership Computing Facility, http://www.alcf.anl.gov
[Al-Kiswany+2007] S. Al-Kiswany, M. Ripeanu, S. Vazhkudai, "A Checkpoint Storage System for Desktop Grid Computing", Networked Systems Lab, U. of British Columbia, Tech Report NetSysLab-TR-2007-04, 2007.
[Bester+1999] J. Bester, I. Foster, C. Kesselman, J. Tedesco, and S. Tuecke, "GASS: A data movement and access service for wide area computing systems", IOPADS 99: Proceedings of the Sixth Workshop on IO in Parallel and Distributed Systems, Atlanta, GA, pp. 78-88, 1999.
[BGP] IBM Blue Gene team, "Overview of the IBM Blue Gene/P Project", IBM Journal of Research and Development, vol. 52, no. 1/2, pp. 199-220, Jan/Mar 2008.
[DOCK] Overview of DOCK, http://dock.compbio.ucsf.edu/Overview_of_DOCK/index.htm
[FUSE] FUSE: File System in Userspace, http://fuse.sourceforge.net/
[Iskra+2008] K. Iskra, J. W. Romein, K. Yoshii, and P. Beckman, "ZOID: IO-forwarding infrastructure for petascale architectures", 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pp. 153-162, Salt Lake City, UT, Feb. 2008.
[Khanna+2006] G. Khanna, N. Vydyanathan, U. V. Catalyurek, T. M. Kurc, S. Krishnamoorthy, P. Sadayappan, J. H. Saltz, "Task Scheduling and File Replication for Data-Intensive Jobs with Batch-shared IO", Proceedings of the 15th IEEE International Symposium on High-Performance Distributed Computing (HPDC-15), pp. 241-252, June 2006.
[Khanna+2007] G. Khanna, U. V. Catalyurek, T. M. Kurc, P. Sadayappan, J. H. Saltz, "Scheduling File Transfers for Data-Intensive Jobs on Heterogeneous Clusters", Proceedings of Euro-Par 2007 Parallel Processing, pp. 214-223, August 2007.
[MPI] Message Passing Interface Forum, "MPI-2: Extensions to the Message-Passing Interface", http://www.mpi-forum.org/docs/mpi-20-html/mpi2-report.html
[MPI-IO] K. Coloma, A. Ching, A. Choudhary, W. Liao, R. Ross, R. Thakur, L. Ward, "A New Flexible MPI Collective IO Implementation", International Conference on Cluster Computing, 2006.
[NBD] Network Block Device, http://nbd.sourceforge.net/
[Ousterhout1998] J. Ousterhout, "Scripting: Higher-level programming for the 21st century", IEEE Computer, Mar. 1998.
[Raicu+2007] I. Raicu, Y. Zhao, C. Dumitrescu, I. Foster, M. Wilde, "Falkon: a Fast and Light-weight tasK executiON framework", IEEE/ACM Supercomputing 2007.
[Raicu+2008] I. Raicu, Z. Zhang, M. Wilde, I. Foster, P. Beckman, K. Iskra, B. Clifford, "Toward Loosely Coupled Programming on Petascale Systems", to appear, IEEE/ACM Supercomputing 2008.
[Schmuck+2002] F. Schmuck, R. Haskin, "GPFS: A Shared-Disk File System for Large Computing Clusters", Proceedings of the USENIX FAST02 Conference on File and Storage Technologies, Monterey, California, 2002.
[Thain+2005] D. Thain, T. Tannenbaum, and M. Livny, "Distributed Computing in Practice: The Condor Experience", Concurrency and Computation: Practice and Experience, vol. 17, no. 2-4, pp. 323-356, Feb-Apr 2005.
[Thain+2008] D. Thain, C. Moretti, and J. Hemmes, "Chirp: A Practical Global File System for Cluster and Grid Computing", Journal of Grid Computing, Springer, accepted for publication in 2008.
[Thakur+1999] R. Thakur, W. Gropp, E. Lusk, "Data Sieving and Collective IO in ROMIO", 7th Symposium on the Frontiers of Massively Parallel Computation, 1999.
[TUN] Universal TUN/TAP Driver, http://vtun.sourceforge.net/tun
[XAR] XAR - eXtensible ARchiver Project home page, http://code.google.com/p/xar/
[ZeptoOS] The ZeptoOS Project, http://www.zeptoos.org/
[Zhao+2007] Y. Zhao, M. Hategan, B. Clifford, I. Foster, G. von Laszewski, I. Raicu, T. Stef-Praun, M. Wilde, "Swift: Fast, Reliable, Loosely Coupled Parallel Computation", IEEE Workshop on Scientific Workflows 2007.
| [] |
[
"Anchoring-mediated stick-slip winding of cholesteric liquid crystals",
"Anchoring-mediated stick-slip winding of cholesteric liquid crystals"
] | [
"Weichao Zheng \nDepartment of Chemistry\nPhysical and Theoretical Chemistry Laboratory\nUniversity of Oxford\nOX1 3QZOxfordUnited Kingdom\n"
] | [
"Department of Chemistry\nPhysical and Theoretical Chemistry Laboratory\nUniversity of Oxford\nOX1 3QZOxfordUnited Kingdom"
] | [] | The stick-slip phenomenon widely exists in contact mechanics, from the macroscale to the nanoscale. During cholesteric-nematic unwinding by external fields, there is controversy regarding the role of planar surface anchoring, which may induce discontinuous stick-slip behaviors despite the well-known continuous transitions observed in past experiments. Here, we observe three regimes, namely constrained, stick-slip, and sliding-slip, under mechanical winding with different anchoring conditions, and measure the responded forces by the Surface Force Balance. These behaviors result from a balance of cholesteric elastic torque and surface torque, reminiscent of the slip morphology on frictional substrates [T. G. Sano et al., Phys. Rev. Lett. 118, 178001 (2017)], and provide evidence of dynamics in static rotational friction. | null | [
"https://export.arxiv.org/pdf/2305.06187v1.pdf"
] | 258,588,156 | 2305.06187 | 0919e939422de2a1fbd24adcd8dc98b45d9cd6a9 |
Anchoring-mediated stick-slip winding of cholesteric liquid crystals
Weichao Zheng
Department of Chemistry
Physical and Theoretical Chemistry Laboratory
University of Oxford
OX1 3QZOxfordUnited Kingdom
Anchoring-mediated stick-slip winding of cholesteric liquid crystals
The stick-slip phenomenon widely exists in contact mechanics, from the macroscale to the nanoscale. During cholesteric-nematic unwinding by external fields, there is controversy regarding the role of planar surface anchoring, which may induce discontinuous stick-slip behaviors despite the well-known continuous transitions observed in past experiments. Here, we observe three regimes, namely constrained, stick-slip, and sliding-slip, under mechanical winding with different anchoring conditions, and measure the responded forces by the Surface Force Balance. These behaviors result from a balance of cholesteric elastic torque and surface torque, reminiscent of the slip morphology on frictional substrates [T. G. Sano et al., Phys. Rev. Lett. 118, 178001 (2017)], and provide evidence of dynamics in static rotational friction.
In broad soft matter areas, including turbulence [1], micro/nanofluidics [2], and yield stress materials [3], boundary conditions are important for material properties and performance. Similarly, in liquid crystals, surface anchoring also plays a crucial role in the order parameter, the temperature of the nematic-isotropic phase transition [4], and the response of molecules to external fields [5], especially in confined geometries such as liquid crystal displays. With strong anchoring, there exists a critical threshold voltage that orients the nematic molecules, called the Fréedericksz transition [6], below which molecules are still. However, there is a debate about whether planar anchoring affects the cholesteric-nematic unwinding transition by external fields. Decades ago, it was predicted [5,7] and proven [8][9][10][11] that magnetic or electric fields can continuously unwind cholesterics to nematics, but the situation with different planar anchoring conditions was not explicitly addressed by experiments. Some studies [7,12,13] suggested that the continuous cholesteric-nematic transition is only applicable for bulk samples in which the surface anchoring is negligible. By varying the anchoring strength in confinement, rich behaviors, such as stick-slip or step-wise transitions, were predicted to happen under external stimuli [7,[13][14][15][16][17][18][19][20][21][22][23][24][25][26][27], such as temperature, light, stress, magnetic and electric fields. Particularly, a recent study [28] reported that if the easy axis on one surface rotates, chiral nematics may show three regimes, including free twist, stick-slip, and constrained winding, as a balance of twist elastic torque and surface torque [20,[28][29][30]. Although some evidence of discontinuous transitions has been presented [12,[31][32][33][34][35][36][37], different mechanisms were still discussed, probably due to the experimental precision and the complexity of surface anchoring, and none of the models could be directly applied to explain the observations in this work.
Here, we use the Surface Force Balance (SFB) to simultaneously measure the optical and mechanical responses of cholesterics along the helical axis under various boundary conditions. Desiccated cholesterics were confined between two freshly-cleaved muscovite mica surfaces that were glued onto crossed cylinders. In the beginning, a strong planar anchoring was obtained, but anchoring strength decayed over time mainly due to the adsorption of water from the ambient environment [38,39]. Therefore, three different regimes were observed during experiments, resulting from the decayed frictional surface torque. Furthermore, the hysteresis of twist transitions was observed during the retraction and approach of surfaces in all three regimes.
Three regimes.--Cholesteric layers, with a layer thickness of half-pitch p = 122 nm, were compressed in the SFB [40] [Fig. 1(a)] with a cylinder radius of R. With time evolution, three regimes of the measured force profiles were observed [Fig. 1(b)]. In the first regime, the force generated by the constrained cholesterics initially started from zero and increased with increasing strain to 65%, peaking at 14 mN/m before the surface jumped into contact position, and all the cholesterics were squeezed out together. In the second regime, stick-slip jumps of the surface occurred after the force accumulated to 1.5 mN/m with about 30% strain, and finally, the surfaces jumped to contact. The number of jumps corresponded to five integral layers and a non-integral layer since the easy axes on mica surfaces were not parallel [40]. Sometimes, multiple-layered jumping events were observed in this regime [Fig. 1(b) and Fig. S1]. In the last regime, the surface jumped periodically with a wavelength equal to the half-pitch without a large deformation of cholesterics, and the last few layers were difficult to squeeze out, resulting in large forces. It is worth noting that there was a constant background force of around 1 mN/m in this regime.
The agreement between the measured forces in the first regime and the calculated harmonic elastic forces [Fig. 1(c)] is good, indicating that the anchoring strength in the first regime is strong. The slope of the jump-in is comparable to the spring constant, which manifests that the spring instability dominates the jumping process [40]. The elastic deformation almost without dissipation [41] works like an ideal spring, neglecting the effect of gravity at the micro/nanoscale. This deformation, which can last for more than one hour without dissipation if the surface stops moving [Fig. S2], is truly elastic rather than viscous.
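For reference, a minimal sketch of the continuum expressions behind such fits, assuming a uniform twist profile and the Derjaguin approximation for crossed cylinders (the paper's own Equations A4 and B3 are given in its appendices and may differ in detail):

\[
\frac{F(D)}{R} = 2\pi E(D), \qquad
E(D) = \frac{K_{22}}{2}\, D \left( \frac{\Phi}{D} - q_0 \right)^{2}, \qquad
q_0 = \frac{\pi}{p},
\]

where D is the surface separation, Φ is the total twist of the confined stack (approximately Nπ for N half-turns, plus a fixed offset set by the angle between the two easy axes), and p = 122 nm is the half-pitch. Each fixed N yields one harmonic branch of F(D), and a layer jump changes N by an integer.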
In the second regime, the force profile can also be fitted by the harmonic elastic forces calculated using Equation A4 with various layers of cholesterics [Fig. 1(d)]. The crossing points of the harmonic forces fall at distances equal to integral multiples of the quarter-pitch [Fig. S3]. Notably, the slope of forces during the jump is a little larger than the theoretical one but comparable to the spring constant, indicating that the jumping process is a balance of both viscoelastic forces and spring force; this is shown in Fig. 1(d), where jumping distances are smaller than theoretical values. This deviation of jumping distances could be due to the expansion of dislocation defects that store elastic energy. The effect of defects is discussed in the Supplemental Material.
Discussion.--The compression ratio that cholesterics can sustain with time decreases from the first to the third regime, indicating a decrease in anchoring strength after the adsorption of water from the ambient environment [38,39]. In the third regime, surfaces are difficult to compress to contact, which supports the assumption that surfaces are changing with time. There are several reasons why a hard wall is encountered before contact. Firstly, the adsorbed water dissolves and accumulates potassium ions from mica surfaces to the contact position, which increases the electrostatic repulsion. Secondly, liquid crystal molecules grow epitaxially with time [42]. Thirdly, contaminants from the ambient air adsorb to the surface.
These three regimes emerge with the change of anchoring strength. Considering the longer timescale, more regimes might appear. For example, if the adsorbed water changes the direction of the easy axis on the mica surface [38,39], the behaviors could be different. Finally, if the mica surfaces become totally homeotropic, the pitch axis will be parallel with the surface, causing fingerprint textures and more isotropic-like optics.
Surface torque.--The measured forces follow the twist elastic theory very well with manual input of the twist angle, but three different regimes varying with anchoring strength are obtained, namely constrained, stick-slip, and sliding-slip. What is the mechanism that determines the critical threshold of the jump in different regimes?
When cholesterics are confined between two plates, the elastic torque is balanced by the surface torque, which includes the surface anchoring and surface viscosity [28-30]. For strong anchoring, molecules deviate very slowly from the easy axis, so the torque from surface viscosity is negligible. At moderate anchoring, molecules slide to a deviated angle at a larger speed; therefore, both surface anchoring and surface viscosity balance the elastic torque. Fig. 2(a and b) shows that there exists a constant threshold of the compression ratio Dc/D0, about 35% and 75% in the first and second regimes respectively, for the surface to sustain the elastic stress at a certain anchoring condition, no matter how many layers are compressed.
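This layer-number independence follows from Equation B3 (Appendix B), which fixes the threshold ratio Dc/D0. As a rough cross-check, the sketch below inverts B3 to obtain the critical torque implied by the quoted threshold ratios; the resulting first-regime value is of the same order as the Γc ≈ 0.23 mN/m obtained later from Fig. 3(a).

```python
import numpy as np

p = 122e-9       # half-pitch (m)
q0 = np.pi / p   # natural rotation rate (rad/m)

# Equation B3 at the threshold: Gamma_c = K22 * q0 * (D0/Dc - 1)
for K22, ratio, label in [(3.8e-12, 0.35, "first regime"),
                          (6.0e-12, 0.75, "second regime")]:
    Gamma_c = K22 * q0 * (1.0 / ratio - 1.0)
    print(f"{label}: Gamma_c ~ {Gamma_c * 1e3:.2f} mN/m")
```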
This constant ratio manifests that there is a threshold anchoring torque Γc that is analogous to the breakaway friction torque [43], a concept from rotational friction. When the anchoring is strong, the frictional torque can sustain a large elastic torque, such that cholesteric layers will not jump until the threshold is reached.
The anchoring torque in the first and second regimes is plotted in Fig. 2(c), where the first regime and second regime fall on two slopes of calculation based on Equation B3 with twist elastic constant K22 = 3.8 and 6 pN, respectively. If the force profile in Fig. 1(c) is carefully examined, one can see that the slope at small compression is actually higher than the calculation with K22 = 3.8 pN. This may be because the mica surfaces on the cylindrical lenses are not large enough, such that at small compression, the forces mainly generated near the contact position are free from the effect of the mica areas, but at large compression, the mica areas start to limit the force responses, producing smaller forces. From the fitted elastic constant, we can estimate the effective coverage of mica on the lenses to be 2/3. From Fig. 3(a), the slope and intersection obtained from the trend line are used to calculate the anchoring strength by Equation B6. Therefore, the critical anchoring torque is Γc ≈ 0.23 mN/m and the anchoring strength is W ≈ 0.15 mN/m with K22 = 6 pN and half-pitch p = 122 nm. The deviated angle on one surface can be calculated as (Φ0 − Φ)/2 ≈ 0.49π, where Φ0 is the original twist angle and Φ is the instant twist angle, which means the molecules on each surface deviate around 90° from the easy axis at the jump threshold. Notably, no mathematical solution was found with the Rapini-Papoula potential [5], but anchoring potentials of other forms may also be feasible; the critical torque and deviated angle are independent of the form of the anchoring potential. The exact anchoring potential could be further confirmed by optical observation of the deviated angle [44]. With the obtained anchoring strength and deviated angle, the measured forces in the first regime [Fig. 1(c)] can be better fitted by the elastic force by taking into account the anchoring energy [Fig. S4(a)]. Similarly, the critical anchoring torque and deviated angle for the second regime are calculated from the slope and intersection in Fig. 3(b). These values are used to predict the positions where consecutive jumps occur, as shown in Fig. 3(c). The critical jumping distances fit the experimental data very well. However, the measured forces [Fig. 1(d)] are worse fitted by the elastic theory considering the anchoring energy [Fig. S4(b)]. It is as if the surface torque is correct but the composition of the torque is not a pure anchoring torque. Possibly, the surface viscosity may start to become important in this regime with medium anchoring strength. Alternatively, the 2/3 coverage of mica on the lenses may cause slip in this regime after water adsorption, since the premier critical compression ratio [Fig. 1(d)] is similar to the compression ratio where K22 changes from a larger value to 3.8 pN in the first regime, as shown in Fig. 1(c). Last but not least, the surface torque may be balanced by the viscoelastic torque in the stick-slip regime.
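The slope-and-intercept inversion can be made explicit with the linear form of Equation B6 (reconstructed in Appendix B from Equations B4-B5). The sketch below round-trips the quoted values Γc ≈ 0.23 mN/m and W ≈ 0.15 mN/m through B6 and recovers the deviated angle Γc/W ≈ 0.49π; this is a consistency check, not the paper's fitting code.

```python
import numpy as np

K22, p = 6e-12, 122e-9
q0 = np.pi / p
Gamma_c, W = 0.23e-3, 0.15e-3  # N/m, values quoted in the text

# Equation B6 as a line Dc = a*D0 + b
a = K22 * q0 / (K22 * q0 + Gamma_c)
b = -2 * K22 * Gamma_c / (W * (K22 * q0 + Gamma_c))

# invert the slope/intercept back to the anchoring parameters
Gc_fit = K22 * q0 * (1 - a) / a
W_fit = -2 * a * Gc_fit / (b * q0)
dev = Gc_fit / W_fit  # deviated angle per surface (rad), Equation B5
print(f"Gamma_c = {Gc_fit * 1e3:.2f} mN/m, W = {W_fit * 1e3:.2f} mN/m")
print(f"deviated angle = {dev / np.pi:.2f} pi")
```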
In the second regime, no defects are observed stretching on the surface during either approach or retraction, indicating that the defects are in the bisector of the surfaces and the polar anchoring strength [45] is larger than (2√3/8)√(B K33) ≈ 0.4 mN/m, where K33 = 27.5 pN is the bend elastic constant and B is the dilation term. It seems reasonable that the azimuthal anchoring strength is one or two orders of magnitude smaller than the polar anchoring [46]. Then the polar anchoring strength in the first regime would be very large.
For weak anchoring, the anchoring torque is negligible [29]. Therefore, the elastic torque is mainly balanced by the surface viscosity. As a result, the surface viscosity can be estimated as η_s = 1.83 × 10−4 Pa·s·m, and the corresponding viscous force is about 0.8 mN/m at a distance D0 = 1000 nm (see Supplemental Material), which is very close to the background force in the third regime [Fig. 1(b)]. This background force may be related to the commonly observed background forces with liquid crystals in the SFA [47,48] (see Supplemental Material). In fact, the discontinuous twist transition has been attributed to surface anchoring by most past studies [7,12-29,31-35], among which only some [20,28,29,32] used the concept of surface torque to explain the mechanism.
However, many of them [14,16-18,22-26,33,35] differentiated the anchoring energy G with respect to the twist angle, dG/dΦ, which is actually the form of a torque. The surface torque has long been adopted to describe the surface forces imposed on liquid crystals [5,49-55], but this concept does not seem to be widely used in the liquid crystal community. In a recent study [37] explaining the discontinuous transition with the energy barrier from dislocation defects, the integration range of the equations for calculating the nucleation energy should not be the same for different layers. Therefore, the conclusions about the energy barrier were untenable.
Hysteresis.--Fig. 4 shows that hysteresis of the twist angle between retraction and approach exists in all three regimes and decreases with time evolution. The twist angles can be further confirmed by the 4x4 matrix simulation [40,56] [Fig. S5]. In particular, multiple-layer jumping events occur during both approach and retraction [Fig. 4(a and c)]. Fig. 4(f) shows that the retraction profiles are the same in the first two regimes, and a delayed jump resulting from the viscous stretch on the surface (see Movie S1 and Supplemental Material) is observed in the third regime. Notably, the twist angle profile during the approach in the third regime is coincident with the profile during retraction in the first two regimes, as shown in Fig. 4(d), indicating negligible anchoring torque during the approach. Most of the jumping points occur at integral quarter-pitch distances, but more uncertainties are observed at small distances. During retraction and approach, the mechanical responses are very different, showing hysteresis, which can be understood by analogy to fracture in solid materials during tension and compression. However, given the complexity of the analogy from solids to liquid crystals, this topic will be discussed in a separate paper.
In conclusion, three regimes were observed in cholesterics during mechanical compression in the SFB. The elastic torque of cholesterics is balanced by the surface torque, which consists of an anchoring torque and a viscous torque. In the constrained regime with strong anchoring, the anchoring torque dominates, while the viscous torque dominates in the sliding-slip regime with weak anchoring. In the stick-slip regime, the anchoring torque and the viscous torque, as well as the mica coverage, may all affect the stick-slip. This study provides a new method based on the critical surface torque to measure strong anchoring strength and deviation. The surface torque, i.e., the frictional torque in rotational friction, elucidates the dynamics of static friction [57,58], as evidenced by the deviation of the anchoring angle and the hysteresis of the twist angle. This study sheds light on the understanding of boundary effects on permeative flows [59,60], friction, yield stress materials [3,61], adhesion, and biomechanics.

APPENDIX A: FREE ENERGY

In cholesterics, the free energy per unit area is formed by the elastic energy and the anchoring energy from both surfaces. The anchoring potential is not a well-defined term; thus, a general parabolic form is given below,
G = ∫₀^D (1/2) K22 (Φ′ − q0)² dz + W [(Φ0 − Φ)/2]², (A1)
where D is the closest surface separation between the two crossed cylinders, K22 is the twist elastic constant, Φ′ = dΦ/dz = Φ/D is the molecular rotation rate at a distance z with a total twist angle Φ, which is constant for a uniform sample, W is the anchoring strength, Φ0 is the original twist angle, and q0 is the natural molecular rotation rate at relaxation. By ignoring the anchoring energy with a strong boundary, the free energy becomes,
G = (1/2) K22 D (Φ/D − q0)². (A2)
With strong anchoring, the twist angle Φ ≈ Φ0 = q0 D0 keeps the original rotation rate at a starting distance D0 with n layers. Thus, the free energy and the generated force F under the Derjaguin approximation are written as,

G = (1/2) K22 q0² (D0 − D)²/D, (A3)

F = 2πR G = πR K22 q0² (D0 − D)²/D. (A4)

APPENDIX B: SURFACE TORQUE

The torque balance can be written as,

K22 (Φ/D − q0) = W (Φ0 − Φ)/2 − η_s dΦ/dt, (B1)
where W is the anchoring strength, η_s is the surface viscosity, and t is the time.
With strong anchoring, the surface viscosity and the anchoring deviation are neglected here. The elastic torque Γe is mainly balanced by the anchoring torque Γa,

Γe = K22 (Φ/D − q0) = Γa = W (Φ0 − Φ)/2, (B2)

which, with Φ ≈ Φ0 = q0 D0, becomes

K22 q0 (D0/D − 1) = W (Φ0 − Φ)/2. (B3)
For a more rigorous calculation that considers anchoring deviations, the surface distance D in Equation B2 is calculated as,

D = K22 Φ / [K22 q0 + W (Φ0 − Φ)/2]. (B4)
At the critical torque threshold Γc, the critical twist angle Φc and the critical surface distance Dc are calculated below,

Φc = Φ0 − 2Γc/W, (B5)

Dc = K22 Φc / (K22 q0 + Γc) = [K22 q0 / (K22 q0 + Γc)] D0 − 2K22 Γc / [W (K22 q0 + Γc)]. (B6)

Equation B6 is linear in D0, so the slope and intercept of the measured Dc(D0) trend line determine Γc and W.
Supplemental Material
I. FORCE MEASUREMENTS

The speed of the motor in the SFB is usually calibrated with a baseline at large distances without encountering any forces, although there is a control signal roughly showing the speed. However, with cholesterics inside the SFB, there is always an elastic background that disturbs the precise calibration of the speed. Therefore, the speed of the motor was estimated within reasonable ranges in the first regime (mostly 1.6-1.8 nm/s for the motor signal 0.02 VE) and the second regime (1.3-1.8 nm/s for the motor signal 0.02 VE) by differentiating the distance profile. Then the obtained force profiles were compared with the theoretical forces calculated using Equation A4 with the intrinsic twist elastic constant of cholesterics, K22 = 6.18 pN, reported elsewhere [1]. It is worth noting that the main argument in this work, i.e., the surface torque, is independent of the force calibration.

Fig. S2(a) shows force responses as a function of time in the first regime with 50% strain but before reaching the critical jumping distance. At t ≈ 20 mins, the motor stopped, and it re-approached at t ≈ 90 mins, between which the surface distance was sustained by the cholesterics with a deviation of less than 40 nm, mainly due to mechanical and thermal drifts, as shown in Fig. S2(b). After the re-approach, the cholesterics were further compressed before all layers were squeezed out, and the surfaces jumped into contact. The long-time study demonstrates that the surfaces experienced elastic forces rather than dissipative viscous forces.
II. EFFECT OF DEFECT ENERGY
When the crossed cylinders are close enough, the geometry is similar to a sphere with a radius R approaching a flat surface. Therefore, the height h at a radius r from the contact point is calculated by,

h − D = r²/(2R), (S1)
where D is the closest surface separation. All circular defects confined in the SFB stay between integral cholesteric layers. If, at the contact position (with 0 layers confined), the radius of the innermost dislocation defect with a height p/2 (p is the half-pitch) is defined as r0,1, the defect energy is the sum over all N defects, 2πμ Σ(i=1..N) r0,i, with the line tension μ ≈ K22. Similarly, with m relaxed layers confined in the SFB, the defect energy can be calculated as 2πμ Σ(i=m+1..N) rm,i. Using Equation S1, the relationship rm,m+i = r0,i is found. As a result, the difference of the defect energy between m and 0 layers at the relaxed state is 2πμ Σ(i=N−m+1..N) r0,i.
In the first regime, if 10 layers are compressed from the distance D = 10p to a critical jumping distance D = 4p, the twist elastic energy can be calculated by integrating Equation A4, E = 2.54 × 10−11 N·m. At distance D = 4p, the increase of the defect energy is difficult to calculate. However, we could estimate an upper bound of the increased defect energy at the contact position as 2πμ Σ(i=N−9..N) r0,i. The maximum radius of a defect r0,i is the radius of the lens area, rL = 5 mm, with which the maximum change of defect energy is 10 × 2πμ rL = 1.89 × 10−12 N·m, less than ten percent of the elastic energy.
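The two numbers can be reproduced with a few lines of Python; the lens radius R = 1 cm used for the Derjaguin force is an assumption (it is not quoted in this excerpt), chosen because it reproduces the stated elastic energy.

```python
import numpy as np

K22, p, R = 6e-12, 122e-9, 1e-2  # N, m, m (R assumed)
q0 = np.pi / p
D0 = 10 * p

# elastic energy: integrate the Derjaguin force (Equation A4) from 10p down to 4p
D = np.linspace(4 * p, 10 * p, 20001)
F = np.pi * R * K22 * q0**2 * (D0 - D) ** 2 / D
E_elastic = np.trapz(F, D)

# defect-energy upper bound: 10 circular defects of maximal radius 5 mm
mu, r_lens = K22, 5e-3  # line tension ~ K22; lens radius
E_defect = 10 * 2 * np.pi * mu * r_lens

print(f"elastic energy ~ {E_elastic:.2e} N m")  # ~2.54e-11
print(f"defect bound   ~ {E_defect:.2e} N m")   # ~1.89e-12, <10% of elastic
```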
In fact, the radius of the cholesteric sample is usually smaller than the radius of a cylindrical lens. Moreover, circular dislocations only form in a small area that is highly confined due to the special geometry of the crossed cylinders. Outside the confined region with large heights, defects are network-like without any circular order. We have shown in previous research [2] that the energy of the shrinking defects is negligible during the retraction and the Derjaguin approximation holds, which is consistent with the study [3] confining smectics in the SFA. Consequently, the experimental results in the past [3-5] and in this study were well fitted by theories without considering defects in similar geometries.
A rigorous description of the dislocation energy may be calculated by simulation in the future to calibrate the data precision.
III. ANCHORING ENERGY

From Equation B2, the twist angle Φ is a function of the surface distance D,
Φ = [Φ0 + (2K22/W) q0] / [1 + 2K22/(W D)]. (S2)
With the parabolic anchoring potential, the free energy can be calculated as,

G = (K22/2D)(Φ − q0 D)² + W [(Φ0 − Φ)/2]², with Φ given by Equation S2, (S3)

and the corresponding force profile is compared with the profile in the infinite-anchoring case, as shown in Fig. S4(a). The deviation is small, but indeed, the additional anchoring energy decreases the total forces, and the obtained elastic constant K22 = 4.2 pN is closer to the molecular property [1].
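A minimal numerical sketch of this comparison, assuming the Derjaguin relation F = 2πR G with an illustrative lens radius R = 1 cm and relaxed thickness D0 = 10p (neither value is quoted here):

```python
import numpy as np

K22, p, R, W = 4.2e-12, 122e-9, 1e-2, 0.15e-3  # R and D0 are assumptions
q0 = np.pi / p
D0 = 10 * p
Phi0 = q0 * D0

D = np.linspace(4 * p, 10 * p, 7)
Phi = (Phi0 + 2 * K22 * q0 / W) / (1 + 2 * K22 / (W * D))                  # Equation S2
G_fin = K22 / (2 * D) * (Phi - q0 * D) ** 2 + W * ((Phi0 - Phi) / 2) ** 2  # Equation S3
G_inf = K22 / (2 * D) * (Phi0 - q0 * D) ** 2                               # Equation A2, Phi = Phi0

for d, Ff, Fi in zip(D, 2 * np.pi * R * G_fin, 2 * np.pi * R * G_inf):
    print(f"D = {d * 1e9:6.0f} nm: F_finite = {Ff * 1e6:7.2f} uN, "
          f"F_infinite = {Fi * 1e6:7.2f} uN")
```

The finite-anchoring force falls slightly below the infinite-anchoring curve, consistent with Fig. S4(a).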
IV. SLIDING-SLIP REGIME WITH WEAK ANCHORING STRENGTH

For weak anchoring, the anchoring torque is negligible [6]. Therefore, the elastic torque is mainly balanced by the surface viscosity,

K22 (Φ/D − q0) = −η_s dΦs/dt = −μb l ω, (S4)
v = dD/dt, (S5)
ω = dΦs/dt, (S6)
η_s = μb l, (S7)

where μb is the boundary material viscosity, l is the interaction length on the boundary material, ω is the instant molecular rotation rate on the surface, and v is the instant surface velocity. From Equations B1 and B2, ω = dΦ/dt; thus Equation S4 becomes,

K22 (Φ/D − q0) = −η_s dΦ/dt. (S8)
In the third regime, the layers are squeezed one by one without large deformation. Therefore, Φ ≈ Φ0 = q0 D0 at a large distance. Using Equation S10 with a typical K22 = 6 pN, D0 = 1000 nm, D0 − D = 61 nm, and dD/dt = −2 nm/s, the surface viscosity is estimated as η_s = 1.83 × 10−4 Pa·s·m. If the interaction length on the boundary materials is around 10 nm, then the boundary material viscosity is μb = 1.83 × 10⁴ Pa·s, which is very large. The surface viscous torque in Equation S8 is integrated over Φ to calculate the surface viscous free energy, into which Equation S10 is substituted to give the viscous force, Equation (S12). The force induced by surface viscosity is estimated as 0.8 mN/m at D0 = 1000 nm, which is very close to the background force in the third regime in Fig. 1(b). As a result, the surface viscosity stretches the cholesteric layers and the defects (Movie S1), in addition to the regular elastic forces. For the first two regimes, the anchoring is strong enough to expel the defects to the center [7]. Therefore, there is no interaction between the surfaces and the defects. However, in the third regime, the weak anchoring attracts defects to the wall, generating a binding energy [8-12] in addition to the surface torque.
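Equation S10 itself is garbled in this copy; the dimensionally consistent relation η_s = K22 (D0 − D) / (D0 |dD/dt|) reproduces the quoted numbers exactly and is used in the sketch below as a working reconstruction, not as the paper's verbatim formula.

```python
# Third-regime surface-viscosity estimate (working reconstruction of Eq. S10)
K22 = 6e-12   # twist elastic constant (N)
D0 = 1000e-9  # reference distance (m)
dD = 61e-9    # D0 - D (m)
v = 2e-9      # |dD/dt| (m/s)
l = 10e-9     # interaction length on the boundary material (m)

eta_s = K22 * dD / (D0 * v)  # surface viscosity (Pa s m)
mu_b = eta_s / l             # boundary material viscosity (Pa s), via Eq. S7
print(f"eta_s = {eta_s:.2e} Pa s m")  # ~1.83e-4
print(f"mu_b  = {mu_b:.2e} Pa s")     # ~1.83e+4
```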
In some studies [8-12] with smectics confined in the SFA, defects also escaped to the surfaces under homeotropic anchoring, which is typically weaker than planar anchoring. Given that the critical anchoring (2√3/8)√(B K33) for surfaces to expel defects to the bisector plane [7] is larger for smectics than for cholesterics, defects in smectics were pinned on the surfaces. Those studies [10-12] explained the avalanche events by the stretch of screw dislocations pinned on the surfaces. However, for cholesterics, whose layers are much larger than those of smectics, defects are present in smaller amounts. The pinned defects cannot be the only viscous contribution to the large background force; the sliding of whole cholesteric planes must also be taken into consideration. This may shed light on the commonly observed background forces in past studies [13,14] with liquid crystals.
FIG. 1. Forces measured in the Surface Force Balance (SFB). (a) Schematic diagram of cholesterics confined in the crossed cylinders with radius R. (b) Force profiles of three regimes, i.e., I Constrained (red), II Stick-slip (black), and III Sliding-slip (blue), during the approach of the surface as the anchoring strength decreases. (c) Force profile (red) in the first regime fitted by elastic forces (black) calculated by Equation A4 with K22 = 3.8 pN. (d) Force profile (black) in the second regime fitted by elastic forces (red) calculated by Equation A4 with various integral layers (numbers) and K22 = 6 pN. The slope of the blue line in (c) and (d) is the spring constant of the cantilever spring that connects to the surface.
FIG. 2. Compression ratio of cholesteric layers at the critical jumping distance. (a) First regime. (b) Second regime (premier jumps). (c) The data in the first and second regimes fall on two blue lines that are theoretically calculated by Equation B3 with K22 = 3.8 and 6 pN, respectively.

FIG. 3. Calculation of the anchoring strength. The critical jumping distance Dc as a function of the original distance D0 in (a) the first regime and (b) the second regime (including all the stick-slip jumps); the blue line is the linear trend line. (c) Fitting the force profile (black) in the second regime by Equations A4 and B6 with the critical surface torque and anchoring strength obtained from (b).
FIG. 4. Hysteresis of the twist angle in three regimes. The non-integral layer has been deducted to eliminate the difference of easy axes among different experiments. (a) First regime. The twist angle during the jump process is assumed to keep a constant compression ratio but decrease the total twist angle. (b) Second regime. (c) Third regime. The deviation of the anchoring angle is ignored in all three regimes. (d) Three regimes. (e) Approach profiles of three regimes. (f) Retraction profiles of three regimes. Thin and thick lines following the direction of the arrow are approach and retraction profiles, respectively. The red, black, and blue lines denote the three regimes, respectively.
W.Z. is very grateful to S. Perkin for her generous help and insightful guidance on the project. S.P. suggested using the torque balance to analyze the data and the harmonic elastic potential to demonstrate the second regime. S.P. also contributed to the design of experiments and the analysis of several figures. W.Z. thanks R. Lhermerout for his derivation of equations calculating the anchoring strength, critical torque, and anchoring deviation. W.Z. is very grateful to C.S. Perez-Martinez for her assistance with some experiments. W. Z. acknowledges J. Hallett and B. Zappone for helpful discussions. Part of the work has been presented in the Ph.D. thesis titled "Optical and mechanical responses of liquid crystals under confinement (2020)". This work was supported by the European Research Council (under Grant Nos. ERC-2015-StG-676861 and 674979-NANOTRANS).
FIG. S1. Force measurements in the SFB. (a) First regime. (b) Second regime. The slope of the blue line in (a) and (b) is the spring constant of the cantilever spring that connects to the surface, k = 125 N/m.
FIG. S2. Elastic response of cholesterics. (a) The approach and stop of the surface. The motor stopped at t ≈ 20 mins and restarted at t ≈ 90 mins during the approach. (b) Zoom-in view of the distance profile.

FIG. S3. Harmonic elastic forces calculated by Equation A4 with various integral layers n and K22 = 6 pN. The slope of the blue line is the spring constant of the cantilever spring, k = 125 N/m.
FIG. S4. (a) Force profile (red) in the first regime fitted by elastic forces calculated using Equation A4 (blue line) and Equation S3 (black line) with K22 = 4.2 pN and W = 0.15 mN/m. (b) Elastic forces calculated by Equation A4 (red line) and Equation S3 (blue line) with K22 = 6 pN and W = 0.0073 mN/m.
A. J. Grass, J. Fluid Mech. 50, 233 (1971).
E. Lauga, M. Brenner, and H. Stone, Springer Handbook of Experimental Fluid Mechanics, 1219 (2007).
D. Bonn, M. M. Denn, L. Berthier, T. Divoux, and S. Manneville, Rev. Mod. Phys. 89, 035005 (2017).
P. Sheng, Phys. Rev. A 26, 1610 (1982).
P.-G. de Gennes and J. Prost, The Physics of Liquid Crystals (Clarendon Press, Oxford, 1993).
V. Fréedericksz and V. Zolina, Trans. Faraday Soc. 29, 919 (1933).
R. B. Meyer, Appl. Phys. Lett. 12, 281 (1968).
H. Baessler and M. M. Labes, Phys. Rev. Lett. 21, 1791 (1968).
R. B. Meyer, Appl. Phys. Lett. 14, 208 (1969).
G. Durand, L. Leger, F. Rondelez, and M. Veyssie, Phys. Rev. Lett. 22, 227 (1969).
F. J. Kahn, Phys. Rev. Lett. 24, 209 (1970).
S. V. Belyaev and L. M. Blinov, JETP Lett. 30, 99 (1979).
R. Dreher, Solid State Commun. 13, 1571 (1973).
I. P. Pinkevich, V. Y. Reshetnyak, Y. A. Reznikov, and L. G. Grechko, Mol. Cryst. Liq. Cryst. 223, 269 (1992).
P. J. Kedney and I. W. Stewart, Continuum Mech. Thermodyn. 6, 141 (1994).
H. Zink and V. A. Belyakov, Mol. Cryst. Liq. Cryst. 282, 17 (1996).
M. Warner, E. M. Terentjev, R. B. Meyer, and Y. Mao, Phys. Rev. Lett. 85, 2320 (2000).
S. P. Palto, J. Exp. Theor. Phys. 94, 260 (2002).
H. Q. Xianyu, S. Faris, and G. P. Crawford, Appl. Opt. 43, 5006 (2004).
V. A. Belyakov, I. W. Stewart, and M. A. Osipov, Phys. Rev. E 71, 051708 (2005).
S. Uto, J. Appl. Phys. 97, 014107 (2005).
A. D. Kiselev and T. J. Sluckin, Phys. Rev. E 71, 031704 (2005).
G. Mckay, Eur. Phys. J. E 35, 1 (2012).
I. Lelidis, G. Barbero, and A. L. Alexe-Ionescu, Phys. Rev. E 87, 022503 (2013).
A. N. Zakhlevnykh and V. S. Shavkunov, Phys. Rev. E 94, 042708 (2016).
G. Barbero, W. Zheng, and B. Zappone, J. Mol. Liq. 242 (2017).
S. S. Tenishchev, A. D. Kiselev, A. V. Ivanov, and V. M. Uzdin, Phys. Rev. E 100, 062704 (2019).
R. F. de Souza, E. K. Lenzi, R. T. de Souza, L. R. Evangelista, Q. Li, and R. S. Zola, Soft Matter 14, 2084 (2018).
P. Oswald, A. Dequidt, and A. Zywocinski, Phys. Rev. E 77, 061703 (2008).
R. F. de Souza, D. K. Yang, E. K. Lenzi, L. R. Evangelista, and R. S. Zola, Ann. Phys. 346, 14 (2014).
K. Funamoto, M. Ozaki, and K. Yoshino, Jpn. J. Appl. Phys. 42, L1523 (2003).
M. F. Moreira, I. C. S. Carvalho, W. Cao, C. Bailey, B. Taheri, and P. Palffy-Muhoray, Appl. Phys. Lett. 85, 2691 (2004).
H. G. Yoon, N. W. Roberts, and H. F. Gleeson, Liq. Cryst. 33, 503 (2006).
M. Skarabot, Z. Lokar, K. Gabrijelcic, D. Wilkes, and I. Musevic, Liq. Cryst. 38, 1017 (2011).
T. N. Orlova, R. I. Iegorov, and A. D. Kiselev, Phys. Rev. E 89, 012503 (2014).
S. P. Palto, M. I. Barnik, A. R. Geivandov, I. V. Kasyanova, and V. S. Palto, Phys. Rev. E 92, 032502 (2015).
B. Zappone and R. Bartolino, PNAS 118, 211050311 (2021).
P. Pieranski and B. Jerome, Phys. Rev. A 40, 317 (1989).
B. Jerome and Y. R. Shen, Phys. Rev. E 48, 4556 (1993).
W. C. Zheng, C. S. Perez-Martinez, G. Petriashvili, S. Perkin, and B. Zappone, Soft Matter 15, 4905 (2019).
P. Richetti, P. Kekicheff, J. L. Parker, and B. W. Ninham, Nature 346, 252 (1990).
P. A. Thompson and M. O. Robbins, Phys. Rev. A 41, 6830 (1990).
D. V. Shmeliova and V. A. Belyakov, Mol. Cryst. Liq. Cryst. 646, 160 (2017).
I. I. Smalyukh and O. D. Lavrentovich, Phys. Rev. Lett. 90, 085503 (2003).
B. Jerome, Rep. Prog. Phys. 54, 391 (1991).
R. G. Horn, J. N. Israelachvili, and E. Perez, J. Phys. 42, 39 (1981).
M. Ruths, S. Steinberg, and J. N. Israelachvili, Langmuir 12, 6637 (1996).
F. C. Frank, Discuss. Faraday Soc. 25, 19 (1958).
J. L. Ericksen, Trans. Soc. Rheol. 5, 23 (1961).
J. L. Ericksen, Arch. Ration. Mech. Anal. 9, 371 (1962).
J. Basterfield, W. Miller, and G. Weatherly, Can. Metall. Q. 8, 131 (1969).
M. J. Stephen and J. P. Straley, Rev. Mod. Phys. 46, 617 (1974).
S. Chandrasekhar, Liquid Crystals (Cambridge University Press, Cambridge, 1977).
P. C. Martin, O. Parodi, and P. S. Pershan, Phys. Rev. A 6, 2401 (1972).
B. Zappone, W. C. Zheng, and S. Perkin, Rev. Sci. Instrum. 89, 085112 (2018).
M. H. Muser, PNAS 105, 13187 (2008).
Z. P. Yang, H. P. Zhang, and M. Marder, PNAS 105, 13264 (2008).
D. Marenduzzo, E. Orlandini, and J. M. Yeomans, Phys. Rev. Lett. 92, 1 (2004).
W. Helfrich, Phys. Rev. Lett. 23, 372 (1969).
P. C. F. Moller, A. Fall, and D. Bonn, Europhys. Lett. 87, 38004 (2009).
V. SIMULATION WITH A 4X4 MATRIX

Simulation with a 4x4 matrix has been used in the past [2,15] to reproduce the optics formed in the SFA. Based on the twist angle profile in Fig. 4(c), the simulations are compared by subtracting or adding one or two layers, as shown in Fig. S5. Overall, the simulation changes significantly after one layer is subtracted, but makes negligible differences by adding one or two layers during the retraction. With the addition of layers, the simulation is not sensitive to the change in twist angle because the layers are largely compressed, which is similar to the isotropic limit [15]. This indicates that with multiple twisted layers, the 4x4 matrices may not accurately reflect the correct twist angles.

FIG. S5. Overlay of the simulation and experiment spectrograms during retraction in the third regime using ImageJ. The intersection angle between two mica surfaces is 57.4°. The twist angle in the simulation is the corresponding twist angle profile in Fig. 4(c), (a) by subtracting π, (b) with no change, (c) by adding π, and (d) by adding 2π. The frame number is proportional to the time, the thick green line is the experimental data, the thin red line is the simulation data, and the overlaid region shows yellow.

Movie S1. The asymmetrical stretch of a defect on the surface during retraction in the third regime. Fringes of equal chromatic order, formed by multiple-beam interferometry, were observed on the spectrometer.
M. J. Park and O. O. Park, Microelectron. Eng. 85, 2261 (2008).
W. C. Zheng, C. S. Perez-Martinez, G. Petriashvili, S. Perkin, and B. Zappone, Soft Matter 15, 4905 (2019).
P. Richetti, P. Kekicheff, and P. Barois, J. Phys. II 5, 1129 (1995).
P. Richetti, P. Kekicheff, J. L. Parker, and B. W. Ninham, Nature 346, 252 (1990).
G. Durand, L. Leger, F. Rondelez, and M. Veyssie, Phys. Rev. Lett. 22, 227 (1969).
P. Oswald, A. Dequidt, and A. Zywocinski, Phys. Rev. E 77, 061703 (2008).
I. I. Smalyukh and O. D. Lavrentovich, Phys. Rev. Lett. 90, 085503 (2003).
R. A. Herke, N. A. Clark, and M. A. Handschy, Science 267, 651 (1995).
R. A. Herke, N. A. Clark, and M. A. Handschy, Phys. Rev. E 56, 3028 (1997).
C. Blanc, N. Zuodar, I. Lelidis, M. Kleman, and J. L. Martin, Phys. Rev. E 69, 011705 (2004).
C. Blanc, N. Zuodar, J. L. Martin, I. Lelidis, and M. Kleman, Mol. Cryst. Liq. Cryst. 412, 1695 (2004).
I. Lelidis, C. Blanc, and M. Kleman, Phys. Rev. E 74, 051710 (2006).
M. Ruths, S. Steinberg, and J. N. Israelachvili, Langmuir 12, 6637 (1996).
R. G. Horn, J. N. Israelachvili, and E. Perez, J. Phys. 42, 39 (1981).
B. Zappone, W. C. Zheng, and S. Perkin, Rev. Sci. Instrum. 89, 085112 (2018).
| [] |
[
"Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19",
"Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19",
"Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19",
"Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19"
] | [
"Zaid Nabulsi ",
"Andrew Sellergren ",
"Shahar Jamshy ",
"Charles Lau ",
"Edward Santos ",
"Atilla P Kiraly ",
"Wenxing Ye ",
"Jie Yang ",
"Rory Pilgrim ",
"Sahar Kazemzadeh ",
"Jin Yu ",
"Sreenivasa Raju Kalidindi ",
"Mozziyar Etemadi ",
"Florencia Garcia-Vicente ",
"David Melnick ",
"Greg S Corrado ",
"Lily Peng ",
"Krish Eswaran ",
"Daniel Tse ",
"Neeral Beladia ",
"Yun Liu ",
"Po-Hsuan Cameron Chen ",
"Shravya Shetty ",
"Zaid Nabulsi ",
"Andrew Sellergren ",
"Shahar Jamshy ",
"Charles Lau ",
"Edward Santos ",
"Atilla P Kiraly ",
"Wenxing Ye ",
"Jie Yang ",
"Rory Pilgrim ",
"Sahar Kazemzadeh ",
"Jin Yu ",
"Sreenivasa Raju Kalidindi ",
"Mozziyar Etemadi ",
"Florencia Garcia-Vicente ",
"David Melnick ",
"Greg S Corrado ",
"Lily Peng ",
"Krish Eswaran ",
"Daniel Tse ",
"Neeral Beladia ",
"Yun Liu ",
"Po-Hsuan Cameron Chen ",
"Shravya Shetty "
] | [] | [] | Chest radiography (CXR) is the most widely-used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system trained using a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.Chest radiography (CXR) is a crucial thoracic imaging modality to detect, diagnose, and guide the management of numerous cardiothoracic conditions. Approximately 837 million CXRs are obtained annually worldwide 1 , resulting in a high reviewing burden for radiologists and other healthcare professionals 2,3 . In the United Kingdom, for example, a shortage in the radiology workforce is limiting access to care, increasing wait times, and delaying diagnoses 4 . The need to reduce radiologist workload and improve turnaround time has sparked a surge of interest in developing artificial intelligence (AI)-based tools to interpret CXRs for a broad range of findings 5-7 .Many algorithms have been developed to detect specific diseases, such as pneumonia, pleural effusion, and fracture, with comparable or higher performance than radiologists 5-10 | 10.1038/s41598-021-93967-2 | [
"https://arxiv.org/pdf/2010.11375v2.pdf"
] | 225,041,221 | 2010.11375 | e6b1e5d0831daaf77f75e3b01cdb178a0a0df4ad |
Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19
Zaid Nabulsi
Andrew Sellergren
Shahar Jamshy
Charles Lau
Edward Santos
Atilla P Kiraly
Wenxing Ye
Jie Yang
Rory Pilgrim
Sahar Kazemzadeh
Jin Yu
Sreenivasa Raju Kalidindi
Mozziyar Etemadi
Florencia Garcia-Vicente
David Melnick
Greg S Corrado
Lily Peng
Krish Eswaran
Daniel Tse
Neeral Beladia
Yun Liu
Po-Hsuan Cameron Chen
Shravya Shetty
Chest radiography (CXR) is the most widely-used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system trained using a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.

Chest radiography (CXR) is a crucial thoracic imaging modality to detect, diagnose, and guide the management of numerous cardiothoracic conditions. Approximately 837 million CXRs are obtained annually worldwide1, resulting in a high reviewing burden for radiologists and other healthcare professionals2,3. In the United Kingdom, for example, a shortage in the radiology workforce is limiting access to care, increasing wait times, and delaying diagnoses4. The need to reduce radiologist workload and improve turnaround time has sparked a surge of interest in developing artificial intelligence (AI)-based tools to interpret CXRs for a broad range of findings5-7. Many algorithms have been developed to detect specific diseases, such as pneumonia, pleural effusion, and fracture, with comparable or higher performance than radiologists5-10.
However, a different framing is required for use as an effective prioritization tool: algorithms are needed to distinguish normal versus abnormal CXRs more generally, where abnormality is defined as the presence of a clinically actionable finding. A reliable AI system for distinguishing normal CXRs from abnormal ones can contribute to prompt patient workup and management. There are several use cases for such a system. First, in scenarios with a high reviewing burden for radiologists, the AI algorithm could be used to identify cases that are unlikely to contain findings, empowering healthcare professionals to quickly exclude certain differential diagnoses and allowing the diagnostic workup to proceed in other directions without delay. Cases that are likely to contain findings can also be grouped together for prioritized review, reducing the turnaround time. Second, in settings where clinical demand outstrips the availability of radiologists (for example, in the midst of a large disease outbreak), such a system might be used as a frontline point-of-care tool for non-radiologists. Importantly, the AI needs to be evaluated on CXRs with "unseen" abnormalities (i.e., those that it had not encountered during development) to validate its robustness towards new diseases or new manifestations of diseases.
In this work, we developed a deep learning system (DLS) that classifies CXRs as normal or abnormal using data containing a diverse array of CXR abnormalities from 5 clusters of hospitals from 5 cities in India. We then evaluated the DLS for its generalization to unseen data sources and unseen diseases using 6 independent datasets from India, China, and the United States. These datasets comprise two broad clinical datasets, two tuberculosis (TB) datasets, and two coronavirus disease 2019 (COVID-19) datasets with reverse transcription polymerase chain reaction (RT-PCR)-confirmed positive and negative cases. We are also releasing labels we collected (radiologist interpretations) for images in the publicly-available test dataset to facilitate further development and continual research of AI models by the community (see Data availability).
Results
Dataset curation. Figure 1 shows the overall study design. Our training set consisted of 250,066 CXRs of 213,889 patients from 5 clusters of hospitals from 5 cities in India (Supplementary Table 1, Supplementary Fig. 1). In the training set, all known TB cases were excluded and COVID-19 cases were absent. To evaluate the trained DLS, we used 6 datasets with a total of 11,576 CXRs from 11,298 patients (Table 1, Supplementary Fig. 1). This includes 2 broad clinical datasets (Dataset 1 [DS-1] and ChestX-ray14 [CXR-14], n = 8557 total cases) with 2423 abnormal cases, 2 datasets (TB-1 and TB-2, n = 595 total cases) with 294 TB-positive cases, and 2 datasets (COV-1 and COV-2, n = 2424 total cases) with 873 COVID-19 positive cases. DS-1, COV-1, and COV-2 were obtained from a mixture of general outpatient and inpatient settings and thus represent a wide spectrum of CXRs seen across different populations. Evaluations on these broad datasets mitigate the risk of selecting only the most obvious cases while excluding more difficult images. CXR-14, TB-1, and TB-2 were enriched (such as for pneumothoraces in CXR-14; see Supplementary Fig. 2) and were publicly available. Evaluations on these datasets help to validate the DLS's performance on conditions that would otherwise be rarer, and enable benchmarking with other studies using the same data. To define high-sensitivity and high-specificity operating points for the DLS, we created four small operating point selection datasets for four scenarios: DS-1, CXR-14, TB, and COVID-19; n = 200 cases each (see Fig. 1B and "Operating point selection datasets" section in "Methods"). Across these datasets, we collected 48,877 labels from 31 radiologists for either the reference standard or to serve as a comparison for the DLS (see "Labels" section in "Methods").
Classifying CXRs as normal vs abnormal. The DLS was first evaluated for its ability to classify CXRs as normal or abnormal on the test split of DS-1 and an independent test set CXR-14. We obtained the normal and abnormal labels from the majority vote of three radiologists (see "Labels" section in "Methods"). The percentage of abnormal images were 24% and 71% in DS-1 and CXR-14, respectively ( Table 1). The areas under receiver operating characteristic curves (area under ROC, AUC) were 0.87 (95% CI 0.87-0.88) in DS-1 and 0.94 (95% CI 0.93-0.96) in CXR-14 (Table 2, Fig. 2A). To have a comprehensive understanding of the DLS, we measured sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), percentage of predicted positives and the percentage of predicted negatives at a high-sensitivity operating point and a high-specificity operating point ("Evaluation metrics" section in "Methods"). With the high-sensitivity operating point (see "Operating point selection" section in "Methods"), the DLS predicted 29.9% of DS-1 and 24.0% of CXR-14 as normal, with NPVs of 0.98 and 0.85, respectively (Table 2). With the high-specificity operating point, the DLS predicted 22.2% of DS-1 and 11.7% of CXR-14 as abnormal, with PPVs of 0.68 and 0.99, respectively ( Table 2). The NPVs and PPVs across different operating points are plotted in Fig. 3.
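For readers wanting to reproduce this style of operating-point bookkeeping, the sketch below shows how NPV and PPV follow from sensitivity, specificity, and prevalence. The prevalences match Table 1, but the sensitivity/specificity pairs are illustrative stand-ins, not the DLS's actual operating points.

```python
def npv(sens, spec, prev):
    """Negative predictive value from sensitivity, specificity, prevalence."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

def ppv(sens, spec, prev):
    """Positive predictive value from sensitivity, specificity, prevalence."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

# abnormal-case prevalences from Table 1; sens/spec values are illustrative
for name, prev in [("DS-1", 0.24), ("CXR-14", 0.71)]:
    print(f"{name}: NPV(high-sens OP) = {npv(0.95, 0.40, prev):.2f}, "
          f"PPV(high-spec OP) = {ppv(0.40, 0.98, prev):.2f}")
```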
To put the performance of the DLS in context, two independent board-certified radiologists reviewed the test splits of both DS-1 and CXR-14. The radiologists had average NPVs of approximately 0.87 and 0.70 and PPVs of 0.75 and 0.96 on DS-1 and CXR-14, respectively ( Table 3). The radiologists' sensitivity and specificity are illustrated on the ROC curves ( Fig. 2A).
Radiographic findings vary in their difficulty and importance of detection. Thus we next conducted subgroup analyses for each abnormality listed in Supplementary Tables 3 and 4. The DLS showed consistently high NPVs (range 0.93-1.0) with low variability across all findings in both datasets. The radiologists also showed similar NPVs but with higher variability (range 0.86-1.0).
Lastly, for DS-1 and CXR-14, every image was independently reviewed by 3 radiologists to form the reference standard. To understand whether the DLS has learned the intrinsic variability across radiologists, we plotted the distribution of DLS scores stratified by the number of radiologists indicating abnormality in Supplementary Fig. 5. We observed a consistent trend between the DLS scores and the radiologists' discordance across both datasets.

Generalization to an unseen disease: TB. On the two TB datasets, the DLS reached AUCs of 0.95 in TB-1 and 0.97 in TB-2 (Table 2A). The NPVs and PPVs across different operating points are also plotted in Fig. 3. However, CXRs that were labeled (TB) negative could nonetheless contain other abnormalities (see "Labels" section in "Methods"). Hence PPVs (Table 2A,B) need to be interpreted with the context that low PPVs for identifying TB-positive radiographs as abnormal do not necessarily reflect the PPV for correctly identifying images with other findings in those datasets (see "Distributional shift between datasets" below). The latter results (DLS performance for identifying abnormalities in TB-1 and TB-2) are presented in Supplementary Fig. 6, with AUCs between 0.91 and 0.93. Every image in TB-1 and TB-2 was also annotated as normal or abnormal by one radiologist from a cohort of 8 consultant radiologists from India. The radiologist NPVs were 0.74 and 0.88 and their PPVs were 0.93 and 0.93 on TB-1 and TB-2, respectively (Table 3, Fig. 2B).

Generalization to a second unseen disease: COVID-19. On the two COVID-19 datasets, the DLS reached AUCs of 0.68 in COV-1 and 0.65 in COV-2 (Fig. 2A). At the high-sensitivity operating point, the DLS predicts 5.9% of COV-1 and 9.8% of COV-2 as negatives with NPVs of 0.85 and 0.56, respectively (Table 2). The NPVs and PPVs for different operating points are plotted in Fig. 3. Similar to the TB case above, images that were negative for COVID-19 often contained other abnormalities (see "Distributional shift between datasets" section below). The DLS performance for identifying abnormalities in COV-1 and COV-2 are presented in Supplementary Fig. 6, with an AUC of 0.86 in both datasets.
Every image in COV-1 and COV-2 was also reviewed by one radiologist from a cohort of four US boardcertified radiologists. The radiologist NPVs were 0.78 and 0.62 and their PPVs were 0.51 and 0.60 on COV-1 and COV-2, respectively (Table 3 and Fig. 2C). Further subgroup analyses comparing the DLS performance with individual radiologists are shown in Supplementary Table 5C,D. Finally, to better understand the potential impact of the DLS in the setting of imperfect RT-PCR sensitivity, we conducted a subanalysis of COVID-19 cases that had a "false negative" RT-PCR test result on initial testing, defined as a negative RT-PCR test followed by a positive one within five days. In the 21 such cases, the DLS achieved a 95.2% sensitivity, with the CXR taken at the time of the negative test.
Distributional shifts between datasets. To better understand the data shifts between applications (general clinical setting in DS-1 vs. the enriched CXR-14; the broad clinical settings vs. TB; and the broad clinical settings vs. COVID-19), we next examined the distributions of the DLS predictive scores across all 6 test datasets and their corresponding operating point selection sets (Fig. 4, see "Operating point selection datasets" in "Methods"). We observed similarly peaked DLS prediction score distributions (near 1.0) for positive cases, whether for general abnormalities, specific conditions, TB, or COVID-19 (see red histograms in Fig. 4A-C). However, although the distributions for "negative" cases were mostly similar, they did have a small degree of variability, even among datasets of the same scenario from different sites. For example, comparing TB-1 and TB-2, which have similar CXR findings (TB) but were from two independent sites, negative cases in TB-2 had higher scores than in TB-1. Similarly, comparison between COV-1 and COV-2 also shows slight differences in the scores for negative cases. These observations confirm the existence of distributional shifts, suggesting that the scenario-specific operating points are essential, and that even having site-specific operating points may further improve the DLS's performance.

Table 1. Data and patient characteristics of the 6 test datasets. N/A indicates information was not available. (a) Abnormal images in the disease-specific datasets include both those positive for TB or COVID-19, and those with other findings; the numbers of images that contained other findings were not available.
Although scores for positive and the negative cases in DS-1, CXR-14, TB-1, and TB-2 were well-separated, there was significant overlap between the distributions of positive and negative cases for the COVID-19 datasets. In fact, further review of the images revealed that 24.9% of negatives in COV-1 and 31.5% of negatives in COV-2 had other CXR findings, and were thus abnormal. A breakdown of the type of finding in these "negatives" is presented in Supplementary Fig. 7. Examples of challenging cases of each condition and associated saliency maps highlighting the regions with the greatest influence on DLS predictions are presented in Fig. 5.
Performance of two simulated DLS-assisted workflows. To understand how the developed DLS can assist practicing radiologists, we investigated two simulated DLS-based workflows. In the first setup, to assist radiologists in prioritizing review of abnormal cases, the DLS sorted cases by the predicted likelihood of being abnormal (Fig. 1D). We measured the differences in expected turnaround time for the abnormal cases with and without DLS prioritization. For simplicity, in this simulation, we assume the same review time for each case, and that the review time per case does not vary based on review order. The DLS-based prioritization reduced the mean turnaround time of abnormal cases by 8-29% for DS-1 and CXR-14, 21-28% for TB-1 and TB-2, and 8-13% for COV-1 and COV-2 (Fig. 6). To understand the effect of relative differences in abnormal vs normal review time, we simulated a range of different scenarios by varying the time it takes to review an abnormal case with respect to the time it takes to review a normal case (Supplementary Fig. 8). In the second setup, we investigated a simulated sequential reading setup where the DLS identified cases that were unlikely to contain findings, and the radiologist reviewed only the remaining cases (Fig. 1D). Though the deprioritized cases could be reviewed at a later time, we computed the effective immediate performance assuming the DLS-negatives were not yet reviewed by radiologists and considered them to be interpreted as "normal" for evaluation purposes. There were minimal performance differences between radiologists and the sequential DLS-radiologist setup, but the effective "urgent" caseload was reduced by 25-30% for DS-1 and CXR-14, about 40% for the TB datasets, and about 5-10% for the COVID-19 datasets (Supplementary Table 6).
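A toy version of the prioritization simulation is sketched below; the queue size, prevalence, and score noise are illustrative, so the printed reduction will not match the paper's 8-29% figures, which depend on the actual score distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, prev = 2000, 0.24                        # queue size and abnormal fraction (illustrative)
abnormal = rng.random(n) < prev
score = abnormal + rng.normal(0.0, 0.6, n)  # imperfect DLS-like score

# turnaround proxy: queue position under equal per-case review time
fifo = np.flatnonzero(abnormal).mean()      # first-come-first-served order
order = np.argsort(-score)                  # DLS-sorted review order
prio = np.flatnonzero(abnormal[order]).mean()

print(f"mean abnormal-case turnaround reduced by {100 * (1 - prio / fifo):.0f}%")
```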
Discussion
We have developed and evaluated a DLS for interpreting CXRs as normal or abnormal, instead of detecting individual abnormalities. We further validated that it generalized with acceptable performance using six datasets: two broad clinical datasets (AUC 0.87 and 0.94), two datasets with one unseen disease (TB; AUC 0.95 and 0.97), and two datasets with a second unseen disease (COVID-19; AUC 0.68 and 0.65).

Generalizability to different datasets and patient populations is critical for evaluation of AI systems in medicine. Studies have shown that many factors might lead to challenges of generalization of AI systems to new populations, such as dataset shift and confounders14. Furthermore, with CXRs, as with all medical imagery, the number of potential manifestations is unbounded, especially with the emergence of new diseases over time. Understanding model performance on this set of unseen diseases is an imperative step in developing a robust and clinically useful model that can be trusted in real-world situations. In this work, we evaluated the DLS's performance on 6 independent test sets consisting of different patient populations, spanning three countries, and with two unseen diseases (TB and COVID-19). The observed performance suggests robustness across real-world dataset shifts, increasing the likelihood of such a system to also generalize to new datasets and new manifestations.

The "lower" observed AUCs of the DLS on the COVID-19 datasets were likely caused by our deliberate application of a general abnormality detector to a cohort enriched for patients with a clinical presentation consistent with COVID-19 and thus tested for COVID-19. However, as other acute diseases may share a similar clinical presentation, many cases negative for COVID-19 exhibited abnormal CXR findings that likely triggered the DLS (Fig. 5, Supplementary Fig. 7). Additionally, a substantial number of COVID-19 patients have normal CXRs15, which would also contribute to a lower observed AUC. Lastly, we expect an improved performance by training the model specifically on a COVID-19 dataset for detecting the disease, and future work is needed to investigate using the current general abnormality model as a pretraining step (i.e., to pre-initialize new networks) for other specific tasks16. However, in this work we focused on evaluating a general-abnormality DLS's performance in identifying patients with normal CXRs in a challenging COVID-19 cohort dataset.
In this study, we focused on evaluating the generalizability of the DLS to unseen diseases (TB and COVID-19) rather than unseen CXR imaging features, in order to assess the clinical relevance of the DLS. Studies have suggested that radiologists' ability to recognize abnormal imaging features of disease (e.g. consolidation or pleural effusion) on CXR appears relatively independent of experience level, from junior residents through senior faculty 17. However, proficiency at accurately diagnosing disease on CXR remains strongly tied to experience level 18. This disparity highlights the value in characterizing an AI system's ability to detect disease on CXR, in addition to its ability to detect abnormal imaging features.
The variability in patient population and clinical environment across different datasets also meant that the same operating point was unlikely to be appropriate across all settings. For example, a general outpatient setting is substantially less likely to contain CXR findings compared to a cohort of patients with respiratory symptoms or fevers in the midst of the COVID-19 pandemic. Similarly, datasets that are deliberately enriched for specific conditions (CXR-14 and TB) are skewed and are not representative of a general disease-screening population. Thus, we used a small number of cases (n = 200) from each setting to determine the operating points specific to that setting. Consistent with this hypothesis, these operating points then generalized well to another dataset, such as from TB-1 to TB-2 and from COV-1 to COV-2. However, further performance improvement is likely possible with site-specific operating point selection sets. We anticipate that this simple operating point selection strategy using a small number of cases may be useful when evaluating an AI system in a new setting, institution, or patient population. In addition to general performance across the 6 datasets, subgroup analysis of the DLS's performance on each specific abnormal CXR finding of DS-1 and CXR-14 (Supplementary Tables 3 and 4) revealed consistently high NPVs, suggesting that the DLS was not overtly biased towards any particular abnormal finding. In addition, the DLS outperformed radiologists on atelectasis, pleural effusion, cardiomegaly/enlarged cardiac silhouette, and lung nodules, suggesting that the DLS as a prioritization tool could be particularly valuable in emergency medicine, where dyspnea, cardiogenic pulmonary edema, and incidental lung cancer detection are commonly encountered. Furthermore, the DLS also outperformed radiologists in settings where an abnormal chest radiographic finding was present but the abnormality was not one of the predefined chest radiographic findings (e.g. perihilar mass), or where radiologists agreed on the presence of a finding but disagreed as to its characterization (indicating case ambiguity; see "Other" in Supplementary Tables 3 and 4). This suggests that the DLS may be robust in the setting of chest radiographic findings that are uncommon or difficult to reach consensus on.
To further evaluate the potential utility of our system, we simulated a setup where the DLS prioritizes cases that are likely to contain findings for radiologists' review. Our evaluation suggests a potential reduction in turnaround time for abnormal cases by 7-28%, indicating the DLS's potential to be a powerful first-line prioritization tool. We also found that the longer an abnormal case takes to review relative to a normal one, the smaller the reduction in turnaround time. Whether deployed in a relatively healthy outpatient practice or in the midst of an unusually busy inpatient or outpatient setting, such a system could help prioritize abnormal CXRs for expedited radiologist interpretation. In radiology teams where CXR interpretation responsibilities are shared between general and subspecialist (i.e. cardiothoracic) radiologists, such a system could be used to distribute work. For non-radiologist healthcare professionals, a rapid determination regarding the presence or absence of an abnormality on CXR prevents the release of a patient who needs care and enables alternative diagnostic workup to proceed without delay while the case is pending radiologist review. Finally, a radiologist's productivity might increase by batching negative CXRs for streamlined formal review.
Finally, to facilitate the continued development of AI models for chest radiography, we are releasing our abnormal versus normal labels from 3 radiologists (2430 labels on 810 images) for the publicly available CXR-14 test set. We believe this will be useful for future work because label quality is of paramount importance for any AI study in healthcare. In CXR-14, the binary abnormal labels were derived through an automated natural language processing (NLP) algorithm applied to the radiology report 7. However, editorials have questioned the quality of labels derived from clinical reports 19. Hence, in this study we obtained labels from multiple experts to establish the reference standard for evaluation, and a confusion matrix of our majority-vote expert labels against the public NLP labels is shown in Supplementary Table 7. We hope that the release of these high-quality labels will aid future work in this area. Prior studies have demonstrated an algorithm's potential to differentiate normal and abnormal CXRs 20-25. Dunnmon et al. showed high diagnostic performance of a developed system in classifying CXRs as normal or abnormal 25. Hwang et al. evaluated a commercially available system with comparison to radiology residents 22. Annarumma et al. further demonstrated their system's utility in a simulated prioritization workflow with three different priority levels on held-out data from the same institution as the training dataset 21. Our study complements prior works by performing extensive evaluations of model generalizability, including generalization to multiple datasets on different continents, different patient population settings, and the presence of unseen diseases. In addition, we also obtained radiologist reviews as benchmarks to understand the DLS's performance. Lastly, we presented two simulated workflows; one demonstrated reduced turnaround time for abnormal cases, and the other showed comparable performance while reducing the effective caseload.
Our study has several limitations. First, there is a wide range of abnormalities and diseases that were not represented among the CXRs available for this study. Although it is infeasible to exhaustively obtain and annotate datasets for every possible finding, further increasing the number of conditions and diseases considered, especially rare findings, could help both in DLS development and evaluation. Second, we only had labeled data regarding disease-positive and disease-negative status for TB and COVID-19. The absence of normal and abnormal labels for the TB and COVID-19 datasets added complexity to interpreting the PPVs and specificities in these scenarios. The reference standard for the publicly available TB-1 was based on radiologists reading without appropriate clinical tests; hence the performance measure is subject to the accuracy of those diagnoses. Third, follow-up data and information from more sophisticated imaging modalities were not available for DS-1 and CXR-14, limiting the quality of the obtained reference standard. Fourth, to provide a comparison with the DLS, which only had CXRs as input, the radiologists reviewed the cases solely based on CXRs, without referencing additional clinical or patient data. In a real clinical setting, this information is generally available and likely influences a radiologist's decisions. Fifth, TB cases were excluded from the training and tuning sets by removing all cases indicated as TB-positive or with any reference to TB in the radiology report. Microbiologically verifying the entire training set was infeasible; hence, there was a potential for leakage of TB-positive cases not noted in the radiology reports. Lastly, the results were based on retrospective data. Given the absence of historical reporting timing information, the utility of the DLS-assisted workflows was based on simulations with many assumptions, such as identical radiologist diagnoses regardless of review order. Additionally, the DLS-assisted workflows did not consider the varying degrees of urgency of different diseases, which is an important aspect of a prioritization tool. Hence, the true effects will need to be determined through future evaluation in a prospective setting.
In conclusion, we have developed a clinically relevant artificial intelligence model for chest radiographic interpretation and evaluated its generalizability across a diverse set of images in 6 distinct datasets. We hope that the performance analyses reported here, along with the release of the expert labels for the publicly available CXR-14 (ChestX-ray14) images, will serve as a useful resource to facilitate the continued development of clinically useful AI models for CXR interpretation.
Methods
Datasets. In this study, we utilized 6 independent datasets for DLS development and evaluation. The DLS was evaluated in two ways: distinguishing normal vs. abnormal cases in a general setting with multiple radiologist-confirmed abnormalities (first 2 datasets), and in the setting of diseases that the DLS was not exposed to during training (TB was excluded from the train set and COVID-19 was not present; last 4 datasets). All data were stored in the Digital Imaging and Communications in Medicine (DICOM) format and de-identified prior to transfer to study investigators. Details regarding these datasets and patient characteristics are summarized in Table 1, Supplementary Table 1, and Supplementary Fig. 1. This study using de-identified retrospective data was reviewed by Advarra IRB (Columbia, MD), which determined that it was exempt from further review under 45 CFR 46.
Train and tune datasets. The first dataset (DS-1) was from five clusters of hospitals across five different cities in India (Bangalore, Bhubaneswar, Chennai, Hyderabad, and New Delhi) 5 . DS-1 consisted of images from consecutive inpatient and outpatient encounters between November 2010 and January 2018, and reflected the natural population incidence of the abnormalities in the populations. All TB cases were excluded and COVID-19 cases were not present. In total, DS-1 originally contained 1,052,274 CXRs from 794,501 patients before exclusions (Supplementary Fig. 1A). This dataset was randomly split into training, tuning, and testing sets in a 0.775:0.1:0.125 ratio while ensuring that images from the same patient remained in the same split. The split is consistent with our previous study 5 . The DLS was developed solely using the training and tuning splits of DS-1. Because outpatient management is primarily done using posterior-anterior (PA) CXRs, while inpatient management is primarily done on anterior-posterior (AP) CXRs, we emphasized PA CXRs in the tune split to better represent an outpatient use case. Both PA and AP images are used in the test datasets.
Operating point selection datasets. To select operating points for each of the four scenarios (two general-abnormality settings, TB, and COVID-19), 200 images were randomly selected as the operating point selection sets. For general abnormalities, we selected two independent operating points using 200 randomly sampled images from the DS-1 tune set and 200 randomly sampled images from CXR-14's publicly-specified combined train and tune set 7,26. For TB, 200 randomly sampled images from TB-1 were used. For COVID-19, 200 randomly sampled images from COV-1 were used. These images were only used to determine an operating point for that scenario and, once used for operating point selection, were excluded from the test set (Supplementary Fig. 1).
Test datasets. Two datasets were used to evaluate the DLS's performance in distinguishing normal and abnormal findings in a general abnormality detection setting. The first dataset contains 7747 randomly selected PA CXRs from the original test split of DS-1 5. These sampled images were expertly labelled as normal or abnormal for the purposes of this study. The second dataset contains 2000 randomly selected CXRs from the publicly-specified test set (25,596 CXRs from 2797 patients) of CXR-14 from the National Institutes of Health 7,26. From these 2000 CXRs (also used in prior work 5), we removed all the patients younger than 18 years of age and all the AP scans (to focus on an outpatient setting, see tune split procedure above), leaving us with 810 images.
To evaluate the DLS performance in unseen diseases, we curated 2 datasets for TB and 2 datasets for COVID-19 (1 CXR per patient, Supplementary Fig. 1B,C). For TB, one dataset (TB-1) of 462 PA CXRs with 241 confirmed TB positive CXRs was used, from a hospital in Shenzhen, China. Another dataset (TB-2) of 133 PA CXRs with 53 confirmed TB positive CXRs was used, from a hospital in Montgomery, MD, USA 27-29. Both TB datasets are publicly available. For COVID-19, we used 9390 CXRs and 5209 CXRs from all patients who visited two separate hospitals in Chicago in March 2020. Two datasets of 1819 and 605 AP CXRs (with 583 and 290 CXRs with RT-PCR-confirmed COVID-19 positive diagnoses) were curated from the two hospitals: COV-1 and COV-2.
Labels. Abnormality labels. For development and evaluation of the DLS, we obtained labels to indicate whether abnormalities were present in each CXR. Each image was annotated as either "normal" or "abnormal", where an "abnormal" scan is defined as a scan containing at least one clinically-significant finding that may warrant further follow-up. For example, degenerative changes and old fractures were not labeled abnormal because no further management is required. The decision to include abnormal but clinically non-actionable findings as "normal" was based on the intended use case of flagging "abnormality" that requires either downstream action or attention by the clinician.
For the train and tune split of DS-1, we obtained the abnormal and normal labels using NLP (regular expressions) on the radiology reports (Supplementary Table 8). For the normal images, radiology report templates were often used, meaning the same report indicating a normal scan was often used for numerous images. We extracted the most commonly used radiology reports, manually confirmed those that indicated normal reports, and obtained all images that used one of these normal template reports. Examples of these radiology reports along with their frequencies are shown in Supplementary Table 8. For the abnormal images, we obtained all images that did not contain keywords indicating the scan is normal in their respective radiology reports.
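To make this report-based labeling concrete, the sketch below shows how a keyword rule of this kind can be implemented in Python. The patterns here are hypothetical placeholders; the study's actual regular expressions and report templates are listed in its Supplementary Table 8.

```python
import re

# Hypothetical normal-report patterns; the study's real patterns and report
# templates are in Supplementary Table 8.
NORMAL_PATTERNS = [
    re.compile(r"\bno abnormality detected\b", re.IGNORECASE),
    re.compile(r"\bnormal (chest|study)\b", re.IGNORECASE),
]

def label_from_report(report_text: str) -> str:
    """Label a CXR 'normal' if its report matches a normal template, else 'abnormal'."""
    if any(pattern.search(report_text) for pattern in NORMAL_PATTERNS):
        return "normal"
    return "abnormal"

print(label_from_report("Normal chest. No abnormality detected."))  # -> normal
print(label_from_report("Right lower zone consolidation noted."))   # -> abnormal
```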
For the test sets of DS-1 and CXR-14, a group of US board-certified radiologists reviewed the images at their original resolution to provide reference standard labels. For each image in DS-1, three readers were randomly assigned from a cohort of 18 US board-certified radiologists (range of experience 2-24 years in general radiology). For CXR-14, we obtained labels from three US board-certified radiologists (years of experience: 5, 12, and 24). In both cases, the majority vote of the three radiologists was taken to determine the final reference standard label.
For both DS-1 and CXR-14, in addition to the normal versus abnormal label, we also obtained labels for a selected set of findings present in the abnormal images for subgroup analysis (Supplementary Table 2). Note that the lists of findings for DS-1 and CXR-14 differ. For DS-1, we selected a slightly different list of findings to represent conditions that were more clinically reliable, mutually exclusive, and for which the CXR is reasonably sensitive and specific at characterizing (Supplementary Methods and Supplementary Table 2). Similarly to the normal versus abnormal label, the majority vote was taken for each specific finding. For CXR-14, the differences between the majority-voted labels and the publicly available labels are shown in a confusion matrix in Supplementary Table 7.

TB labels. The first TB dataset 27 (TB-1) was from Shenzhen, China. Positive and negative labels for this dataset came from the TB screening results of radiologists reading without appropriate clinical tests in the outpatient clinics of Shenzhen No. 3 People's Hospital, Guangdong Medical College, Shenzhen, China. The second TB dataset 27 (TB-2) was from Montgomery County, Maryland, USA. The TB positive and negative labels were derived from the radiology reports confirmed by clinical tests and patient history from the tuberculosis control program of the Department of Health and Human Services of Montgomery County, Maryland.

COVID-19 labels. For the COVID-19 datasets COV-1 and COV-2, patients with RT-PCR tests and CXRs were included (Supplementary Fig. 1). The COVID-19-positive labels were derived from positive RT-PCR tests. In accordance with current Centers for Disease Control and Prevention (CDC) guidelines 30, COVID-19-negative labels consisted of CXRs from patients with at least two consecutive negative RT-PCR tests at least 12 h apart and no positive test. As false negative rates for RT-PCR have been reported to be ≥ 20% in symptomatic COVID-19-positive patients, CXRs from patients with only one negative RT-PCR test were excluded 31.
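As a sketch of how this inclusion rule can be applied programmatically, the function below assigns COVID-19 labels from a patient's RT-PCR history under the stated criteria; the function name and data layout are illustrative, not from the paper.

```python
from datetime import datetime, timedelta

def covid_label(tests):
    """tests: list of (datetime, result) RT-PCR records with result in {'pos', 'neg'}.

    Returns 'positive' (any positive test), 'negative' (at least two consecutive
    negative tests >= 12 h apart and no positive test), or None (excluded, e.g.
    a single negative test). Illustrative sketch of the rule described above.
    """
    tests = sorted(tests)
    if any(result == "pos" for _, result in tests):
        return "positive"
    neg_times = [t for t, result in tests if result == "neg"]
    for earlier, later in zip(neg_times, neg_times[1:]):
        if later - earlier >= timedelta(hours=12):
            return "negative"
    return None  # excluded from the dataset

print(covid_label([(datetime(2020, 3, 1, 8), "neg"),
                   (datetime(2020, 3, 2, 9), "neg")]))  # -> negative
```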
Deep learning system development. Neural network training. We trained a convolutional neural network (CNN) with a single output to distinguish between abnormal and normal CXRs. The CNN uses EfficientNet-B7 32 as its feature extractor, which was pre-trained on ImageNet 33,34. Early tuning-set results (Supplementary Table 9A) suggested that EfficientNet-B7 performs better than other advanced networks, hence the decision to use this network. Since the CNN was pre-trained on three-channel RGB natural images, we tiled the single-channel CXR image to three channels for technical compatibility. We trained the CNN using the cross-entropy loss and the momentum optimizer 35 with a constant learning rate of 0.0004 and a momentum value of 0.9. During training, all images were scaled to 600 × 600 pixels with bilinear interpolation, and image pixel values were normalized on a per-image basis to lie between 0 and 1. Using higher-resolution images (1024 × 1024 pixels) led to non-significantly lower results (Supplementary Table 9B), hence we used 600 × 600 pixels due to its lower computational memory usage. Initializing from ImageNet also appeared to improve results (Supplementary Table 9C). The original bit depth of each image was used (Table 1). For regularization, we applied dropout 36 with a dropout "keep probability" of 0.5. Furthermore, data augmentation techniques were applied to the input images, including horizontal flipping, padding, cropping, and changes in brightness, saturation, hue, and contrast. All hyperparameters were selected based on empirical performance on the DS-1 tuning set. We developed the network using TensorFlow and used 10 NVIDIA Tesla V100 graphics processing units for training.
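A minimal TensorFlow/Keras sketch of this training setup is given below. It follows the stated recipe (EfficientNet-B7 backbone pretrained on ImageNet, single sigmoid output, cross-entropy loss, momentum optimizer with learning rate 0.0004 and momentum 0.9, 600 × 600 bilinear resizing, per-image normalization to [0, 1], channel tiling, dropout 0.5), but it is an illustrative reconstruction rather than the authors' code, which used TensorFlow 1.15; note also that recent Keras EfficientNet builds apply their own input scaling.

```python
import tensorflow as tf

IMG_SIZE = 600  # 600 x 600 was preferred over 1024 x 1024 for memory reasons

def preprocess(image):
    # Bilinear resize and per-image normalization to [0, 1]; the single-channel
    # CXR is tiled to three channels for the ImageNet-pretrained backbone.
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE), method="bilinear")
    image = (image - tf.reduce_min(image)) / (
        tf.reduce_max(image) - tf.reduce_min(image) + 1e-8)
    return tf.tile(image, [1, 1, 3])  # (H, W, 1) -> (H, W, 3)

backbone = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(IMG_SIZE, IMG_SIZE, 3))

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.5),                    # rate 0.5, i.e. keep prob 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted P(abnormal)
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.0004, momentum=0.9),
    loss="binary_crossentropy")
```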
Operating point selection. Given a CXR, the DLS predicts a continuous score between 0 and 1 representing the likelihood of the CXR being abnormal. For making clinical decisions, operating points are needed to threshold the scores and produce binary normal or abnormal categorizations. In this study, we selected two operating points (see "Operating point selection datasets" section above), a high sensitivity operating point (95% sensitivity) and a high specificity operating point (95% specificity) for each scenario: general abnormalities for a general clinical setting in DS-1, general abnormalities for an enriched dataset in CXR-14, TB, and COVID-19.
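The sketch below illustrates one simple way to pick the high-sensitivity operating point from a 200-case selection set: choose the largest threshold that still keeps sensitivity at or above 95%. This is an assumption about the selection procedure, which the paper does not spell out in detail; the high-specificity point can be chosen analogously.

```python
import numpy as np

def high_sensitivity_threshold(scores, labels, target_sensitivity=0.95):
    """Largest score threshold whose sensitivity on the selection set is still
    >= target_sensitivity (cases with score >= threshold are called abnormal)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos_scores = np.sort(scores[labels == 1])
    # Allow at most (1 - target) of the positives to fall below the threshold.
    k = int(np.floor((1 - target_sensitivity) * len(pos_scores)))
    return pos_scores[k]
```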
Comparison with radiologists.
To compare the DLS with radiologists in classifying CXRs as normal versus abnormal, additional radiologists reviewed all test images without referencing additional clinical or patient data. All images in the DS-1 and CXR-14 test set were independently interpreted by two board-certified radiologists (with 2 and 13 years of experience), who classified each CXR as normal or abnormal. These radiologists were independent from the cohort of radiologists who contributed to the reference standard labels. Each image in TB-1 and TB-2 was reviewed by a random radiologist from a cohort of 8 consultant radiologists in India. Each image was annotated as abnormal or normal. Each image in COV-1 and COV-2 was reviewed by one of four board-certified radiologists (with 2, 5, 13, and 22 years of experience). Similarly, each image was annotated as abnormal or normal.
Two simulated DLS-assisted workflows. We simulated two setups in which the DLS was leveraged to optimize radiologists' workflow (Fig. 1D). In the first setup, we randomly sampled 200 CXRs from each of our 6 datasets to simulate a "batch" workload for a radiologist in a busy clinical environment. For these CXRs, we compared the turnaround time for the abnormal CXRs when (1) they were sorted randomly (to simulate a clinical workflow without the DLS) and (2) the CXRs were sorted in descending order of the DLS-predicted scores, such that cases with higher scores appeared earlier. This analysis does not require the selection of an operating point. We repeated each simulation 1000 times per dataset to obtain the empirical distribution of turnaround differences.
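A toy version of this simulation is sketched below, with synthetic scores and labels standing in for the real data; as in the paper's simplifying assumption, each case takes the same fixed review time.

```python
import numpy as np

def mean_turnaround_abnormal(order, abnormal, review_time=1.0):
    """Mean completion time of the abnormal cases when reviewed in `order`,
    assuming a fixed review time per case."""
    finish_times = (np.arange(len(order)) + 1) * review_time
    return finish_times[abnormal[order]].mean()

rng = np.random.default_rng(0)
abnormal = rng.random(200) < 0.3                  # synthetic labels
scores = 0.5 * abnormal + 0.5 * rng.random(200)   # synthetic, imperfect DLS scores
random_order = rng.permutation(200)               # workflow without the DLS
dls_order = np.argsort(-scores)                   # highest predicted score first
print(mean_turnaround_abnormal(random_order, abnormal),
      mean_turnaround_abnormal(dls_order, abnormal))
```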
In the second setup, we analyzed an extreme use case where the DLS identified CXRs that were unlikely to contain findings using a high sensitivity threshold, and the radiologists only reviewed the remaining cases. All cases skipped by radiologists were labeled negative. We compared the sensitivity between this simulated "reduced workload" workflow and a normal workflow in which the radiologists reviewed all cases.
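The effective performance of this reduced-workload setup can be computed as below, where DLS-negative cases are counted as "normal" calls; the function and variable names are illustrative.

```python
import numpy as np

def reduced_workload_sensitivity(dls_scores, rad_calls, labels, threshold):
    """Sensitivity when DLS-negative cases are skipped (treated as normal) and
    only DLS-positive cases receive the radiologist's call (1 = abnormal)."""
    dls_scores, rad_calls, labels = map(np.asarray, (dls_scores, rad_calls, labels))
    final_calls = np.where(dls_scores >= threshold, rad_calls, 0)
    return (final_calls[labels == 1] == 1).mean()
```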
Evaluation metrics. To evaluate the DLS across different operating points, we calculated the areas under the receiver operating characteristic curves (area under ROC, AUC). To evaluate the performance of the DLS in classifying CXRs as normal or abnormal, we measured negative predictive values (NPV), positive predictive values (PPV), sensitivity, specificity, percentage of predicted negatives, and percentage of predicted positives at a high-specificity and a high-sensitivity operating point chosen for each scenario (see "Operating point selection" in "Deep learning system development"). For evaluating the DLS for each individual type of finding, we considered an "each abnormality versus normal" setup, where negatives consisted of all normal CXRs and positives consisted of only the CXRs with that particular finding. As such, specificity values were the same across all findings in a given dataset.
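These metrics can be computed from a score vector and a threshold as in the short sketch below, using scikit-learn for the AUC; this is a generic illustration, not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def summarize(y_true, scores, threshold):
    """AUC plus threshold-dependent metrics for binary normal/abnormal labels."""
    preds = (np.asarray(scores) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, preds, labels=[0, 1]).ravel()
    n = len(preds)
    return {
        "AUC": roc_auc_score(y_true, scores),
        "NPV": tn / (tn + fn),
        "PPV": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "% predicted negative": (tn + fn) / n,
        "% predicted positive": (tp + fp) / n,
    }
```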
We measured the same set of metrics to evaluate the DLS performance with unseen diseases (TB and COVID-19). However, the ground truth here was defined by either the respective TB or COVID-19 tests, and not whether each image contained any abnormal finding. Thus "negative" TB and COVID-19 cases could still contain other abnormalities.
Statistical analysis. Confidence intervals (CI) for all evaluation metrics were calculated using the nonparametric bootstrap method with n = 1000 permutations at the image level.
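A generic implementation of this image-level bootstrap is sketched below; `metric` is any callable, such as `roc_auc_score`.

```python
import numpy as np

def bootstrap_ci(labels, scores, metric, n_boot=1000, alpha=0.05, seed=0):
    """Non-parametric bootstrap CI for metric(labels, scores), with n_boot
    resamples drawn with replacement at the image level."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), size=len(labels))
        stats.append(metric(labels[idx], scores[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```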
To compare the performance of DLS with the radiologists in a DLS-assisted workflow, non-inferiority tests with paired binary data were performed using the Wald test procedure with a 5% margin 37 . To correct for multiple hypothesis testing, we used Bonferroni correction, yielding α = 0.003125 (one-sided test with α = 0.025 divided by 8 comparisons) 38 .
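For reference, a simplified Wald-type version of this paired non-inferiority test is sketched below; the exact procedure in ref. 37 differs in details (e.g., the variance estimate), so this is an assumption-laden illustration rather than a reimplementation.

```python
import numpy as np
from scipy.stats import norm

def noninferiority_wald(b, c, n, margin=0.05, alpha=0.003125):
    """One-sided Wald test for paired binary outcomes.

    b: cases the DLS classified correctly and the radiologist did not;
    c: the reverse; n: number of paired cases.
    H0: p_dls - p_rad <= -margin (DLS inferior beyond the 5% margin).
    """
    diff = (b - c) / n
    se = np.sqrt((b + c - (b - c) ** 2 / n) / n ** 2)
    z = (diff + margin) / se
    return z, z > norm.ppf(1 - alpha)  # True -> reject H0, conclude non-inferiority
```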
Class activation mappings.
To provide an approximate visual explanation of how the DLS makes predictions on a small subset of our data, we utilized gradient-weighted class activation mapping (Grad-CAM) 39 to identify the image regions critical to the model's decision-making process (Fig. 5). Because overlaying activation maps on an image obscures the original image, a common Grad-CAM visualization shows two images: the original image and the image with the overlaid activation maps. Here, to balance brevity and clarity, we present the activation maps as outlines highlighting the regions of interest. The outlines were obtained by first using linear interpolation to upsample the low-resolution Grad-CAM feature maps to the size of the original X-rays, resulting in smooth intensity gradations. Next, the majority of the color map is set to a transparent color, while a narrow band around 60% of the maximal intensity is opaque, visualizing an isoline contour. Conceptually, this is equivalent to taking a horizontal cross-section of the activation map's three-dimensional contour plot, where the x and y axes represent the spatial location and the z-axis represents the magnitude of activation. We found this a useful alternative way to present Grad-CAM results in a single image. The purpose of these visualizations is explainability: to visualize and understand the locations influencing model predictions for a few specific examples. The visualizations do not necessarily reflect an accurate segmentation of the lung abnormality.
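The outline construction just described can be reproduced, up to implementation details, with a few lines of array code; the band half-width below is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import zoom

def isoline_mask(cam, out_shape, level=0.6, band=0.05):
    """Upsample a low-resolution Grad-CAM map with linear interpolation and keep
    only a narrow band around `level` of the maximal intensity, i.e. an isoline
    contour suitable for drawing as an outline over the original X-ray."""
    scale = (out_shape[0] / cam.shape[0], out_shape[1] / cam.shape[1])
    cam_up = zoom(cam, scale, order=1)          # order=1 -> linear interpolation
    cam_up = cam_up / (cam_up.max() + 1e-8)     # normalize to [0, 1]
    return np.abs(cam_up - level) < band        # True on the contour band
```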
Data availability
Many of the datasets used in this study are publicly available. CXR-14 is a public dataset provided by the NIH (https://nihcc.app.box.com/v/ChestXray-NIHCC) 7,26. The expert labels we obtained will be made available at https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest#additional_labels. TB-1 and TB-2 are publicly available 27,28. Other than these public datasets, DS-1, COV-1, and COV-2 are owned by their respective institutions. For COV-1 and COV-2 data requests, please contact Dr. Mozziyar Etemadi ([email protected]). For additional requests, please contact D.T., P.-H.C.C., or S.S.
Figure 1. Schematic of the study design, including (A) training and tuning, (B) operating point selection, (C) evaluation of the deep learning system and radiologists, and (D) two simulated DLS-assisted workflows. DS-1, CXR-14, TB-1, TB-2, COV-1, and COV-2 are abbreviations of the datasets; see "Datasets" in "Methods" for details.

Figure 3. Positive predictive values (PPV) and negative predictive values (NPV) of the DLS across 6 datasets. (A) General abnormalities: DS-1 and CXR-14. (B) TB: TB-1 and TB-2. (C) COVID-19: COV-1 and COV-2. The horizontal dotted lines represent the prevalence of positive examples (red) and negative examples (blue), which also correspond to the PPV and NPV of random predictions, respectively. The DLS's NPV converges to the prevalence of negative examples when all examples are predicted as negative, and the DLS's PPV converges to the prevalence of positive examples when all examples are predicted as positive. The vertical dotted black lines highlight the operating point selected at 95% sensitivity on the operating point selection sets for each scenario.

Figure 4. Histograms of the distribution of DLS-predicted scores across the 6 datasets and their corresponding operating point selection sets: (A) DS-1 and CXR-14, (B) TB-1 and TB-2, and (C) COV-1 and COV-2. Curation of the operating point selection (Op. Sel.) datasets is described in "Operating point selection datasets" in "Methods". Positive and negative examples are visualized separately in red and blue, respectively. The vertical lines (black) highlight the selected high-sensitivity operating point for each scenario.

Figure 5. Sample CXRs of true and false positives, and true and false negatives for (A) general abnormalities, (B) TB, and (C) COVID-19. Each image has the class activation map presented as red outlines that indicate the areas the DLS is focusing on for identifying abnormalities, and yellow outlines representing regions of interest indicated by radiologists. Text descriptions for each CXR are below the respective image. Note that the general abnormality false negative example is shown with abnormal class activation maps; however, the DLS predictive score on the case was lower than the selected threshold, hence the image was classified as "normal". Note also that the TB false positive image was saved in the system with inverted colors that were inconsistent with what was specified in the DICOM header tag, and was presented to the model that way.

Figure 6. Impact of a simulated DLS-based prioritization in comparison with random review order for (A) general abnormalities, (B) TB, and (C) COVID-19. The bars indicate sequences of abnormal CXRs in red and normal CXRs in pink; a greater density of red towards the left indicates abnormal CXRs are reviewed sooner than normal ones. The histograms indicate the average improvement in turnaround time.
The DLS's and radiologists' performance for distinguishing normal versus abnormal across all individual findings is shown in Supplementary Figs. 2-4 and Supplementary Tables 3 and 4.

Performance in the setting of unseen diseases. The DLS was next evaluated on two diseases that it had not been trained to detect (TB and COVID-19) across four disease-specific datasets: TB-1, TB-2, COV-1, and COV-2. In these analyses, the DLS was evaluated against the reference standard for each specific disease (TB or COVID-19, respectively; see "Labels" in "Methods"). For TB (where the percentages of disease-positive images were 52% and 40% in TB-1 and TB-2; Table 1), the AUCs were 0.95 (95% CI 0.93-0.97) in TB-1 and 0.97 (95% CI 0.94-0.99) in TB-2 (Table 2, Fig. 2B). At the high-sensitivity operating point, the DLS predicted 43.1% of TB-1 and 38.3% of TB-2 as negative, with NPVs of 0.88 and 0.98, respectively [...] TB-1 and TB-2, respectively (Table 3 and Fig. 2B). Further subgroup analyses comparing the DLS performance with individual radiologists are shown in Supplementary Table 5A,B.

For COVID-19 (where the percentages of disease-positive images were 32% and 48% in COV-1 and COV-2; Table 1), the AUCs were 0.68 (95% CI 0.66-0.71) in COV-1 and 0.65 (95% CI 0.60-0.69) in COV-2 (Table 2, Fig. 2C).
Table 2. Quantitative evaluation of the DLS in distinguishing normal versus abnormal CXRs across 6 datasets. (A) The DLS's performance at the high-sensitivity operating point. (B) The DLS's performance at the high-specificity operating point. The AUC is independent of the operating point and is identical to that in (A).

(A) High-sensitivity operating point (optimizes for NPV)

| Scenario | Dataset (reference label used for evaluation) | AUC (95% CI) | No. predicted negative (%) | NPV (95% CI) | Sensitivity (95% CI) | No. predicted positive (%) | PPV (95% CI) | Specificity (95% CI) |
|---|---|---|---|---|---|---|---|---|
| Abnormality detection | DS-1 (normal/abnormal) | 0.87 (0.87-0.88) | 2313 (29.9%) | 0.98 (0.97-0.99) | 0.98 (0.97-0.98) | 5434 (70.1%) | 0.33 (0.32-0.34) | 0.38 (0.37-0.40) |
| Abnormality detection | CXR-14 (normal/abnormal) | 0.94 (0.93-0.96) | 194 (24.0%) | 0.85 (0.79-0.89) | 0.95 (0.93-0.97) | 616 (76.0%) | 0.89 (0.86-0.91) | 0.71 (0.65-0.76) |
| Unseen disease 1: TB | TB-1 (TB status) | 0.95 (0.93-0.97) | 199 (43.1%) | 0.88 (0.84-0.93) | 0.90 (0.87-0.94) | 263 (56.9%) | 0.83 (0.78-0.87) | 0.80 (0.74-0.85) |
| Unseen disease 1: TB | TB-2 (TB status) | 0.97 (0.94-0.99) | 51 (38.3%) | 0.98 (0.94-1.0) | 0.98 (0.94-1.0) | 82 (61.7%) | 0.63 (0.51-0.73) | 0.63 (0.51-0.73) |
| Unseen disease 2: COVID-19 | COV-1 (COVID-19 status) | 0.68 (0.66-0.71) | 109 (5.9%) | 0.85 (0.78-0.92) | 0.97 (0.96-0.98) | 1710 (94.0%) | 0.33 (0.31-0.35) | 0.08 (0.06-0.09) |
| Unseen disease 2: COVID-19 | COV-2 (COVID-19 status) | 0.65 (0.60-0.69) | 59 (9.8%) | 0.56 (0.43-0.68) | 0.91 (0.87-0.94) | 546 (90.2%) | 0.48 (0.44-0.52) | 0.10 (0.07-0.14) |

(B) High-specificity operating point (optimizes for PPV)

| Scenario | Dataset (reference label used for evaluation) | No. predicted negative (%) | NPV (95% CI) | Sensitivity (95% CI) | No. predicted positive (%) | PPV (95% CI) | Specificity (95% CI) |
|---|---|---|---|---|---|---|---|
| Abnormality detection | DS-1 (normal/abnormal) | 6027 (77.8%) | 0.89 (0.88-0.90) | 0.63 (0.61-0.65) | 1720 (22.2%) | 0.68 (0.65-0.70) | 0.91 (0.90-0.91) |
| Abnormality detection | CXR-14 (normal/abnormal) | 715 (88.3%) | 0.32 (0.29-0.36) | 0.16 (0.13-0.20) | 95 (11.7%) | 0.99 (0.96-1.0) | 1.0 (0.99-1.0) |
| Unseen disease 1: TB | TB-1 (TB status) | 260 (56.3%) | 0.81 (0.76-0.85) | 0.81 (0.74-0.84) | 202 (43.7%) | 0.95 (0.91-0.98) | 0.95 (0.92-0.98) |
| Unseen disease 1: TB | TB-2 (TB status) | 80 (60.2%) | 0.94 (0.88-0.99) | 0.91 (0.82-0.98) | 53 (39.8%) | 0.91 (0.81-0.98) | 0.94 (0.88-0.99) |
| Unseen disease 2: COVID-19 | COV-1 (COVID-19 status) | 1558 (85.7%) | 0.72 (0.69-0.74) | 0.23 (0.20-0.27) | 261 (14.3%) | 0.52 (0.46-0.58) | 0.90 (0.88-0.92) |
| Unseen disease 2: COVID-19 | COV-2 (COVID-19 status) | 537 (88.8%) | 0.55 (0.51-0.59) | 0.17 (0.12-0.21) | 68 (11.2%) | 0.71 (0.59-0.81) | 0.94 (0.91-0.96) |

Figure 2. Receiver operating characteristic (ROC) curves for the DLS in distinguishing normal and abnormal CXRs across the 6 datasets. Positive CXRs in DS-1 and CXR-14 contain a mix of multiple labeled abnormalities (Supplementary Table 2). Positive CXRs in the two TB datasets are from patients with tuberculosis. Positive CXRs in the two COVID-19 datasets are from patients with reverse transcription polymerase chain reaction (RT-PCR)-verified COVID-19. Radiologists' performance in distinguishing the test cases as normal or abnormal is also highlighted in the figures. DLS performance for identifying abnormalities in the TB and COVID-19 datasets (as opposed to the presence or absence of TB or COVID-19) is presented in Supplementary Fig. 6, with AUCs of 0.91-0.93 for TB and 0.86 for COVID-19.
Table 3. Radiologist performance in distinguishing normal and abnormal CXRs across the 6 datasets. For DS-1 and CXR-14, the two rows correspond to the two independent radiologist readers.

| Scenario | Dataset (reference label used for evaluation) | No. predicted negative (%) | NPV (95% CI) | Sensitivity (95% CI) | No. predicted positive (%) | PPV (95% CI) | Specificity (95% CI) |
|---|---|---|---|---|---|---|---|
| Abnormality detection | DS-1 (normal/abnormal) | 6567 (84.8%) | 0.86 (0.85-0.86) | 0.48 (0.46-0.51) | 1180 (15.2%) | 0.76 (0.74-0.78) | 0.95 (0.95-0.96) |
| Abnormality detection | DS-1 (normal/abnormal) | 6380 (82.4%) | 0.87 (0.86-0.88) | 0.54 (0.52-0.57) | 1367 (17.6%) | 0.74 (0.71-0.76) | 0.94 (0.93-0.94) |
| Abnormality detection | CXR-14 (normal/abnormal) | 284 (35.1%) | 0.73 (0.67-0.77) | 0.87 (0.84-0.89) | 526 (64.9%) | 0.95 (0.93-0.97) | 0.89 (0.85-0.93) |
| Abnormality detection | CXR-14 (normal/abnormal) | 325 (40.1%) | 0.67 (0.62-0.72) | 0.81 (0.78-0.84) | 485 (59.9%) | 0.97 (0.96-0.99) | 0.94 (0.91-0.97) |
| Unseen disease: TB | TB-1 (TB status) | 282 (61.0%) | 0.74 (0.69-0.80) | 0.70 (0.65-0.76) | 180 (39.0%) | 0.93 (0.89-0.97) | 0.95 (0.91-0.97) |
| Unseen disease: TB | TB-2 (TB status) | 88 (66.2%) | 0.88 (0.81-0.94) | 0.79 (0.68-0.90) | 45 (33.8%) | 0.93 (0.85-1.0) | 0.96 (0.92-1.0) |
| Unseen disease: COVID-19 | COV-1 (COVID-19 status) | 1194 (65.6%) | 0.78 (0.76-0.80) | 0.55 (0.51-0.59) | 625 (34.4%) | 0.51 (0.47-0.54) | 0.75 (0.73-0.77) |
| Unseen disease: COVID-19 | COV-2 (COVID-19 status) | 352 (58.2%) | 0.62 (0.57-0.66) | 0.53 (0.48-0.59) | 253 (41.8%) | 0.60 (0.55-0.66) | 0.68 (0.64-0.74) |
© The Author(s) 2021
Code availability
The deep learning framework used here (TensorFlow v1.15) is available at https://www.tensorflow.org/ and https://github.com/tensorflow/tensorflow/tree/r1.15.

Acknowledgements
The authors thank the members of the Google Health Radiology and labeling software teams for software infrastructure support, logistical support, and assistance in data labeling. For the CXR-14 dataset, we thank the NIH Clinical Center for making it publicly available. For tuberculosis data collection, thanks go to Sameer Antani, Stefan Jaeger, Sema Candemir, Zhiyun Xue, Alex Karargyris, George R. Thomas, Pu-Xuan Lu, Yi-Xiang Wang, Michael Bonifant, Ellan Kim, Sonia Qasba, and Jonathan Musco. Sincere appreciation also goes to the radiologists who enabled this work with their image interpretation and annotation efforts throughout the study, Jonny Wong for coordinating the imaging annotation work, and David F. Steiner, Kunal Nagpal, and Michael D. Howell for providing feedback on the manuscript.

Author contributions

Competing interests
This study was funded by Google LLC and/or a subsidiary thereof ("Google"). Z. N., A. S., S. J., E. S., A. P. K., W. Y., J. Yang, R. P., S. K., J. Yu, G. S. C., L. P., K. E., D. T., N. B., Y. L., P.-H. C. C., and S. S. are employees of Google and own stock as part of the standard compensation package. C. L. is a paid consultant of Google. R. K., M. E., F. G. V., and D. M. received funding from Google to support the research collaboration.

Additional information
Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1038/s41598-021-93967-2. Correspondence and requests for materials should be addressed to D.T., P.-H.C.C. or S.S. Reprints and permissions information is available at www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. United Nations Scientific Committee on the Effects of Atomic Radiation. Sources and effects of ionizing radiation. UNSCEAR Reports. https://doi.org/10.18356/97887b8d-en (2008).
2. Nakajima, Y., Yamada, K., Imamura, K. & Kobayashi, K. Radiologist supply and workload: International comparison. Radiat. Med. 26, 455-465 (2008).
3. Kawooya, M. G. Training for rural radiology and imaging in sub-Saharan Africa: Addressing the mismatch between services and population. J. Clin. Imaging Sci. 2, 37 (2012).
4. The Royal College of Radiologists. Clinical radiology UK workforce census 2019 report (2020).
5. Majkowska, A. et al. Chest radiograph interpretation with deep learning models: Assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 294, 421-431 (2020).
6. Rajpurkar, P. et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 15, e1002686 (2018).
7. Wang, X. et al. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2097-2106 (2017).
8. Lakhani, P. & Sundaram, B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 284, 574-582 (2017).
9. Nam, J. G. et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 290, 218-228 (2019).
10. Irvin, J. et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. Proc. AAAI Conf. Artif. Intell. 33, 590-597 (2019).
11. Ren, J. et al. Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 14707-14718 (Curran Associates, Inc., 2019).
12. Amodei, D. et al. Concrete problems in AI safety. arXiv [cs.AI] (2016).
13. Bachtiger, P., Peters, N. S. & Walsh, S. L. Machine learning for COVID-19-asking the right questions. Lancet Digit. Health 2, e391-e392 (2020).
14. Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G. & King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 17, 195 (2019).
15. Guan, W.-J. et al. Clinical characteristics of coronavirus disease 2019 in China. N. Engl. J. Med. 382, 1708-1720 (2020).
16. Rajaraman, S., Sornapudi, S., Alderson, P. O., Folio, L. R. & Antani, S. K. Analyzing inter-reader variability affecting deep ensemble learning for COVID-19 detection in chest radiographs. PLoS One 15, e0242301 (2020).
17. Kitazono, M. T., Lau, C. T., Parada, A. N., Renjen, P. & Miller, W. T. Jr. Differentiation of pleural effusions from parenchymal opacities: Accuracy of bedside chest radiography. AJR Am. J. Roentgenol. 194, 407-412 (2010).
18. Eisen, L. A., Berger, J. S., Hegde, A. & Schneider, R. F. Competency in chest radiography. A comparison of medical students, residents, and fellows. J. Gen. Intern. Med. 21, 460-465 (2006).
19. Bluemke, D. A. et al. Assessing radiology research on artificial intelligence: A brief guide for authors, reviewers, and readers-from the Radiology editorial board. Radiology 294, 487-489 (2020).
20. Yates, E. J., Yates, L. C. & Harvey, H. Machine learning "red dot": Open-source, cloud, deep convolutional neural networks in chest radiograph binary normality classification. Clin. Radiol. 73, 827-831 (2018).
21. Annarumma, M. et al. Automated triaging of adult chest radiographs with deep artificial neural networks. Radiology 291, 196-202 (2019).
22. Hwang, E. J. et al. Deep learning for chest radiograph diagnosis in the emergency department. Radiology 293, 573-580 (2019).
23. Tang, Y.-X. et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digit. Med. 3, 70 (2020).
24. Cicero, M. et al. Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Investig. Radiol. 52, 281-287 (2017).
25. Dunnmon, J. A. et al. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 290, 537-544 (2019).
26. NIH Chest X-ray Dataset of 14 Common Thorax Disease Categories. https://nihcc.app.box.com/v/ChestXray-NIHCC/file/220660789610. Accessed 19 Jan 2018.
27. Jaeger, S. et al. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant. Imaging Med. Surg. 4, 475-477 (2014).
28. Jaeger, S. et al. Automatic tuberculosis screening using chest radiographs. IEEE Trans. Med. Imaging 33, 233-245 (2014).
29. Candemir, S. et al. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans. Med. Imaging 33, 577-590 (2014).
30. Criteria for return to work for healthcare personnel with SARS-CoV-2 infection (Interim Guidance). https://www.cdc.gov/coronavirus/2019-ncov/hcp/return-to-work.html.
31. Kucirka, L. M., Lauer, S. A., Laeyendecker, O., Boon, D. & Lessler, J. Variation in false-negative rate of reverse transcriptase polymerase chain reaction-based SARS-CoV-2 tests by time since exposure. Ann. Intern. Med. 173, 262-267 (2020).
32. Tan, M. & Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML) vol. 97, 6105-6114 (PMLR, 2019).
33. Sun, C., Shrivastava, A., Singh, S. & Gupta, A. Revisiting unreasonable effectiveness of data in deep learning era. In 2017 IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2017.97 (2017).
34. Deng, J. et al. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvprw.2009.5206848 (2009).
35. Qian, N. On the momentum term in gradient descent learning algorithms. Neural Netw. 12, 145-151 (1999).
36. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929-1958 (2014).
37. Liu, J.-P., Hsueh, H.-M., Hsieh, E. & Chen, J. J. Tests for equivalence or non-inferiority for paired binary data. Stat. Med. 21, 231-245 (2002).
38. Bland, J. M. & Altman, D. G. Multiple significance tests: The Bonferroni method. BMJ 310, 170 (1995).
39. Selvaraju, R. R. et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV) 618-626 (IEEE, 2017).
On the connections between optimization algorithms, Lyapunov functions, and differential equations: theory and insights

Paul Dobson (University of Edinburgh, [email protected]), Jesus Maria Sanz-Serna (Universidad Carlos III de Madrid, [email protected]), Konstantinos C. Zygalakis (University of Edinburgh, [email protected])

15 May 2023. arXiv:2305.08658, doi: 10.48550/arxiv.2305.08658.

Abstract. We study connections between differential equations and optimization algorithms for m-strongly convex and L-smooth functions through the use of Lyapunov functions, by generalizing the Linear Matrix Inequality framework developed by Fazlyab et al. in 2018. Using the new framework, we derive analytically a new (discrete) Lyapunov function for a two-parameter family of Nesterov optimization methods and characterize their convergence rate. This allows us to prove a convergence rate that improves substantially on the previously proven rate of Nesterov's method for the standard choice of coefficients, as well as to characterize the choice of coefficients that yields the optimal rate. We obtain a new Lyapunov function for the Polyak ODE and revisit the connection between this ODE and Nesterov's algorithms. In addition, we discuss a new interpretation of Nesterov's method as an additive Runge-Kutta discretization and explain the structural conditions that discretizations of the Polyak equation should satisfy in order to lead to accelerated optimization algorithms.

Preprint. Under review.
Introduction
We are interested in solving the minimization problem
$$\min_{x \in \mathbb{R}^d} f(x), \qquad (1)$$
where $f : \mathbb{R}^d \to \mathbb{R}$ is $m$-strongly convex and $L$-smooth (we denote the set of all such $f$ by $\mathcal{F}_{m,L}$). Gradient descent is the simplest algorithm for obtaining the solution $x^\star$ of (1); it converges with a rate $1 - \mathcal{O}(1/\kappa)$, $\kappa = L/m$, for $f \in \mathcal{F}_{m,L}$ (Nesterov (2014)). This rate is unsatisfactory, because in many applications the condition number $\kappa$ is $\gg 1$. It is possible to improve upon gradient descent by resorting to accelerated methods with rates $1 - \mathcal{O}(1/\sqrt{\kappa})$. An example of such a method is the well-known Nesterov accelerated method (Nesterov (1983)): for $f \in \mathcal{F}_{m,L}$,
$$x_{k+1} = y_k - \frac{1}{L}\,\nabla f(y_k), \qquad (2a)$$
$$y_k = x_k + \frac{1 - 1/\sqrt{\kappa}}{1 + 1/\sqrt{\kappa}}\,(x_k - x_{k-1}). \qquad (2b)$$
It is shown in (Nesterov, 2014, Theorem 2.2.3) that, if $y_0 = x_0$,
$$f(x_k) - f(x^\star) \le \left(1 - 1/\sqrt{\kappa}\right)^k \left( f(x_0) - f(x^\star) + \frac{m}{2}\,\|x_0 - x^\star\|^2 \right).$$
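For concreteness, a short NumPy implementation of the scheme (2) with the standard coefficients is given below; the quadratic test function is an illustrative choice.

```python
import numpy as np

def nesterov(grad_f, x0, m, L, n_iter=500):
    """Nesterov's accelerated method (2) for f in F_{m,L}."""
    kappa = L / m
    beta = (1 - 1 / np.sqrt(kappa)) / (1 + 1 / np.sqrt(kappa))
    x_prev, x = x0.copy(), x0.copy()      # y_0 = x_0
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)       # momentum step (2b)
        x_prev, x = x, y - grad_f(y) / L  # gradient step (2a)
    return x

# Example: f(x) = (m x_1^2 + L x_2^2) / 2, whose minimizer is the origin.
m, L = 1.0, 100.0
grad = lambda x: np.array([m, L]) * x
print(nesterov(grad, np.array([1.0, 1.0]), m, L))  # close to (0, 0)
```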
We can view the algorithm (2) as a discretization of the damped oscillator equation in Polyak (1964),
$$\ddot{x} + b\sqrt{m}\,\dot{x} + \nabla f(x) = 0, \qquad b > 0. \qquad (3)$$
A fixed point of (3) corresponds to the minimizer of $f$; moreover, the solutions $x(t)$ of (3) approach $x^\star$ as $t \to \infty$ if $f$ is $m$-strongly convex (Wilson et al., 2021, Proposition 3). The connection between ordinary differential equations (ODEs) and optimization algorithms has been a rewarding line of research. As was shown in Su et al. (2016); Scieur et al. (2017), finding an ODE which corresponds to the limit of an optimization algorithm can give insights about the algorithm. Conversely, there is a large number of research works (see e.g. Wibisono et al. (2016); Krichene et al. (2015)) which propose accelerated algorithms obtained by suitable discretizations of second-order dissipative ODEs, both in Euclidean and non-Euclidean geometry. Furthermore, the links between such ODEs and Hamiltonian dynamics led to a number of research works that tried to construct or explain optimization algorithms using concepts such as shadowing (Orvieto and Lucchi (2019)), symplecticity (Betancourt et al. (2018); Bravetti et al. (2019); Muehlebach and Jordan (2019, 2021); Shi et al. (2019)), discrete gradients (Ehrhardt et al. (2018)), and backward error analysis (Franca et al. (2021)).
A common element of the analysis presented in many of the papers mentioned above is the construction of discrete Lyapunov functions used to identify the convergence rate of the underlying algorithm. Furthermore, Lessard et al. (2016) introduced a control theoretic view of optimization algorithms, that has been later connected with Lyapunov functions both in discrete and continuous time by Fazlyab et al. (2018) (see also Sanz-Serna and Zygalakis (2021)).
In this work, we modify some of the conditions needed to obtain a Lyapunov function in the control theoretic framework presented in Fazlyab et al. (2018). This allows us to construct a new Lyapunov function for a two-parameter family of Nesterov optimization methods, which in turn is used to prove convergence rates that improve on those previously reported in the literature. It turns out that the coefficients in (2) are not optimal in terms of the convergence rate. We also derive a new Lyapunov function for the oscillator (21). The interpretation of Nesterov algorithms as discretizations of (21) is well known; however this discretization does not belong to the standard classes of integrators given by linear multistep or Runge-Kutta methods. We show that Nesterov algorithms are in fact instances of the class of additive Runge-Kutta integrators (Cooper and Sayfy (1980)). Finally we explain the structural conditions that discretizations of (21) have to satisfy in order to lead to accelerated optimization algorithms.
Linear Matrix Inequalities and Lyapunov functions
A useful viewpoint on optimization algorithms (Lessard et al. (2016)) is that they can often be represented as linear dynamical systems interacting with one or more static nonlinearities. We focus on the following state space representation:
$$\xi_{k+1} = A\xi_k + Bu_k, \qquad (4a)$$
$$u_k = \nabla f(y_k), \qquad (4b)$$
$$y_k = C\xi_k, \qquad (4c)$$
$$x_k = E\xi_k, \qquad (4d)$$
where ξ_k ∈ R^n is the state, u_k ∈ R^d is the input (d ≤ n), and y_k ∈ R^d is the feedback output that is mapped to u_k by the nonlinear map ∇f. Here, A, B, C, E are constant matrices of appropriate sizes and x_k is the approximation to the minimizer x⋆ after k steps.
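For instance, gradient descent x_{k+1} = x_k − (1/L)∇f(x_k) fits this template with n = d and A = C = E = I_d, B = −(1/L)I_d. The short sketch below (ours, with an arbitrary quadratic test function) runs the recursion (4) in exactly this form.

```python
import numpy as np

# Gradient descent written in the state-space form (4); all choices below are
# illustrative: f(x) = 0.5 x^T Q x with spec(Q) in [m, L], so grad f(x) = Q x.
d, L = 3, 2.0
Q = np.diag([0.5, 1.0, 2.0])
A, B, C, E = np.eye(d), -(1.0 / L) * np.eye(d), np.eye(d), np.eye(d)

xi = np.ones(d)                 # state xi_0
for k in range(100):
    y = C @ xi                  # (4c): feedback output
    u = Q @ y                   # (4b): u_k = grad f(y_k)
    xi = A @ xi + B @ u         # (4a): state update
print(np.linalg.norm(E @ xi))   # (4d): x_k = E xi_k, converging to x* = 0
```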
To study the convergence rate of optimization algorithms, Fazlyab et al. (2018) considers Lyapunov functions of the form
$$V_k(\xi) = \rho^{-2k}\big[a_0\,(f(x) - f(x^\star)) + (\xi - \xi^\star)^T P\,(\xi - \xi^\star)\big], \qquad \rho \in (0, 1), \qquad (5)$$
where a_0 > 0 and P is positive semi-definite (denoted by P ⪰ 0). If, along the trajectories of (18), V_{k+1}(ξ_{k+1}) ≤ V_k(ξ_k), we can conclude that ρ^{−2k} a_0 (f(x_k) − f(x⋆)) ≤ V_k(ξ_k) ≤ V_0(ξ_0), so that we have the following convergence estimate:
$$f(x_k) - f(x^\star) \le \rho^{2k}\,\frac{V_0(\xi_0)}{a_0}.$$
Fazlyab et al. (2018) present a useful framework based on Linear Matrix Inequalities to check whether functions of the form (26) decay along trajectories; see, in particular, Theorem 3.2 there. Our analysis here is based on the observation that, for f ∈ F_{m,L}, we have f(x) − f(x⋆) ≥ (m/2)‖x − x⋆‖², a fact that, when constructing Lyapunov functions of the form (26), may be used to modify the requirement P ⪰ 0 demanded by Fazlyab et al. (2018) (see details in the supplementary material). More precisely, the following theorem is a useful alternative to Theorem 3.2 in Fazlyab et al. (2018) (where it is demanded that P ⪰ 0). (The notations σ(E^T E), σ(P̃) indicate the sets of eigenvalues of those matrices.)
Theorem 1 Suppose that, for (18), there exist a_0 > 0, ρ ∈ (0, 1), ℓ ≥ 0, and a symmetric matrix P, with P̃ := P + (a_0 m/2) E^T E ≻ 0, such that
$$T = M^{(0)} + a_0\rho^2 M^{(1)} + a_0(1 - \rho^2) M^{(2)} + \ell M^{(3)} \preceq 0, \qquad (6)$$
where
$$M^{(0)} = \begin{bmatrix} A^T P A - \rho^2 P & A^T P B \\ B^T P A & B^T P B \end{bmatrix},$$
and M^{(1)} = N^{(1)} + N^{(2)}, M^{(2)} = N^{(1)} + N^{(3)}, M^{(3)} = N^{(4)}, with
$$N^{(1)} = \begin{bmatrix} EA - C & EB \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} \frac{L}{2} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0 \end{bmatrix} \begin{bmatrix} EA - C & EB \\ 0 & I_d \end{bmatrix}, \qquad N^{(2)} = \begin{bmatrix} C - E & 0 \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} -\frac{m}{2} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0 \end{bmatrix} \begin{bmatrix} C - E & 0 \\ 0 & I_d \end{bmatrix},$$
$$N^{(3)} = \begin{bmatrix} C & 0 \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} -\frac{m}{2} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0 \end{bmatrix} \begin{bmatrix} C & 0 \\ 0 & I_d \end{bmatrix}, \qquad N^{(4)} = \begin{bmatrix} C & 0 \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} -\frac{mL}{m+L} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & -\frac{1}{m+L} I_d \end{bmatrix} \begin{bmatrix} C & 0 \\ 0 & I_d \end{bmatrix}.$$
Then, for f ∈ F_{m,L}, with V given by (26), the sequence {x_k} satisfies
$$\|x_k - x^\star\|^2 \le \max\sigma(E^T E)\,\|\xi_k - \xi^\star\|^2 \le \frac{\max\sigma(E^T E)}{\min\sigma(\tilde P)}\,V(\xi_0, 0)\,\rho^{2k}. \qquad (7)$$
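To make the theorem concrete, the following script (ours) assembles T for gradient descent with stepsize 1/L, where d = 1 suffices by the Kronecker structure exploited later, and checks numerically that the choice a_0 = 1, P = 0, ℓ = 0, ρ² = 1 − m/L satisfies the hypotheses; the specific values of m and L are arbitrary.

```python
import numpy as np

# Eigenvalue check (ours) of Theorem 1 for gradient descent x_{k+1} = x_k - (1/L) grad f(x_k):
# with a0 = 1, P = 0, ell = 0 and rho^2 = 1 - m/L, the matrix T is negative semidefinite.
m, L = 0.1, 1.0
A, B, C, E = np.eye(1), -np.eye(1) / L, np.eye(1), np.eye(1)
P, a0, ell, rho2 = np.zeros((1, 1)), 1.0, 0.0, 1.0 - m / L
I, Z = np.eye(1), np.zeros((1, 1))

def N(F, G, W):  # [F G; 0 I]^T W [F G; 0 I] for the 1x1 blocks F, G
    FG = np.block([[F, G], [Z, I]])
    return FG.T @ W @ FG

WL = np.block([[L / 2 * I, I / 2], [I / 2, Z]])
Wm = np.block([[-m / 2 * I, I / 2], [I / 2, Z]])
W4 = np.block([[-m * L / (m + L) * I, I / 2], [I / 2, -I / (m + L)]])
M0 = np.block([[A.T @ P @ A - rho2 * P, A.T @ P @ B], [B.T @ P @ A, B.T @ P @ B]])
M1 = N(E @ A - C, E @ B, WL) + N(C - E, Z, Wm)
M2 = N(E @ A - C, E @ B, WL) + N(C, Z, Wm)
M3 = N(C, Z, W4)
T = M0 + a0 * rho2 * M1 + a0 * (1 - rho2) * M2 + ell * M3
Ptilde = P + (a0 * m / 2) * E.T @ E
print(np.linalg.eigvalsh(T).max() <= 1e-12, np.linalg.eigvalsh(Ptilde).min() > 0)
```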
Continuous-time systems
We will also consider continuous-time dynamical systems in state space form,
$$\dot\xi(t) = \bar A\xi(t) + \bar B u(t), \qquad y(t) = \bar C\xi(t), \qquad u(t) = \nabla f(y(t)), \qquad t \ge 0, \qquad (8)$$
where ξ(t) ∈ R^n is the state, y(t) ∈ R^d (d ≤ n) the output, and u(t) = ∇f(y(t)) the continuous feedback input. Fixed points of (20) satisfy
$$0 = \bar A\xi^\star + \bar B u^\star, \qquad y^\star = \bar C\xi^\star, \qquad u^\star = \nabla f(y^\star);$$
in our context, u⋆ = 0 and y⋆ = x⋆. Similarly to the discrete case, Fazlyab et al. (2018) considers Lyapunov functions of the form
$$V(\xi, t) = e^{\lambda t}\big[f(y(t)) - f(y^\star) + (\xi(t) - \xi^\star)^T \bar P\,(\xi(t) - \xi^\star)\big], \qquad (9)$$
where P̄ is a positive definite matrix. Again, in a similar way as in the discrete case, we can modify the positive-definiteness assumption to obtain the following theorem.
Theorem 2 Suppose that, for (20), there exist λ > 0, σ ≥ 0, and a symmetric matrix P̄ with P̃ := P̄ + (m/2) C̄^T C̄ ≻ 0, that satisfy
$$\bar T = \bar M^{(0)} + \bar M^{(1)} + \lambda \bar M^{(2)} + \sigma \bar M^{(3)} \preceq 0,$$
where
$$\bar M^{(0)} = \begin{bmatrix} \bar P \bar A + \bar A^T \bar P + \lambda \bar P & \bar P \bar B \\ \bar B^T \bar P & 0 \end{bmatrix}, \qquad \bar M^{(1)} = \frac{1}{2}\begin{bmatrix} 0 & (\bar C \bar A)^T \\ \bar C \bar A & \bar C \bar B + \bar B^T \bar C^T \end{bmatrix},$$
$$\bar M^{(2)} = \begin{bmatrix} \bar C & 0 \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} -\frac{m}{2} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0 \end{bmatrix} \begin{bmatrix} \bar C & 0 \\ 0 & I_d \end{bmatrix}, \qquad \bar M^{(3)} = \begin{bmatrix} \bar C & 0 \\ 0 & I_d \end{bmatrix}^T \begin{bmatrix} -\frac{mL}{m+L} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & -\frac{1}{m+L} I_d \end{bmatrix} \begin{bmatrix} \bar C & 0 \\ 0 & I_d \end{bmatrix}.$$
Then the following inequality holds for f ∈ F_{m,L} and t ≥ 0, with V given by (30):
$$\|y(t) - y^\star\|^2 \le \max\sigma(\bar C^T \bar C)\,\|\xi(t) - \xi^\star\|^2 \le \frac{\max\sigma(\bar C^T \bar C)}{\min\sigma(\tilde P)}\,e^{-\lambda t}\,V(\xi(0), 0).$$
A Lyapunov function for a family of Nesterov's methods
We will now study optimization methods of the form
$$x_{k+1} = x_k + \beta(x_k - x_{k-1}) - \alpha\nabla f(y_k), \qquad (10a)$$
$$y_k = x_k + \beta(x_k - x_{k-1}), \qquad (10b)$$
k = −1, 0, 1, . . ., with parameters α, β > 0; note that (2) arises from a particular choice of α and β. We write (10) in the state space form (18). In order to easily relate the material in this section to later developments, we first introduce as a new variable the divided difference, k = 0, 1, . . .,
$$d_k = \frac{1}{\delta}\,(x_k - x_{k-1}),$$
where the steplength δ = √(mα) is nondimensional, in the sense that its numerical value does not change when f and x are scaled (β is also nondimensional). With the new variable, (10) becomes (k = 0, 1, . . .)
$$d_{k+1} = \beta d_k - \frac{\alpha}{\delta}\,\nabla f(y_k), \qquad (11a)$$
$$x_{k+1} = x_k + \delta\beta d_k - \alpha\nabla f(y_k), \qquad (11b)$$
$$y_k = x_k + \delta\beta d_k, \qquad (11c)$$
and these equations are of the form (18) with ξ_k = [d_k^T, x_k^T]^T ∈ R^{2d} and
$$A = \begin{bmatrix} \beta I_d & 0 \\ \delta\beta I_d & I_d \end{bmatrix}, \qquad B = \begin{bmatrix} -(\alpha/\delta) I_d \\ -\alpha I_d \end{bmatrix}, \qquad C = [\delta\beta I_d \;\; I_d], \qquad E = [0 \;\; I_d]. \qquad (12)$$
According to Theorem 1, in order to identify a convergence rate for (19), it is sufficient to find numbers a_0 > 0, ρ ∈ (0, 1), ℓ ≥ 0 and a matrix P with P + (a_0 m/2)E^T E ≻ 0 such that T in (6) is ⪯ 0. We set ℓ = 0, as this does not have a significant impact on the value of ρ that results from the analysis. This, in turn, allows us to further simplify things: since T is homogeneous in P and a_0, we may assume a_0 = 1. Then T is a function of P and ρ (and the method parameters β and δ).
The matrix A in (12) is a Kronecker product of a 2 × 2 matrix and I_d,
$$A = \begin{bmatrix} \beta & 0 \\ \delta\beta & 1 \end{bmatrix} \otimes I_d;$$
the factor I_d originates from the dimensionality of the decision variable x, and the 2 × 2 factor is independent of d and arises from the optimization algorithm. The matrices B, C and E have a similar Kronecker product structure. It is then natural to consider symmetric matrices P of the form
$$P = \mathcal{P} \otimes I_d, \qquad \mathcal{P} = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}, \qquad (13)$$
and then T will also have a Kronecker product structure,
$$T = \mathcal{T} \otimes I_d, \qquad \mathcal{T} = \begin{bmatrix} t_{11} & t_{12} & t_{13} \\ t_{12} & t_{22} & t_{23} \\ t_{13} & t_{23} & t_{33} \end{bmatrix}, \qquad (14)$$
where the t_{ij} are explicitly given by the following expressions, obtained from (12) and the recipes for M^{(0)}, M^{(1)} and M^{(2)} in Theorem 1:
$$\begin{aligned}
t_{11} &= \beta^2 p_{11} + 2\delta\beta^2 p_{12} + \delta^2\beta^2 p_{22} - \rho^2 p_{11} - \delta^2\beta^2 m/2, \\
t_{12} &= \beta p_{12} + \delta\beta p_{22} - \rho^2 p_{12} - \delta\beta m/2 + \rho^2\delta\beta m/2, \\
t_{13} &= -\delta^{-1}\alpha\beta p_{11} - 2\alpha\beta p_{12} - \delta\alpha\beta p_{22} + \delta\beta/2, \\
t_{22} &= p_{22} - \rho^2 p_{22} - m/2 + \rho^2 m/2, \\
t_{23} &= -\delta^{-1}\alpha p_{12} - \alpha p_{22} + 1/2 - \rho^2/2, \\
t_{33} &= \delta^{-2}\alpha^2 p_{11} + 2\delta^{-1}\alpha^2 p_{12} + \alpha^2 p_{22} + \alpha^2 L/2 - \alpha.
\end{aligned}$$
Our objective is to find ρ ∈ (0, 1) and p_{11}, p_{12}, p_{22} that lead to 𝒯 ⪯ 0 and 𝒫 + (m/2)ℰ^Tℰ ≻ 0, where ℰ = [0, 1] is the 2 × 2-level factor of E (which in turn imply T ⪯ 0 and P + (m/2)E^T E ≻ 0). The algebra becomes simpler if we represent β and ρ² as β = 1 − bδ, ρ² = 1 − rδ. We choose the entries of 𝒫 as (details may be found in the supplementary material)
$$p_{11} = p_{22}\delta^2 - mr\delta + \frac{m}{2}, \qquad p_{12} = \frac{mr}{2} - \delta p_{22}, \qquad p_{22} = \frac{mr\,(b^2\delta^3 - b^2\delta - 2rb\delta^3 + 2rb\delta + 3r\delta^2 - 2\delta - r)}{4\delta r - 4}.$$
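The following SymPy snippet (ours) verifies symbolically that these choices of p_{11} and p_{12} make t_{13} and t_{23} vanish, and that t_{33} reduces to (α²L − α)/2, using α = δ²/m (from δ = √(mα)).

```python
import sympy as sp

# Symbolic spot-check (ours) that the stated choices annihilate t13 and t23.
m, L, delta, b, r, p22 = sp.symbols('m L delta b r p22', positive=True)
alpha = delta**2 / m                 # since delta = sqrt(m*alpha)
beta = 1 - b * delta
rho2 = 1 - r * delta
p12 = m * r / 2 - delta * p22
p11 = p22 * delta**2 - m * r * delta + m / 2
t13 = -alpha * beta * p11 / delta - 2 * alpha * beta * p12 - delta * alpha * beta * p22 + delta * beta / 2
t23 = -alpha * p12 / delta - alpha * p22 + (1 - rho2) / 2
print(sp.simplify(t13), sp.simplify(t23))                # both 0
t33 = alpha**2 * p11 / delta**2 + 2 * alpha**2 * p12 / delta + alpha**2 * p22 + alpha**2 * L / 2 - alpha
print(sp.simplify(t33 - (alpha**2 * L - alpha) / 2))     # 0, i.e. t33 = (alpha^2 L - alpha)/2
```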
With this choice, t_{13} = t_{23} = 0, t_{33} = (α²L − α)/2, and t_{11}, t_{12}, t_{22} depend only on δ, r, b. We next limit the steplength by imposing α ≤ 1/L, or equivalently δ ≤ δ_max = 1/√κ, so as to have t_{33} ≤ 0. Finally we demand that t_{11}t_{22} − t_{12}² = 0, which gives a relation ϕ(r, b; δ) = 0 between the method parameter b and the rate r for each choice of δ. When this relation is satisfied, t_{11} ≤ 0, which, in tandem with the other properties of the t_{ij} established before, guarantees T ⪯ 0. For two values of δ_max, we plot in Figure 1a the curve ϕ = 0 for the most favourable steplength δ = δ_max and, in addition, we compare with the analogous curve obtained in Sanz-Serna and Zygalakis (2021) under the constraint P ⪰ 0 required by the framework in Fazlyab et al. (2018). As we may see, by changing the constraint on P it is possible to prove a significantly better convergence rate. In particular, for the modified constraint in the present analysis, one can show that b may be chosen to get r = √2 − O(δ), which in turn implies that for δ = δ_max in (7):
$$\rho^2 = 1 - \frac{\sqrt{2}}{\sqrt{\kappa}} + O(\kappa^{-1}), \qquad \kappa \to \infty.$$
Also note that the parameter values α and β in (2) lead to the best convergence rate that may be established with the approach in Sanz-Serna and Zygalakis (2021); however, the present analysis shows that higher convergence rates may be rigorously proved for alternative choices of β.
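The curve in Figure 1a can be reproduced numerically. The sketch below (ours; the grid resolution and the values of b are arbitrary) scans r for the largest value at which the leading 2 × 2 block of 𝒯 is negative semidefinite, with p_{11}, p_{12}, p_{22} chosen as above and δ = δ_max.

```python
import numpy as np

# Sketch (ours) of the computation behind Figure 1a: for delta = 1/sqrt(kappa) and a
# given b, report the largest r with t11 <= 0, t22 <= 0 and t11*t22 - t12^2 >= 0.
m, L = 1.0, 1.0e6                         # kappa = 1e6
delta = 1.0 / np.sqrt(L / m)

def largest_rate(b, grid=np.linspace(1e-4, 2.0 - 1e-4, 20000)):
    best = None
    for r in grid:
        beta, rho2 = 1 - b * delta, 1 - r * delta
        p22 = m * r * (b**2 * delta**3 - b**2 * delta - 2 * r * b * delta**3
                       + 2 * r * b * delta + 3 * r * delta**2 - 2 * delta - r) / (4 * delta * r - 4)
        p12 = m * r / 2 - delta * p22
        p11 = p22 * delta**2 - m * r * delta + m / 2
        t11 = beta**2 * (p11 + 2 * delta * p12 + delta**2 * p22 - delta**2 * m / 2) - rho2 * p11
        t12 = beta * (p12 + delta * p22) - rho2 * p12 - (1 - rho2) * delta * beta * m / 2
        t22 = (1 - rho2) * (p22 - m / 2)
        if t11 <= 0 and t22 <= 0 and t11 * t22 - t12**2 >= 0:
            best = r
    return best

for b in (1.5, 2.0, 2.12, 2.5):
    print(b, largest_rate(b))   # the largest r should approach sqrt(2) near b = 3*sqrt(2)/2
```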
Connections with the differential equation
It is well known (see e.g. Su et al. (2016)) that, if we set h = √α (h has the dimensions of t) and assume that in (10) the parameter β = β_h changes smoothly with h in such a way that, for some constant b̄ ∈ R, β_h = 1 − b̄√m h + o(h), we obtain (21) in the limit h → 0. Note that the friction coefficient b̄ is nondimensional.
Lyapunov function
We will now proceed in a similar way as in the discrete setting to obtain a Lyapunov function for (21). Our first step is to express (21) in the control theoretic framework (20). In particular, we define v = (1/√m) ẋ and rewrite (21) as the first-order system
$$\dot v = -\bar b\sqrt{m}\,v - \frac{1}{\sqrt{m}}\,\nabla f(x), \qquad (15a)$$
$$\dot x = \sqrt{m}\,v. \qquad (15b)$$
The scaling factor √m has been introduced to ensure that the variables x and v have the same dimensions. If we set ξ = [v^T, x^T]^T, then (15) is of the form (20) with
$$\bar A = \begin{bmatrix} -\bar b\sqrt{m}\,I_d & 0_d \\ \sqrt{m}\,I_d & 0_d \end{bmatrix}, \qquad \bar B = \begin{bmatrix} -(1/\sqrt{m})\,I_d \\ 0_d \end{bmatrix}, \qquad \bar C = [0_d \;\; I_d].$$
Similarly to the discrete case, the structure of the matrices Ā, B̄, and C̄ implies that we can look for a 2 × 2 matrix 𝒫̄ and a 3 × 3 matrix 𝒯̄ analogous to those in equations (13) and (14), rather than for P̄ and T̄ directly.
We use Theorem 2 to find a Lyapunov function. Similarly to the discrete case, we will simplify the subsequent analysis by considering the case σ = 0. We find that the elements of 𝒯̄ are given by
$$\begin{aligned}
\bar t_{11} &= -2\bar b\sqrt{m}\,\bar p_{11} + 2\sqrt{m}\,\bar p_{12} + \lambda \bar p_{11}, &
\bar t_{12} &= -\bar b\sqrt{m}\,\bar p_{12} + \sqrt{m}\,\bar p_{22} + \lambda \bar p_{12}, \\
\bar t_{13} &= -(1/\sqrt{m})\,\bar p_{11} + \sqrt{m}/2, &
\bar t_{22} &= \lambda \bar p_{22} - (m/2)\lambda, \\
\bar t_{23} &= -(1/\sqrt{m})\,\bar p_{12} + \lambda/2, &
\bar t_{33} &= 0.
\end{aligned}$$
We now determine λ and P̄. The algebra is simplified if we set λ = √m r̄. Since t̄_{33} = 0, the requirement 𝒯̄ ⪯ 0 implies t̄_{13} = t̄_{23} = 0, which leads to
$$\bar p_{11} = m/2, \qquad \bar p_{12} = (m/2)\,\bar r.$$
We now only have to deal with the leading 2 × 2 submatrix of 𝒯̄, and we need t̄_{11} < 0, t̄_{22} < 0 and ∆ := t̄_{11}t̄_{22} − t̄_{12}² ≥ 0. These conditions lead to r̄ ≤ 2b̄/3, p̄_{22} ≤ m/2, and
$$\Delta := -\sqrt{m}\,\bar r\left(\frac{3 m^{3/2}\bar r}{2} - \bar b\,m^{3/2}\right)\left(\frac{m}{2} - \bar p_{22}\right) - m\left(\bar p_{22} + \frac{\bar r^2 m}{2} - \frac{\bar b\,\bar r\,m}{2}\right)^2 \ge 0.$$
We seek points in the (p̄_{22}, r̄) plane with ∆ = 0 and (∂/∂p̄_{22})∆ = 0. The second of these relations yields p̄_{22} = m r̄²/4. Then an analysis of the first relation shows that it is possible to get all rates r̄ in the interval (0, √2) (r̄ = √2 has to be excluded because for this value P̄ does not satisfy the requirement on P̃ in Theorem 2). Each value of r̄ ∈ (0, √2) may be achieved in two ways: the first by choosing b̄ = 3r̄/2 ∈ (0, 3√2/2), and the second by choosing b̄ > 3√2/2 such that r̄ = b̄ − √(b̄² − 4); see Figure 1b. As in the discrete case, the modification of the hypothesis on P̄ allows us to prove a significantly better convergence rate. Also note that the curve that relates r̄ and b̄ for the Polyak ODE is indistinguishable from the solid red curve in Figure 1a that relates b and r in the discrete case, a coincidence that will be explained in the next subsection.
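The algebra of this paragraph can be spot-checked symbolically; in the sketch below (ours), Delta is the expression displayed above, and the factorization comment records what we expect rather than a claim from the paper.

```python
import sympy as sp

# SymPy spot-check (ours): the maximizing p22 is m*r^2/4 and, on that choice, Delta
# factors so that Delta = 0 forces b = 3*r/2 or r^2 - 2*b*r + 4 = 0.
m, r, b, p22 = sp.symbols('m r b p22', positive=True)
Delta = (-sp.sqrt(m) * r * (3 * m ** sp.Rational(3, 2) * r / 2 - b * m ** sp.Rational(3, 2))
         * (m / 2 - p22) - m * (p22 + r**2 * m / 2 - b * r * m / 2) ** 2)
p22_star = sp.solve(sp.diff(Delta, p22), p22)[0]
print(sp.simplify(p22_star - m * r**2 / 4))     # -> 0
print(sp.factor(Delta.subs(p22, p22_star)))     # ~ m^3*r*(3*r - 2*b)*(2*b*r - r^2 - 4)/16
```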
If the objective function f is quadratic, it is of course possible to obtain a sharp bound for the convergence rate λ = √m r̄ by solving (21) in terms of eigenvalues and eigenvectors. (See (Lessard et al., 2016, Section 2.2) for the analysis in the discrete scenario.) For comparison, we have included in Figure 1b the rate for quadratic problems, which is maximized for b̄ = 2, where λ = 2√m. For non-quadratic targets, the rate that may be proved under the Fazlyab et al. (2018) hypothesis P̄ ⪰ 0 is also maximized when b̄ = 2, where λ = √m. The present analysis establishes, for non-quadratic targets, rates arbitrarily close to λ = √2 √m by choosing b̄ close to 3√2/2. Also note that Figure 1b shows that the rate provided by our analysis cannot be improved when b̄ ≥ 3√2/2.
Optimization algorithms as integrators
Systems of differential equations (d/dt)z = g(z), in cases where it makes sense to decompose g(z) as a sum g(z) = ∑_{ν=1}^N g^{[ν]}(z), may be integrated by using Additive Runge-Kutta (ARK) algorithms, a generalization of the well-known Runge-Kutta (RK) integrators. In the RK case, the numerical solution is advanced over a time step z_k → z_{k+1} by evaluating g(z) at a sequence of so-called stage vectors Z_{k,1}, . . . , Z_{k,s} and then setting z_{k+1} = z_k + h∑_{i=1}^s b_i g(Z_{k,i}), where the b_i are suitable weights. In turn, for the explicit algorithms we are interested in, the stages are computed successively, i = 1, . . . , s, as Z_{k,i} = z_k + h∑_{j=1}^{i−1} a_{i,j} g(Z_{k,j}), with suitable coefficients a_{i,j}. ARK algorithms are entirely similar, but evaluate the individual pieces g^{[ν]}(z) rather than g(z).
With z = [v^T, x^T]^T ∈ R^{2d}, the system (15) may be rewritten as
$$\frac{d}{dt}z = g^{[1]}(z) + g^{[2]}(z) + g^{[3]}(z) := \begin{bmatrix} -\bar b\sqrt{m}\,v \\ 0 \end{bmatrix} + \begin{bmatrix} -\frac{1}{\sqrt{m}}\,\nabla f(x) \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ \sqrt{m}\,v \end{bmatrix};$$
the three parts of g(z) respectively represent the friction force, the potential force, and the inertia in the oscillator. It is easily checked that, if we choose a steplength h > 0 and see d_k and x_k as approximations to v(kh) and x(kh) respectively, then a step (d_k, x_k) → (d_{k+1}, x_{k+1}) of the optimization algorithm (19) with parameters α = h², β = 1 − h b̄√m, δ = √m h is just one step z_k → z_{k+1} of the ARK integrator for (15) given by:
$$\begin{aligned}
Z_{k,1} &= z_k, \\
Z_{k,2} &= z_k + h\,g^{[1]}(Z_{k,1}), \\
Z_{k,3} &= z_k + h\,g^{[1]}(Z_{k,1}) + h\,g^{[3]}(Z_{k,2}), \\
Z_{k,4} &= z_k + h\,g^{[1]}(Z_{k,1}) + h\,g^{[3]}(Z_{k,2}) + h\,g^{[2]}(Z_{k,3}), \\
z_{k+1} &= z_k + h\,g^{[1]}(Z_{k,1}) + h\,g^{[2]}(Z_{k,3}) + h\,g^{[3]}(Z_{k,4}).
\end{aligned}$$
The stage vectors satisfy
$$Z_{k,1} = [d_k^T, x_k^T]^T, \qquad Z_{k,2} = [\beta d_k^T, x_k^T]^T, \qquad Z_{k,3} = [\beta d_k^T, y_k^T]^T, \qquad Z_{k,4} = [d_{k+1}^T, y_k^T]^T,$$
and therefore the computations of the second, third and fourth stages incorporate successively the contributions of friction, inertia and potential force.
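The claimed equivalence is easy to confirm numerically. The sketch below (ours; the test quadratic and parameter values are arbitrary) performs one ARK step as above and compares it with one step of (19).

```python
import numpy as np

# Numerical check (ours) that one ARK step reproduces one step of (19) when
# alpha = h^2, beta = 1 - h*b*sqrt(m), delta = sqrt(m)*h; here b plays the role of b-bar.
rng = np.random.default_rng(1)
d, m, L, b, h = 4, 0.1, 1.0, 2.0, 0.05
Q = np.diag(np.linspace(m, L, d))
grad = lambda x: Q @ x
alpha, beta, delta = h**2, 1 - h * b * np.sqrt(m), np.sqrt(m) * h

g1 = lambda v, x: (-b * np.sqrt(m) * v, np.zeros(d))    # friction
g2 = lambda v, x: (-grad(x) / np.sqrt(m), np.zeros(d))  # potential force
g3 = lambda v, x: (np.zeros(d), np.sqrt(m) * v)         # inertia

dk, xk = rng.standard_normal(d), rng.standard_normal(d)
Z1 = (dk, xk)
Z2 = tuple(z + h * dz for z, dz in zip(Z1, g1(*Z1)))
Z3 = tuple(z + h * dz for z, dz in zip(Z2, g3(*Z2)))
Z4 = tuple(z + h * dz for z, dz in zip(Z3, g2(*Z3)))
d_new = dk + h * (g1(*Z1)[0] + g2(*Z3)[0] + g3(*Z4)[0])
x_new = xk + h * (g1(*Z1)[1] + g2(*Z3)[1] + g3(*Z4)[1])

yk = xk + delta * beta * dk                              # (11c)
print(np.allclose(d_new, beta * dk - (alpha / delta) * grad(yk)),       # (11a)
      np.allclose(x_new, xk + delta * beta * dk - alpha * grad(yk)))    # (11b)
```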
If we now think of the value of h > 0 as varying and consider the optimization algorithm (10) (or (19)) with α = h², β = 1 − h b̄√m, the standard theory of numerical integration of ODEs shows that, if the initial points x_{−1} and x_0 are chosen in such a way that, as h → 0, x_0 and (1/h)(x_0 − x_{−1}) converge to limits A and B, then, in the limit kh → t, x_k and (1/h)(x_{k+1} − x_k) converge to x(t) and ẋ(t) respectively, where x(t) is the solution of (21) with initial conditions x(0) = A and ẋ(0) = B. In addition, the discrete Lyapunov function of the optimization algorithm in Section 3 may be shown to converge to the Lyapunov function of the ODE found in this section. Finally, the discrete decay factor over k steps, (1 − √m r h)^k, converges to the continuous decay factor exp(−λt). These facts in particular imply that, in Figure 1, the graph of the relation between b̄ and r̄ that holds for the ODE is indistinguishable from the corresponding graph for the optimization algorithm when κ is large (κ being large corresponds to h being small).
Discretizations that do not succeed in getting acceleration
Many recent contributions have derived optimization algorithms by discretizing suitable chosen dissipative ODEs. It is well known that, unfortunately, many properties of ODEs are likely to be lost in the discretization process, even if high-order, sophisticated integrators are used. The archetypical example is provided by the discretization of the standard harmonic oscillator: most numerical methods, regardless of their accuracy, provide solutions that either decay to the origin or spiral out to infinity as the number of computed points grows unboundedly. Similarly, discretizations of (21) are likely not to share the favourable decay properties in Section 4.1.
Let us consider the following extension of the optimization algorithm (2):
$$x_{k+1} = x_k + \beta(x_k - x_{k-1}) - \alpha\nabla f(y_k), \qquad (16a)$$
$$y_k = x_k + \gamma(x_k - x_{k-1}), \qquad (16b)$$
with the additional parameter γ. The choice γ = 0 yields the heavy ball algorithm, which (see Sanz-Serna and Zygalakis (2021)) corresponds to a "natural" standard linear multistep discretization of the Polyak equation (21), but which does not provide acceleration. Sanz-Serna and Zygalakis (2021) shows that for γ = 0 (or, more generally, for γ ≠ β), the optimization algorithm (16) does not inherit the Lyapunov functions of the Polyak ODE. The analysis in that paper hinges on a study of the nondimensional quantity c := t_{11}/(mδ), which for 𝒯 ⪯ 0 has to be ≤ 0 and which, for a discretization of an ODE, has a finite limit as δ → 0. When γ = 0, the expression for the quantity c includes a positive contribution δ(κ − 1)β²/2; for acceleration, δ has to be O(1/√κ), which makes it impossible for c to be ≤ 0.

The unwelcome presence of κ in t_{11} may be traced back to the appearance of L in the matrix N^{(1)} in Theorem 1. Nesterov's algorithms of the family (10) do not suffer from that appearance because for them the matrix EA − C that multiplies (L/2)I_d in the recipe for N^{(1)} vanishes. The condition EA − C = 0 appears then to be of key importance in the success of Nesterov algorithms; we put it into words by saying that one has to impose that the point y_k = Cξ_k where the gradient is evaluated has to coincide with the point x_{k+1} = EAξ_k that the algorithm would yield if u_k = ∇f(y_k) happened to vanish (see (18)). This suggests that the integrator has to treat the potential force and the friction force in the oscillator separately, something that may be achieved by ARK algorithms but not by more conventional linear multistep or RK methods, which do not avail themselves of the separate pieces g^{[1]}(z), g^{[2]}(z), g^{[3]}(z) but are rather formulated in terms of g(z).
Numerical illustration
We illustrate Theorem 2 for the simple one-dimensional function in F_{m,L} given by
$$f(x) = \frac{m}{2}\,x^2 + 4(L - m)\,\log(1 + e^{-x}). \qquad (17)$$
We solve (15) using a very accurate RK algorithm and plot the results in Figure 2. In the left panel, where b̄ = 3√2/2, we plot, as a function of time t, the Lyapunov function V(ξ(t), t) in (30) with the matrix P̄ found here and different values of λ = r̄√m. In agreement with Section 4.1, V decays for r̄ ≤ √2. In the central panel, we present the evolution of the function V when the matrix P̄ is chosen as in Sanz-Serna and Zygalakis (2021) to meet the more demanding requirement in Fazlyab et al. (2018); now V does not decay for r̄ > 1. Finally, the right panel corresponds to b̄ = 3r̄/2, r̄ = 1.4 < √2 (see Figure 1b), and plots ‖y(t) − y⋆‖² as a function of time, as well as the bound provided by our analysis. The distance ‖y(t) − y⋆‖ decays non-monotonically, as expected in damped oscillators (and in Nesterov's algorithms); however, it is apparent that the decay rate λ provided by our analysis is fairly sharp. For this value of b̄, the rate guaranteed by the analysis in Sanz-Serna and Zygalakis (2021) is significantly lower (Figure 1b).
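A reimplementation sketch of this experiment (ours; the solver tolerances, initial condition, and time horizon are our own choices) is given below; it integrates (15) with SciPy's RK-based solve_ivp and monitors V along the solution.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch (ours): integrate (15) for f in (17) with m = 1e-3, L = 1, b = 3*r/2 and
# r = 1.4 < sqrt(2), and monitor V in (30) with the P found in Section 4.1.
m, L = 1e-3, 1.0
f = lambda x: 0.5 * m * x**2 + 4 * (L - m) * np.log1p(np.exp(-x))
df = lambda x: m * x - 4 * (L - m) / (1.0 + np.exp(x))
xstar = brentq(df, 0.0, 100.0)                  # minimizer of (17)

r = 1.4
b = 1.5 * r
lam = np.sqrt(m) * r
p11, p12, p22 = m / 2, m * r / 2, m * r**2 / 4

rhs = lambda t, z: [-b * np.sqrt(m) * z[0] - df(z[1]) / np.sqrt(m),  # (15a)
                    np.sqrt(m) * z[0]]                               # (15b)
sol = solve_ivp(rhs, (0.0, 150.0), [0.0, xstar + 30.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 150.0, 1501)
v, x = sol.sol(t)
V = np.exp(lam * t) * (f(x) - f(xstar)
                       + p11 * v**2 + 2 * p12 * v * (x - xstar) + p22 * (x - xstar)**2)
print(np.all(np.diff(V) <= 1e-6 * V.max()))     # V non-increasing, up to solver error
```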
Discussion
We have established improved non-asymptotic convergence guarantees for Nesterov optimization methods and the related Polyak ODE. For a range of values of the parameter b in the discrete case and b̄ in the continuous case, we proved faster convergence rates than those established in previous studies. Our analysis indicates that the optimal choice of b̄ in the continuous setting is (close to) 3√2/2, and this gives a convergence rate arbitrarily close to √2 √m. In the discrete setting, for a suitable choice of the algorithmic parameters (slightly different from the standard choices for Nesterov's method), one may prove rates arbitrarily close to 1 − √2/√κ when κ ≫ 1. The improved convergence rates obtained in this paper, both in the continuous and discrete settings, are a direct result of modifying the positive semidefiniteness condition for the matrices P̄ and P, respectively, when searching for appropriate Lyapunov functions.

The analysis in the discrete setting highlights an important point, namely that not all numerical discretizations of the Polyak ODE (15) lead to accelerated optimization algorithms. We have derived structural conditions that numerical integrators of (15) should satisfy in order to correspond to accelerated optimization schemes. Standard numerical methods for ODEs, such as conventional linear multistep or RK methods, cannot satisfy those structural conditions. The success of Nesterov's method may be easily understood when interpreting it as an Additive RK method.
Acknowledgments and Disclosure of Funding

A Control framework and linear matrix inequalities

Consider objective functions f ∈ F_{m,L}, that is, f is continuously differentiable, m-strongly convex, and ∇f is L-Lipschitz. The situation of interest is when the condition number κ = L/m is large. A broad class of optimization algorithms which minimise f can be expressed as a linear dynamical system
$$\xi_{k+1} = A\xi_k + Bu_k, \qquad u_k = \nabla f(y_k), \qquad y_k = C\xi_k, \qquad x_k = E\xi_k. \qquad (18)$$
Here ξ_k ∈ R^n is the state, u_k ∈ R^d is the input, y_k ∈ R^d is the feedback output that is mapped to u_k by the function ∇f. The scheme is described by the matrices A ∈ R^{n×n}, B ∈ R^{n×d}, C ∈ R^{d×n}, and E ∈ R^{d×n}. All fixed points, ξ⋆, of (18) satisfy
$$\xi^\star = A\xi^\star + Bu^\star, \qquad u^\star = \nabla f(y^\star), \qquad y^\star = C\xi^\star, \qquad x^\star = E\xi^\star.$$
In order for the fixed point to correspond to a minimiser we require that x⋆ = y⋆ and u⋆ = 0.
The Gradient Descent algorithm with stepsize δ > 0,
$$x_{k+1} = x_k - \delta\,\nabla f(x_k),$$
can be expressed in the form (18) by setting n = d, A = I_d, B = −δ I_d, C = I_d, and E = I_d.
We can also express Nesterov's accelerated gradient method (NAG) in the form (18). NAG with parameters α, β, δ > 0 is given by
$$d_{k+1} = \beta d_k - \frac{\alpha}{\delta}\,\nabla f(y_k), \qquad (19a)$$
$$x_{k+1} = x_k + \delta\beta d_k - \alpha\nabla f(y_k), \qquad (19b)$$
$$y_k = x_k + \delta\beta d_k. \qquad (19c)$$
The standard choice of the parameters for strongly convex f is given by
$$\alpha = \frac{1}{L}, \qquad \beta = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}, \qquad \delta = \frac{1}{\sqrt{\kappa}}.$$
If we write ξ_k = [d_k^T, x_k^T]^T, then (19) is equivalent to (18) with
$$A = \begin{bmatrix} \beta I_d & 0_d \\ \delta\beta I_d & I_d \end{bmatrix}, \qquad B = \begin{bmatrix} -\frac{\alpha}{\delta} I_d \\ -\alpha I_d \end{bmatrix}, \qquad C = [\delta\beta I_d, \; I_d], \qquad E = [0_d, \; I_d].$$
There is an analogous framework for first-order differential equations. Let f ∈ F_{m,L} and consider the system
$$\dot\xi(t) = \bar A\xi(t) + \bar B u(t), \qquad y(t) = \bar C\xi(t), \qquad u(t) = \nabla f(y(t)), \qquad t \ge 0. \qquad (20)$$
Here ξ(t) ∈ R^n is the state, y(t) ∈ R^d the output, and u(t) ∈ R^d is the continuous feedback input. Similarly to the discrete setting, all fixed points of (20) satisfy
$$0 = \bar A\xi^\star + \bar B u^\star, \qquad y^\star = \bar C\xi^\star, \qquad u^\star = \nabla f(y^\star).$$
We are interested in settings where x⋆ = y⋆ is the minimiser of f, and hence u⋆ = 0. An important ODE for our discussion of optimization algorithms is the Polyak damped-oscillator ODE (Polyak (1964))
$$\ddot x + \bar b\sqrt{m}\,\dot x + \nabla f(x) = 0, \qquad \bar b > 0, \qquad (21)$$
where b̄ is a nondimensional damping coefficient. The algorithm (19) can be seen as a discretization of this ODE. We can write (21) in the form (20) with ξ(t) = [(1/√m) ẋ(t)^T, x(t)^T]^T and
$$\bar A = \begin{bmatrix} -\bar b\sqrt{m}\,I_d & 0_d \\ \sqrt{m}\,I_d & 0_d \end{bmatrix}, \qquad \bar B = \begin{bmatrix} -(1/\sqrt{m})\,I_d \\ 0_d \end{bmatrix}, \qquad \bar C = [0_d, \; I_d].$$
B Expressing properties of F m,L as matrix inequalities
In Megretski and Rantzer (1997) integral quadratic constraints (IQC) were proposed as a tool to describe classes of nonlinearities in control theory. This was adapted for optimization algorithms in Lessard et al. (2016).
The key idea here is to express concepts like m-strong convexity in terms of matrix inequalities. For example, a function is m-strongly convex if and only if, for all x, y ∈ R^d,
$$m\,\|x - y\|^2 \le (x - y)^T\big(\nabla f(x) - \nabla f(y)\big).$$
This is equivalent to the statement involving IQCs that f is m-strongly convex if and only if
$$\begin{bmatrix} x - y \\ \nabla f(x) - \nabla f(y) \end{bmatrix}^T \begin{bmatrix} -m I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0_d \end{bmatrix} \begin{bmatrix} x - y \\ \nabla f(x) - \nabla f(y) \end{bmatrix} \ge 0.$$
We will use two other inequalities: for x, y ∈ R^d, the first follows if ∇f is L-Lipschitz, and the second holds for all f ∈ F_{m,L}:
$$f(x) - f(y) \le \nabla f(y)^T(x - y) + \frac{L}{2}\,\|x - y\|^2,$$
$$\frac{mL}{m+L}\,\|x - y\|^2 + \frac{1}{m+L}\,\|\nabla f(x) - \nabla f(y)\|^2 \le \big(\nabla f(x) - \nabla f(y)\big)^T(x - y).$$
We can express these two inequalities as the following IQCs:
$$f(x) - f(y) \le \begin{bmatrix} x - y \\ \nabla f(y) \end{bmatrix}^T \begin{bmatrix} \frac{L}{2} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & 0 \end{bmatrix} \begin{bmatrix} x - y \\ \nabla f(y) \end{bmatrix}, \qquad \begin{bmatrix} x - y \\ \nabla f(x) - \nabla f(y) \end{bmatrix}^T \begin{bmatrix} -\frac{mL}{m+L} I_d & \frac{1}{2} I_d \\ \frac{1}{2} I_d & -\frac{1}{m+L} I_d \end{bmatrix} \begin{bmatrix} x - y \\ \nabla f(x) - \nabla f(y) \end{bmatrix} \ge 0.$$
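As a quick numerical illustration (ours; the quadratic test function is an arbitrary member of F_{m,L}), the strong-convexity IQC displayed earlier in this section can be spot-checked at random pairs of points:

```python
import numpy as np

# Random spot-check (ours) of the m-strong-convexity IQC: for f(x) = 0.5 x^T Q x with
# spec(Q) in [m, L], the quadratic form below is nonnegative at every pair (x, y).
rng = np.random.default_rng(0)
d, m, L = 5, 0.5, 4.0
Q = np.diag(rng.uniform(m, L, d))
W = np.block([[-m * np.eye(d), 0.5 * np.eye(d)],
              [0.5 * np.eye(d), np.zeros((d, d))]])
for _ in range(1000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    e = np.concatenate([x - y, Q @ x - Q @ y])   # [x - y; grad f(x) - grad f(y)]
    assert e @ W @ e >= -1e-10
```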
From the above inequalities we obtain the following lemma, established in Fazlyab et al. (2018).

Lemma 1 (Fazlyab et al., 2018, Lemma 4.1) Fix f ∈ F_{m,L} and define e_k = ((ξ_k − ξ⋆)^T, u_k^T)^T. Then the following inequalities hold for all k:
$$e_k^T M^{(1)} e_k \ge f(x_{k+1}) - f(x_k), \qquad (23)$$
$$e_k^T M^{(2)} e_k \ge f(x_{k+1}) - f(x^\star), \qquad (24)$$
$$e_k^T M^{(3)} e_k \ge 0, \qquad (25)$$
where M^{(1)} = N^{(1)} + N^{(2)}, M^{(2)} = N^{(1)} + N^{(3)}, and M^{(3)} = N^{(4)}, with the matrices N^{(1)}, . . . , N^{(4)} and
$$M^{(0)} = \begin{bmatrix} A^T P A - \rho^2 P & A^T P B \\ B^T P A & B^T P B \end{bmatrix}$$
as defined in the statement of Theorem 1.
C Proof of the Theorems

C.1 Proof of Theorem 1
Fix f ∈ F_{m,L} and define
$$V_k(\xi) = \rho^{-2k}\big[a_0\,(f(E\xi) - f(x^\star)) + (\xi - \xi^\star)^T P\,(\xi - \xi^\star)\big], \qquad \rho \in (0, 1). \qquad (26)$$
Let a_0, P, ρ, ℓ be as in the statement of Theorem 1. In order to show that V is a Lyapunov function we need to establish that V is non-increasing along trajectories of the algorithm, i.e. V_{k+1}(ξ_{k+1}) ≤ V_k(ξ_k), and that V_k(ξ_k) controls the convergence of ‖x_k − x⋆‖. Consider the second of these properties, which we establish by showing that V is a suitable upper bound for the distance between x_k and x⋆.
Since f is m-strongly convex we have that
$$f(x) - f(x^\star) \ge \frac{m}{2}\,\|x - x^\star\|^2.$$
Therefore we can bound V_k(ξ) from below by
$$V_k(\xi_k) \ge \rho^{-2k}\Big[\frac{a_0 m}{2}\,\|x_k - x^\star\|^2 + (\xi_k - \xi^\star)^T P\,(\xi_k - \xi^\star)\Big].$$
Writing P̃ = (a_0 m/2) E^T E + P and using that x_k − x⋆ = E(ξ_k − ξ⋆), we have
$$V_k(\xi_k) \ge \rho^{-2k}\,(\xi_k - \xi^\star)^T \tilde P\,(\xi_k - \xi^\star).$$
By the assumptions of Theorem 1 we have that P̃ ≻ 0, so the quadratic form (ξ − ξ⋆)^T P̃ (ξ − ξ⋆) is non-degenerate and is bounded from below by the minimum eigenvalue of P̃. Hence
$$V_k(\xi_k) \ge \min\sigma(\tilde P)\,\rho^{-2k}\,\|\xi_k - \xi^\star\|^2.$$
Using that x_k − x⋆ = E(ξ_k − ξ⋆) we obtain the following bound:
$$\|x_k - x^\star\|^2 \le \frac{\max\sigma(E^T E)}{\min\sigma(\tilde P)}\,\rho^{2k}\,V_k(\xi_k). \qquad (27)$$
It remains to establish that V_k(ξ_k) is non-increasing. Indeed, if V_{k+1}(ξ_{k+1}) ≤ V_k(ξ_k) then we have the desired bound
$$\|x_k - x^\star\|^2 \le \frac{\max\sigma(E^T E)}{\min\sigma(\tilde P)}\,\rho^{2k}\,V_0(\xi_0). \qquad (28)$$
Let M^{(0)} be as given in the statement of Theorem 1; then we have
$$e_k^T M^{(0)} e_k = (\xi_{k+1} - \xi^\star)^T P\,(\xi_{k+1} - \xi^\star) - \rho^2\,(\xi_k - \xi^\star)^T P\,(\xi_k - \xi^\star), \qquad (29)$$
where e_k = ((ξ_k − ξ⋆)^T, u_k^T)^T. Indeed, by using (18) we have
$$\begin{aligned}
(\xi_{k+1} - \xi^\star)^T P\,(\xi_{k+1} - \xi^\star) &= (\xi_k - \xi^\star)^T A^T P A\,(\xi_k - \xi^\star) + u_k^T B^T P A\,(\xi_k - \xi^\star) \\
&\quad + (\xi_k - \xi^\star)^T A^T P B\,u_k + u_k^T B^T P B\,u_k \\
&= e_k^T M^{(0)} e_k + \rho^2\,(\xi_k - \xi^\star)^T P\,(\xi_k - \xi^\star).
\end{aligned}$$
Using (23), (24), and (29) we have
$$\begin{aligned}
V_{k+1}(\xi_{k+1}) - V_k(\xi_k) &= a_0\,\rho^{-2(k+1)}\big[\rho^2\big(f(E\xi_{k+1}) - f(E\xi_k)\big) + (1 - \rho^2)\big(f(E\xi_{k+1}) - f(x^\star)\big)\big] \\
&\quad + \rho^{-2(k+1)}\big[(\xi_{k+1} - \xi^\star)^T P\,(\xi_{k+1} - \xi^\star) - \rho^2\,(\xi_k - \xi^\star)^T P\,(\xi_k - \xi^\star)\big] \\
&\le \rho^{-2(k+1)}\,e_k^T\big(M^{(0)} + a_0\rho^2 M^{(1)} + a_0(1 - \rho^2) M^{(2)}\big)e_k.
\end{aligned}$$
Since e_k^T M^{(3)} e_k is nonnegative by (25), we can add ℓ e_k^T M^{(3)} e_k to obtain
$$V_{k+1}(\xi_{k+1}) - V_k(\xi_k) \le \rho^{-2(k+1)}\,e_k^T\big(M^{(0)} + a_0\rho^2 M^{(1)} + a_0(1 - \rho^2) M^{(2)} + \ell M^{(3)}\big)e_k = \rho^{-2(k+1)}\,e_k^T T e_k.$$
By the assumptions of Theorem 1 we have that T ⪯ 0, so V_{k+1}(ξ_{k+1}) ≤ V_k(ξ_k).
C.2 Proof of Theorem 2
Fix f ∈ F_{m,L} and define
$$V(\xi, t) = e^{\lambda t}\big[f(y(t)) - f(y^\star) + (\xi(t) - \xi^\star)^T \bar P\,(\xi(t) - \xi^\star)\big]. \qquad (30)$$
As in the discrete case we need to establish two properties:
1. The function V(ξ(t), t) is non-increasing in t;
2. We can bound V(ξ, t) from below with ‖y − y⋆‖², i.e.
$$\|y - y^\star\|^2 \le e^{-\lambda t}\,\frac{\max\sigma(\bar C^T \bar C)}{\min\sigma(\tilde P)}\,V(\xi, t).$$
From these two properties the desired non-asymptotic bound in Theorem 2 follows immediately; see the proof of Theorem 1 for details. The proof of the second property follows from an analogous argument to the proof of Theorem 1.

It remains to show that V(ξ(t), t) is non-increasing. Replacing ∇f(y) with u and ξ̇ with the right-hand side of (20), we obtain
$$e^{-\lambda t}\,\dot V(\xi, t) = e(t)^T\big(\bar M^{(0)} + \bar M^{(1)}\big)e(t) + \lambda\big(f(y) - f(y^\star)\big),$$
where e(t) = ((ξ(t) − ξ⋆)^T, u(t)^T)^T and M̄^{(0)}, M̄^{(1)} are the matrices defined in Theorem 2. In order to control the f terms we use the convex inequality
$$f(y_1) - f(y_2) \le \nabla f(y_1)^T(y_1 - y_2) - \frac{m}{2}\,\|y_1 - y_2\|^2. \qquad (31)$$
Note that setting y_1 = x_{k+1} and y_2 = x⋆ in (31) corresponds to (24) in the discrete setting; similarly, we can rewrite (31) with y_1 = y and y_2 = y⋆ as
$$f(y) - f(y^\star) \le u^T \bar C(\xi - \xi^\star) - \frac{m}{2}\,\|\bar C(\xi - \xi^\star)\|^2 = e(t)^T \bar M^{(2)} e(t).$$
Therefore we have e^{−λt} V̇(ξ, t) ≤ e(t)^T (M̄^{(0)} + M̄^{(1)} + λM̄^{(2)}) e(t). By (25) we have that e(t)^T M̄^{(3)} e(t) ≥ 0, hence for any σ > 0 we have
$$e^{-\lambda t}\,\dot V(\xi, t) \le e(t)^T\big(\bar M^{(0)} + \bar M^{(1)} + \lambda\bar M^{(2)} + \sigma\bar M^{(3)}\big)e(t) = e(t)^T \bar T e(t).$$
By the assumptions of Theorem 2 we have that T̄ ⪯ 0, so V̇(ξ, t) ≤ 0 and V is non-increasing.

D More details for the discrete case

We are required to find p_{11}, p_{12}, p_{22} ∈ R and ρ ∈ (0, 1) such that 𝒯 ⪯ 0, where
$$\mathcal{P} = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}, \qquad \mathcal{T} = \begin{bmatrix} t_{11} & t_{12} & t_{13} \\ t_{12} & t_{22} & t_{23} \\ t_{13} & t_{23} & t_{33} \end{bmatrix}.$$
Setting β = 1 − bδ and ρ² = 1 − rδ we find
$$\begin{aligned}
t_{11} &= (1-b\delta)^2 p_{11} + 2\delta(1-b\delta)^2 p_{12} + \delta^2(1-b\delta)^2 p_{22} - (1-r\delta)\,p_{11} - \delta^2(1-b\delta)^2 m/2, \\
t_{12} &= (1-b\delta)\,p_{12} + \delta(1-b\delta)\,p_{22} - (1-r\delta)\,p_{12} - \delta(1-b\delta)\,m/2 + (1-r\delta)\,\delta(1-b\delta)\,m/2, \\
t_{13} &= -\delta^{-1}\alpha(1-b\delta)\,p_{11} - 2\alpha(1-b\delta)\,p_{12} - \delta\alpha(1-b\delta)\,p_{22} + \delta(1-b\delta)/2, \\
t_{22} &= r\delta\,p_{22} - r\delta\,m/2, \\
t_{23} &= -\delta^{-1}\alpha\,p_{12} - \alpha\,p_{22} + r\delta/2, \\
t_{33} &= \delta^{-2}\alpha^2 p_{11} + 2\delta^{-1}\alpha^2 p_{12} + \alpha^2 p_{22} + \alpha^2 L/2 - \alpha.
\end{aligned}$$
Observe that, in the limit as α → 0, 𝒯 converges pointwise to 𝒯̄, the matrix arising in the continuous Lyapunov analysis for the Polyak ODE. In the limit as α → 0 we have t_{33} = 0 and, moreover, t_{33} = O(α) for α small. For this reason, in order to have 𝒯 ⪯ 0 it is necessary that t_{13} and t_{23} vanish in the limit as α → 0. We ensure this holds by setting t_{13} = t_{23} = 0, which leads to the following conditions:
$$p_{11} = p_{22}\delta^2 - mr\delta + \frac{m}{2}, \qquad p_{12} = \frac{mr}{2} - \delta p_{22}.$$
Next we need to ensure that t_{33} ≤ 0; with this choice of p_{11} and p_{12} we have t_{33} = (1/2)α(Lα − 1), so we require α ≤ 1/L. Note this justifies the earlier claim that we are interested in α small.
With these choices the matrix 𝒯 has the form
$$\mathcal{T} = \begin{bmatrix} \mathcal{T}_{1:2} & 0 \\ 0 & t_{33} \end{bmatrix}$$
with t_{33} ≤ 0, so it remains to show that the submatrix 𝒯_{1:2} is negative semi-definite. In order to have 𝒯 ⪯ 0, it is necessary and sufficient to find p_{22} ∈ R and r > 0 such that t_{22} ≤ 0 and ∆ = det(𝒯_{1:2}) ≥ 0.
We choose the p_{22} which maximises ∆ by solving ∂_{p_{22}}∆ = 0 for p_{22}. Indeed, ∆ is a quadratic in p_{22} which tends to −∞ for p_{22} large, so there is a unique extremum, which is a maximum of ∆. This gives the following expression for p_{22}:
$$p_{22} = \frac{mr\,(b^2\delta^3 - b^2\delta - 2rb\delta^3 + 2rb\delta + 3r\delta^2 - 2\delta - r)}{4\delta r - 4}.$$
Note that t_{22} is non-positive provided p_{22} ≤ m/2. It remains to find the solutions of ∆ = 0 and verify that P̃ ≻ 0 and p_{22} ≤ m/2. This can be done with a numerical solver; the results are given in Figure 1 of the paper.
Figure 1: The left panel shows the relationship between the rate r and the method parameter b in the discrete case when δ = δ_max = 1/√κ. The red curves correspond to the present analysis and the blue curves correspond to the hypothesis P ⪰ 0. The solid curves are for κ = 10⁶; the dashed curves are for κ = 10². The right panel shows the relationship between the rate r̄ and the parameter b̄ in the time-continuous case. The red and blue solid lines on the left are indistinguishable from the red and blue lines on the right. The green star denotes the choice of parameters used in Section 5.
Figure 2: Convergence of the Polyak ODE for f given by (17) when m = 10⁻³, L = 1. The first two panels display the evolution of the function V along trajectories.
If d = n and Ā = 0_d, B̄ = −I_d, C̄ = I_d, then (20) is the gradient flow of f.
M. Betancourt, M. I. Jordan, and A. C. Wilson. On symplectic optimization. arXiv preprint, 2018.

A. Bravetti, M. L. Daza-Torres, H. Flores-Arguedas, and M. Betancourt. Optimization algorithms inspired by the geometry of dissipative systems. arXiv preprint, 2019.

G. J. Cooper and A. Sayfy. Additive methods for the numerical solution of ordinary differential equations. Mathematics of Computation, 35(152):1159-1172, 1980.

M. J. Ehrhardt, E. S. Riis, T. Ringholm, and C.-B. Schönlieb. A geometric integration approach to smooth optimisation: Foundations of the discrete gradient method. arXiv preprint, 2018.

M. Fazlyab, A. Ribeiro, M. Morari, and V. M. Preciado. Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems. SIAM J. Optim., 28(3):2654-2689, 2018.

G. Franca, M. I. Jordan, and R. Vidal. On dissipative symplectic integration with applications to gradient-based optimization. Journal of Statistical Mechanics: Theory and Experiment, 2021(4):043402, 2021.

W. Krichene, A. Bayen, and P. L. Bartlett. Accelerated mirror descent in continuous and discrete time. In Advances in Neural Information Processing Systems 28, pages 2845-2853, 2015.

L. Lessard, B. Recht, and A. Packard. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM J. Optim., 26(1):57-95, 2016.

A. Megretski and A. Rantzer. System analysis via integral quadratic constraints. IEEE Transactions on Automatic Control, 42(6):819-830, 1997.

M. Muehlebach and M. I. Jordan. A dynamical systems perspective on Nesterov acceleration. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4656-4662. PMLR, 2019.

M. Muehlebach and M. I. Jordan. Optimization with momentum: Dynamical, control-theoretic, and symplectic perspectives. J. Mach. Learn. Res., 22(1):1-50, 2021.

Y. Nesterov. A method for solving the convex programming problem with convergence rate O(k⁻²). Dokl. Akad. Nauk SSSR, 269:543-547, 1983.

Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer Publishing Company, Incorporated, 2014.

A. Orvieto and A. Lucchi. Shadowing properties of optimization algorithms. In Advances in Neural Information Processing Systems 32, pages 12692-12703, 2019.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, pages 1-17, 1964.

J. M. Sanz-Serna and K. C. Zygalakis. The connections between Lyapunov functions for some optimization algorithms and differential equations. SIAM J. Numer. Anal., 59(3):1542-1565, 2021.

D. Scieur, V. Roulet, F. R. Bach, and A. d'Aspremont. Integration methods and optimization algorithms. In Advances in Neural Information Processing Systems, volume 30, pages 1109-1118, 2017.

B. Shi, S. S. Du, W. Su, and M. I. Jordan. Acceleration via symplectic discretization of high-resolution differential equations. In Advances in Neural Information Processing Systems, volume 32, pages 5744-5752, 2019.

W. Su, S. Boyd, and E. J. Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. J. Mach. Learn. Res., 17(153):1-43, 2016.

A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. U.S.A., 113(47):E7351-E7358, 2016.

A. C. Wilson, B. Recht, and M. I. Jordan. A Lyapunov analysis of accelerated methods in optimization. J. Mach. Learn. Res., 22(1):1-34, 2021.
| [] |
[
"Indexing and Partitioning the Spatial Linear Model for Large Data Sets",
"Indexing and Partitioning the Spatial Linear Model for Large Data Sets"
] | [
"Jay M Ver Hoef \nMarine Mammal Laboratory\nNOAA-NMFS Alaska Fisheries Science Center\n7600 Sand Point Way NE98115SeattleWAUSA\n",
"Michael Dumelle \nUnited States Environmental Protection Agency\nCorvallisOregonUSA\n",
"‡ ",
"Matt Higham \nDepartment of Mathematics\nComputer Science, and Statistics\nSt. Lawrence University\nCantonNew YorkUSA\n",
"Erin E Peterson \nAustralian Research Council Centre of Excellence in Mathematical and Statistical Frontiers (ACEMS)\nQueensland University of Technology\nBrisbaneQueenslandAustralia\n",
"Daniel J Isaak \nRocky Mountain Research Station\nU.S. Forest Service\nBoiseIDUSA\n"
] | [
"Marine Mammal Laboratory\nNOAA-NMFS Alaska Fisheries Science Center\n7600 Sand Point Way NE98115SeattleWAUSA",
"United States Environmental Protection Agency\nCorvallisOregonUSA",
"Department of Mathematics\nComputer Science, and Statistics\nSt. Lawrence University\nCantonNew YorkUSA",
"Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers (ACEMS)\nQueensland University of Technology\nBrisbaneQueenslandAustralia",
"Rocky Mountain Research Station\nU.S. Forest Service\nBoiseIDUSA"
] | [] | We consider four main goals when fitting spatial linear models: 1) estimating covariance parameters, 2) estimating fixed effects, 3) kriging (making point predictions), and 4) block-kriging (predicting the average value over a region). Each of these goals can present different challenges when analyzing large spatial data sets. Current research uses a variety of methods, including spatial basis functions (reduced rank), covariance tapering, etc, to achieve these goals. However, spatial indexing, which is very similar to composite likelihood, offers some advantages. We develop a simple framework for all four goals listed above by using indexing to create a block covariance structure and nearest-neighbor predictions while maintaining a coherent linear model. We show exact inference for fixed effects under this block covariance construction. Spatial indexing is very fast, and simulations are used to validate methods and compare to another popular method. We study various sample designs for indexing and our simulations showed that indexing leading to spatially compact partitions are best over a range of sample sizes, autocorrelation values, and generating processes. Partitions can be kept small, on the order of 50 samples per partition. We use nearest-neighbors for kriging and block kriging, finding that 50 nearest-neighbors is sufficient. In all cases, confidence intervals for fixed effects, and prediction intervals for (block) kriging, have appropriate coverage. Some advantages of spatial indexing are that it is available for any valid covariance matrix, can take advantage of parallel computing, and easily extends to non-Euclidean topologies, such as stream networks. We use stream networks to show how spatial indexing can achieve all four goals, listed above, for very large data sets, in a matter of minutes, rather than days, for an example data set. | null | [
"https://export.arxiv.org/pdf/2305.07811v1.pdf"
] | 258,685,455 | 2305.07811 | dbfb247f3ebd6c543705cf39eddb4e20632adc5b |
Indexing and Partitioning the Spatial Linear Model for Large Data Sets
May 16, 2023 13 May 2023
Jay M Ver Hoef
Marine Mammal Laboratory
NOAA-NMFS Alaska Fisheries Science Center
7600 Sand Point Way NE98115SeattleWAUSA
Michael Dumelle
United States Environmental Protection Agency
CorvallisOregonUSA
‡
Matt Higham
Department of Mathematics
Computer Science, and Statistics
St. Lawrence University
CantonNew YorkUSA
Erin E Peterson
Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers (ACEMS)
Queensland University of Technology
BrisbaneQueenslandAustralia
Daniel J Isaak
Rocky Mountain Research Station
U.S. Forest Service
BoiseIDUSA
Indexing and Partitioning the Spatial Linear Model for Large Data Sets
May 16, 2023 13 May 2023
†These authors contributed equally to this work. ‡These authors also contributed equally to this work. *[email protected]
We consider four main goals when fitting spatial linear models: 1) estimating covariance parameters, 2) estimating fixed effects, 3) kriging (making point predictions), and 4) block-kriging (predicting the average value over a region). Each of these goals can present different challenges when analyzing large spatial data sets. Current research uses a variety of methods, including spatial basis functions (reduced rank), covariance tapering, etc, to achieve these goals. However, spatial indexing, which is very similar to composite likelihood, offers some advantages. We develop a simple framework for all four goals listed above by using indexing to create a block covariance structure and nearest-neighbor predictions while maintaining a coherent linear model. We show exact inference for fixed effects under this block covariance construction. Spatial indexing is very fast, and simulations are used to validate methods and compare to another popular method. We study various sample designs for indexing and our simulations showed that indexing leading to spatially compact partitions are best over a range of sample sizes, autocorrelation values, and generating processes. Partitions can be kept small, on the order of 50 samples per partition. We use nearest-neighbors for kriging and block kriging, finding that 50 nearest-neighbors is sufficient. In all cases, confidence intervals for fixed effects, and prediction intervals for (block) kriging, have appropriate coverage. Some advantages of spatial indexing are that it is available for any valid covariance matrix, can take advantage of parallel computing, and easily extends to non-Euclidean topologies, such as stream networks. We use stream networks to show how spatial indexing can achieve all four goals, listed above, for very large data sets, in a matter of minutes, rather than days, for an example data set.
Introduction
The general linear model, including regression and analysis of variance (ANOVA), is still a mainstay in statistics,
$$Y = X\beta + \varepsilon, \qquad (1)$$
where Y is an n × 1 vector of response random variables, X is the design matrix with covariates (fixed explanatory variables, containing any combination of continuous, binary, or categorical variables), β is a vector of parameters, and ε is a vector of zero-mean random variables, which are classically assumed to be uncorrelated, var(ε) = σ²I. The spatial linear model is a version of Eq (1) where var(ε) = Σ, and Σ is a patterned covariance matrix that is modeled using spatial relationships. Generally, spatial relationships are of two types: spatially-continuous point-referenced data, often called geostatistics, and finite sets of neighbor-based data, often called lattice or areal data [1]. For geostatistical data, we associate the random variables in Eq (1) with their spatial locations by denoting the random variables as Y(s_i); i = 1, . . . , n, and ε(s_i); i = 1, . . . , n, where s_i is a vector of spatial coordinates for the ith point, and the i,jth element of Σ is cov(ε(s_i), ε(s_j)). The main goals of a geostatistical linear model are to 1) estimate Σ, 2) estimate β, 3) make predictions at unsampled Y(s_j), where j = n + 1, . . . , N indexes a set of spatial locations without observations, and 4) for some region B, make a prediction of the average value Y(B) = ∫_B Y(s)ds/|B|, where |B| is the area of B. Estimation and prediction both require O(n²) storage for Σ and O(n³) operations for Σ⁻¹ [2], which, for massive data sets, is computationally expensive and may be prohibitive. Our overall objective is to use spatial indexing ideas to make all four goals attainable for very large spatial data sets. We maintain the moment-based approach of classical geostatistics, which is distribution free, and we work to maintain a coherent model of stationarity and a single set of parameter estimates.
Quick Review of the Spatial Linear Model
When the outcome of the random variable Y(s_i) is observed, we denote it y(s_i); the observed values are contained in the vector y. These observed data are used first to estimate the autocorrelation parameters in Σ, which we denote θ. In general, Σ can have n(n + 1)/2 parameters, but use of distance to describe spatial relationships typically reduces this to just 3 or 4 parameters. An example of how Σ depends on θ is given by the exponential autocorrelation model, where the i,jth element of Σ is
$$\mathrm{cov}[\varepsilon(s_i), \varepsilon(s_j)] = \tau^2 \exp(-d_{i,j}/\rho) + \eta^2\, I(d_{i,j} = 0), \qquad (2)$$
where θ = (τ², η², ρ)ᵀ, d_{i,j} is the Euclidean distance between s_i and s_j, and I(·) is an indicator function, equal to 1 if its argument is true and 0 otherwise. The parameter η² is often called the "nugget effect," τ² is called the "partial sill," and ρ is called the "range" parameter. In Eq (2), the variances are constant (stationary), which we denote σ² = τ² + η², when d_{i,j} = 0. Many other examples of autocorrelation models are given in [1] and [3]. We will use restricted maximum likelihood (REML) [4,5] to estimate the parameters of Σ; REML is less biased than full maximum likelihood [6]. REML estimates of covariance parameters are obtained by minimizing
$$L(\theta\,|\,y) = \log|\Sigma_\theta| + r_\theta^T \Sigma_\theta^{-1} r_\theta + \log|X^T \Sigma_\theta^{-1} X| + c \qquad (3)$$
for θ, where Σ_θ depends on the spatial autocorrelation parameters θ, r_θ = y − Xβ̂_θ with β̂_θ = (XᵀΣ_θ⁻¹X)⁻¹XᵀΣ_θ⁻¹y, and c is a constant that does not depend on θ. It has been shown [7,8] that Eq (3) forms unbiased estimating equations for the covariance parameters, so Gaussian data are not strictly necessary. After Eq (3) has been minimized for θ, these estimates, call them θ̂, are used in the autocorrelation model, e.g. Eq (2), for all of the covariance values to create Σ̂. This is the first use of the data y. The usual frequentist method for geostatistics, with a long tradition [9], "uses the data twice" [10]. Now Σ̂, along with a second use of the data, is used to estimate regression coefficients or make predictions at unsampled locations. By plugging Σ̂ into the well-known best linear unbiased estimate (BLUE) of β for Eq (1), we obtain the empirical best linear unbiased estimate (EBLUE), e.g. [11],
$$\hat\beta = (X^T \hat\Sigma^{-1} X)^{-1} X^T \hat\Sigma^{-1} y. \qquad (4)$$
The estimated variance of Eq (4) is
$$\widehat{\mathrm{var}}(\hat\beta) = (X^T \hat\Sigma^{-1} X)^{-1}. \qquad (5)$$
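To fix ideas, here is a minimal NumPy sketch (ours) of the REML criterion in Eq (3) under the exponential model of Eq (2); the function and argument names are our own, and the constant c is omitted because it does not affect the minimizer. At the fitted θ̂, the same Cholesky solves deliver the EBLUE of Eq (4) and its variance, Eq (5).

```python
import numpy as np

# Minimal sketch (ours) of the REML criterion (3) under the exponential model (2).
# D is the n x n matrix of pairwise Euclidean distances; theta = (tau2, eta2, rho).
def reml_neg2loglik(theta, y, X, D):
    tau2, eta2, rho = theta
    Sigma = tau2 * np.exp(-D / rho) + eta2 * np.eye(len(y))
    Lc = np.linalg.cholesky(Sigma)                        # Sigma = Lc Lc^T
    solve = lambda M: np.linalg.solve(Lc.T, np.linalg.solve(Lc, M))
    Si_X, Si_y = solve(X), solve(y)                       # Sigma^{-1} X, Sigma^{-1} y
    XtSiX = X.T @ Si_X
    beta = np.linalg.solve(XtSiX, X.T @ Si_y)             # beta_theta, i.e. Eq (4) at theta
    r = y - X @ beta                                      # r_theta
    return (2.0 * np.sum(np.log(np.diag(Lc)))             # log|Sigma_theta|
            + r @ solve(r)                                #  + r' Sigma^{-1} r
            + np.linalg.slogdet(XtSiX)[1])                #  + log|X' Sigma^{-1} X|
```

In practice one would minimize this over log-transformed parameters with a generic optimizer such as scipy.optimize.minimize, to keep τ², η², ρ positive.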
Let a single unobserved location be denoted s_0, with covariate vector x_0 (containing the same covariates, and of the same length, as a row of X). Then the EBLUP [12] at an unobserved location is
$$\hat Y(s_0) = x_0^T \hat\beta + \hat c_0^T \hat\Sigma^{-1}(y - X\hat\beta), \qquad (6)$$
where ĉ_0 ≡ côv(ε, ε(s_0)), using the same autocorrelation model, e.g. Eq (2), and estimated parameters θ̂ that were used to develop Σ̂. Note that if we condition on Σ̂ as fixed, then Eq (6) is a linear combination of y, and can also be written as η_0ᵀy when Eq (4) is substituted for β̂. The prediction Eq (6) can be seen as the conditional expectation of Y(s_0) | y with plug-in values for β, Σ, and c. The estimated variance of the EBLUP is [12]
$$\widehat{\mathrm{var}}(\hat Y(s_0)) = \hat\sigma_0^2 - \hat c_0^T \hat\Sigma^{-1}\hat c_0 + (x_0 - X^T\hat\Sigma^{-1}\hat c_0)^T (X^T\hat\Sigma^{-1}X)^{-1}(x_0 - X^T\hat\Sigma^{-1}\hat c_0), \qquad (7)$$
where σ̂₀² is the estimated variance of Y(s_0) using the same covariance model as Σ̂.
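A sketch of point prediction (ours; the helper name and its interface are hypothetical) implementing Eq (6) and Eq (7) for a single location:

```python
import numpy as np

# Sketch (ours) of EBLUP (6) and its variance (7), given the fitted covariance
# Sigma_hat, covariates x0, c0 = cov-hat(eps, eps(s0)), and sigma2_0 = var-hat(Y(s0)),
# all built from the same fitted autocorrelation model and parameters theta-hat.
def kriging_point(y, X, Sigma_hat, x0, c0, sigma2_0):
    Lc = np.linalg.cholesky(Sigma_hat)
    solve = lambda M: np.linalg.solve(Lc.T, np.linalg.solve(Lc, M))
    Si_X, Si_y, Si_c = solve(X), solve(y), solve(c0)
    XtSiX_inv = np.linalg.inv(X.T @ Si_X)
    beta = XtSiX_inv @ (X.T @ Si_y)
    pred = x0 @ beta + c0 @ solve(y - X @ beta)          # Eq (6)
    w = x0 - X.T @ Si_c
    pvar = sigma2_0 - c0 @ Si_c + w @ XtSiX_inv @ w      # Eq (7)
    return pred, pvar
```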
Spatial Methods for Big Data

Here, we give a brief overview of the most popular methods currently used for large spatial data sets. There are various ways to classify such methods. For our purposes, there are two broad approaches. One is to adopt a Gaussian Process (GP) model for the data and then approximate the GP. The other is to model locally, essentially creating smaller data sets and using existing models.
There are several good reviews on methods for approximating the GP [13][14][15][16]. These methods include low rank ideas such as radial smoothing [17][18][19], fixed rank kriging [20][21][22][23], predictive processes [24,25], and multiresolution Gaussian processes [26,27]. Other approaches include covariance tapering [28][29][30], stochastic partial differential equations [31,32], and factoring the GP into a series of conditional distributions [33,34], which was extended to nearest neighbor Gaussian processes [35][36][37][38] and other sparse matrix improvements [39][40][41]. The reduced rank methods are very attractive, and allow models for situations where distances are non-Euclidean (for a review and example, see [42]), as well as fast computation.
Modeling locally involves an attempt to maintain classical geostatistical models by creating subsets of the data, using existing methods on subsets, and then making inference from subsets. For example, [43,44] created local data sets in a spatial moving window, and then estimated variograms and used ordinary kriging within those windows. This idea allows for nonstationary variances but forces an unnatural asymmetric autocorrelation because the range parameter changes when moving a window. Nor does it estimate β; rather, there is a different β for every point in space. Another early idea was to create a composite likelihood by taking products of subset-likelihoods and optimizing for autocorrelation parameters θ [45]; then θ̂ can be held fixed when predicting in local windows. However, this does not solve the problem of estimating a single β.
More recently, two broad approaches have been developed for modeling locally. One is a 'divide and conquer' approach, which is similar to [45]. Here, it is permissible to re-use data in subsets, or not use some data at all [46][47][48], with an overview provided by [49]. Another approach is a simple partition of the data into groups, where partitions are generally spatially compact [50][51][52][53]. This is sensible for estimating covariance parameters and will provide an unbiased estimate for β̂; however, the estimated variance var̂(β̂) will not be correct. Continuity corrections for predictions are provided, but predictions may not be efficient near partition boundaries.
A blocked structure for the covariance matrix based on spatially-compact groupings was proposed by [54], who then formulated a hybrid likelihood based on blocks of different sizes. The method that we feature is most similar to [54], but we show that there is no need for a hybrid likelihood, and that our approach is different than composite likelihood. Our spatial indexing approach is very simple and extends easily to random effects, and accommodates virtually any covariance matrix that can be constructed. We also show how to obtain the exact covariance matrix of estimated fixed effects without any need for computational derivatives or numerical approximations.
Motivating Example
One of the attractive features of the method that we propose is that it will work with any valid covariance matrix. To motivate our methods, consider a stream network (Fig 1a). This is the Mid-Columbia River basin, located along part of the border between the states of Washington and Oregon, USA, with a small part of the network in Idaho as well (Fig 1b). The stream network consists of 28,613 stream segments. Temperature loggers were placed at 9,521 locations on the stream network, indicated by purple dots in Fig 1a. A close-up of the stream network, indicated by the dark rectangle in Fig 1a, is given as Fig 1c, where we also show a systematic placement of prediction locations with orange dots. There are 60,099 prediction locations that will serve as the basis for point predictions. The response variable is an average of daily maximum temperatures in August from 1993 to 2011. Explanatory variables obtained for both observations and prediction sites included elevation at temperature logger site, slope of stream segment at site, percentage of upstream watershed composed of lakes or reservoirs, proportion of upstream watershed composed of glacial ice surfaces, mean annual precipitation in watershed upstream of sensor, the northing coordinate, base-flow index values, upstream drainage area, a canopy value encompassing the sensor, mean August air temperature from a gridded climate model, mean August stream discharge, and occurrence of sensor in tailwater downstream from a large dam (see [55] for more details).
These data were previously analyzed in [55] with geostatistical models specific to stream networks [11,56]. The models were constructed as spatial moving averages, e.g., [57,58], also called process convolutions, e.g., [59,60]. Two basic covariance matrices are constructed, and then summed. In one, random variables were constructed by integrating a kernel over a white noise process strictly upstream of a site; these are termed "tail-up" models. In the other construction, random variables were created by integrating a kernel over a white noise process strictly downstream of a site; these are termed "tail-down" models. Both types of models allow analytical derivation of autocovariance functions, with different properties. For tail-up models, sites remain independent so long as they are not connected by water flow from an upstream site to a downstream site. This is true even if two sites are very close spatially, but each on a different branch just upstream of a junction. Tail-down models are more typical as they allow spatial dependence that is generally a function of distance along the stream, but autocorrelation will still be different for two pairs of sites that are an equal distance apart, when one pair is connected by flow and the other is not. When considering big data, such as those in Fig 1, we considered the methods described in the previous section. The basis-function/reduced-rank approaches would be difficult for stream networks because an inspection of Fig 1 reveals that we would need thousands of basis functions in order to cover all headwater stream segments and run the basis functions downstream only. A separate set of basis functions would be needed that ran upstream, and then weighting would be required to split the basis functions at all stream junctions. In fact, all of the GP model approximation methods would require modifying a covariance structure that has already been developed specifically for stream
networks. The spatial indexing method that we propose below is much simpler, requiring no modification to the covariance structure, and, as we will demonstrate, proved to be adequate, not only for stream networks, but more generally.
Methods
Consider the covariance matrix to be used in Eq (4) and Eq (6). First, we index the data to create a covariance matrix with P partitions based on the indexes {i; i = 1, . . . , P },
$$\boldsymbol{\Sigma} = \begin{pmatrix} \boldsymbol{\Sigma}_{1,1} & \boldsymbol{\Sigma}_{1,2} & \cdots & \boldsymbol{\Sigma}_{1,P} \\ \boldsymbol{\Sigma}_{2,1} & \boldsymbol{\Sigma}_{2,2} & \cdots & \boldsymbol{\Sigma}_{2,P} \\ \vdots & \vdots & \ddots & \vdots \\ \boldsymbol{\Sigma}_{P,1} & \boldsymbol{\Sigma}_{P,2} & \cdots & \boldsymbol{\Sigma}_{P,P} \end{pmatrix} \tag{8}$$
In a similar way, imagine a corresponding indexing and partition of the spatial linear model as,
$$\begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_P \end{pmatrix} = \begin{pmatrix} \mathbf{X}_1 \\ \mathbf{X}_2 \\ \vdots \\ \mathbf{X}_P \end{pmatrix}\boldsymbol{\beta} + \begin{pmatrix} \boldsymbol{\varepsilon}_1 \\ \boldsymbol{\varepsilon}_2 \\ \vdots \\ \boldsymbol{\varepsilon}_P \end{pmatrix} \tag{9}$$
Now, for the purposes of estimating covariance parameters, we maximize the REML equations based on a covariance matrix,
$$\boldsymbol{\Sigma}_{\mathrm{part}} = \begin{pmatrix} \boldsymbol{\Sigma}_{1,1} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Sigma}_{2,2} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \boldsymbol{\Sigma}_{P,P} \end{pmatrix} \tag{10}$$
rather than Eq (8). The computational advantage of using Eq (10) in Eq (3) is that we only need to invert matrices of size $\boldsymbol{\Sigma}_{i,i}$ for all i, and, because we have large amounts of data, we assume that the $\{\boldsymbol{\Sigma}_{i,i}\}$ are sufficient for estimating the covariance parameters. If the size of $\boldsymbol{\Sigma}_{i,i}$ is fixed, then the computational burden grows linearly with n. Also, using Eq (10) in Eq (3) allows for parallel computing, because each $\boldsymbol{\Sigma}_{i,i}$ can be inverted independently. Note that we are not concerned with the variance of $\hat{\boldsymbol{\theta}}$, which is generally true in classical geostatistics. Rather, $\boldsymbol{\theta}$ contains nuisance parameters that require estimation in order to estimate fixed effects and make predictions. Because data are massive, we can afford to lose some efficiency in estimating the covariance parameters. For example, sample sizes ≥ 125 are generally recommended for estimating the covariance matrix for geostatistical data [61]. REML is for the most part unbiased. If we have thousands of samples, and if we imagine partitioning the spatial locations into data sets (in ways that we describe later), then using Eq (10) in Eq (3) is, essentially, using REML many times to obtain a pooled estimate of $\hat{\boldsymbol{\theta}}$.
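To make the computation concrete, the following is a minimal sketch (not the paper's code) of the partitioned REML criterion, assuming an exponential autocovariance $\tau^2\exp(-d/\rho) + \eta^2 I(d=0)$ and pre-made partitions; all object names are illustrative.

```r
# Sketch of the negative restricted log-likelihood under the block-diagonal
# covariance matrix, Eq (10); y_list, X_list, D_list hold the response,
# design matrix, and distance matrix for each of the P partitions.
reml_part <- function(logpars, y_list, X_list, D_list) {
  tau2 <- exp(logpars[1]); rho <- exp(logpars[2]); eta2 <- exp(logpars[3])
  p <- ncol(X_list[[1]])
  Txx <- matrix(0, p, p); txy <- numeric(p); logdet <- 0
  Sinv_list <- vector("list", length(y_list))
  for (i in seq_along(y_list)) {   # each block is small, so each solve()
    Di <- D_list[[i]]              # is cheap; the loop is parallelizable
    Sig <- tau2 * exp(-Di / rho) + eta2 * diag(nrow(Di))
    Sinv_list[[i]] <- solve(Sig)
    logdet <- logdet + as.numeric(determinant(Sig)$modulus)
    Txx <- Txx + t(X_list[[i]]) %*% Sinv_list[[i]] %*% X_list[[i]]
    txy <- txy + t(X_list[[i]]) %*% Sinv_list[[i]] %*% y_list[[i]]
  }
  beta <- solve(Txx, txy)          # profiled fixed effects
  quad <- 0
  for (i in seq_along(y_list)) {
    r <- y_list[[i]] - X_list[[i]] %*% beta
    quad <- quad + as.numeric(t(r) %*% Sinv_list[[i]] %*% r)
  }
  # negative restricted log-likelihood, up to an additive constant
  0.5 * (logdet + as.numeric(determinant(Txx)$modulus) + quad)
}
# e.g., optim(log(c(1, 0.5, 0.1)), reml_part, y_list = yl, X_list = Xl, D_list = Dl)
```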
Partitioning the covariance matrix is most closely related to the ideas of quasi-likelihood [62], composite likelihood [45], and divide and conquer [63]. However, for REML, they are not exactly equivalent. Consider the term $\log|\mathbf{X}^\top\boldsymbol{\Sigma}_\theta^{-1}\mathbf{X}|$ in Eq (3). Using the composite likelihood $\sum_{i=1}^{P} L(\boldsymbol{\theta}|\mathbf{y}_i)$ results in

$$\sum_{i=1}^{P} \log|\mathbf{X}_i^\top\boldsymbol{\Sigma}_{i,i}^{-1}\mathbf{X}_i| \quad \text{rather than} \quad \log\left|\sum_{i=1}^{P} \mathbf{X}_i^\top\boldsymbol{\Sigma}_{i,i}^{-1}\mathbf{X}_i\right|,$$

which is what is obtained when using $\boldsymbol{\Sigma}_{\mathrm{part}}$ in Eq (3).
An advantage to spatial indexing, when compared to composite likelihood, can be seen when X contains columns with many zeros, such as may occur for categorical explanatory variables. Then, partitioning X may result in an $\mathbf{X}_i$ that has columns of all zeros, which presents a problem when computing $\log|\mathbf{X}_i^\top\boldsymbol{\Sigma}_{i,i}^{-1}\mathbf{X}_i|$ for composite likelihood, but not when using $\boldsymbol{\Sigma}_{\mathrm{part}}$.
Estimation of β
The generalized least squares estimate for β was given in Eq (4). Although the inverse $\boldsymbol{\Sigma}^{-1}$ only occurs once (as compared to repeatedly when optimizing the REML equations), it will still be computationally prohibitive if a data set has thousands of samples. Note that under the partitioned model, Eq (9), with covariance matrix Eq (10), Eq (4) becomes

$$\hat{\boldsymbol{\beta}}_{bd} = \mathbf{T}_{xx}^{-1}\mathbf{t}_{xy} \tag{11}$$

where $\mathbf{T}_{xx} = \sum_{i=1}^{P} \mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{X}_i$ and $\mathbf{t}_{xy} = \sum_{i=1}^{P} \mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{y}_i$. This is a "pooled estimator" of β across the partitions, and it should be a good estimator of β at a much reduced computational cost. It will also be convenient to show that Eq (11) is linear in y, by noting that

$$\hat{\boldsymbol{\beta}}_{bd} = \left(\mathbf{T}_{xx}^{-1}\mathbf{X}_1^\top\hat{\boldsymbol{\Sigma}}_{1,1}^{-1} \,\middle|\, \mathbf{T}_{xx}^{-1}\mathbf{X}_2^\top\hat{\boldsymbol{\Sigma}}_{2,2}^{-1} \,\middle|\, \cdots \,\middle|\, \mathbf{T}_{xx}^{-1}\mathbf{X}_P^\top\hat{\boldsymbol{\Sigma}}_{P,P}^{-1}\right)\begin{pmatrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots\\\mathbf{y}_P\end{pmatrix} = \mathbf{Q}\mathbf{y} \tag{12}$$
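Given the block inverses already stored during the REML optimization, Eq (11) reduces to two accumulations. A sketch, with Sinv_list, X_list, and y_list as assumed stored objects:

```r
# Pooled GLS estimate, Eq (11), reusing the block inverses from REML
Txx <- Reduce(`+`, Map(function(X, Si) t(X) %*% Si %*% X, X_list, Sinv_list))
txy <- Reduce(`+`, Map(function(X, Si, y) t(X) %*% Si %*% y,
                       X_list, Sinv_list, y_list))
beta_bd <- solve(Txx, txy)
```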
To estimate the variance of $\hat{\boldsymbol{\beta}}_{bd}$ we cannot ignore the correlation between the partitions, so we consider the full covariance matrix Eq (8). If we compute the covariance matrix of Eq (11) under the full covariance matrix Eq (8), we obtain

$$\mathrm{var}(\hat{\boldsymbol{\beta}}_{bd}) = \mathbf{T}_{xx}^{-1} + \mathbf{T}_{xx}^{-1}\mathbf{W}_{xx}\mathbf{T}_{xx}^{-1} \tag{13}$$

where $\mathbf{W}_{xx} = \sum_{i=1}^{P-1}\sum_{j=i+1}^{P}\left[\mathbf{X}_i^\top\boldsymbol{\Sigma}_{i,i}^{-1}\boldsymbol{\Sigma}_{i,j}\boldsymbol{\Sigma}_{j,j}^{-1}\mathbf{X}_j + (\mathbf{X}_i^\top\boldsymbol{\Sigma}_{i,i}^{-1}\boldsymbol{\Sigma}_{i,j}\boldsymbol{\Sigma}_{j,j}^{-1}\mathbf{X}_j)^\top\right]$. Note that while we set parts of $\boldsymbol{\Sigma}$ to zero in Eq (10) in order to estimate $\boldsymbol{\theta}$ and $\boldsymbol{\beta}$, we computed the variance of $\hat{\boldsymbol{\beta}}$ using the full $\boldsymbol{\Sigma}$ in Eq (8). Using a plug-in estimator, whereby $\boldsymbol{\theta}$ is replaced by $\hat{\boldsymbol{\theta}}$, no further inverses of any $\boldsymbol{\Sigma}_{i,j}$ are required if all $\boldsymbol{\Sigma}_{i,i}^{-1}$ are stored as part of the REML optimization. There is only a single additional inverse required, which is $R \times R$, where R is the rank of the design matrix X, and it is already computed for $\mathbf{T}_{xx}^{-1}$ in Eq (11). Also note that if we simply substituted Eq (10) into Eq (5), then we would obtain only $\mathbf{T}_{xx}^{-1}$ as the variance of $\hat{\boldsymbol{\beta}}_{bd}$. In Eq (13), $\mathbf{T}_{xx}^{-1}\mathbf{W}_{xx}\mathbf{T}_{xx}^{-1}$ is the adjustment that is required for correlation among the partitions for a pooled estimate $\hat{\boldsymbol{\beta}}_{bd}$. Partitioning of the spatial linear model allows computation from Eq (11), while going back to the full model to develop Eq (13), which is a new result. This can be contrasted to the approaches for variance estimation of fixed effects using pseudo-likelihood, composite likelihood, and divide and conquer found in the earlier literature review.
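A sketch of Eq (13) follows; Sig_ij() is an assumed (hypothetical) helper that builds the cross-block covariance $\boldsymbol{\Sigma}_{i,j}$ from the fitted covariance parameters, so no additional block inverses are needed.

```r
# The adjustment term W_xx of Eq (13), a double sum over pairs of partitions
P <- length(X_list); p <- ncol(X_list[[1]])
Wxx <- matrix(0, p, p)
for (i in 1:(P - 1)) {
  for (j in (i + 1):P) {
    A <- t(X_list[[i]]) %*% Sinv_list[[i]] %*% Sig_ij(i, j) %*%
         Sinv_list[[j]] %*% X_list[[j]]
    Wxx <- Wxx + A + t(A)      # add the term and its transpose
  }
}
Txx_inv <- solve(Txx)          # the single additional R x R inverse
var_beta_bd <- Txx_inv + Txx_inv %*% Wxx %*% Txx_inv
```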
Eq (13) is quite fast, and the number of inverse matrices $\boldsymbol{\Sigma}_{i,i}^{-1}$ to compute grows linearly (that is, if the observed sample size is 2n, then there are twice as many inverses as for a sample of size n, if we hold partition size fixed). Also note that all inverses may already be computed as part of the REML estimation of $\boldsymbol{\theta}$. However, Eq (13) is quadratic in pure matrix computations due to the double sum in $\mathbf{W}_{xx}$. These can be parallelized, but may take too long for more than about 100,000 samples. One alternative is to use the empirical variation in

$$\hat{\boldsymbol{\beta}}_i = (\mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{X}_i)^{-1}\mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{y}_i,$$

where the ith matrix calculations are already needed for Eq (11), and $\hat{\boldsymbol{\beta}}_i$ can be simply computed and stored. Then, let

$$\widehat{\mathrm{var}}_{alt1}(\hat{\boldsymbol{\beta}}_{bd}) = \frac{1}{P(P-1)}\sum_{i=1}^{P}(\hat{\boldsymbol{\beta}}_i - \hat{\boldsymbol{\beta}}_{bd})(\hat{\boldsymbol{\beta}}_i - \hat{\boldsymbol{\beta}}_{bd})^\top \tag{14}$$
which has been used before for partitioned data, e.g., [64]. A second alternative is to pool the estimated variances of each $\hat{\boldsymbol{\beta}}_i$, which are $\widehat{\mathrm{var}}(\hat{\boldsymbol{\beta}}_i) = (\mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{X}_i)^{-1}$, to obtain

$$\widehat{\mathrm{var}}_{alt2}(\hat{\boldsymbol{\beta}}_{bd}) = \frac{1}{P^2}\sum_{i=1}^{P}\widehat{\mathrm{var}}(\hat{\boldsymbol{\beta}}_i) \tag{15}$$

where the first P in the denominator is for averaging the individual $\widehat{\mathrm{var}}(\hat{\boldsymbol{\beta}}_i)$, and the second P is the reduction in variance due to averaging the $\hat{\boldsymbol{\beta}}_i$. Eq (13), Eq (14), and Eq (15) are tested and compared below using simulations.
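Both alternatives are a few lines once the per-partition estimates are stored. A sketch, with beta_list and cov_list as assumed lists of the $\hat{\boldsymbol{\beta}}_i$ and their covariance matrices, and beta_bd as the pooled estimate from Eq (11):

```r
# Fast alternatives to Eq (13)
P <- length(beta_list)
dev <- lapply(beta_list, function(b) tcrossprod(b - beta_bd))
var_alt1 <- Reduce(`+`, dev) / (P * (P - 1))   # Eq (14): empirical variation
var_alt2 <- Reduce(`+`, cov_list) / P^2        # Eq (15): pooled covariances
```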
Point Prediction
The predictor for $Y(\mathbf{s}_0)$ was given in Eq (6). As for estimation, the inverse $\boldsymbol{\Sigma}^{-1}$ only occurs once (as compared to repeatedly when optimizing to obtain the REML estimates), but if the data set has tens of thousands of samples, it will still be computationally prohibitive. Note that under the partitioned model, Eq (9), which assumes zero correlation among partitions, Eq (10), the predictor from Eq (6) is
$$\hat{Y}(\mathbf{s}_0) = \mathbf{x}_0^\top\hat{\boldsymbol{\beta}}_{bd} + \mathbf{t}_{cy} - \mathbf{t}_{xc}^\top\hat{\boldsymbol{\beta}}_{bd} \tag{16}$$

where $\hat{\boldsymbol{\beta}}_{bd}$ is obtained from Eq (11), $\mathbf{t}_{cy} = \sum_{i=1}^{P}\hat{\mathbf{c}}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\mathbf{y}_i$, $\mathbf{t}_{xc} = \sum_{i=1}^{P}\mathbf{X}_i^\top\hat{\boldsymbol{\Sigma}}_{i,i}^{-1}\hat{\mathbf{c}}_i$, and $\hat{\mathbf{c}}_i = \widehat{\mathrm{cov}}(Y(\mathbf{s}_0), \mathbf{y}_i)$, using the same autocorrelation model and parameters as for $\hat{\boldsymbol{\Sigma}}$. Even though the predictor is developed under the block diagonal matrix Eq (10), the true prediction variance can be computed under Eq (8), as we did for estimation. However, the performance of these predictors turned out to be quite poor.
We recommend point predictions based on local data instead, which is an old idea, e.g. [43], and has already been implemented in software for some time, e.g. [10]. The local data may be in the form of a spatial limitation, such as a radius around the prediction point, or by using a fixed number of nearest neighbors. For example, the R [65] package nabor [66] finds nearest neighbors among hundreds of thousands of samples very quickly. Our method will be to use a single set of global covariance parameters as estimated under the covariance matrix partition Eq (10), and then predict with a fixed number of nearest neighbors. We will investigate the effect due to the number of nearest neighbors through simulation.
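For example, a neighbor lookup with nabor might look like the following sketch, where coords_obs (n × 2) and coords_pred (m × 2) are assumed coordinate matrices.

```r
library(nabor)
# row j of nn$nn.idx holds the indices of the 50 observations nearest
# to the jth prediction site
nn <- knn(data = coords_obs, query = coords_pred, k = 50)
```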
A purely local predictor lacks model coherency, as discussed in the literature review section. We use a single $\hat{\boldsymbol{\theta}}$ for covariance, but there is still the issue of $\hat{\boldsymbol{\beta}}$. As seen in Eq (6), estimation of β is implicit in the prediction equations. If $\mathbf{y}_j \subset \mathbf{y}$ are data in the neighborhood of prediction location $\mathbf{s}_j$, then using Eq (6) with local $\mathbf{y}_j$ implicitly adopts a varying coefficient model for $\hat{\boldsymbol{\beta}}$, making it also local, so call it $\hat{\boldsymbol{\beta}}_j$; it will vary for each prediction location $\mathbf{s}_j$. A further issue arises if there are categorical covariates. It is possible that a level of the covariate is not present in the local neighborhood, so some care is needed to collapse any columns in the design matrix that are all zeros. These are some of the issues that call into question the "coherency" of a model when predicting locally.
Instead, as for estimating the covariance parameters, we will assume that the goal is to have a single global estimate of β. Then we take as our predictor for the jth prediction location

$$\hat{Y}(\mathbf{s}_j) = \mathbf{x}_j^\top\hat{\boldsymbol{\beta}}_{bd} + \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}(\mathbf{y}_j - \mathbf{X}_j\hat{\boldsymbol{\beta}}_{bd}) \tag{17}$$
where $\mathbf{X}_j$ and $\hat{\boldsymbol{\Sigma}}_j$ are the design and covariance matrices, respectively, for the same neighborhood as $\mathbf{y}_j$, $\mathbf{x}_j$ is a vector of covariates at prediction location j, $\hat{\mathbf{c}}_j = \widehat{\mathrm{cov}}(Y(\mathbf{s}_j), \mathbf{y}_j)$ (using the same autocorrelation model and parameters as for $\hat{\boldsymbol{\Sigma}}_j$), and $\hat{\boldsymbol{\beta}}_{bd}$ was given in Eq (11). It will be convenient for block kriging to note that, if we condition on $\hat{\boldsymbol{\Sigma}}_j$ being fixed, then Eq (17) can be written as a linear combination of y, call it $\boldsymbol{\lambda}_j^\top\mathbf{y}$, similar to $\boldsymbol{\eta}_0^\top\mathbf{y}$ as mentioned after Eq (6). Suppose there are m neighbors around $\mathbf{s}_j$, so $\mathbf{y}_j$ is $m \times 1$. Let $\mathbf{y}_j = \mathbf{N}_j\mathbf{y}$, where $\mathbf{N}_j$ is an $m \times n$ matrix of zeros and ones that subsets the $n \times 1$ vector of all data to only those in the neighborhood. Then
$$\hat{Y}(\mathbf{s}_j) = \mathbf{x}_j^\top\mathbf{Q}\mathbf{y} + \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}\mathbf{N}_j\mathbf{y} - \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}\mathbf{X}_j\mathbf{Q}\mathbf{y} = \boldsymbol{\lambda}_j^\top\mathbf{y} \tag{18}$$
where Q was defined in Eq (12).
Let $\hat{\mathbf{C}}$ be an estimator of $\mathrm{var}(\hat{\boldsymbol{\beta}}_{bd})$ from Eq (13), Eq (14), or Eq (15). Then the prediction variance of Eq (17) is $\mathrm{var}(Y(\mathbf{s}_j) - \hat{Y}(\mathbf{s}_j))$ when using the local neighborhood set of data, which is

$$\begin{aligned} \widehat{\mathrm{var}}(\hat{Y}(\mathbf{s}_j)) &= \mathrm{E}_{\hat{\boldsymbol{\beta}}_{bd}}\left[\mathrm{var}_{\{\mathbf{y}_j, Y(\mathbf{s}_j)\}}\left(Y(\mathbf{s}_j) - \mathbf{x}_j^\top\hat{\boldsymbol{\beta}}_{bd} - \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}(\mathbf{y}_j - \mathbf{X}_j\hat{\boldsymbol{\beta}}_{bd}) \,\middle|\, \hat{\boldsymbol{\beta}}_{bd}\right)\right] \\ &\quad + \mathrm{var}_{\hat{\boldsymbol{\beta}}_{bd}}\left[\mathrm{E}_{\{\mathbf{y}_j, Y(\mathbf{s}_j)\}}\left(Y(\mathbf{s}_j) - \mathbf{x}_j^\top\hat{\boldsymbol{\beta}}_{bd} - \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}(\mathbf{y}_j - \mathbf{X}_j\hat{\boldsymbol{\beta}}_{bd}) \,\middle|\, \hat{\boldsymbol{\beta}}_{bd}\right)\right] \\ &= \hat{\sigma}^2 - \hat{\mathbf{c}}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}\hat{\mathbf{c}}_j + (\mathbf{x}_j - \mathbf{X}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}\hat{\mathbf{c}}_j)^\top\hat{\mathbf{C}}(\mathbf{x}_j - \mathbf{X}_j^\top\hat{\boldsymbol{\Sigma}}_j^{-1}\hat{\mathbf{c}}_j) \end{aligned} \tag{19}$$

where $\hat{\sigma}^2$ is the estimated value of $\mathrm{var}(Y(\mathbf{s}_j))$ using $\hat{\boldsymbol{\theta}}$ and the same autocorrelation model that was used for $\hat{\boldsymbol{\Sigma}}$. Eq (19) can be compared to Eq (7).
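A sketch of the local predictor and its variance for a single site follows; x0, c0, Xj, Sigj, yj, beta_bd, Chat, and sigma2 are assumed to be the quantities defined around Eqs (17) and (19).

```r
# Local prediction, Eq (17), and prediction variance, Eq (19)
predict_local <- function(x0, c0, Xj, Sigj, yj, beta_bd, Chat, sigma2) {
  Sinv_c <- solve(Sigj, c0)                     # Sigma_j^{-1} c_j
  pred <- sum(x0 * beta_bd) + sum(Sinv_c * (yj - Xj %*% beta_bd))
  g <- x0 - t(Xj) %*% Sinv_c                    # adjustment vector in Eq (19)
  pvar <- sigma2 - sum(c0 * Sinv_c) + as.numeric(t(g) %*% Chat %*% g)
  list(pred = pred, se = sqrt(pvar))
}
```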
Block Prediction
None of the literature reviewed earlier considered block prediction, yet it is an important goal in many applications. In fact, the origins of kriging were founded on estimating total gold reserves in the pursuit of mining [9]. The goal of block prediction is to predict the average value over a region, rather than at a point. If that region is a compact set of points denoted as B, then the random quantity is
$$Y(B) = \frac{1}{|B|}\int_B Y(\mathbf{s})\,d\mathbf{s} \tag{20}$$

where $|B| = \int_B 1\,d\mathbf{s}$ is the area of B. In practice, we approximate the integral by a dense set of points on a regular grid within B. Let us call that dense set of points $D = \{\mathbf{s}_j; j = n+1, \ldots, N\}$, where recall that $\{\mathbf{s}_j; j = 1, \ldots, n\}$ are the observed data. Then the grid-based approximation to Eq (20) is $Y_D = (1/N)\sum_{j \in D} Y(\mathbf{s}_j)$, with generic predictor

$$\hat{Y}_D = \frac{1}{N}\sum_{j \in D}\hat{Y}(\mathbf{s}_j)$$
We are in the same situation as for prediction of single sites, where we are unable to invert the covariance matrix of all n observed locations for predicting $\{\hat{Y}(\mathbf{s}_j); j = n+1, n+2, \ldots, N\}$. Instead, let us use the local predictions as developed in the previous section, which we will average to compute the block prediction. Let the point predictions be a set of random variables denoted as $\{\hat{Y}(\mathbf{s}_j); j = n+1, n+2, \ldots, N\}$. Denote by $\mathbf{y}_o$ a vector of random variables for observed locations, and by $\mathbf{y}_u$ a vector of unobserved random variables on the prediction grid D to be used as an approximation to the block. Recall that we can write Eq (18) as $\hat{Y}(\mathbf{s}_j) = \boldsymbol{\lambda}_j^\top\mathbf{y}_o$. We can put all $\boldsymbol{\lambda}_j$ into a large matrix,
$$\mathbf{W} = \begin{pmatrix} \boldsymbol{\lambda}_{n+1}^\top \\ \boldsymbol{\lambda}_{n+2}^\top \\ \vdots \\ \boldsymbol{\lambda}_N^\top \end{pmatrix}_{(N-n)\times n}$$
The average of all predictions, then, is

$$\hat{Y}_D = \mathbf{a}^\top\mathbf{W}\mathbf{y}_o \tag{21}$$
where $\mathbf{a} = (1/N, 1/N, \ldots, 1/N)^\top$. Let $\mathbf{a}_*^\top = \mathbf{a}^\top\mathbf{W}$, so the block prediction $\mathbf{a}_*^\top\mathbf{y}_o$ is also linear in $\mathbf{y}_o$. Let the covariance matrix for the vector $(\mathbf{y}_o^\top, \mathbf{y}_u^\top)^\top$ be

$$\mathbf{V} = \begin{pmatrix} \mathbf{V}_{o,o} & \mathbf{V}_{o,u} \\ \mathbf{V}_{u,o} & \mathbf{V}_{u,u} \end{pmatrix},$$

where $\mathbf{V}_{o,o} = \boldsymbol{\Sigma}$ in Eq (8). Then, assuming unbiasedness, that is,

$$\mathrm{E}(\mathbf{a}_*^\top\mathbf{y}_o) = \mathrm{E}(\mathbf{a}^\top\mathbf{y}_u) \implies \mathbf{a}_*^\top\mathbf{X}_o\boldsymbol{\beta} = \mathbf{a}^\top\mathbf{X}_u\boldsymbol{\beta},$$

where $\mathbf{X}_o$ and $\mathbf{X}_u$ are the design matrices for the observed and unobserved variables, respectively, the block prediction variance is
$$\mathrm{E}(\mathbf{a}_*^\top\mathbf{y}_o - \mathbf{a}^\top\mathbf{y}_u)^2 = \mathbf{a}_*^\top\mathbf{V}_{o,o}\mathbf{a}_* - 2\,\mathbf{a}_*^\top\mathbf{V}_{o,u}\mathbf{a} + \mathbf{a}^\top\mathbf{V}_{u,u}\mathbf{a} \tag{22}$$
Although the various parts of V can be very large, the necessary vectors can be created on-the-fly to avoid creating and storing the whole matrix. For example, take the third term in Eq (22). To make the kth element of the vector $\mathbf{V}_{u,u}\mathbf{a}$, we can create the kth row of $\mathbf{V}_{u,u}$ and then take its inner product with a. This means that only the vector $\mathbf{V}_{u,u}\mathbf{a}$ must be stored. We then simply take this vector as an inner product with a to obtain $\mathbf{a}^\top\mathbf{V}_{u,u}\mathbf{a}$. Also note that computing Eq (21) grows linearly with observed sample size n, due to fixing the number of neighbors used for prediction, but Eq (22) grows quadratically, in both n and N, simply due to the matrix dimensions in $\mathbf{V}_{o,o}$ and $\mathbf{V}_{u,u}$. We can control the growth of N by choosing the density of the grid approximation, but it may require subsampling of $\mathbf{y}_o$ if the number of observed data is too large. We often have very precise estimates of block averages, so this may not be too onerous if we have hundreds of thousands of observations.
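The following sketch shows the row-at-a-time computation for the third term of Eq (22); covfun() is an assumed (hypothetical) helper that returns the covariances between one location and a set of locations under the fitted model, and a is the averaging vector defined after Eq (21).

```r
# a' V_uu a without ever storing V_uu; grid holds the dense prediction points
Vuu_a <- numeric(length(a))
for (k in seq_along(a)) {
  Vuu_a[k] <- sum(covfun(grid[k, ], grid) * a)  # kth row of V_uu, on the fly
}
term3 <- sum(a * Vuu_a)
```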
The SPIN Method
We will use the acronym SPIN, for SPatial INdexing, for the collection of methods for covariance parameter estimation, fixed effects estimation, and point and block prediction, based on spatial indexing leading to covariance matrix partitioning, as described above. We estimate covariance parameters using REML, Eq (3), with a valid autocovariance model [e.g., Eq (2)] and the partitioned covariance matrix in Eq (10). Using these estimated covariance parameters, we estimate β using Eq (11), with estimated covariance matrix Eq (13), unless explicitly stating the use of Eq (14) or Eq (15). For point prediction, we use Eq (17) with estimated variance Eq (19), unless explicitly stating the purely local version for $\hat{\boldsymbol{\beta}}$ given by Eq (6) with estimated variance Eq (7). For block prediction, we use Eq (21) with Eq (22).
Simulations
To test the validity of SPIN, we simulated n spatial locations randomly within the [0, 1] × [0, 1] unit square to be used as observations, and we created a uniformly-spaced (N − n) = 40 × 40 prediction grid within the unit square. We simulated data with two methods. The first simulation method created data sets that were not actually very large, using exact geostatistical methods that require the Cholesky decomposition of the covariance matrix. For these simulations, we used the spherical autocovariance model to construct Σ,
$$\mathrm{cov}[\varepsilon(\mathbf{s}_i), \varepsilon(\mathbf{s}_j)] = \tau^2\left(1 - \frac{3 d_{i,j}}{2\rho} + \frac{d_{i,j}^3}{2\rho^3}\right)I(d_{i,j} < \rho) + \eta^2 I(d_{i,j} = 0) \tag{23}$$
where terms are defined as in Eq (2). To simulate normally-distributed data from $\mathrm{N}(\mathbf{0}, \boldsymbol{\Sigma})$, let L be the lower triangular matrix such that $\boldsymbol{\Sigma} = \mathbf{L}\mathbf{L}^\top$. If the vector z is simulated as independent standard normal variables, then $\boldsymbol{\varepsilon} = \mathbf{L}\mathbf{z}$ is a simulation from $\mathrm{N}(\mathbf{0}, \boldsymbol{\Sigma})$. Unfortunately, computing L is an $O(n^3)$ algorithm, on the same order as inverting $\boldsymbol{\Sigma}$, which limits the size of data for simulation. Fig 2a,b shows two realizations from $\mathrm{N}(\mathbf{0}, \boldsymbol{\Sigma})$, where the sample size was n = 2000 and the autocovariance model, Eq (23), had $\tau^2 = 10$, $\rho = 0.5$, and $\eta^2 = 0.1$. Each simulation took about 3 seconds. Note that when including evaluation of predictions, simulations are required at all N spatial locations. We call this the GEOSTAT simulation method. For all simulations, we fixed $\tau^2 = 10$ and $\eta^2 = 0.1$, but allowed ρ to vary randomly from a uniform distribution between 0 and 2. We created another method for simulating spatially patterned data for up to several million records. Let $\mathbf{S} = [\mathbf{s}_1, \mathbf{s}_2]$ be the 2-column matrix of the spatial coordinates of the data, where $\mathbf{s}_1$ is the first coordinate and $\mathbf{s}_2$ is the second coordinate. Let
$$\mathbf{S}^* = [\mathbf{s}_1^*, \mathbf{s}_2^*] = \mathbf{S}\begin{pmatrix} \cos(U_{1,i}\pi) & -\sin(U_{1,i}\pi) \\ \sin(U_{1,i}\pi) & \cos(U_{1,i}\pi) \end{pmatrix}$$

be a random rotation of the coordinate system by $U_{1,i}\pi$ radians, where $U_{1,i}$ is a uniform random variable. Then let
$$\boldsymbol{\varepsilon}_i = U_{2,i}\left(1 - \frac{i-1}{100}\right)\left[\sin(i U_{3,i} 2\pi[\mathbf{s}_1^* + U_{4,i}\pi]) + \sin(i U_{5,i} 2\pi[\mathbf{s}_2^* + U_{6,i}\pi])\right] \tag{24}$$
which is a 2-dimensional sine-wave surface with a random amplitude (due to the uniform random variable $U_{2,i}$), random frequencies on each coordinate (due to the uniform random variables $U_{3,i}$ and $U_{5,i}$), and random shifts on each coordinate (due to the uniform random variables $U_{4,i}$ and $U_{6,i}$). Then the response variable is created by taking $\boldsymbol{\varepsilon} = \sum_{i=1}^{100}\boldsymbol{\varepsilon}_i$, where expected amplitudes decrease linearly, and expected frequencies increase, with each i. Further, the ε were standardized to zero mean and a variance of 10 for each simulation, and we added a small independent component with a variance of 0.1 to each location, similar to the nugget effect $\eta^2$ for the GEOSTAT method. Fig 2c,d shows two realizations from the sum of random sine-wave surfaces, where the sample size was 100,000. Each simulation took about 2 seconds. We call this the SUMSINE simulation method.
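Sketches of both simulation methods, written from the descriptions above (so illustrative rather than the paper's code), are given below.

```r
# GEOSTAT: spherical covariance, Eq (23), correlated via a Cholesky factor
n <- 2000; tau2 <- 10; rho <- 0.5; eta2 <- 0.1
S <- cbind(runif(n), runif(n))
d <- as.matrix(dist(S))
Sigma <- tau2 * (1 - 3 * d / (2 * rho) + d^3 / (2 * rho^3)) * (d < rho) +
         eta2 * (d == 0)
eps_geo <- t(chol(Sigma)) %*% rnorm(n)   # O(n^3), so n must stay moderate

# SUMSINE: sum of 100 randomly rotated, scaled, and shifted sine surfaces
eps <- rep(0, n)
for (i in 1:100) {
  U <- runif(6)
  R <- matrix(c(cos(U[1] * pi), sin(U[1] * pi),
                -sin(U[1] * pi), cos(U[1] * pi)), 2, 2)
  Ss <- S %*% R                          # random rotation, as in the text
  eps <- eps + U[2] * (1 - (i - 1) / 100) *
    (sin(i * U[3] * 2 * pi * (Ss[, 1] + U[4] * pi)) +
     sin(i * U[5] * 2 * pi * (Ss[, 2] + U[6] * pi)))
}
# standardize to variance 10 and add a small nugget-like component
eps_sum <- sqrt(10) * as.numeric(scale(eps)) + rnorm(n, sd = sqrt(0.1))
```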
Thus, the random errors ε for the simulations were based on the GEOSTAT or SUMSINE methods. In either case, we created two fixed effects. A covariate, $x_1(\mathbf{s}_i)$, was generated from independent standard normal distributions at the $\mathbf{s}_i$ locations. A second, spatially-patterned covariate, $x_2(\mathbf{s}_i)$, was created using the same model, but a different realization, as the random error simulation for ε. Then the response variable was created as
$$Y(\mathbf{s}_i) = \beta_0 + \beta_1 x_1(\mathbf{s}_i) + \beta_2 x_2(\mathbf{s}_i) + \varepsilon(\mathbf{s}_i) \tag{25}$$

for $i = 1, 2, \ldots$, up to a specified sample size n, or N (if simulations at prediction sites are wanted), and $\beta_0 = \beta_1 = \beta_2 = 1$.
Evaluation of Simulation Results
For one summary of performance of fixed effects estimation, we consider the simulation-based estimator of root-mean-squared error,
$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}(\hat{\beta}_{p,k} - \beta_p)^2}$$

where $\hat{\beta}_{p,k}$ is the estimate of the pth β parameter in the kth of K simulations, and $\beta_p$ is the true parameter value used in the simulations. We only consider $\beta_1$ and $\beta_2$ in Eq (25). The next simulation-based estimator we consider is 90% confidence interval coverage,
$$\mathrm{CI90} = \frac{1}{K}\sum_{k=1}^{K} I\left(\hat{\beta}_{p,k} - 1.645\sqrt{\widehat{\mathrm{var}}(\hat{\beta}_{p,k})} < \beta_p < \hat{\beta}_{p,k} + 1.645\sqrt{\widehat{\mathrm{var}}(\hat{\beta}_{p,k})}\right)$$
To evaluate point prediction we also consider the simulation-based estimator of root-mean-squared prediction error,
$$\mathrm{RMSPE} = \sqrt{\frac{1}{K \times 1600}\sum_{k=1}^{K}\sum_{j=1}^{1600}(\hat{Y}_k(\mathbf{s}_j) - y_k(\mathbf{s}_j))^2}$$
where $\hat{Y}_k(\mathbf{s}_j)$ is the predicted value at the jth location for the kth simulation, and $y_k(\mathbf{s}_j)$ is the realized value at the jth location for the kth simulation. The final summary that we consider is 90% prediction interval coverage,
$$\mathrm{PI90} = \frac{1}{K \times 1600}\sum_{k=1}^{K}\sum_{j=1}^{1600} I\left(\hat{Y}_k(\mathbf{s}_j) - 1.645\sqrt{\widehat{\mathrm{var}}(\hat{Y}_k(\mathbf{s}_j))} < y_k(\mathbf{s}_j) < \hat{Y}_k(\mathbf{s}_j) + 1.645\sqrt{\widehat{\mathrm{var}}(\hat{Y}_k(\mathbf{s}_j))}\right)$$
where $\widehat{\mathrm{var}}(\hat{Y}_k(\mathbf{s}_j))$ is an estimator of the prediction variance.
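From stored simulation output, the four summaries are one-liners; a sketch, where est, se, truth and pred, pvar, yreal are assumed vectors or matrices collected over the K runs:

```r
RMSE  <- sqrt(mean((est - truth)^2))
CI90  <- mean(est - 1.645 * se < truth & truth < est + 1.645 * se)
RMSPE <- sqrt(mean((pred - yreal)^2))
PI90  <- mean(pred - 1.645 * sqrt(pvar) < yreal &
              yreal < pred + 1.645 * sqrt(pvar))
```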
Effect of Partition Method
We wanted to test SPIN over a wide range of data. Hence, we simulated 1000 data sets where the simulation method was chosen randomly, with equal probability, between the GEOSTAT and SUMSINE methods. If GEOSTAT was chosen, a random sample size between 1000 and 2000 was generated. If SUMSINE was chosen, a random sample size between 2000 and 10,000 was generated. Thus, throughout the study, the simulations occurred over a wide range of parameters, with two different simulation methods and randomly varying autocorrelation. In all cases, the error models fitted to the data were mis-specified, because we fitted an exponential autocorrelation model to data from the true models, GEOSTAT and SUMSINE, that generated them. This should provide a good test of the robustness of the SPIN method and provide fairly general conclusions on the effect of partition method. After simulating the data, we considered 3 indexing methods: one was completely random, the second was spatially compact, and the third was a mixed strategy, starting with compact, with 10% then randomly reassigned. To create compact data partitions, we used k-means clustering [67] on the spatial coordinates, as sketched below. K-means has the property of minimizing within-group variances and maximizing among-group variances; when applied to spatial coordinates, k-means creates spatially compact partitions. An example of each partition method is given in Fig 3. We created partition sizes that ranged randomly from a target of 25 to 225 locations per group (k-means has some variation in group size). It is possible to create one partition for covariance estimation, and another partition for estimating fixed effects; therefore we considered all nine combinations of the three partition methods for each estimation method. Table 1 shows performance summaries for the three partition methods, for both fixed effect estimation and point prediction, over wide-ranging simulations when using SPIN with 50 nearest neighbors for predictions. It is clear that, whether for fixed effect estimation or prediction, the use of compact partitions was the best option. The worst option was random partitioning. The mixed approach was often close to compact partitioning in performance.
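A sketch of the three partitioning schemes, using stats::kmeans on the coordinate matrix S and a target block size of about 50:

```r
P <- ceiling(n / 50)
grp_rand <- sample(rep_len(1:P, n))                        # RAND
grp_comp <- kmeans(S, centers = P, iter.max = 50)$cluster  # COMP
idx <- sample(n, round(0.10 * n))                          # MIXD: 10% of sites
grp_mixd <- grp_comp                                       # reassigned at random
grp_mixd[idx] <- sample(P, length(idx), replace = TRUE)
```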
Effect of Partition Size
Next, we investigated the effect of partition size. We only used compact partitions, because they were best, and we used partition sizes of 25, 50, 100, and 200 for both covariance parameter estimation and fixed effect estimation, and again used 50 nearest neighbors for predictions. We simulated data in the same way as above, and used the same performance summaries. Here, we also included the average time, in seconds, for each estimator. The results are shown in Table 2. In general, larger partition sizes had better RMSE for estimating covariance parameters, but the gains were very small after size 50. For fixed effects estimation, a partition size of 50 was often better than 100, and approximately equal to size 200. For prediction, RMSPE was lower as partition size increased. In terms of computing speed, covariance parameter estimation was slower as partition size increased, but fixed effect estimation was faster as partition size increased (because of fewer loops in Eq (13)). Partition sizes of 25 often had poor coverage in terms of both CI90 and PI90, but coverage was good for other partition sizes. Based on Table 1 and Table 2, one good overall strategy is to use compact partitions of block size 50 for covariance parameter estimation, and block size 200 for fixed effect estimation, for both efficiency and speed. Note that when the partition size for fixed effect estimation is different from that for covariance parameter estimation, new inverses of the diagonal blocks in Eq (10) are needed. If the partition size is the same for fixed effect and covariance parameter estimation, inverses of diagonal blocks can be passed from REML to fixed effects estimation, so another good strategy is to use block size 50 for both fixed effect and covariance parameter estimation.
Variance Estimation for Fixed Effects
In the section on estimating β, we described three possible estimators for the covariance matrix of $\hat{\boldsymbol{\beta}}_{bd}$, with Eq (13) being theoretically correct, and Eq (14) and Eq (15) being faster alternatives. The alternative estimators will only be necessary for very large sample sizes, so to test their efficacy we simulated 1000 data sets with random sample sizes, from 10,000 to 100,000, using the SUMSINE method. We then fitted the covariance model, using compact partitions of size 50, and the fixed effects, using partition sizes of 25, 50, 100, and 200. We computed the estimated covariance matrix of the fixed effects using Eq (13), Eq (14), and Eq (15), and evaluated performance based on 90% confidence interval coverage. Results in Table 3 show that all three estimators, at all block sizes, have confidence interval coverage very close to the nominal 90% when estimating $\beta_1$, the independent covariate. However, when estimating the spatially-patterned covariate, $\beta_2$, the theoretical estimator has proper coverage for block sizes 50 and greater, while the two alternative estimators have proper coverage only for block size 50. It is surprising that the results for the alternative estimators are so specific to a particular block size, and these estimators warrant further research.
Prediction with Global Estimate of β
In the sections on point and block prediction, we described prediction using both a local estimator of β and the global estimator $\hat{\boldsymbol{\beta}}_{bd}$. To compare them, and to examine the effect of the number of nearest neighbors, we simulated 1000 data sets as described earlier, using compact partitions of size 50 for both covariance and fixed-effects estimation. We then predicted values at the gridded locations with 25, 50, 100, and 200 nearest neighbors. Results in Table 4 show that prediction with the global estimator $\hat{\boldsymbol{\beta}}_{bd}$ had smaller RMSPE, especially with smaller numbers of nearest neighbors. As expected, predictors had lower RMSPE with more nearest neighbors, but gains were small after block size 50. Prediction intervals for both methods had proper coverage. The local estimator of β was faster because it used the local estimator of the covariance of β, while predictions with $\hat{\boldsymbol{\beta}}_{bd}$ needed the global covariance estimator, Eq (13), to be used in Eq (19). Higher numbers of nearest neighbors took longer to compute, especially with numbers greater than 100. Of course, predictions for the block average had much smaller RMSPE than those for points. Again, prediction improved when using more nearest neighbors, but improvements were small with more than 50. Computing time for block averaging increased with the number of neighbors, especially when greater than 100, and took longer than point predictions.
A Comparison of Methods
To compare methods, we simulated 1000 data sets using GEOSTAT, where we fixed the sample size at n = 1000. We compared 3 methods: 1) estimation and prediction using the full covariance matrix for all 1000 points, 2) SPIN with compact blocks of 50 for both covariance and fixed effects parameter estimation, and 50 nearest neighbors for prediction, and 3) nearest-neighbor Gaussian processes (NNGP). NNGP had good performance in [16], and software is readily available in the R package spNNGP [68]. For spNNGP, we used default parameters for the conjugate prior method and a 25 × 25 search grid for phi and alpha, which were the dimensions of the search grid found in [16]. We stress that we do not claim this to be a definitive comparison among methods, as the developers of NNGP could surely make adjustments to improve performance. Likewise, partition size and the number of nearest neighbors for prediction could be adjusted to optimize performance of SPIN for any given simulation or data set. We offer these results to show that, broadly, SPIN and NNGP are comparable, and very fast, with little performance lost in comparison to using the full covariance matrix. Table 5 shows that RMSE for estimation of the independent covariate, and for the spatially-patterned covariate, were approximately equal for SPIN and NNGP, and only slightly worse than for the full covariance matrix. RMSPE for SPIN was equal to that of the full covariance matrix, and both were just slightly better than NNGP. Confidence and prediction intervals for all three methods were very close to the nominal 90%. Figure 4 shows computing times, using 5 replicate simulations, for each method for up to 100,000 records. Both NNGP and SPIN can use parallel processing, but here we used a single processor to remove any differences due to parallel implementations. Fitting the full covariance matrix with REML, which is iterative, took more than 30 minutes with sample sizes > 2500. Computing time for NNGP is clearly linear with sample size, while for SPIN, it is quadratic when using Eq (13), but linear when using the alternative variance estimators for fixed effects (Eqs 14 and 15). Using the alternative variance estimators, SPIN was about 10 times faster than NNGP, and even with quadratic growth when using Eq (13), SPIN was faster than NNGP for up to 100,000 records.
Application to Stream Networks
We applied spatial indexing to covariance matrices constructed using stream network models as described for the motivating example in the Introduction. These are variance component models, with a tail-up component, a tail-down component, and a Euclidean-distance component, each with 2 covariance parameters, along with a nugget effect; thus, there are 7 covariance parameters (4 partial sills, and 3 range parameters).
The R package SSN [69] was developed to use a full covariance matrix for these models, and we easily adapted it for spatial partitioning. We used compact blocks of size 50 for estimation, and 50 nearest neighbors for predictions. The 4 partial sill estimates were 1.76, 0.40, 2.57, and 0.66 for the tail-up, tail-down, Euclidean-distance, and nugget effects, respectively. These indicate that the tail-up and Euclidean-distance components dominated the structure of the overall autocovariance, and both had large range parameters. It took 7.98 minutes to fit the covariance parameters. Fitting the fixed effects took an additional 2.15 minutes of computing time (Table 6), and the results are very similar to those found in [55]. Predictions for 65,099 locations are shown in Fig 5, and took 47 minutes. In summary, the original analysis [55] took 10 days of continuous computing time to fit the model and make predictions with a full 9521 × 9521 covariance matrix. Using SPIN, fitting the same model took about 10 minutes, with an additional 47 minutes for predictions. Note that these models take more time than Euclidean distance alone because there are 7 covariance parameters, and the tail-up and tail-down models use stream distance, which takes longer to compute. For this example, we used parallel processing with 8 cores when fitting covariance parameters and fixed effects, and when making predictions, which made the analyses considerably faster. We did not use block prediction, because that was not a particular goal for this study. However, it is generally important, and has been used for estimating fish abundance with the R package SSN [70].
Discussion and Conclusions
We have explored spatial partitioning to speed computations for massive data sets. We have provided novel and theoretically correct development of variance estimators for all quantities. We proposed a globally coherent model for covariance and fixed effects estimation, and then used that model for improved predictions, even when those predictions are done locally based on nearest neighbors. We also include block kriging in our development, which is absent from the literature on big-data methods for spatial models.
Our simulations showed that, over a range of sample sizes, simulation methods, and degrees of autocorrelation, spatially compact partitions are best. There does not appear to be a need for "large blocks," as used in [54]. A good overall strategy, combining speed with little loss of precision, is based on 50/50/50, where compact partitions of size 50 are used for both covariance parameter estimation and fixed effects estimation, and 50 nearest neighbors are used for prediction. This strategy compared very favorably with a default strategy for NNGP.
One benefit of the data indexing is that it extends easily to any geostatistical model with a valid covariance matrix. There is no need to approximate a Gaussian process. We provided one example for stream network models, but other examples include geometric anisotropy, nonstationary models, spatio-temporal models (including those that are nonseparable), etc. Any valid covariance matrix can be indexed and partitioned, offering both faster matrix inversions and parallel computing, while providing valid inferences with proper uncertainty assessment.
Fig 1. The study area for the motivating example. (a) A stream network from the mid-Columbia River basin, where purple points show 9521 sample locations that measured mean water temperature during August. (b) Most of the stream network is located in Washington and Oregon in the United States. (c) A close-up of the black rectangle in (a). The orange points are prediction locations.
Fig 2. Examples of simulated surfaces used to test methods. (a) and (b) are two different realizations of 2000 values from the GEOSTAT method with a range of 2. (c) and (d) are two realizations of 100,000 values from the SUMSINE method. Bluer values are lower, and yellower areas are higher.
Fig 3. Illustration of three methods for partitioning data. Sample size was 1000, and the data were partitioned into 5 groups of 200 each. (a) Random assignment to group. (b) K-means clustering on x- and y-coordinates. (c) K-means on x- and y-coordinates, with 10% randomly re-assigned from each group. Each color represents a different grouping.
Fig 4. Computing times as a function of sample size for three methods: 1) full covariance matrix (black line), 2) NNGP (red line), and 3) SPIN (green lines). For SPIN, the theoretically correct variance estimator (Eq 13) is solid green, while the faster alternatives (Eqs 14 and 15) are dashed green.
Fig 5. Temperature predictions at 65,099 locations for the Mid-Columbia River. Yellower colors are higher values, while bluer colors are lower values.
Table 1. Effect of partition method. Results are based on 1000 simulations, as described in the text. The first column gives the data partition method for covariance parameter estimation (COPE) using REML, which was one of random partitioning (RAND), compact partitioning (COMP), or a mix of compact with 10% randomly distributed (MIXD). The second column uses the covariance parameters as estimated with the method in the first column, and gives the data partition method for fixed effects estimation (FEFE), which was one of RAND, COMP, or MIXD. RMSE, RMSPE, CI90, and PI90 are described in the text. RMSE_1 and RMSE_2 are for the first (spatially independent) and second (spatially patterned) covariates, respectively. Similarly, CI90_1 and CI90_2 are for the first and second covariates, respectively.

COPE   FEFE   RMSE_1   RMSE_2   RMSPE     CI90_1   CI90_2   PI90
RAND   RAND   0.1407   0.4133   44.2245   0.8980   0.8540   0.9157
RAND   COMP   0.1244   0.2975   44.2076   0.9210   0.8490   0.9157
RAND   MIXD   0.1261   0.3382   44.2094   0.9160   0.8500   0.9157
COMP   RAND   0.1416   0.4020   41.0370   0.9000   0.9210   0.9053
COMP   COMP   0.1196   0.2858   41.0211   0.9170   0.8910   0.9053
COMP   MIXD   0.1214   0.3234   41.0228   0.9110   0.9040   0.9052
MIXD   RAND   0.1408   0.4154   41.0422   0.8950   0.8900   0.9058
MIXD   COMP   0.1197   0.2886   41.0247   0.9150   0.8800   0.9058
MIXD   MIXD   0.1212   0.3300   41.0264   0.9100   0.8810   0.9059
Table 2. Effect of partition sizes. Results are based on 1000 simulations, using the same simulation parameters as in Table 1. The first column gives the data partition size for covariance parameter estimation (COPE), and the second column gives the data partition size for fixed effects estimation (FEFE), while using the covariance parameters as estimated with the size in the first column. The columns RMSE_1, RMSE_2, RMSPE, CI90_1, CI90_2, and PI90 are the same as in Table 1. TIME_C is the average time, in seconds, for covariance parameter estimation, and TIME_F is the average time, in seconds, for fixed effects estimation.

COPE   FEFE   RMSE_1   RMSE_2   RMSPE    CI90_1   CI90_2   PI90     TIME_C   TIME_F
25     25     0.147    0.645    45.854   0.938    0.845    0.932     2.821    3.328
25     50     0.131    0.340    45.810   0.955    0.807    0.932     2.821    1.249
25     100    0.133    0.372    45.814   0.930    0.833    0.932     2.821    0.758
25     200    0.130    0.346    45.813   0.938    0.810    0.932     2.821    0.730
50     25     0.146    0.593    37.648   0.943    0.963    0.909     3.031    3.328
50     50     0.121    0.290    37.618   0.897    0.900    0.909     3.031    1.249
50     100    0.122    0.309    37.619   0.912    0.922    0.908     3.031    0.758
50     200    0.120    0.288    37.618   0.917    0.922    0.909     3.031    0.730
100    25     0.143    0.634    37.623   0.930    0.882    0.906     4.802    3.328
100    50     0.121    0.304    37.588   0.900    0.885    0.907     4.802    1.249
100    100    0.122    0.322    37.588   0.905    0.917    0.906     4.802    0.758
100    200    0.120    0.299    37.588   0.910    0.910    0.906     4.802    0.730
200    25     0.144    0.637    37.608   0.927    0.877    0.905    12.760    3.328
200    50     0.121    0.300    37.573   0.897    0.887    0.905    12.760    1.249
200    100    0.122    0.322    37.573   0.905    0.905    0.905    12.760    0.758
200    200    0.120    0.300    37.573   0.907    0.902    0.905    12.760    0.730
Table 3. CI90 for β_1 and β_2. Results are based on 1000 simulations, using three different variance estimators, given by their equation numbers. Eq (13) is theoretically correct, while Eq (14) is based on the empirical variation in β̂ among partitions, and Eq (15) is based on averaging the covariance matrices of β̂ among partitions.

                       β_1                            β_2
Part. Size    Eq (13)  Eq (14)  Eq (15)      Eq (13)  Eq (14)  Eq (15)
Table 4. Effect of the number of nearest neighbors on RMSPE and PI90. Results are based on 1000 simulations, using the same simulation parameters as in Table 1. The first column gives the number of nearest neighbors (nNN). Time is the average computing time in seconds. The subscript 1 indicates a local estimator of β̂ using Eq (6), while subscript 2 indicates the global estimator β̂_bd using Eq (17). The subscript 3 indicates the block predictor, Eq (21).

nNN   RMSPE_1   RMSPE_2   PI90_1   PI90_2   RMSPE_3   PI90_3   Time_1   Time_2   Time_3
25    43.9      40.5      0.908    0.907    0.0415    0.912     0.6      2.4      6.9
50    41.6      40.1      0.907    0.907    0.0406    0.907     1.2      3.0      7.5
100   40.6      39.9      0.907    0.907    0.0405    0.904     4.4      6.3     10.5
200   40.2      39.8      0.907    0.907    0.0401    0.905    23.9     25.7     29.0
Table 5. Comparison of 3 methods for fixed effects estimation and point prediction. Data were simulated from 1000 random locations with a 40 × 40 prediction grid. The first column gives the method, where Full uses the full 1000 × 1000 covariance matrix, and SPIN uses spatial partitioning with compact blocks of size 50 and 50 nearest-neighbor prediction points. NNGP uses default parameters from the R package for the conjugate prior method with a 25 × 25 search grid on phi and alpha. The columns RMSE_1, RMSE_2, RMSPE, CI90_1, CI90_2, and PI90 are the same as in Table 1. TIME is the average time, in seconds, for fixed effects estimation and prediction combined.

Method   RMSE_1   RMSE_2   RMSPE    CI90_1   CI90_2   PI90    TIME
Full     0.0088   0.0359   0.0854   0.893    0.903    0.899   110.2
SPIN     0.0090   0.0380   0.0854   0.908    0.913    0.906     3.0
NNGP     0.0090   0.0381   0.0866   0.888    0.881    0.905    21.8
Table 6. Fixed effects table for the Mid-Columbia River data. The se(β̂_bd) is the standard error using Eq (13). The z-value is the estimate divided by its standard error. Prob(>|z|) is the probability of getting the fixed effect estimate if it were truly 0, assuming a standard normal distribution.

Effect               β̂_bd      se(β̂_bd)   z-value    Prob(>|z|)
Intercept            30.9324    5.8816      5.2592    < 0.00001
Elevation^1          -4.0312    0.5052     -7.9787    < 0.00001
Slope^2              -0.1504    0.0289     -5.2009    < 0.00001
Lakes^3               0.5287    0.1003      5.2690    < 0.00001
Precipitation^4      -0.0018    0.0004     -4.4639      0.00001
Northing^5           -0.6315    0.3002     -2.1038      0.03565
Flow^6               -0.1118    0.0217     -5.1429    < 0.00001
Drainage Area^7       0.0363    0.0236      1.5400      0.12388
Canopy^8             -0.0238    0.0033     -7.1280    < 0.00001
Air Temperature^9     0.4538    0.0119     38.2106    < 0.00001
Discharge^10          0.0031    0.0140      0.2227      0.82385

^1 Elevation (m/1000) at sensor site. ^2 Slope (100m/m) of stream reach at sensor site. ^3 Percentage of watershed upstream of sensor site composed of lake or reservoir surfaces. ^4 Mean annual precipitation (mm) in watershed upstream of sensor site. ^5 Albers equal-area northing coordinate (10 km) of sensor site. ^6 Percentage of base flow to total flow at sensor site. ^7 Drainage area (10,000 km^2) upstream of sensor site. ^8 Riparian canopy coverage (%) of 1 km stream reach encompassing the sensor site. ^9 Mean annual August air temperature (°C). ^10 Mean annual August discharge (m^3/sec).
Acknowledgments

The project received financial support through Interagency Agreement DW-13-92434601-0 from the U.S. Environmental Protection Agency (EPA), and through Interagency Agreement 81603 from the Bonneville Power Administration (BPA), with the National Marine Fisheries Service, NOAA. The findings and conclusions in the paper are those of the author(s) and do not necessarily represent the views of the reviewers nor the EPA, BPA, and National Marine Fisheries Service, NOAA. Any use of trade, product, or firm names does not imply an endorsement by the US Government.

Data and Software Availability

The SPIN method has been implemented in the spmodel R package: https://cran.r-project.org/web/packages/spmodel/index.html. The example data can be downloaded from the GitHub repository: https://github.com/jayverhoef/midColumbiaLSN.git
References

1. Cressie NAC. Statistics for Spatial Data, Revised Edition. New York: John Wiley & Sons; 1993.
2. Stein ML. A modeling approach for large spatial datasets. Journal of the Korean Statistical Society. 2008;37(1):3-10.
3. Chiles JP, Delfiner P. Geostatistics: Modeling Spatial Uncertainty. New York: John Wiley & Sons; 1999.
4. Patterson HD, Thompson R. Recovery of inter-block information when block sizes are unequal. Biometrika. 1971;58:545-554.
5. Patterson H, Thompson R. Maximum likelihood estimation of components of variance. In: Proceedings of the 8th International Biometric Conference. Biometric Society, Washington, DC; 1974. p. 197-207.
6. Mardia KV, Marshall R. Maximum likelihood estimation of models for residual covariance in spatial regression. Biometrika. 1984;71(1):135-146.
7. Heyde CC. A quasi-likelihood approach to the REML estimating equations. Statistics & Probability Letters. 1994;21:381-384.
8. Cressie N, Lahiri SN. Asymptotics for REML estimation of spatial covariance parameters. Journal of Statistical Planning and Inference. 1996;50:327-341.
9. Cressie N. The origins of kriging. Mathematical Geology. 1990;22:239-252.
10. Johnston K, Ver Hoef JM, Krivoruchko K, Lucas N. Using ArcGIS Geostatistical Analyst. vol. 300. Esri Redlands, CA; 2001.
11. Ver Hoef JM, Peterson E. A moving average approach for spatial statistical models of stream networks (with discussion). Journal of the American Statistical Association. 2010;105:6-18.
12. Zimmerman DL, Cressie N. Mean squared prediction error in the spatial linear model with estimated covariance parameters. Annals of the Institute of Statistical Mathematics. 1992;44:27-43.
13. Sun Y, Li B, Genton MG. Geostatistics for large datasets. In: Porcu E, Montero JM, Schlather M, editors. Advances and Challenges in Space-Time Modelling of Natural Events. Springer; 2012. p. 55-77.
14. Bradley JR, Cressie N, Shi T. A comparison of spatial predictors when datasets could be very large. Statistics Surveys. 2016;10:100-131.
15. Liu H, Ong YS, Shen X, Cai J. When Gaussian process meets big data: A review of scalable GPs. arXiv preprint arXiv:1807.01065. 2018.
16. Heaton MJ, Datta A, Finley AO, Furrer R, Guinness J, Guhaniyogi R, et al. A case study competition among methods for analyzing large spatial data. Journal of Agricultural, Biological and Environmental Statistics. 2019;24(3):398-425.
17. Kammann EE, Wand MP. Geoadditive models. Journal of the Royal Statistical Society: Series C (Applied Statistics). 2003;52(1):1-18.
18. Ruppert D, Wand MP, Carroll RJ. Semiparametric Regression. Cambridge, UK: Cambridge University Press; 2003.
19. Wood SN, Li Z, Shaddick G, Augustin NH. Generalized additive models for gigadata: modeling the UK black smoke network daily data. Journal of the American Statistical Association. 2017;112(519):1199-1210.
20. Cressie N, Johannesson G. Spatial prediction for massive datasets. In: Mastering the Data Explosion in the Earth and Environmental Sciences: Proceedings of the Australian Academy of Science Elizabeth and Frederick White Conference. Canberra, Australia: Australian Academy of Science; 2006. p. 11.
21. Cressie N, Johannesson G. Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2008;70(1):209-226.
22. Kang EL, Cressie N. Bayesian inference for the spatial random effects model. Journal of the American Statistical Association. 2011;106(495):972-983.
23. Katzfuss M, Cressie N. Spatio-temporal smoothing and EM estimation for massive remote-sensing data sets. Journal of Time Series Analysis. 2011;32(4):430-446.
24. Banerjee S, Gelfand AE, Finley AO, Sang H. Gaussian predictive process models for large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2008;70(4):825-848.
25. Finley AO, Sang H, Banerjee S, Gelfand AE. Improving the performance of predictive process modeling for large datasets. Computational Statistics & Data Analysis. 2009;53(8):2873-2884.
26. Nychka D, Bandyopadhyay S, Hammerling D, Lindgren F, Sain S. A multiresolution Gaussian process model for the analysis of large spatial datasets. Journal of Computational and Graphical Statistics. 2015;24(2):579-599.
27. Katzfuss M. A multi-resolution approximation for massive spatial datasets. Journal of the American Statistical Association. 2017;112(517):201-214.
28. Furrer R, Genton MG, Nychka D. Covariance tapering for interpolation of large spatial datasets. Journal of Computational and Graphical Statistics. 2006;15(3):502-523.
29. Kaufman CG, Schervish MJ, Nychka DW. Covariance tapering for likelihood-based estimation in large spatial data sets. Journal of the American Statistical Association. 2008;103(484):1545-1555.
30. Stein ML. Statistical properties of covariance tapers. Journal of Computational and Graphical Statistics. 2013;22(4):866-885.
31. Lindgren F, Rue H, Lindström J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2011;73(4):423-498.
32. Bakka H, Rue H, Fuglstad GA, Riebler A, Bolin D, Illian J, et al. Spatial modeling with R-INLA: A review. Wiley Interdisciplinary Reviews: Computational Statistics. 2018;10(6):e1443.
33. Vecchia AV. Estimation and model identification for continuous spatial processes. Journal of the Royal Statistical Society: Series B (Methodological). 1988;50(2):297-312.
34. Stein ML, Chi Z, Welty LJ. Approximating likelihoods for large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2004;66(2):275-296.
35. Datta A, Banerjee S, Finley AO, Gelfand AE. Hierarchical nearest-neighbor Gaussian process models for large geostatistical datasets. Journal of the American Statistical Association. 2016;111(514):800-812.
36. Datta A, Banerjee S, Finley AO, Gelfand AE. On nearest-neighbor Gaussian process models for massive spatial data. Wiley Interdisciplinary Reviews: Computational Statistics. 2016;8(5):162-171.
37. Finley AO, Datta A, Cook BC, Morton DC, Andersen HE, Banerjee S. Applying nearest neighbor Gaussian processes to massive spatial data sets: forest canopy height prediction across Tanana Valley, Alaska. arXiv preprint arXiv:1702.00434. 2017.
38. Finley AO, Datta A, Cook BD, Morton DC, Andersen HE, Banerjee S. Efficient algorithms for Bayesian nearest neighbor Gaussian processes. Journal of Computational and Graphical Statistics. 2019; p. 1-14.
39. Katzfuss M, Guinness J. A general framework for Vecchia approximations of Gaussian processes. arXiv preprint arXiv:1708.06302. 2017.
40. Katzfuss M, Guinness J, Gong W, Zilber D. Vecchia approximations of Gaussian-process predictions. arXiv preprint arXiv:1805.03309. 2018.
41. Zilber D, Katzfuss M. Vecchia-Laplace approximations of generalized Gaussian processes for big non-Gaussian spatial data. arXiv preprint arXiv:1906.07828. 2019.
42. Ver Hoef JM. Kriging models for linear networks and non-Euclidean distances: Cautions and solutions. Methods in Ecology and Evolution. 2018;9(6):1600-1613.
43. Haas TC. Lognormal and moving window methods of estimating acid deposition. Journal of the American Statistical Association. 1990;85(412):950-963.
44. Haas TC. Local prediction of a spatio-temporal process with an application to wet sulfate deposition. Journal of the American Statistical Association. 1995;90(432):1189-1199.
45. Curriero FC, Lele S. A composite likelihood approach to semivariogram estimation. Journal of Agricultural, Biological, and Environmental Statistics. 1999; p. 9-28.
46. Liang F, Cheng Y, Song Q, Park J, Yang P. A resampling-based stochastic approximation method for analysis of large geostatistical data. Journal of the American Statistical Association. 2013;108(501):325-339.
47. Eidsvik J, Shaby BA, Reich BJ, Wheeler M, Niemi J. Estimation and prediction in spatial models with block composite likelihoods. Journal of Computational and Graphical Statistics. 2014;23(2):295-315.
48. Barbian MH, Assunção RM. Spatial subsemble estimator for large geostatistical data. Spatial Statistics. 2017;22:68-88.
49. Varin C, Reid N, Firth D. An overview of composite likelihood methods. Statistica Sinica. 2011; p. 5-42.
50. Park C, Huang JZ, Ding Y. Domain decomposition approach for fast Gaussian process regression of large spatial data sets. Journal of Machine Learning Research. 2011;12(May):1697-1728.
51. Park C, Huang JZ. Efficient computation of Gaussian process regression for large spatial data sets by patching local Gaussian processes. Journal of Machine Learning Research. 2016;17(174):1-29.
52. Heaton MJ, Christensen WF, Terres MA. Nonstationary Gaussian process models using spatial hierarchical clustering from finite differences. Technometrics. 2017;59(1):93-101.
53. Park C, Apley D. Patchwork kriging for large-scale Gaussian process regression. The Journal of Machine Learning Research. 2018;19(1):269-311.
54. Caragea P, Smith RL. Approximate likelihoods for spatial processes. Preprint. 2006; https://rls.sites.oasis.unc.edu/postscript/rs/approxlh.pdf.
55. Isaak DJ, Wenger SJ, Peterson EE, Ver Hoef JM, Nagel DE, Luce CH, et al. The NorWeST summer stream temperature model and scenarios for the western U.S.: a crowd-sourced database and new geospatial tools foster a user community and predict broad climate warming of rivers and streams. Water Resources Research. 2017;53(11):9181-9205.
56. Ver Hoef JM, Peterson EE, Theobald D. Spatial statistical models that use flow and stream distance. Environmental and Ecological Statistics. 2006;13(1):449-464.
57. Barry RP, Ver Hoef JM. Blackbox kriging: spatial prediction without specifying variogram models. Journal of Agricultural, Biological, and Environmental Statistics. 1996;1(3):297-322.
58. Ver Hoef JM, Barry RP. Constructing and fitting models for cokriging and multivariable spatial prediction. Journal of Statistical Planning and Inference. 1998;69(2):275-294.
59. Higdon D. A process-convolution approach to modelling temperatures in the North Atlantic Ocean (disc: p191-192). Environmental and Ecological Statistics. 1998;5:173-190.
60. Higdon D, Swall J, Kern J. Non-stationary spatial modeling. In: Bernardo JM, Berger JO, Dawid AP, Smith AFM, editors. Bayesian Statistics 6 - Proceedings of the Sixth Valencia International Meeting. Clarendon Press [Oxford University Press]; 1999. p. 761-768.
61. Webster R, Oliver MA. Geostatistics for Environmental Scientists. Chichester, England: John Wiley & Sons; 2007.
62. Besag J. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society: Series D (The Statistician). 1975;24(3):179-195.
63. Guha S, Hafen R, Rounds J, Xia J, Li J, Xi B, et al. Large complex data: divide and recombine (D&R) with RHIPE. Stat. 2012;1(1):53-67.
Estimation of Fur Seal Pup Populations by Randomized Sampling. D G Chapman, A M Johnson, Transactions of the American Fisheries Society. 973Chapman DG, Johnson AM. Estimation of Fur Seal Pup Populations by Randomized Sampling. Transactions of the American Fisheries Society. 1968;97(3):264-270.
R: A Language and Environment for Statistical Computing. R Core Team, Vienna, Austria: R Foundation for Statistical ComputingR Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2020.
Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration. J Elseberg, S Magnenat, R Siegwart, A Nüchter, Journal of Software Engineering for Robotics. 31Elseberg J, Magnenat S, Siegwart R, Nüchter A. Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration. Journal of Software Engineering for Robotics. 2012;3(1):2-12.
Some methods for classification and analysis of multivariate observations. J B Macqueen, Proc. of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Cam LML, Neyman Jof the Fifth Berkeley Symposium on Mathematical Statistics and ProbabilityUniversity of California Press1MacQueen JB. Some methods for classification and analysis of multivariate observations. In: Cam LML, Neyman J, editors. Proc. of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. vol. 1. University of California Press; 1967. p. 281-297.
R package for nearest neighbor Gaussian process models. A O Finley, A Datta, S Banerjee, arXiv:200109111 [stat]. 2020Finley AO, Datta A, Banerjee S. R package for nearest neighbor Gaussian process models. arXiv:200109111 [stat]. 2020;.
SSN: an R package for spatial statistical modeling on stream networks. J M Ver Hoef, E E Peterson, D Clifford, R Shah, Journal of Statistical Software. 563Ver Hoef JM, Peterson EE, Clifford D, Shah R. SSN: an R package for spatial statistical modeling on stream networks. Journal of Statistical Software. 2014;56(3):1-45.
Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams. D J Isaak, J M Ver Hoef, E E Peterson, D L Horan, D E Nagel, Canadian Journal of Fisheries and Aquatic Sciences. 742Isaak DJ, Ver Hoef JM, Peterson EE, Horan DL, Nagel DE. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams. Canadian Journal of Fisheries and Aquatic Sciences. 2017;74(2):147-156.
| [
"https://github.com/jayverhoef/midColumbiaLSN.git"
] |
[
"Z-Z ′ mixing and oblique corrections in an SU(3) × U(1) model",
"Z-Z ′ mixing and oblique corrections in an SU(3) × U(1) model"
] | [
"James T Liu \nInstitute of Field Physics\nDepartment of Physics and Astronomy\nUniversity of North Carolina\nChapel Hill27599-3255NCUSA\n",
"Daniel Ng \nTRIUMF\n4004 Wesbrook Mall VancouverV6T 2A3B.CCanada\n"
] | [
"Institute of Field Physics\nDepartment of Physics and Astronomy\nUniversity of North Carolina\nChapel Hill27599-3255NCUSA",
"TRIUMF\n4004 Wesbrook Mall VancouverV6T 2A3B.CCanada"
] | [] | We address the effects of the new physics predicted by the SU(3) L × U(1) X model on the precision electroweak measurements. We consider both Z-Z ′ mixing and one-loop oblique corrections, using a combination of neutral gauge boson mixing parameters and the parameters S and T . At tree level, we obtain strong limits on the Z-Z ′ mixing angle, −0.0006 < θ < 0.0042 and find M Z 2 > 490GeV (both at 90% C.L.). The radiative corrections lead to T > 0 if the new Higgs are heavy, which bounds the Higgs masses to be less than a few TeV. S can have either sign depending on the Higgs mass spectrum. Future experiments may soon place strong restrictions on this model, thus making it eminently testable. | 10.1007/bf01574173 | [
"https://export.arxiv.org/pdf/hep-ph/9302271v3.pdf"
] | 15,095,172 | hep-ph/9302271 | 98d58fe11067d2db31715733b8ff7f693233961d |
Z-Z ′ mixing and oblique corrections in an SU(3) × U(1) model
arXiv:hep-ph/9302271v3 30 Jun 1993 February 1993 revised, June 1993
James T Liu
Institute of Field Physics
Department of Physics and Astronomy
University of North Carolina
Chapel Hill27599-3255NCUSA
Daniel Ng
TRIUMF
4004 Wesbrook Mall VancouverV6T 2A3B.CCanada
Z-Z ′ mixing and oblique corrections in an SU(3) × U(1) model
arXiv:hep-ph/9302271v3 30 Jun 1993 February 1993 revised, June 1993
We address the effects of the new physics predicted by the SU(3)_L × U(1)_X model on the precision electroweak measurements. We consider both Z-Z′ mixing and one-loop oblique corrections, using a combination of neutral gauge boson mixing parameters and the parameters S and T. At tree level, we obtain strong limits on the Z-Z′ mixing angle, −0.0006 < θ < 0.0042, and find M_Z2 > 490 GeV (both at 90% C.L.). The radiative corrections lead to T > 0 if the new Higgs are heavy, which bounds the Higgs masses to be less than a few TeV. S can have either sign depending on the Higgs mass spectrum. Future experiments may soon place strong restrictions on this model, thus making it eminently testable.
I. INTRODUCTION
Recently a model based on the gauge group SU(3)_L × U(1)_X has been proposed as a possible explanation of the family replication question [1]. By matching the gauge coupling constants at the electroweak scale [2], the mass of the new heavy neutral gauge boson, Z′, is bounded to be less than 2.2 TeV and the mass upper bound for the new charged gauge bosons, Y±± and Y±, is 435 GeV [3]. Since Y++ and Y+ carry two units of lepton number, they are called dileptons. Unlike most extensions of the standard model, in which the masses of the new gauge bosons are not bounded from above, this model would be either realized or ruled out in the future high energy colliders such as the superconducting supercollider and the next linear collider.

The new Z′, by mixing with the standard model neutral gauge boson Z, modifies the neutral current parameters as well as the ρ-parameter [4]. The dileptons, Y±± and Y±, and the new charged Higgs, H±± and H±, on the other hand, do not participate directly in the precision LEP experiments [5] nor the neutrino scattering experiments [6]. Instead, they only enter radiatively, mainly via their oblique corrections to the W± and Z propagators [7,8,9,10,11,12]. Nevertheless, such radiative corrections may be comparable to the tree level corrections due to the Z-Z′ mixing. Thus we treat both cases in the following.
If the masses of the dileptons are degenerate, we may expect the oblique corrections to vanish. However, the mass degeneracy is lifted when SU(2)_L × U(1)_Y breaks into U(1)_Q; thus the mass squared splitting would be on the order of M_W². As a result, the oblique corrections to the parameters S and T [7] are expected to be on the order of (1/π)(M_W²/M_Y++²), where M_Y++ is the mass of Y±±. In addition, oblique corrections due to the new heavy charged Higgs, H±± and H±, are induced by the small mixing between Higgs multiplets. The contributions have the general form (1/π)(M_W²/M_Y++²)(m_H²/M_Y++²), where m_H is the mass of the new charged Higgs. Hence, the heavy charged Higgs contributions would be important even when the dilepton mass splitting is small. Our analysis in this paper concentrates on both tree level and one-loop oblique corrections to the standard model due to the new physics of the SU(3)_L × U(1)_X model. For the dileptons and the new Higgs, which only contribute radiatively, we use the S, T and U parameters.

However the effects of the Z′, which enters at tree level, cannot be fully incorporated into this formalism, and may instead be parametrized by a Z-Z′ mixing angle as well as the mass of the heavy Z_2. We thus use five parameters to describe the new physics: the two Z′ parameters and the three oblique ones. Starting with a discussion of tree level mixing, we perform a five parameter fit to experimental data to put strong limits on the Z-Z′ mixing angle. We then discuss the consequences of the fit on the other particles by carrying out a complete one-loop calculation of S and T for dilepton gauge bosons and the new Higgs bosons. The new quarks, which are SU(2) singlets, do not contribute.
II. TREE LEVEL MIXING
We first outline the model, following the notation given in [2]. The fermions transform
under SU(3)_c × SU(3)_L × U(1)_X according to
\[
\begin{aligned}
\psi_{1,2,3} &= \begin{pmatrix} e \\ \nu_e \\ e^c \end{pmatrix},\;
\begin{pmatrix} \mu \\ \nu_\mu \\ \mu^c \end{pmatrix},\;
\begin{pmatrix} \tau \\ \nu_\tau \\ \tau^c \end{pmatrix} : (1, 3^*, 0), \quad &(2.1a)\\
Q_{1,2} &= \begin{pmatrix} u \\ d \\ D \end{pmatrix},\;
\begin{pmatrix} c \\ s \\ S \end{pmatrix} : (3, 3, -\tfrac{1}{3}), \quad &(2.1b)\\
Q_3 &= \begin{pmatrix} b \\ t \\ T \end{pmatrix} : (3, 3^*, \tfrac{2}{3}), \quad &(2.1c)\\
d^c,\, s^c,\, b^c &: (3^*, 1, \tfrac{1}{3}), \quad &(2.1d)\\
u^c,\, c^c,\, t^c &: (3^*, 1, -\tfrac{2}{3}), \quad &(2.1e)\\
D^c,\, S^c &: (3^*, 1, \tfrac{4}{3}), \quad &(2.1f)\\
T^c &: (3^*, 1, -\tfrac{5}{3}). \quad &(2.1g)
\end{aligned}
\]
where D, S and T are new quarks with charges −4/3, −4/3 and 5/3 respectively. The minimal Higgs multiplets required for the symmetry breaking hierarchy and fermion masses are given by
\[
\begin{aligned}
\Phi &= \begin{pmatrix} \phi^{++} \\ \phi^{+} \\ \phi^{0} \end{pmatrix} : (1, 3, 1), \quad &(2.2a)\\
\Delta &= \begin{pmatrix} \Delta_1^{+} \\ \Delta^{0} \\ \Delta_2^{-} \end{pmatrix} : (1, 3, 0), \quad &(2.2b)\\
\Delta' &= \begin{pmatrix} \Delta'^{0} \\ \Delta'^{-} \\ \Delta'^{--} \end{pmatrix} : (1, 3, -1), \quad &(2.2c)
\end{aligned}
\]
and
\[
\eta = \begin{pmatrix}
\eta_1^{++} & \eta_1^{+}/\sqrt{2} & \eta^{0}/\sqrt{2} \\
\eta_1^{+}/\sqrt{2} & \eta_1^{0} & \eta^{-}/\sqrt{2} \\
\eta^{0}/\sqrt{2} & \eta^{-}/\sqrt{2} & \eta^{--}
\end{pmatrix} : (1, 6, 0). \quad (2.2d)
\]
The non-zero vacuum expectation value (VEV) of φ⁰, u/√2, breaks SU(3)_L × U(1)_X into SU(2)_L × U(1)_Y.
The SU(2) components of ∆ and ∆ ′ then behave like the ordinary Higgs doublets of a two-Higgs standard model. The sextet, η, is required to obtain a realistic lepton mass spectrum. For simplicity, we will assume its VEVs are zero. As SU(3) L × U(1) X is broken into SU(2) L × U(1) Y , the sextet will decompose into an SU(2) triplet, an SU(2) doublet and a charged SU(2) singlet. We will also assume that the mass splitting of these scalars within their multiplets is small; hence their contributions to S and T will be negligible.
As SU(2)_L × U(1)_Y is broken by the VEVs of Δ⁰ and Δ′⁰, v/√2 and v′/√2, they will provide masses for the standard model gauge bosons, W± and Z. The VEVs also induce Z-Z′ mixing as well as the mass splitting of Y±± and Y±. Hence we obtain the masses for the charged gauge bosons,
\[
\begin{aligned}
M_W^2 &= \tfrac{1}{4}\, g^2 (v^2 + v'^2), \quad &(2.3a)\\
M_{Y^+}^2 &= \tfrac{1}{4}\, g^2 (u^2 + v^2), \quad &(2.3b)\\
M_{Y^{++}}^2 &= \tfrac{1}{4}\, g^2 (u^2 + v'^2), \quad &(2.3c)
\end{aligned}
\]
and the neutral gauge boson mass-squared matrix in the (Z, Z′) basis,
\[
M^2 = \begin{pmatrix} M_Z^2 & M_{ZZ'}^2 \\ M_{ZZ'}^2 & M_{Z'}^2 \end{pmatrix}, \quad (2.4)
\]
with
\[
\begin{aligned}
M_Z^2 &= \frac{g^2}{4\cos^2\theta_W}\,(v^2 + v'^2), \quad &(2.5a)\\
M_{Z'}^2 &= \frac{1}{3}\, g^2 \left[ \frac{\cos^2\theta_W}{1 - 4\sin^2\theta_W}\, u^2 + \frac{1 - 4\sin^2\theta_W}{4\cos^2\theta_W}\, v^2 + \frac{(1 + 2\sin^2\theta_W)^2}{4\cos^2\theta_W (1 - 4\sin^2\theta_W)}\, v'^2 \right], \quad &(2.5b)\\
M_{ZZ'}^2 &= \frac{g^2}{4\sqrt{3}} \left[ \frac{\sqrt{1 - 4\sin^2\theta_W}}{\cos^2\theta_W}\, v^2 - \frac{1 + 2\sin^2\theta_W}{\cos^2\theta_W \sqrt{1 - 4\sin^2\theta_W}}\, v'^2 \right]. \quad &(2.5c)
\end{aligned}
\]
The mass eigenstates are
\[ Z_1 = \cos\theta\, Z - \sin\theta\, Z', \quad (2.6a) \]
and
\[ Z_2 = \sin\theta\, Z + \cos\theta\, Z', \quad (2.6b) \]
where the mixing angle is given by
\[ \tan^2\theta = \frac{M_Z^2 - M_{Z_1}^2}{M_{Z_2}^2 - M_Z^2}, \quad (2.7) \]
with M_Z1 and M_Z2 being the masses for Z_1 and Z_2. Here, Z_1 corresponds to the standard model neutral gauge boson and Z_2 corresponds to the additional neutral gauge boson. For small mixing, we find θ ≈ M²_ZZ′/M²_Z2 ≪ 1. Since M_Z1 has been precisely determined by the LEP experiments, the new contributions are parametrized by the two Z′ parameters, M_Z2 and θ. The structure of the minimal Higgs sector gives additional constraints on the allowed region of (M_Z2, θ) parameter space, and forces θ ≪ 1 for M_Z2 ≫ M_Z1. However, we will not make use of this constraint so as to allow for extended Higgs sectors.
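To make the mixing relation concrete, here is a minimal numerical sketch (ours, not from the paper; all input values are illustrative) that diagonalizes the neutral mass matrix of Eq. (2.4) and checks Eq. (2.7):

```python
# Diagonalize the 2x2 neutral mass matrix and verify the mixing-angle relation.
# The matrix entries below are made-up inputs chosen so that M_Z1 stays near 91 GeV.
import numpy as np

M2_Z, M2_Zp, M2_ZZp = 91.19**2, 1500.0**2, -(100.0)**2   # GeV^2, illustrative only
M2 = np.array([[M2_Z, M2_ZZp], [M2_ZZp, M2_Zp]])

eigvals, eigvecs = np.linalg.eigh(M2)        # eigenvalues in ascending order
M2_Z1, M2_Z2 = eigvals                       # light and heavy mass eigenstates
theta = np.arctan2(-eigvecs[1, 0], eigvecs[0, 0])  # Z1 = cos(theta) Z - sin(theta) Z'

# Eq. (2.7): tan^2(theta) = (M_Z^2 - M_Z1^2) / (M_Z2^2 - M_Z^2)
print(np.tan(theta)**2, (M2_Z - M2_Z1) / (M2_Z2 - M2_Z))  # the two values agree
```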
While we have only been discussing tree level relations so far, it is important to include both the standard model and new radiative corrections as well. We take the oblique corrections into account by using the starred functions of Kennedy and Lynn [13]. Following [7], the effect of new heavy particles on the starred functions may be expressed in terms of S, T and U. The effects of the tree level Z-Z′ mixing and the presence of the new Z_2 gauge boson can then be expressed as shifts of the starred functions. We ignore effects due to the combination of both mixing and radiative corrections, as they are suppressed.

In order to perform a fit to experiment, we need to express the SU(3)_L × U(1)_X model predictions in terms of both tree level Z′ parameters, (M_Z2, θ), and one-loop parameters, (S, T, U). This is most easily done by first calculating the standard model observables with the addition of S, T and U and then shifting the results by the tree level parameters. We consider both (i) Z-pole experiments, which are sensitive to the mixing only, and (ii) low energy experiments, which are sensitive to both mixing and the presence of the Z_2. The experimental values that we use for the five parameter fit, along with the standard model predictions (for reference values of m_t = 150 GeV and m_H = 1000 GeV [7]), are given in Table I. For the Z-pole data, M_W/M_Z and Q_W(Cs), we use the values given in Ref. [14], while g²_L and g²_R are given in Ref. [15]. We find it convenient to approximate the top quark and standard model Higgs mass dependence through shifts in S, T and U.

The new contributions to the measurable quantities due to the presence of the Z_2 and Z-Z′ mixing are given in the appendix. For the (S, T, U) dependence of the observables, we use the results given in Ref. [7]. The result of the fit in the (M_Z2, θ) plane (with S, T and U unrestricted) is presented in Fig. 1 and indicates that Z-Z′ mixing is highly restricted. This is partially due to the large couplings of the Z′ to quarks. At 90% C.L., we find −0.0006 < θ < 0.0042 and M_Z2 > 490 GeV. Note the latter restriction is comparable to that obtained from tree level FCNC considerations in the quark sector.
Although not used in the fit, the minimal Higgs sector leads to further restrictions on the Z_2 mass and mixing. The constraint on the Z-Z′ mixing is shown by the dotted line in Fig. 1. Due to the symmetry breaking hierarchy, u ≫ v, v′, the dilepton and Z_2 masses are related. Using the limit M_Y+ > 300 GeV from polarized muon decay [16,17], we find M_Z2 > 1.4 TeV, as indicated on the figure. Because of the upper bound on SU(3)_L × U(1)_X unification, M_Z2 must be below 2.2 TeV, thus giving a narrow window for the allowed Z_2 mass.

The presence of the Z_2 gauge boson affects the fit in the S-T plane as shown in Fig. 2. We see that the tree level mixing may appear as effective contributions to S and T. The dominant effect is to give a positive contribution to T due to the downshift in the Z_1 mass.
The large region of negative T corresponds to high Z_2 mass and small mixing. Imposing an upper bound on M_Z2 will affect the fit in this region. At 90% C.L. we find
\[ -1.34 \le S \le 0.28, \qquad -3.07 \le T \le 0.45, \quad (2.8) \]
keeping in mind that the errors are non-Gaussian. Although the definitions of S, T and U are model independent, these numbers are valid only for the SU(3)_L × U(1)_X model due to the tree level effects. We use these results in the next section to constrain the new charged Higgs masses.
III. RADIATIVE CORRECTIONS
The radiative corrections arising from the dileptons and the new heavy Higgs are process independent and may be parametrized by S, T and U. Following the notation of [7], we define
\[
\begin{aligned}
S &= 16\pi \left[ \Pi'_{33}(0) - \Pi'_{3Q}(0) \right], \quad &(3.1a)\\
T &= \frac{4\pi}{\sin^2\theta_W\, M_W^2} \left[ \Pi_{11}(0) - \Pi_{33}(0) \right], \quad &(3.1b)\\
U &= 16\pi \left[ \Pi'_{11}(0) - \Pi'_{33}(0) \right]. \quad &(3.1c)
\end{aligned}
\]
In the above, the vacuum polarizations, Π(q²), and their derivatives with respect to q², Π′(q²), include only new physics beyond the standard model. Implicit in this parametrization is the assumption that the scale of new physics is much larger than M_Z. The SU(3)_L × U(1)_X model predicts three classes of new particles: the new quarks D, S and T, new gauge bosons, Y±±, Y± and Z′, and new Higgs scalars. Since the new quarks are SU(2) singlets, they do not enter into the oblique corrections, which are only sensitive to SU(2) electroweak physics. Similarly, Z′ will not contribute except through Z-Z′ mixing, which was addressed in the previous section. Thus in the limit of small mixing, only dileptons and new Higgs particles will contribute radiatively to S and T (in addition to the deviations of the top quark and standard model Higgs masses from their reference values). Because of spontaneous symmetry breaking, we must examine the new gauge and Higgs sector simultaneously.

In order to simplify the analysis of the Higgs sector, we assume that the sextet η does not acquire a VEV. As a result it can be treated separately from the dileptons, and we now focus on the three SU(3) triplet Higgs, (2.2a-c). These three Higgs contain a total of 18 states, of which 8 are "eaten up" by the Higgs mechanism to give masses to the various gauge bosons. Ignoring Z-Z′ mixing, the SU(2) doublets coming from Δ and Δ′ form a standard two-Higgs model with tan β = v′/v and five physical Higgs particles, h±, ... The new physical Higgs combinations are
\[
\begin{aligned}
H^{\pm\pm} &= \sin\alpha_{++}\, \phi^{\pm\pm} + \cos\alpha_{++}\, \Delta'^{\pm\pm}, \quad &(3.2a)\\
H^{\pm} &= \sin\alpha_{+}\, \phi^{\pm} + \cos\alpha_{+}\, \Delta_2^{\pm}, \quad &(3.2b)\\
H^{0} &= \sqrt{2}\, \mathrm{Re}\, \phi^{0}, \quad &(3.2c)
\end{aligned}
\]
where we have defined the ratio of VEVs as tan α++ = v′/u and tan α+ = v/u. These two VEV angles and tan β are not independent, but are related by tan β = tan α++ / tan α+.

Orthogonal to these states are the would-be Goldstone bosons
\[
\begin{aligned}
\pi^{\pm\pm} &= \cos\alpha_{++}\, \phi^{\pm\pm} - \sin\alpha_{++}\, \Delta'^{\pm\pm}, \quad &(3.3a)\\
\pi^{\pm} &= \cos\alpha_{+}\, \phi^{\pm} - \sin\alpha_{+}\, \Delta_2^{\pm}, \quad &(3.3b)\\
\pi^{0} &= \sqrt{2}\, \mathrm{Im}\, \phi^{0}, \quad &(3.3c)
\end{aligned}
\]
corresponding to Y±±, Y± and Z′ respectively. Again we have assumed the Z-Z′ mixing is not important for one-loop oblique corrections.
Since the two-Higgs model has already been considered in detail (see for example Ref. [19,20]), we will only focus on the dileptons and additional Higgs. Assuming the symmetry breaking hierarchy u ≫ {v, v′}, we see that {tan α++, tan α+} ≪ 1, so that H±± and H± are mostly SU(2) singlets, and the would-be Goldstone bosons giving masses to the dilepton doublet (Y++, Y+) are mostly contained in the Φ doublet (φ++, φ+). Although the mixings between the SU(2) singlet and doublet scalars are small, the oblique corrections can be important as their contributions are not protected by the custodial symmetry.

Let us first consider only the contributions from the dilepton gauge bosons (Y++, Y+), which corresponds to the limit {tan α++, tan α+} → 0. In this limit, the new Higgs, (3.2a-c), are all SU(2) singlets and only the dilepton doublet contributes to S, T and U. We find
\[
\begin{aligned}
S &= -\frac{9}{4\pi}\, \ln\frac{M_{Y^+}^2}{M_{Y^{++}}^2}, \quad &(3.4a)\\
T &= \frac{3}{16\pi \sin^2\theta_W\, M_W^2}\, F(M_{Y^+}^2, M_{Y^{++}}^2), \quad &(3.4b)\\
U &= -\frac{1}{4\pi} \left[ -\frac{19 M_{Y^+}^4 - 26 M_{Y^+}^2 M_{Y^{++}}^2 + 19 M_{Y^{++}}^4}{3 (M_{Y^+}^2 - M_{Y^{++}}^2)^2} + \frac{3 M_{Y^+}^6 - M_{Y^+}^4 M_{Y^{++}}^2 - M_{Y^+}^2 M_{Y^{++}}^4 + 3 M_{Y^{++}}^6}{(M_{Y^+}^2 - M_{Y^{++}}^2)^3}\, \ln\frac{M_{Y^+}^2}{M_{Y^{++}}^2} \right], \quad &(3.4c)
\end{aligned}
\]
where F is defined by
\[ F(M_1^2, M_2^2) = M_1^2 + M_2^2 - \frac{2 M_1^2 M_2^2}{M_1^2 - M_2^2}\, \ln\frac{M_1^2}{M_2^2}. \quad (3.5) \]
Since F(M₁², M₂²) ≥ 0 and vanishes only when the masses are degenerate, we see that T ≥ 0 and parametrizes the size of custodial SU(2) breaking. S vanishes when the dileptons are degenerate, but can pick up either sign when the masses are split. While U does not play as important a role in confronting experiment [7], we note that the dilepton doublet gives U ≤ 0. This result is the opposite of that found for a chiral fermion doublet, where U is non-negative.
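As a quick illustration of these properties, the following sketch (ours, with assumed reference values for sin²θ_W and M_W) evaluates Eqs. (3.4a), (3.4b) and (3.5) for a few sample dilepton masses:

```python
# Evaluate the dilepton-doublet contributions of Eqs. (3.4)-(3.5); GeV units.
import numpy as np

sin2_thW, M_W = 0.23, 80.2  # assumed reference values

def F(m1sq, m2sq):
    """Eq. (3.5); F >= 0 and F -> 0 for degenerate masses."""
    if np.isclose(m1sq, m2sq):
        return 0.0
    return m1sq + m2sq - 2.0 * m1sq * m2sq / (m1sq - m2sq) * np.log(m1sq / m2sq)

def S_dilepton(MYp, MYpp):
    return -9.0 / (4.0 * np.pi) * np.log(MYp**2 / MYpp**2)            # Eq. (3.4a)

def T_dilepton(MYp, MYpp):
    return 3.0 / (16.0 * np.pi * sin2_thW * M_W**2) * F(MYp**2, MYpp**2)  # Eq. (3.4b)

for MYp, MYpp in [(300.0, 300.0), (310.0, 300.0), (300.0, 310.0)]:
    print(MYp, MYpp, S_dilepton(MYp, MYpp), T_dilepton(MYp, MYpp))
# T is zero at degeneracy and positive otherwise; S changes sign with the splitting.
```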
A complete calculation of S and T must take into account the mixing between the SU(2) singlet and doublet Higgs. This is especially important in light of the upper limit on the SU(3)_L × U(1)_X breaking scale, which puts a non-zero lower bound on the mixing. Because of the mixing, the dileptons and physical Higgs combine in their contributions. For S, we find the full result
\[
S = -\frac{1}{\pi} \left[ \frac{2}{3}\sin^2\alpha_{++} - \frac{1}{3}\sin^2\alpha_{+} + \frac{9}{4}\ln\frac{M_{Y^+}^2}{M_{Y^{++}}^2} + \frac{1}{4}\sin^2\alpha_{++}\ln\frac{m_{H^{++}}^2}{M_{Y^{++}}^2} - \frac{1}{4}\sin^2\alpha_{+}\ln\frac{m_{H^+}^2}{M_{Y^+}^2} - \sin^2\alpha_{++}\cos^2\alpha_{++}\, G\!\left(\frac{m_{H^{++}}^2}{M_{Y^{++}}^2}\right) - \sin^2\alpha_{+}\cos^2\alpha_{+}\, G\!\left(\frac{m_{H^+}^2}{M_{Y^+}^2}\right) \right]. \quad (3.6)
\]
The function G is defined by
\[ G(x) = \frac{7x^2 - 38x - 29}{36(x-1)^2} + \frac{x^3 - 3x^2 + 21x + 1}{12(x-1)^3}\, \ln x, \quad (3.7) \]
and vanishes when x = 1. G is positive when the Higgs are heavier than the dileptons and is usually negative when they are lighter. We see that the Higgs corrections always enter with a factor of either sin α++ or sin α+ and arise because of the mixing of scalars with different hypercharges. As a result, S reduces to Eqn. (3.4a) in the limit when the Higgs do not mix.
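The finite x → 1 limit is easy to confirm numerically, since both terms of Eq. (3.7) diverge individually there; the sketch below (ours, same assumptions as before) implements G and probes both mass orderings:

```python
# Implement G of Eq. (3.7) and check its limiting behavior numerically.
import numpy as np

def G(x):
    return (7*x**2 - 38*x - 29) / (36.0 * (x - 1)**2) \
         + (x**3 - 3*x**2 + 21*x + 1) / (12.0 * (x - 1)**3) * np.log(x)

for x in [0.25, 0.999, 1.001, 4.0]:
    print(x, G(x))
# G(x) -> 0 as x -> 1, is positive for heavy Higgs (x > 1),
# and negative for light Higgs (x < 1), as stated in the text.
```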
Turning to T, we find that it has the general form
\[
\begin{aligned}
T = \frac{3}{16\pi \sin^2\theta_W\, M_W^2} \Big[\, & F(M_{Y^+}^2, M_{Y^{++}}^2) + \sin^2\alpha_{+}\cos^2\alpha_{+}\, F(m_{H^+}^2, M_{Y^+}^2) + \sin^2\alpha_{++}\cos^2\alpha_{++}\, F(m_{H^{++}}^2, M_{Y^{++}}^2) \\
& - \sin^2\alpha_{+}\cos^2\alpha_{++} \left[ F(m_{H^+}^2, M_{Y^{++}}^2) - F(M_{Y^+}^2, M_{Y^{++}}^2) \right] \\
& - \cos^2\alpha_{+}\sin^2\alpha_{++} \left[ F(m_{H^{++}}^2, M_{Y^+}^2) - F(M_{Y^{++}}^2, M_{Y^+}^2) \right] \\
& + \frac{1}{3}\sin^2\alpha_{+}\sin^2\alpha_{++} \left[ F(m_{H^+}^2, m_{H^{++}}^2) - F(M_{Y^+}^2, M_{Y^{++}}^2) \right] \\
& + \frac{4}{3}\sin^2\alpha_{+}\,(\sin^2\alpha_{+} - \sin^2\alpha_{++})\,(m_{H^+}^2 - M_{Y^+}^2) \\
& + \frac{4}{3}\sin^2\alpha_{++}\,(\sin^2\alpha_{++} - \sin^2\alpha_{+})\,(m_{H^{++}}^2 - M_{Y^{++}}^2) \,\Big]. \quad (3.8)
\end{aligned}
\]
In deriving this, we had to use the relation cos²α+ M²_Y+ = cos²α++ M²_Y++ implied by the definitions of tan α++ and tan α+. Again, the Higgs corrections come in only through their small mixing into an SU(2) doublet. We find that T is positive in most of parameter space and becomes large when the Higgs or dilepton masses are split greatly, thus breaking custodial SU(2). A similar calculation for U is straightforward, but since experimental constraints on U are not as strong, we do not present it here.

The full expressions for S and T depend on four unknown parameters of the new physics: the two dilepton masses and the two new Higgs masses (the VEV angles are determined completely from the dilepton masses). In order to understand the general behavior of these radiative corrections, we now turn to three interesting cases: (a) the dileptons are degenerate in mass, M_Y+ = M_Y++; (b) the dileptons are maximally split in mass, sin²α++ = 0; and (c) the Higgs and dilepton masses are related by m_H+ = M_Y+ and m_H++ = M_Y++.

(a) M_Y+ = M_Y++. In order to give identical masses to Y± and Y±±, the VEVs v and v′ must be equal. As a result, tan β = 1 and sin²α++ = sin²α+ = M_W²/2M²_Y++. From Eqn. (3.6), we find for S
\[
S = \frac{1}{2\pi}\, \frac{M_W^2}{M_{Y^{++}}^2} \left[ -\frac{1}{3} + \frac{1}{4}\ln\frac{m_{H^+}^2}{m_{H^{++}}^2} + \cos^2\alpha_{++} \left( G\!\left(\frac{m_{H^{++}}^2}{M_{Y^{++}}^2}\right) + G\!\left(\frac{m_{H^+}^2}{M_{Y^+}^2}\right) \right) \right]. \quad (3.9)
\]
Note that even when all masses are degenerate, S takes on a non-zero result. In this case, we see that the singlet-doublet mixing in the scalar sector gives rise to a negative S [21,22]. For large Higgs mass splittings, the second term in (3.9) dominates, and S is positive for m_H+ ≫ m_H++ and negative for m_H+ ≪ m_H++. From the fit in the previous section, (2.8), we see that m_H+ ≲ m_H++ is favored. For T, we find the simple result
\[ T = \frac{1}{16\pi \sin^2\theta_W\, M_W^2}\, \sin^4\alpha_{++}\, F(m_{H^+}^2, m_{H^{++}}^2), \quad (3.10) \]
which gives the bounds
\[ 0 \le T \le \frac{1}{64\pi \sin^2\theta_W}\, \frac{M_W^2}{M_{Y^{++}}^2}\, \frac{\max(m_{H^{++}}^2, m_{H^+}^2)}{M_{Y^{++}}^2}. \quad (3.11) \]
The lower limit corresponds to Higgs mass degeneracy and the upper limit to large mass splitting. From Eqn. (2.8), we obtain the upper bound for the heavier Higgs, namely max(m_H+, m_H++) ≤ 7.0 TeV, for M_Y++ ≤ 350 GeV.
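The quoted 7.0 TeV number can be reproduced by inverting Eq. (3.11); a minimal back-of-the-envelope check (ours, assuming sin²θ_W ≈ 0.23 and M_W ≈ 80.2 GeV):

```python
# Invert Eq. (3.11) with the 90% C.L. limit T <= 0.45 from Eq. (2.8).
import numpy as np

sin2_thW, M_W = 0.23, 80.2   # assumed reference values, GeV
T_max, M_Ypp = 0.45, 350.0   # from Eq. (2.8) and M_Y++ <= 350 GeV

# Eq. (3.11): T <= (1 / (64 pi sin^2 thW)) * (M_W^2 / M_Y++^2) * (m_H^2 / M_Y++^2)
m_H_max = M_Ypp**2 / M_W * np.sqrt(64.0 * np.pi * sin2_thW * T_max)
print(m_H_max / 1000.0, "TeV")  # ~7.0 TeV, consistent with the text
```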
(b) sin²α++ = 0. Due to the VEV structure, the mass splitting of the dileptons is restricted by the condition |M²_Y+ − M²_Y++| ≤ M_W². The limiting case M²_Y+ = M²_Y++ + M_W² can be realized by v ≫ v′ or sin²α++ → 0. In this case, the doubly charged Higgs, which is Δ′±±, is a pure SU(2) singlet and is not involved in the oblique corrections. The parameter T is then given by
\[ T = \frac{1}{16\pi \sin^2\theta_W}\, \frac{M_W^2}{M_{Y^{++}}^2}\, \frac{m_{H^+}^2}{M_{Y^{++}}^2} + \cdots \]
We see that T can be negative if m²_H+ ≪ M²_Y++. However, it is negligible unless the dileptons are extremely light. On the other hand, T is always positive for heavy Higgs, m²_H+ ≫ M²_Y++. Although the Higgs contributions are induced by the small mixing, namely sin²α+ = M_W²/M²_Y+, we obtain a stringent bound for the Higgs mass, m_H+ ≤ 3.5 TeV, for M_Y++ ≤ 350 GeV. If we take the other limit v′ ≫ v, then sin²α+ → 0. By the same token, we find ... For M_Y++ ≥ 250 GeV, we obtain −0.06 ≤ S ≤ 0.05 and 0 ≤ T ≤ 0.009, as expected for a small mass splitting.

When the η sextet is taken into account, it introduces 12 additional physical Higgs fields. In this case the mixing between scalars in different SU(2) multiplets becomes more complicated. Nevertheless, our conclusions that S can pick up corrections due to the mixing of scalars with different hypercharge and that T measures the mass splitting between scalars still hold. Without any fine tuning in the Higgs sector, we expect all physical Higgs to be lighter than a few TeV.

To summarize, we have examined both tree level Z-Z′ mixing and one-loop oblique effects induced by the new charged gauge bosons and Higgs bosons in the SU(3)_L × U(1)_X model. The precision experiments constrain the mixing angle to be in the range −0.0006 < θ < 0.0042 and give M_Z2 > 490 GeV. Additional indirect lower bounds can be placed on the Z_2 mass from both FCNC considerations and from the Z′-dilepton mass relation. The latter gives the strongest limit and, along with the upper bound on the SU(3)_L × U(1)_X scale, highly restricts the neutral gauge sector of the model, giving 1.4 < M_Z2 < 2.2 TeV. Constraints on the new Higgs bosons are obtained from examination of the one-loop radiative corrections using the parameters S and T. The parameter T can be negative for very light charged Higgs and is positive for heavy Higgs. Hence we obtain an upper bound for the new charged Higgs masses, namely m_H++, m_H+ ≤ a few TeV. The Higgs sector places strong constraints on the mass splitting between the singly and doubly charged members of the dilepton doublet. Hence no restrictions can be placed on the dilepton masses past that coming from the Higgs structure. Nevertheless, other experiments, in particular polarized muon decay [16], strongly restrict the dilepton spectrum.
We note that in this model, it is possible to obtain (small) negative values of S and T. This result is quite general and occurs because of scalar mixing. In order to obtain a negative T, there has to be mixing between different SU(2) multiplets (in this case singlets and doublets). Mixing of states with different hypercharge also allows negative S for the case when all masses are degenerate. These observations have also been made in Ref. [22].

As the precision electroweak parameters are measured to higher accuracy, we can start placing more stringent bounds on the new physics predicted by this SU(3)_L × U(1)_X model. When the top quark mass is determined, it will remove much uncertainty in the standard model contributions to S and T; the parameters then become much more sensitive to truly new physics. Because the masses of the new particles are already tightly constrained, both direct and indirect experiments at future colliders may soon realize or rule out this model.

APPENDIX

In the electroweak sector, we can choose three independent precisely measured parameters, α, G_F and M_Z1, from which in principle we can predict all the outcome of experiments in the SU(3)_L × U(1)_X theory. Due to the presence of an additional gauge boson, Z′, and the corresponding Z-Z′ mixing, the results of the standard model predictions, which are written in terms of the starred functions [13], need to be modified. If we neglect the effects due to any combinations of both the Z′ parameters and the standard model radiative corrections, the results can be expressed as shifts with respect to the starred functions. For convenience, we can define a parameter, s²_0, which is given by
\[ s_0^2\, (1 - s_0^2) = \frac{\pi\, \alpha(M_{Z_1})}{\sqrt{2}\, G_F\, M_{Z_1}^2}. \quad (A1) \]
From the present data, s²_0 = 0.23146 ± 0.00034 is precisely known. Because of the Z-Z′ mixing, the mass of the Z_1 is shifted by a factor
\[ \frac{\delta M_{Z_1}}{M_{Z_1}} = -\frac{1}{2}\, \frac{M_{Z_2}^2}{M_{Z_1}^2}\, \theta^2. \quad (A2) \]
Hence, we obtain
\[ \frac{M_W/M_{Z_1}}{M_{W*}/M_{Z*}} = 1 + \frac{1}{2}\, \frac{1 - s_0^2}{1 - 2s_0^2}\, \frac{M_{Z_2}^2}{M_{Z_1}^2}\, \theta^2. \quad (A3) \]
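A small worked evaluation (ours; the value of α(M_Z1) is an assumed input) of Eqs. (A1) and (A3):

```python
# Solve the quadratic (A1) for s_0^2 and compute the M_W/M_Z1 shift of (A3).
import numpy as np

alpha_MZ = 1.0 / 128.8   # running alpha(M_Z1), assumed value
G_F = 1.16637e-5         # Fermi constant, GeV^-2
M_Z1 = 91.187            # GeV

# Eq. (A1): s0^2 (1 - s0^2) = pi alpha / (sqrt(2) G_F M_Z1^2)
rhs = np.pi * alpha_MZ / (np.sqrt(2.0) * G_F * M_Z1**2)
s0sq = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * rhs))   # smaller root, ~0.231
print("s0^2 =", s0sq)

# Eq. (A3): fractional shift of M_W/M_Z1 for sample theta = 0.003, M_Z2 = 1.5 TeV
theta, M_Z2 = 0.003, 1500.0
shift = 0.5 * (1.0 - s0sq) / (1.0 - 2.0 * s0sq) * (M_Z2 / M_Z1)**2 * theta**2
print("delta(M_W/M_Z1) / (M_W/M_Z1) =", shift)
```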
(i) Z-pole physics. The gauge interaction of the light neutral gauge boson, Z_1, is given by
\[ \mathcal{L} = \frac{e_*}{c_*\, s_*}\, Z_Z(f)\; Z_{1\mu} \left[ J_3^\mu(f) - Q(f)\, s^2_{\rm eff}(f)\, J_V^\mu \right], \quad (A4) \]
with
\[ \delta s^2_{\rm eff}(f) = s^2_{\rm eff}(f) - s^2_* = \left[ a_f + b_f\, Q(f) - 2 b_f\, T_3(f) \right] \theta, \quad (A5) \]
and
\[ \frac{\delta Z_Z(f)}{Z_{Z*}} = 4 b_f\, T_3(f)\, \theta + \frac{M_{Z_2}^2}{M_{Z_1}^2}\, \theta^2, \quad (A6) \]
where a_f and b_f, given in Ref. [2], are the vector and axial vector coupling coefficients for the Z′. Therefore, the partial width for the Z-boson relative to the standard model prediction is given by
\[ \frac{\Gamma(Z \to f\bar{f})}{\Gamma(Z \to f\bar{f})_*} = 1 + \frac{\delta Z_Z(f)}{Z_{Z*}} - 2 Q(f)\, \frac{g_V(f)_*}{g_V^2(f)_* + g_A^2(f)_*}\, \delta s^2_{\rm eff}(f), \quad (A7) \]
where g_V(f)_* = ½T_3(f) − Q(f)s²_* and g_A(f)_* = −½T_3(f). For Γ(Z → bb̄)_*, we also include the vertex correction due to the top quark.
By the same token, we can express the polarization asymmetry of fermion f as
\[ \frac{A_{\rm pol}(f)}{A_{\rm pol}(f)_*} = 1 - \delta A(f), \quad (A8) \]
with
\[ \delta A(f) = \frac{Q(f)}{g_V(f)_*}\, \frac{g_A^2(f)_* - g_V^2(f)_*}{g_A^2(f)_* + g_V^2(f)_*}\, \delta s^2_{\rm eff}(f). \quad (A9) \]
Hence we obtain
\[
\begin{aligned}
\frac{A_{\rm pol}(\tau)}{A_{\rm pol}(\tau)_*} &= 1 - \delta A(l), \quad &(A10)\\
\frac{A_{FB}(\mu)}{A_{FB}(\mu)_*} &= 1 - 2\,\delta A(l), \quad &(A11)\\
\frac{A_{FB}(b)}{A_{FB}(b)_*} &= 1 - \delta A(b) - \delta A(l). \quad &(A12)
\end{aligned}
\]
\[
\frac{Q_W}{Q_{W*}} = 1 + \frac{M_{Z_2}^2}{M_{Z_1}^2}\,\theta^2 + \frac{\delta C_1(u)\,(2Z + N) + \delta C_1(d)\,(Z + 2N)}{g_A^0(e)_*\, g_V^0(u)_*\,(2Z + N) + g_A^0(e)_*\, g_V^0(d)_*\,(Z + 2N)}, \quad (A16)
\]
where
\[ \delta C_1(q) = -\theta \left( g_A^0(e)_*\, a_q + b_e\, g_V^0(q)_* \right) + \frac{M_{Z_1}^2}{M_{Z_2}^2}\, b_e\, b_q. \quad (A17) \]
The quantities g⁰_{R,L*} and g⁰_{V,A*} in Eqs. (A14)-(A17) are evaluated at zero energy.
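To illustrate why the leptonic asymmetries are such sensitive probes of the mixing, the sketch below (ours, with an assumed s²_* value) evaluates Eq. (A9) for a charged lepton, where the small g_V(l)_* strongly amplifies any shift of s²_eff:

```python
# Sensitivity of the lepton polarization asymmetry to a shift of s^2_eff, Eq. (A9).
import numpy as np

s2_star = 0.2315                     # assumed reference value of s^2_*
T3, Q = -0.5, -1.0                   # charged lepton quantum numbers
gV = 0.5 * T3 - Q * s2_star          # g_V(l)_* as defined in the text
gA = -0.5 * T3                       # g_A(l)_*

def delta_A(ds2_eff):
    """Eq. (A9) evaluated for a charged lepton."""
    return Q / gV * (gA**2 - gV**2) / (gA**2 + gV**2) * ds2_eff

ds2 = 1.0e-3                         # sample shift of s^2_eff from Z-Z' mixing
print("A_pol(l)/A_pol(l)_* = 1 -", delta_A(ds2))   # cf. Eqs. (A8), (A10)
```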
FIGURES

FIG. 1. 90% C.L. allowed region in (M_Z2, θ) parameter space. The dotted lines indicate the constraints from the minimal Higgs structure. Also included are the FCNC bound of Ref. [2], the lower bound from the Z′-dilepton mass relation, and the upper bound on SU(3)_L × U(1)_X unification.

FIG. 2. Best fit point (cross) and 90% C.L. contour in the S-T plane for the SU(3)_L × U(1)_X model (solid line). For comparison, the model independent (oblique parameters only) fit to the same data is also shown (dotted line). S = T = 0 corresponds to the reference point m_t = 150 GeV and m_H = 1000 GeV. U is always taken as a free parameter.
TABLES

TABLE I. The experimentally measured values [14,15] and standard model predictions [7] (for m_t = 150 GeV and m_H = 1000 GeV) used in the fit.

Quantity           experimental value      standard model
M_Z (GeV)          91.187 ± 0.007          input
Γ_Z (GeV)          2.491 ± 0.007           2.484
R = Γ_had/Γ_ll     20.87 ± 0.07            20.78
Γ_bb (MeV)         373 ± 9                 377.9
A_FB(μ)            0.0152 ± 0.0027         0.0126
A_pol(τ)           0.140 ± 0.018           0.1297
A_e(P_τ)           0.134 ± 0.030           0.1297
A_FB(b)            0.093 ± 0.012           0.0848
A_LR               0.100 ± 0.044           0.1297
M_W/M_Z            0.8789 ± 0.0030         0.8787
Q_W(Cs)            −71.04 ± 1.81           −73.31
g²_L               0.2990 ± 0.0042         0.3001
g²_R               0.0321 ± 0.0034         0.0302
P. H. Frampton, Phys. Rev. Lett. 69, 2889 (1992);
F. Pisano and V. Pleitez, Phys. Rev. D46, 410 (1992).
Daniel Ng, The electroweak theory of SU(3) × U(1), Triumf preprint TRI-PP-92-125 (December 1992).
This upper bound comes from imposing the condition α_X < 2π at the SU(3)_L × U(1)_X scale, assuming a three Higgs doublet model below it and using the normalization of Ref. [2]. An absolute upper limit on the unification scale comes from sin²θ_W < 1/4, giving M_Z2 < 3.2 TeV with corresponding M_Y < 590 GeV.
Paul Langacker and Mingxing Luo, Phys. Rev. D45, 278 (1992).
The LEP Collaborations, Phys. Lett. B176, 247 (1992).
CHARM II Collaboration, Phys. Lett. B232, 539 (1989).
M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. 65, 964 (1990);
Phys. Rev. D46, 381 (1992).
G. Altarelli and R. Barbieri, Phys. Lett. B253, 161 (1990).
W. J. Marciano and J. L. Rosner, Phys. Rev. Lett. 65, 2963 (1990);
D. C. Kennedy and P. Langacker, Phys. Rev. Lett. 65, 2967 (1990);
B. Holdom and J. Terning, Phys. Lett. B247, 88 (1990).
M. Golden and L. Randall, Nucl. Phys. B361, 3 (1991).
D. Kennedy and B. W. Lynn, Nucl. Phys. B322, 1 (1989).
P. Langacker, Precision Tests of the Standard Model, University of Pennsylvania preprint UPR-0555T, hep-ph/9303304 (March 1993).
Particle Data Group, Phys. Rev. D45, III.64 (1992).
E. Carlson and P. H. Frampton, Phys. Lett. B283, 123 (1992).
I. Beltrami et al., Phys. Lett. B194, 326 (1987).
J. F. Gunion, H. E. Haber, G. Kane and S. Dawson, The Higgs Hunter's Guide, Addison Wesley, 1990.
W. Hollik, Zeit. Phys. C37, 569 (1988).
C. D. Froggatt and R. G. Moorhouse, Phys. Rev. D45, 2471 (1992).
M. J. Dugan and L. Randall, Phys. Lett. B264, 154 (1991).
L. Lavoura and L.-F. Li, Mechanism for obtaining a negative T oblique parameter, CMU preprint CMU-HEP93-02 (January 1993).
| [] |
[
"The Influence of the Generator's License on Generated Artifacts",
"The Influence of the Generator's License on Generated Artifacts"
] | [
"Carsten Kolassa \nSoftware Engineering RWTH Aachen University\nGermany\n",
"Bernhard Rumpe \nSoftware Engineering RWTH Aachen University\nGermany\n"
] | [
"Software Engineering RWTH Aachen University\nGermany",
"Software Engineering RWTH Aachen University\nGermany"
] | [] | Open sourcing modelling tools and generators becomes more and more important as open source software as a whole becomes more important. We evaluate the impact open source licenses of code generators have on the intellectual property (IP) of generated artifacts comparing the most common open source licenses by categories found in literature.Restrictively licensed generators do have effects on the IP and therefore on the usability of the artifacts they produce. We then show how this effects can be shaped to the needs of the licensor and the licensee. | null | [
"https://arxiv.org/pdf/1412.2963v1.pdf"
] | 13,968,804 | 1412.2963 | e8e9f9ec640638cd8aa201117bb74f94cff7f198 |
The Influence of the Generator's License on Generated Artifacts
Carsten Kolassa
Software Engineering RWTH Aachen University
Germany
Bernhard Rumpe
Software Engineering RWTH Aachen University
Germany
The Influence of the Generator's License on Generated Artifacts
Open sourcing modelling tools and generators becomes more and more important as open source software as a whole becomes more important. We evaluate the impact open source licenses of code generators have on the intellectual property (IP) of generated artifacts, comparing the most common open source licenses by categories found in literature. Restrictively licensed generators do have effects on the IP and therefore on the usability of the artifacts they produce. We then show how these effects can be shaped to the needs of the licensor and the licensee.
Introduction
Open source has become more and more important in the last years and has been adopted widely in the consumer as well as the industrial market. In 2008, 85% of all enterprises were using open source software [15]; more recent studies give numbers up to 98% [31].

Open source is not only used in business but also in development. According to [16], 76% of all developers have used open source technology for some of their tasks.
The reasons to develop open source software vary greatly. Some companies provide open source software to rapidly grow their user base [18], thus creating an industry standard. Other reasons are getting a community invested into the project to create an ecosystem of supporting software (e.g. plugins) around the original project, or to allow external validation of the software (e.g. every user could potentially security audit open source software). Individuals develop open source software to show off their skill, to show their work to the world, for altruistic reasons, or for potential rewards in the future [17]. Academic institutions provide their software under an open source license to gain a user base and to foster the use of new approaches in industry. Some companies have a dual license approach where they provide an open source version of the software to gain a user base but offer more flexible licenses and customization to business users [29].
The open source licenses reflect these different reasons to open source software and to use it, and differ from each other considerably [26].

It is therefore important to choose the right license that is consistent with the reasons you license the software as open source. This is especially true for generators, as there are more potential pitfalls than with other software: the generator's license can have an influence on the intellectual property rights of the generated artifacts.

For example, a car manufacturer will not use a generator or compiler when the license of the produced code threatens the intellectual property (IP) of his other code or enforces a logo to be placed on the car.
To give an overview of the potential license choices for generators we list the most common licenses and classify them according to categories found in literature. We then show which impact the licenses have on the artifacts if applied to a generator.

Our research questions are:

RQ1: Which impact can the open source license choice for the generator have on the license of the artifacts?

RQ2: What possible solutions can be used to counteract unwanted effects on the IP of the artifacts?

The paper is structured as follows: Section 2 reviews the related work. In Section 3, we describe the licenses and describe their characteristics as found in literature. Section 4 shows which influence those characteristics have on the license/ownership of the generated artifacts if the license is applied to a software generator and how this influence can be shaped to meet the needs of the licensor. Section 5 concludes the paper.
Related Work
Various publications analyse the rationale behind license choices and give guidelines which license is suitable for which project. [22] examines the scope of licensing in open source and lists the various considerations that determine the license of open source projects, while [24] gives a guide to choosing an open source license in a commercial context. [21] examines the licenses in great detail: which negative and positive implications they have on projects in general, the reasons to choose a particular license, and also how to draft one's own open source license. [23] gives an overview about trademark, patent, and copyright law in relation to open source and shows how to choose a commercial or open source license. It examines the implications of linking code covered by the GPL and of creating derivative work, both from a developer perspective and from a business perspective, but does not cover model driven development. [30] shows what motivates businesses to provide open source, examines common open source licenses and how they relate to community and corporate interests. It also classifies open source licenses and gives an overview of the implications of licenses for the IP of the code they cover, using examples. [28] presents how the license choice impacts interest into a project and the development activity. [12] analyses the open source development paradigm and shows differences and similarities of open source development and licensing to non open source approaches.
However, none of the above papers and books examines the intellectual property situation of artifacts that are generated or created by open source projects as a separate case.
Most Common Licenses
In this section we describe the most common open source licenses and show their characteristics. We chose the licenses by looking at the Black Duck license usage statistics [10]. These statistics are calculated from the Black Duck KnowledgeBase, which includes one million open source projects from more than 7500 sites.
We looked at the 10 most widely used licenses and decided to exclude the Artistic License, as it is mostly used in the context of the Perl scripting language, and the Microsoft Public License, because of its similarity to the Eclipse Public License (EPL), which is more important in the context of model driven development (as many widely used open source software tools for model driven development use this license). We also decided to look only at the most recent version of each license and to exclude older versions, which gives us 6 potential licenses for our evaluation. We made the choice to include a 7th license, the GNU Affero General Public License, as it addresses the privacy loophole, which isn't addressed by the other licenses.
Comparison
Permissions for reuse: We use the classification in [30] and differentiate between three different types of licenses, called Permissive, Weak Copyleft and Strong Copyleft.

Permissive: These licenses permit the redistributor to restrict access to the modified source code (make the modified source code closed source) and to put the changed software under a different license (even a proprietary license).

Weak Copyleft: The license of the software cannot be changed, but it only applies to software that is directly derived from the original software, e.g. software that incorporates copies of source code from the original software.

Strong Copyleft: Every software that links to or otherwise incorporates code from a software licensed under a strong copyleft license needs to be published under a compatible license [27]. Strong copyleft licenses are often called viral, as linking to one strong copyleft library forces the whole project to be put under a compatible license.

Patent license: Some open source licenses automatically include a patent granting clause that grants a non-exclusive, worldwide, royalty-free patent license for all patents a contributor holds that affect his contribution to the project. In other words, if a contributor contributes code to a project that mandates a license grant, and the code he contributes would be covered by a patent he holds, he needs to give a license to the users of the project without charging them for it.

Enhanced Attribution: All open source licenses specify that anyone who distributes or modifies the software needs to give credit to the original authors. "Enhanced attribution" means that the license specifies the form of the credits in a way that goes beyond just giving credit, like the attribution clause in the original BSD license that specifies that a special acknowledgement needs to be added to all advertising materials mentioning the use of the licensed software or a feature of the software [2].

Privacy Loophole/Provider Loophole [19]: If someone modifies an open source software and just uses it or sells its use, e.g. a service provider who sells the use of a web client, there is normally no obligation to make the changes available to the community. If this loophole is closed, on the other hand, the changes must be made available.
Dual Licensing/Multi Licensing
Dual licensing is the practice of distributing software under two different licenses [29], while multi licensing is the practice of distributing it under more than two licenses. Dual licensing is a common business model in open source. Examples for famous projects with dual licenses are:
- Qt [29]
- MySQL [29]
- Asterisk [11]
- Sendmail [11]

The software is normally distributed freely under a restrictive open source license that allows the open source community to participate in the development but that makes it difficult to use the software in a commercial environment. But the software is also available under a proprietary software license for a license fee, which allows creating proprietary applications that are based on it (e.g. commercial applications that use the Qt library, which wouldn't be possible using the open source version) or that allows OEMs (Original Equipment Manufacturers), ISVs (Independent Software Vendors) and VARs (Value Added Resellers) to combine and/or distribute the software together with or as commercially licensed software (like in the case of MySQL [25]).
It is also possible to multi license only parts of a software. It sometimes makes sense to license a single file under two different open source licenses, for example to allow its use in two different projects whose licenses are incompatible with each other, or to put some parts of the software under a less restrictive license to allow their reuse while the rest of the software remains under a restrictive license.
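In practice, per-file dual licensing is often declared with an SPDX license expression; the following header is a hypothetical example (project name and explanatory wording are invented) of how such a file could be marked:

```python
# A hypothetical header for a single dual-licensed file, using the standard
# SPDX license-expression syntax; the "OR" operator means the recipient may
# choose either license. Project and copyright names below are made up.
#
# SPDX-License-Identifier: GPL-3.0-only OR MIT
# Copyright (c) 2014 Example Project Contributors
#
# This file is additionally offered under the MIT license so that it can be
# reused in projects whose licenses are incompatible with the GPL.
```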
The Ownership of the Generated Code
The question under which license the generated artifacts are released is a very important one. The relationship is tricky, as the generated artifacts are derived from the models (which are owned by the user of the software) but to some extent also include parts from the generator itself, and usually work together with a runtime environment and libraries.
Are Generated Artifacts Derivative Work?
Derivative work according to US copyright law is "work based upon one or more preexisting works" [4]; in software this includes work that contains substantial chunks of source code from another work. Generated source code is therefore derivative work when source code of the original program, in our case the generator, was used, modified, translated or otherwise changed in any way to create the new program. If the open source license requires that derivative work needs to be published under the same or a compatible license, and the generated artifacts are derivative work, they automatically need to be open source when they are redistributed; this is the case for all copyleft licenses.
It is important to notice that the mere output of a program itself is not covered by the license of that program. Only if the output constitutes derivative work of the generating program does this legal dependency between generator output and generator arise. Figure 1 shows an example where the generator copies chunks from its own code into the generated artifact, which means that the artifact is derivative work.
An example for such a generator is bison, the GNU parser generator, which copies parts of itself into the parser source code it generates. Therefore every parser generated with bison would be derivative work of bison, but the license of bison makes an explicit exception to the GPL [14] that allows including the generated parser source code in other projects without having to put them under the GPL.
Another example for generators that include parts of themselves into generated code are generators based on the Monticore [20] language workbench. Monticore uses templates to generate source code; these templates, for example, generate class definitions, and their content is directly put into the generated artifacts. If those templates are licensed under a copyleft license, the generated code is derivative work and needs to be treated accordingly.
Both weak and strong copyleft licenses impose the restriction that derivative work that includes substantial parts of the original work needs to be put under the same or a compatible license; only permissive licenses don't have that restriction.
Dependencies on Libraries
For strong copyleft licenses, it is not enough that the work is not derivative to escape the implications of the license for the work depending on it. Strong copyleft licenses define a term "Corresponding Source", which is the code for shared libraries and dynamically linked subprograms that the work uses. This "Corresponding Source" needs to be licensed under the same license if the original work is licensed under a strong copyleft license. An example would be a library that is licensed under the GPL and that is used as a shared library by generated artifacts (see Figure 2). These artifacts then need to be licensed under the GPL as well.
For weak copyleft licenses there is no such requirement, as merely using a library has no effect on the license of the artifacts. The license implications, both for incorporating source of the generator in generated artifacts and for having dependencies on strong copyleft licensed code, can create practical problems for users of the generator.

For example, a company that uses such a generator to create a product would need to license it as open source under a compatible license. This prevents some business models, e.g. the software could not be distributed as closed source. This is often not intended, as the licensor sometimes wants to only protect the generator but not the artifacts.
We identify the following ways to explicitly prevent these dependencies from having practical effects:
Adding an exception in the license. In this approach an exception is added to a restrictive license to allow exactly the usage of dependent or derivative artifacts that is intended by the licensor. This is the approach that the GNU parser generator uses; it has the advantage that the intentions of the licensor are clear, as it is clearly stated what impact the license of the generator should have on the dependent artifacts.
The disadvantage of this approach is that a modified version of a license (adding an exception is a modification) is a new license. This creates problems for distributors of open source software, as they normally use automated approaches to package software, which need standardized licenses with clear compatibilities and incompatibilities to other open source licenses.

Users as well as distributors need to check these non-standardized exceptions and handle them accordingly, which creates additional effort.
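As a hedged illustration of this approach, a generator could prepend a notice like the following to every file it emits; the wording is our own invention modeled on the idea of bison's exception, not a quotation of any actual license text, and "ExampleGen" is a fictitious generator name:

```python
# Hypothetical exception notice a GPL-licensed generator might copy into its output.
#
# This file was produced by ExampleGen, which is licensed under the GNU GPL.
# As a special exception, the copyright holders of ExampleGen grant permission
# to use, modify and redistribute this generated file under terms of your
# choice, even though it contains portions of ExampleGen's template code.
```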
Dual licensing or multi licensing the files the artifacts are derived from or depend on. If the generator as a whole is under a copyleft license, the licensor can still release parts of it under a less restrictive license. The licensee can then choose which license applies, as both are valid. This solution is recommended by [13] for website templates but can also be applied to generating software. In the case of Monticore based generators, for example, this means releasing the templates under a permissive license as well, while the code parts that don't affect the IP of the generated artifacts remain under a restrictive license only.
Preventing the dependency from having practical effects by using a permissive license for the files the artifacts are derived from or dependent on. The third way to prevent dependencies from having practical effects is using a permissive license for the whole project, if possible. This has the disadvantage that others can freely use the code of the original work, even commercially or in their own generators, which is often not wanted.
Conclusion
The license of the generator can have an impact on generated artifacts. Generator providers need to take that into account when choosing the license of their generator. The effects can be positive and deliberate, for example to make the open source version of a generator less attractive when the dual licensing business strategy is employed, thus forcing commercial users to license the commercial version of the generator while still being able to benefit from the open source community. But they can also be detrimental, when a generator that is only available under a restrictive license creates artifacts that are derivative work or have dependencies that prevent the artifacts from being used in a commercial environment, although the licensor wants to allow that and only intended to protect the generator's code, not the artifacts it creates. It is therefore important to know the effects of the license choice on the artifacts and to apply the shown solutions if the effects are not as intended.
Fig. 1 :
1Relationship of generator and artifacts that causes the artifacts to be derivative work.
Fig. 2 :
2Relationship of generator and artifacts that causes a dependency between program and library which causes the program to be under the same license as the library it links to.4.3 Explicitly shaping the license implications of open sourcelicenses on generated code.
RQ1: Which impact can the open source license choice of the generator have on the license of the artifacts? Answer: The license can have an influence on the artifacts: when the artifacts are derivative work, in the case of Weak-Copyleft or Strong-Copyleft licenses, or when the artifacts have dependencies on libraries that are part of the generator framework, e.g., dependencies on a library in the case of Strong-Copyleft licenses. RQ2: What possible solutions can be used to counteract possibly detrimental effects? Answer: There are three ways to shape or minimize the restrictions:
- Adding an exception to the license of the generator.
- Dual-licensing the code that creates the dependency under a more permissive license.
- Using a more permissive license that prevents the problem in the first place.
Table 1: Comparison of the different licenses.
The license allows adding clauses to require the "preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it" [6].
The MIT License (MIT) (1988), http://opensource.org/licenses/MIT, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBCNyR6K

BSD 4-clause "Original" or "Old" License (Jun 1999), https://spdx.org/licenses/BSD-4-Clause, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBCvX2Ug

Apache License, Version 2.0 (Jan 2004), http://www.apache.org/licenses/LICENSE-2.0.html, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBCKOptH

USC 101: U.S. Copyright Act (found in Title 17), Definitions (October 2005)

GNU Affero General Public License (Jun 2007), http://www.gnu.org/licenses/agpl-3.0.html, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBDAXKp1

GNU General Public License (Jun 2007), http://www.gnu.org/copyleft/gpl.html, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBC6medP

GNU Lesser General Public License (Jun 2007), http://www.gnu.org/copyleft/lgpl.html, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBCxG23g

The BSD 3-Clause License (Jun 2007), http://opensource.org/licenses/BSD-3-Clause, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBCpqN1s

BlackDuck: Top 20 Open Source Licenses, http://www.blackducksoftware.com/resources/data/top-20-open-source-licenses, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RB5Dj9Av

DiBona, C., Stone, M., Cooper, D.: Open sources 2.0: The continuing evolution. O'Reilly Media, Inc. (2005)

Feller, J., Fitzgerald, B.: A framework analysis of the open source software development paradigm. In: Proceedings of the Twenty-First International Conference on Information Systems, pp. 58-69. Association for Information Systems (2000)

Foundation, F.S.: Frequently Asked Questions about the GNU Licenses, http://www.gnu.org/licenses/gpl-faq.html, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBHH4EKt

Foundation, F.S.: GNU General Public License v2.0 w/Bison exception, https://spdx.org/licenses/GPL-2.0-with-bison-exception, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBDDWcfb

Gartner: User Survey Analysis: Open-Source Software, Worldwide, 2008 (2008), http://www.gartner.com/DisplayDocument?id=757916

Hammond, J.S., Brown, V., Murphy, P., Curran, R.: Development Landscape: 2013 Developer Forrsights North America And Europe (2013)

Hars, A., Ou, S.: Working for free? Motivations of participating in open source projects. In: Proceedings of the 34th Annual Hawaii International Conference on System Sciences. IEEE (2001)

Hecker, F.: Setting up shop: The business of open-source software. IEEE Software 16(1), 45-51 (1999)

Johnston, S.: Simple workload & application portability (SWAP). In: Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 37-42. IEEE (2014)

Krahn, H., Rumpe, B., Völkel, S.: Monticore: a framework for compositional development of domain specific languages. International Journal on Software Tools for Technology Transfer 12(5), 353-372 (2010)

Laurent, A.M.S.: Understanding Open Source and Free Software Licensing. O'Reilly Media, first edition, annotated edn. (2004)

Lerner, J., Tirole, J.: The scope of open source licensing. Journal of Law, Economics, and Organization 21(1), 20-56 (2005)

Lindberg, V.: Intellectual Property and Open Source: A Practical Guide to Protecting Code. O'Reilly Media, 1st edn. (2008)

Lindman, J., Paajanen, A., Rossi, M.: Choosing an open source software license in commercial context: A managerial perspective. In: Software Engineering and Advanced Applications (SEAA), 2010 36th EUROMICRO Conference on, pp. 237-244 (Sept 2010)

MySql: Commercial License for OEMs, ISVs and VARs, http://www.mysql.com/about/legal/licensing/oem/, accessed: 2014-07-18. Archived by WebCite at http://www.webcitation.org/6RBMbAzJi

Rosen, L.: Open Source Licensing: Software Freedom and Intellectual Property Law. Prentice Hall, 1st edn. (2004), http://amazon.com/o/ASIN/0131487876/

Sen, R., Subramaniam, C., Nelson, M.L.: Open source software licenses: Strong-copyleft, non-copyleft, or somewhere in between? Decision Support Systems 52(1), 199-206 (2011)

Stewart, K.J., Ammeter, A.P., Maruping, L.M.: Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects. Information Systems Research 17(2), 126-144 (2006)

Välimäki, M.: Dual licensing in open source software industry. Systèmes d'Information et Management 8(1), 63-75 (2003)

Widenius, M.M., Nyman, L.: The Business of Open Source Software: A Primer. Technology Innovation Management Review 4 (January 2014: Open Source Business) (2014)

Zenoss Inc.: 2010 Open Source Systems Management Survey (2010)
| [] |
[
"Face Forgery Detection by 3D Decomposition",
"Face Forgery Detection by 3D Decomposition"
] | [
"Xiangyu Zhu [email protected] \nCBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n\n",
"Hao Wang \nCBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Hongyan Fei [email protected] \nSchool of Automation and Electrical Engineering\nUniversity of Science and Technology\nBeijing\n",
"Zhen Lei [email protected] \nCBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n\n",
"Stan Z Li [email protected] \nSchool of Engineering\nWestlake University\n\n"
] | [
"CBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n",
"CBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Automation and Electrical Engineering\nUniversity of Science and Technology\nBeijing",
"CBSR & NLPR\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n",
"School of Engineering\nWestlake University\n"
] | [] | Detecting digital face manipulation has attracted extensive attention due to fake media's potential harms to the public. However, recent advances have been able to reduce the forgery signals to a low magnitude. Decomposition, which reversibly decomposes an image into several constituent elements, is a promising way to highlight the hidden forgery details. In this paper, we consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment, and decompose it in a computer graphics view. Specifically, by disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture. Based on this observation, we propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns. Besides, we highlight the manipulated region with a supervised attention mechanism and introduce a two-stream structure to exploit both face image and facial detail together as a multi-modality task. Extensive experiments indicate the effectiveness of the extra features extracted from the facial detail, and our method achieves the state-of-the-art performance. | 10.1109/cvpr46437.2021.00295 | [
"https://arxiv.org/pdf/2011.09737v1.pdf"
] | 227,054,386 | 2011.09737 | 543903c216f92046518761f8dbac68717aa2c302 |
Face Forgery Detection by 3D Decomposition
Xiangyu Zhu [email protected]
CBSR & NLPR
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences
Hao Wang
CBSR & NLPR
Institute of Automation
Chinese Academy of Sciences
Hongyan Fei [email protected]
School of Automation and Electrical Engineering
University of Science and Technology
Beijing
Zhen Lei [email protected]
CBSR & NLPR
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences
Stan Z Li [email protected]
School of Engineering
Westlake University
Face Forgery Detection by 3D Decomposition
Detecting digital face manipulation has attracted extensive attention due to fake media's potential harms to the public. However, recent advances have been able to reduce the forgery signals to a low magnitude. Decomposition, which reversibly decomposes an image into several constituent elements, is a promising way to highlight the hidden forgery details. In this paper, we consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment, and decompose it in a computer graphics view. Specifically, by disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture. Based on this observation, we propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns. Besides, we highlight the manipulated region with a supervised attention mechanism and introduce a two-stream structure to exploit both face image and facial detail together as a multi-modality task. Extensive experiments indicate the effectiveness of the extra features extracted from the facial detail, and our method achieves the state-of-the-art performance.
Introduction
While earlier seamless face manipulation has broadly amazed the public, there has been constant concern about the potential abuse of the relevant techniques. In particular, the recent DeepFake [12] initiated widespread public discussion about the potential harmful consequences [36] and feasible detection solutions [2] of counterfeit facial media.
Figure 1. In computer graphics, a face image can be decomposed into direct light, ambient light, 3D geometry, common texture and identity texture. We find that direct light and identity texture contain critical clues and merge them as the facial detail for forgery detection.

In this work, we are dedicated to detecting the manipulation of facial identity and expression, related to the very popular DeepFakes (DF) [12], Face2Face (F2F) [35], FaceSwap (FS) [20] and NeuralTextures (NT) [33], which perform state-of-the-art face manipulation, making it extremely tough to reveal the sophisticated counterfeit flaws from the image view alone [28]. This situation stimulates researchers to shift their attention to extracting forgery evidence from aspects other than the original RGB image. Previous work [41,8,39,28] has discovered that the signals in specific frequency ranges are replaced by particular patterns during manipulation and proposes to detect forgery by signal decomposition. The assumption is that, by disentangling the face image, we can find more critical clues for forgery detection in the constituent elements, which are overlooked or hard to forge by the manipulation methods, whose loss functions mainly constrain pixel values. For example, Zhang et al. [41] identify the unique replications of spectra in the frequency domain due to the up-sampling process. Chen et al. [8] introduce facial semantic segmentation and the Discrete Fourier Transform (DFT) to extract spatial- and frequency-domain features, respectively. However, it is difficult to decide which range of signals contains artifacts, since images are captured by different devices, under different environments, and even compressed with different algorithms, leading to a large frequency distribution bias across datasets. The handcrafted [31] and learned [28] frequency filters also easily suffer from the generalization problem. Therefore, the crucial problems of this topic lie in how to decompose an image and how to identify reliable constituent elements.
In this paper, we consider the decomposition from a physics view: a face image is the intervention result of the underlying 3D geometry, its albedo, and the environment lighting. Specifically, we introduce the 3D Morphable Model (3DMM) [3] and computer graphics rendering to simulate the generation of a face image. Under the Lambertian assumption, we decompose a face image into 5 components: 3D geometry, common texture, identity texture, ambient light, and direct light, as shown in Figure 1. The 3D geometry is the underlying 3D face shape, the common texture consists of the albedo patterns shared by all people, the identity texture consists of the albedo patterns peculiar to this face, the ambient light changes the face color globally, and the direct light generates shading. We introduce how these components are obtained in Section 3.1. Intuitively, the advanced manipulation methods can reconstruct 3D geometry, common texture and ambient light well, since we rarely see incompatible facial topology, non-face texture or weird skin color among the massive forged images. Therefore, these three elements should be normalized. On the contrary, we detect identity texture, since it is hard to simulate due to the rich variations across faces, leading to specific high-frequency artifacts. Besides, we speculate that direct light is also a decisive forgery clue, based on the observation of large artifacts under intense direct light, shown in Figure 2. By evaluating various compositions of the different components, we find that the combination of direct light and identity texture is the best for forgery detection, which we call facial detail, as shown in Figure 1(f).
When detecting forgery clues with neural networks, we consider the cooperation between face image and facial detail as a multi-modality task and propose a two-stream Forgery-Detection-with-Facial-Detail Net (FD²Net). To further highlight the discriminative region, we introduce a supervised Detail-guided Attention mechanism in the network, which employs the facial detail difference between real and fake faces as the objective.
In summary, our contributions are: 1) we introduce 3D decomposition into forgery detection and construct the facial detail to amplify subtle artifacts; 2) a two-stream structure, FD²Net, is proposed to fuse the clues from original images and facial details, where a supervised attention module is introduced to highlight the discriminative region; 3) compared with the other state-of-the-art detection proposals, our method achieves remarkable improvements in both detection performance and generalization ability.
Related Work
Digital face manipulation techniques There has been extensive research on face manipulation. Traditional methods require sophisticated editing tools, domain expertise, and time-consuming processes [37,38,34,35,32,19]. Recent deep learning (DL)-based methods, especially those with GANs, have demonstrated their power on image synthesis, which promotes both face swapping and the synthesis of entire fake images, making such manipulation much easier for the public to acquire. While the advanced DL-based manipulation techniques facilitate digital face manipulation remarkably, they exacerbate the difficulty for humans to distinguish manipulated faces from genuine ones [30].
Manipulation detection method Facial forgery detection has attracted considerable attention recently, which has stimulated massive study of various forgery techniques [29,7,22,16]. A large portion of methods discusses manipulation evidence in low-/high-level features. Zhou et al. [42] explore steganalysis features and propose to learn both tampering artifacts and local noise residual features. Liu et al. [23] argue for the effectiveness and robustness of global/large-scale texture represented by the Gram matrix. There are also methods that transfer images to the frequency domain to explore other forgery evidence [31,28]. However, previous texture-based methods extract facial features from pixel-level images, i.e., they merely concentrate on exploring manipulation traces in the face appearance.
Lighting-based detection There is also research focusing on detecting forgery evidence considering the lighting condition. De Carvalho et al. [11] propose to spot forgery evidence from the inconsistency among the 2D illuminant maps of various segments of the image. Peng et al. [27] propose an optimized solution to estimate the 3D lighting environment. However, these methods require comparison among at least two faces in one image, which is problematic in more common scenarios where only one face is in the image.
Manipulation Detection with Facial Detail
This paper regards face manipulation detection as more than a purely end-to-end binary classification problem. We decompose a face image reversibly into several 3D descriptors, i.e., 3D shape, common texture, identity texture, ambient light and direct light, explore how these descriptors contribute to the final label, and investigate the best combination of these 3D descriptors for forgery detection.
3D Decomposition
In computer graphics, a face image is generated by:
$$I_{syn} = \text{Z-Buffer}(\mathbf{S}, \mathbf{C}), \tag{1}$$

where $\mathbf{S}$ is the 3D face mesh, as shown in Figure 1(c), and $\mathbf{C}$ is the RGB of each vertex in $\mathbf{S}$. Under the Lambertian assumption, the RGB of the ith vertex is:

$$\mathbf{C}_i = \mathrm{Amb}\,\mathbf{T}_i + \langle \mathbf{n}_i, \mathbf{l} \rangle\, \mathrm{Dir}\,\mathbf{T}_i, \tag{2}$$
where the facial texture $\mathbf{T}_i = [R_i, G_i, B_i]^T$ is the albedo of the ith vertex, $\mathrm{Amb} = \mathrm{diag}(R_{amb}, G_{amb}, B_{amb})$ is the color of the ambient light, as shown in Figure 1(b), $\mathbf{n}_i$ is the vertex normal originating from the 3D mesh, $\mathbf{l}$ is the light direction, and $\mathrm{Dir} = \mathrm{diag}(R_{dir}, G_{dir}, B_{dir})$ is the color of the direct light, as shown in Figure 1(a). Then, we assume the facial texture $\mathbf{T}$ to be the composition of common texture and identity texture, where the common texture $\mathbf{T}_{com}$ captures the texture patterns shared by all people, as shown in Figure 1(d), and the identity texture $\mathbf{T}_{id}$ is the discriminative fine-grained texture containing one's identity information, as shown in Figure 1(e). In this paper, we model the common texture by the PCA texture model in the Basel Face Model (BFM) [26], and calculate the residual between $\mathbf{T}_{com}$ and $\mathbf{T}$ as the identity texture:
$$\mathbf{T} = \overline{\mathbf{T}} + \mathbf{B}\beta + \mathbf{T}_{id}, \tag{3}$$
where $\overline{\mathbf{T}}$ is the mean texture, $\mathbf{B}$ contains the principal axes of the PCA texture model, and $\beta$ is the common texture parameter. Based on these models, any face image can be decomposed into a series of model parameters $[\mathbf{S}, \mathrm{Amb}, \mathrm{Dir}, \beta, \mathbf{T}_{id}]$, which can be obtained by optimizing the following loss:
$$\arg\min_{\mathbf{S}, \mathrm{Amb}, \mathrm{Dir}, \beta, \mathbf{T}_{id}} \left\| I - I_{syn}(\mathbf{S}, \mathrm{Amb}, \mathrm{Dir}, \beta, \mathbf{T}_{id}) \right\|, \tag{4}$$

where $I$ is the input face image. After 3D decomposition, the following problems are whether each component contains forgery clues and how to combine them regarding the real/fake label. Firstly, inspired by the previous discussions on high-frequency features beneath pixel-level texture [11,21], we regard identity texture as a critical forgery clue and remove the topsoil facial texture, i.e., the ambient light and the common texture. Secondly, by observing the fake samples under intensive direct light, as shown in Figure 2, we can consistently spot artifacts due to the large illumination difference between the source and target faces during manipulation. Therefore, we suppose the existence of forgery clues in the direct light. Moreover, we emphasize the normalization of the 3D shape to make the detector concentrate on fine-grained texture.
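To make the decomposition concrete, the following is a minimal NumPy sketch of the Lambertian shading of Eq. (2) and the identity-texture residual of Eq. (3); the function names, array shapes, and the clamping of the shading term are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def lambertian_vertex_colors(T, normals, amb_rgb, dir_rgb, light_dir):
    """Per-vertex RGB under Eq. (2): an ambient term plus a single
    directional light on a Lambertian surface.

    T        : (N, 3) per-vertex albedo
    normals  : (N, 3) unit vertex normals n_i
    amb_rgb  : (3,)   ambient light color (diagonal of Amb)
    dir_rgb  : (3,)   direct light color (diagonal of Dir)
    light_dir: (3,)   unit light direction l
    """
    # <n_i, l>, clamped to zero for back-facing vertices (an assumption)
    shading = np.clip(normals @ light_dir, 0.0, None)
    return amb_rgb * T + shading[:, None] * (dir_rgb * T)

def identity_texture(T, T_mean, B, beta):
    """Eq. (3) rearranged: T_id = T - (T_mean + B @ beta), i.e. the
    residual the PCA common-texture model cannot explain.
    T, T_mean: (3N,) flattened textures; B: (3N, k); beta: (k,)."""
    return T - (T_mean + B @ beta)
```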
To verify the suppositions, we conduct a fast ablation study and propose 8 inputs for forgery detection: img, amb+ctex+shape, itex+dir+shape, itex+shape, img w/o shape, amb+ctex, itex+dir, itex, where amb, dir, itex, ctex and shape are short for ambient light, direct light, identity texture, common texture and 3D shape, respectively, as shown in Figure 3, where we warp the image to the UV space to discard the 3D shape. We generate the 8 inputs for all the samples in FaceForensics++ [30] and train a VGG16 for evaluation. The results are shown in Table 1.

There are several noteworthy results in Table 1. Firstly, the poor performances of In-b and In-f indicate that the topsoil facial appearance, i.e., the ambient light and the common texture, is easy to fake and carries few forgery clues. Secondly, by comparing In-c and In-g, or comparing In-d and In-h, we find consistent improvements after warping the fine-grained appearance to the UV space. Therefore we suppose normalizing the 3D shape makes the CNN concentrate on specific face regions and simplifies the detection task. Thirdly, by comparing In-g and In-h, we find that direct light benefits the forgery detection remarkably. Moreover, we find that many fake samples that In-c identifies but In-d does not are under intense light, verifying our assumption that current manipulation methods cannot simulate direct light properly. In all, the 3D shape, ambient light, and common texture have few forgery patterns but contribute most of the pixel values, so they should be normalized, while the weak signals of direct light and identity texture should be highlighted due to the embedded critical clues. In the following implementation, we use facial trend to represent the group of 3D shape, ambient light and common texture, and name the combination of direct light and identity texture as the facial detail.

Table 1. Evaluation of the 8 inputs on FFpp [30]. The inputs are the compositions of 5 components, including: 3D shape (shape), ambient light (amb), direct light (dir), common texture (ctex) and identity texture (itex). The examples of In-a to In-h are shown in Figure 3. The best results are highlighted.
Facial Detail Generation
Based on the analysis of the 3D decomposition, we aim to normalize the facial trend (the combination of 3D shape, ambient light and common texture) and highlight the facial detail (the combination of direct light and identity texture). A trivial method is optimizing all the parameters together as in Eqn. 4 in an analysis-by-synthesis [4] manner, but it costs too much computation. Thus, we propose an approximation to expedite the generation of facial detail for fast inference. We begin with the real-time generation of the 3D shape S by the state-of-the-art 3DDFA [43,17]. Then, we keep the 3D shape S and get the ambient and direct light by the spherical harmonic reflectance on the mean texture.
The spherical harmonics [40]:
$$H = [h_1, h_2, \ldots, h_9] \tag{5}$$
are a set of functions that form an orthonormal basis to represent the brightness changes due to illuminations:
$$\begin{aligned}
h_1 &= \frac{1}{\sqrt{4\pi}}, & h_2 &= \sqrt{\frac{3}{4\pi}}\, n_x, & h_3 &= \sqrt{\frac{3}{4\pi}}\, n_y, \\
h_4 &= \sqrt{\frac{3}{4\pi}}\, n_z, & h_5 &= \frac{1}{2}\sqrt{\frac{5}{4\pi}}\left(2 n_{z^2} - n_{x^2} - n_{y^2}\right), & h_6 &= 3\sqrt{\frac{5}{12\pi}}\, n_{yz}, \\
h_7 &= 3\sqrt{\frac{5}{12\pi}}\, n_{xz}, & h_8 &= 3\sqrt{\frac{5}{12\pi}}\, n_{xy}, & h_9 &= \frac{3}{2}\sqrt{\frac{5}{12\pi}}\left(n_{x^2} - n_{y^2}\right)
\end{aligned} \tag{6}$$
where $n_x, n_y, n_z$ are the x, y, z components of the vertex normals computed from the 3D mesh $\mathbf{S}$, and we use $n_{x^2}$ to denote a vector such that $n_{x^2,i} = n_{x,i}\, n_{x,i}$ for the ith vertex, defining $n_{y^2}$, $n_{z^2}$, $n_{xz}$, $n_{yz}$, and $n_{xy}$ similarly. With this set of basis functions, the face appearance under arbitrary illumination can be represented by the linear combination $(H\gamma) \cdot \mathbf{T}$, where $\mathbf{T}$ is the facial texture (vertex albedo), $\gamma = [\gamma_1, \gamma_2, \ldots, \gamma_9]$ is the 9-dimensional reflectance parameter vector and $\cdot$ is the dot product. We consider $\gamma_1 \cdot h_1$ as the ambient light and $[\gamma_2 \cdot h_2, \ldots, \gamma_9 \cdot h_9]$ as the direct light.
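As a quick illustration, the 9 basis functions of Eq. (6) can be evaluated at every vertex normal with a few lines of NumPy; this is a sketch under the stated conventions (unit normals, N vertices), and the helper name is an assumption of ours.

```python
import numpy as np

def sh_basis(normals):
    """First 9 spherical-harmonic basis functions of Eq. (6),
    evaluated at every vertex normal; returns H of shape (N, 9)."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    c2 = 3.0 * np.sqrt(5.0 / (12.0 * np.pi))
    return np.stack([
        np.full_like(nx, 1.0 / np.sqrt(4.0 * np.pi)),            # h1
        c1 * nx, c1 * ny, c1 * nz,                                # h2..h4
        0.5 * np.sqrt(5.0 / (4.0 * np.pi))
            * (2 * nz**2 - nx**2 - ny**2),                        # h5
        c2 * ny * nz, c2 * nx * nz, c2 * nx * ny,                 # h6..h8
        1.5 * np.sqrt(5.0 / (12.0 * np.pi)) * (nx**2 - ny**2),    # h9
    ], axis=1)
```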
In our implementation, we degrade $\mathbf{T}$ to the mean texture $\overline{\mathbf{T}}$ for fast inference and get $\gamma$ from the least squares solution of the following equation:

$$I(\mathbf{S}) = (H\gamma) \cdot \overline{\mathbf{T}}, \tag{7}$$

where $I(\mathbf{S})$ are the pixels at the vertex positions. Based on the harmonic reflectance parameters, we further get the common texture from the following linear equation:
$$I(\mathbf{S}) = (H\gamma) \mathbin{.*} (\overline{\mathbf{T}} + \mathbf{B}\beta), \tag{8}$$

where $\overline{\mathbf{T}}$ and $\mathbf{B}$ are from the PCA texture model, $\gamma$ is the reflectance parameter vector estimated in Eqn. 7, and $\beta$ contains the common texture parameters. Finally, we obtain the facial detail by:
$$\mathbf{FD} = UV\!\left(I - (h_1\gamma_1) \mathbin{.*} (\overline{\mathbf{T}} + \mathbf{B}\beta),\ \mathbf{S}\right), \tag{9}$$

where $\mathbf{FD}$ is the facial detail and $UV(I, \mathbf{S})$ is the UV warping that transfers the image pixels in $I$ to UV space under the constraints of the 3D mesh $\mathbf{S}$. We suppose that the facial detail highlights the forgery patterns of a forged image, making it more suitable as the input of manipulation detection neural networks.
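Putting Eqns. (7)-(9) together, a plausible least-squares sketch of the facial detail pipeline is shown below; it reuses sh_basis from the previous sketch, and the uv_warp callable is an assumed helper standing in for the UV map of [15], so the whole block should be read as a schematic under our assumptions rather than the exact implementation.

```python
import numpy as np

def fit_gamma(pixels, H, mean_tex):
    """Least-squares solution of Eq. (7), solved per RGB channel.
    pixels  : (N, 3) image colors I(S) sampled at the projected vertices
    H       : (N, 9) spherical-harmonic basis from sh_basis(normals)
    mean_tex: (N, 3) mean texture T_bar
    Returns gamma with shape (9, 3)."""
    gamma = np.empty((9, 3))
    for c in range(3):
        A = H * mean_tex[:, c:c + 1]          # rows of (H gamma) .* T_bar
        gamma[:, c] = np.linalg.lstsq(A, pixels[:, c], rcond=None)[0]
    return gamma

def facial_detail(pixels, H, mean_tex, B, gamma, uv_warp):
    """Eqns. (8)-(9): fit the common-texture coefficients beta, remove
    the ambient-lit facial trend, and warp the residual to UV space.
    B      : (N, 3, k) PCA texture basis (reshaped from the BFM)
    uv_warp: callable rasterizing per-vertex residuals to a UV image."""
    shading = H @ gamma                        # (N, 3), full H gamma
    # Eq. (8): per-channel least squares for beta on the lit residual
    k = B.shape[2]
    A = (shading[:, :, None] * B).reshape(-1, k)
    b = (pixels - shading * mean_tex).reshape(-1)
    beta = np.linalg.lstsq(A, b, rcond=None)[0]
    # Eq. (9): keep only the ambient term h1*gamma1 in the trend
    ambient = H[:, :1] * gamma[:1, :]          # (N, 3)
    trend = ambient * (mean_tex + B @ beta)    # ambient-lit common texture
    return uv_warp(pixels - trend)             # facial detail FD in UV space
```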
FD²Net
Our Forgery-Detection-with-Facial-Detail Net (FD²Net) is briefly presented in Figure 4. We adopt the state-of-the-art XceptionNet [9] as the backbone and then merge a two-stream structure and an attention mechanism into it, exploring the potential enhancement provided by the correlation among the multi-modality data and by localizing the forgery clues.
Multi-modality Fusion with Two-stream
Although the facial detail highlights the critical clues in fine-grained texture and shading, it may warp specific forgery patterns and miss external face regions. Therefore, we consider facial detail and the original face image as complementary clues and characterize forgery detection as a multi-modality task, regarding facial detail and the pixel-level face image as two different modalities. To be specific, we adopt a two-stream architecture to study the combination of these two modalities, where each stream is equipped with the XceptionNet [9] to detect face image and facial detail separately. The classifier makes a cooperative decision at the end based on the joint representations of the two streams.
We evaluate three ways of fusing the representations of the two modalities, as illustrated in Figure 4. Firstly, we implement the score fusion (SF), which performs real/fake classification in each stream and adds their confidences as the final score; the result is real if the score is larger than 1 and fake otherwise, as shown in Figure 4(a). Secondly, we implement the feature fusion (FF), where each stream ends with a fully-connected (FC) layer, and their features are concatenated into a one-dimensional vector and transformed by another FC layer to make the final decision, as shown in Figure 4(b). Finally, we implement the halfway fusion (HF), which concatenates the intermediate 2D feature maps for further single-stream processing, as shown in Figure 4(c). To be specific, we convolve the two inputs by the first half of the backbone, i.e., before the 7th block of XceptionNet [9], and stack their 2D outputs as the feature map afterward. Then we adopt the last half of XceptionNet to process the stacked feature in a single stream and make the final decision. In our experiments, we find that halfway fusion performs the best with a smaller parameter size, which may benefit from preserving spatial information when fusing the local features.
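The halfway fusion variant can be summarized by the following schematic PyTorch sketch; the tiny convolutional stacks are placeholders for the two halves of XceptionNet, and the exact split point (before the 7th block) is not reproduced here.

```python
import torch
import torch.nn as nn

class HalfwayFusionNet(nn.Module):
    """Two-stream halfway fusion: each modality runs through the first
    half of a backbone, the 2D feature maps are concatenated along the
    channel dimension, and a shared second half makes the decision."""

    def __init__(self, feat_ch=64):
        super().__init__()
        def half(in_ch):  # placeholder for the first half of Xception
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
        self.img_stream = half(3)
        self.detail_stream = half(3)
        self.fused = nn.Sequential(  # placeholder for the second half
            nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_ch, 2))   # real / fake logits

    def forward(self, img, detail):
        f = torch.cat([self.img_stream(img),
                       self.detail_stream(detail)], dim=1)
        return self.fused(f)

# e.g. logits = HalfwayFusionNet()(torch.randn(2, 3, 64, 64),
#                                  torch.randn(2, 3, 64, 64))
```

Concatenating the maps while they are still spatial keeps the fused features aligned to similar receptive fields, which matches the local-fusion argument above.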
Detail-guided Attention
Many recent works have adopted the attention mechanism to enhance forgery detection performance [10]. The embedded attention module exploits more distinguishing characteristics by locating the plausible manipulated region, and also strengthens the explainability of the classifiers [10,5]. Unlike previous methods, which either need the ground-truth manipulated regions or adaptively learn the attention map from the real/fake labels [10], we supervise our attention map with the facial detail difference between fake and real faces.
Table 2. Test results (%) of the two-stream FD²Net and its variants on FFpp, DFD and DFDC. The "Img" is the stream detecting original images only. The "Detail" is the stream detecting facial details only. The "Img (× 2)" is the one-stream network on original images but having the same parameter size as the two-stream structure. The SF, FF and HF refer to score fusion, feature fusion and halfway fusion, respectively. The best results are highlighted.

Generally, an attention map $M_{att}$ is constructed from an intermediate feature map $F$ by a small regression network, $M_{att} = N(F, \theta_{att})$, with $\theta_{att}$ as its parameters. Then the intermediate feature is refined by the attention map as $F' = F \odot \mathrm{Sigmoid}(M_{att})$, where $\odot$ denotes element-wise multiplication. In this work, we propose a novel approach to train the attention network $\theta_{att}$. When constructing the batches during network training, two images are selected for each sample, one real $I_{real}$ and one fake $I_{fake}$. The absolute value of the grayscale facial detail difference is taken as a weak supervision of the attention module:

$$\mathcal{L}_{att} = \left\| N(F, \theta_{att}) - \left| FD(I_{real}) - FD(I_{fake}) \right| \right\|, \tag{10}$$

where $FD(\cdot)$ is the facial detail extraction. Then the total loss is:

$$\mathcal{L} = \mathcal{L}_{cls} + \lambda_{att} \mathcal{L}_{att}, \tag{11}$$
where $\lambda_{att}$ is the weight of the attention loss and $\mathcal{L}_{cls}$ is the cross-entropy loss performing real/fake classification.
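A possible PyTorch sketch of the detail-guided attention and its supervision is given below; the 1x1-conv regressor, the L1 penalty, and the bilinear resizing of the target are our assumptions where the text leaves the details open.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailGuidedAttention(nn.Module):
    """Regresses an attention map from an intermediate feature map and
    refines it as F * sigmoid(M); the map itself is supervised by the
    grayscale facial-detail difference, Eq. (10)."""

    def __init__(self, channels):
        super().__init__()
        self.regress = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):
        att = self.regress(feat)                 # M_att, shape (B, 1, H, W)
        return feat * torch.sigmoid(att), att

def attention_loss(att, detail_real, detail_fake):
    """Eq. (10): match the map to |FD(real) - FD(fake)| converted to
    grayscale and resized to the map's spatial size (L1 is one choice)."""
    target = (detail_real - detail_fake).abs().mean(dim=1, keepdim=True)
    target = F.interpolate(target, size=att.shape[-2:],
                           mode='bilinear', align_corners=False)
    return F.l1_loss(att, target)

# Total objective, Eq. (11): loss = loss_cls + lambda_att * attention_loss(...)
```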
Experiments
In this section, we introduce the datasets, the experiment setups, the extensive experiment results of the ablation studies, and the comparison with previous works, in sequence. Training Dataset. FaceForensics++ (FFpp) [30] is a benchmark dataset released recently for facilitating evaluation among facial manipulation detection methods. There are 1k original video sequences, of which 720, 140 and 140 videos are used for training, validation and testing, respectively. These original videos are manipulated by four state-of-the-art face manipulation methods, i.e., DeepFakes (DF) [12], Face2Face (F2F) [35], FaceSwap (FS) [20], and NeuralTextures (NT) [33]. Besides, the raw video sequences are degraded with different compression rates (0, 23, 40) to simulate real situations [30]. We select the HQ version (c23) of FFpp, considering the extensive post-processing imposed on the original data before they go public, and sample 100 frames for each video in the experiments. Test Datasets. We adopt the following datasets for performance and generalization evaluation. 1) The testing set of FFpp as described above.
2) The DeepFake Detection dataset (DFD) [14], containing hundreds of original videos and thousands of manipulated videos, released by Google for promoting research on synthetic video detection. 3) The Deepfake Detection Challenge dataset (DFDC) [6], containing over 100k video sequences captured with over 3k paid actors and manipulated videos covering DeepFake, GAN-based, and non-learned methods, released recently by Facebook AI for the corresponding Kaggle competition¹. Implementation Details. For the facial detail generation, we construct the 3D face shape by 3DDFA [43,17], perform UV warping with the UV map in [15], and acquire the common texture by fitting the PCA texture model in the Basel Face Model (BFM) [26]. For the neural network, we introduce XceptionNet [9] as the backbone and fuse the feature maps after the 4th block of the middle flow of XceptionNet when implementing the halfway fusion structure. The Adam optimizer is utilized for training with weight decay equal to $5 \times 10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and batch size set to 32. The initial learning rate is $10^{-4}$, then changed to $5 \times 10^{-5}$ at epoch 15, to $5 \times 10^{-6}$ at epoch 23, to $10^{-6}$ at epoch 28, and to $5 \times 10^{-7}$ for the rest from epoch 32. An early-stop module controls the end of the training process, terminating the training if the loss on the validation set does not fall for 7 consecutive epochs. In our implementation, the total number of epochs is about 25. Besides, the $\lambda_{att}$ in the loss function is set to 1.
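For reference, the optimization setup described above can be sketched as follows; the placeholder model and the fixed epoch cap stand in for FD²Net and the early-stop module, which are assumptions of this sketch.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder standing in for FD²Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), weight_decay=5e-4)
# Hand-set learning-rate drops from the paper (epoch -> new lr)
lr_by_epoch = {15: 5e-5, 23: 5e-6, 28: 1e-6, 32: 5e-7}

for epoch in range(40):  # early stopping would usually end around epoch 25
    if epoch in lr_by_epoch:
        for group in optimizer.param_groups:
            group['lr'] = lr_by_epoch[epoch]
    # ... one training epoch with batch size 32, then validation-loss
    #     early stopping with a patience of 7 epochs ...
```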
Ablation Studies
Analysis of the Two-stream Network
We regard face image and facial detail as two complementary modalities and implement a two-stream network to fuse their clues. To evaluate each modality's performance and the best fusion manner, we quantitatively evaluate FD²Net in different variants: one-stream with original images only, one-stream with facial details only, and two-stream fused by score fusion, feature fusion, and halfway fusion, respectively. We do not adopt the attention module here. The results are listed in Table 2.

Firstly, the one-stream structure considering face images or facial details achieves similar results, but worse than those of the two-stream structure, especially in cross-data evaluation. Secondly, the deteriorated performance of the score fusion indicates the potential sophistication of multi-modality clue fusion and the inflexibility of the handcrafted decision function. Nevertheless, the performances on FFpp, DFD, and DFDC jointly become better with the feature fusion, validating the complementarity of the two modalities. Finally, the halfway fusion further promotes the results on DFD and DFDC through the local fusion manner, which makes the fused features correspond to similar receptive fields. We also find that the two-stream structure outperforms the one-stream with double parameters, ruling out a larger parameter size as the source of the performance improvement.

Table 3. The ablation study results (%) on the Detail-guided Attention module in FD²Net. The "Img Attention" and "Detail Attention" refer to the attention module on the image stream and detail stream, respectively. We also explore the performance without the supervised signal from the facial detail difference and present the corresponding results in the last row with "unsupervised" in brackets.

Table 4. The ablation study results (%) of facial detail in FD²Net. The S1 and S2 refer to the first and the second stream in the network. The "Tex Norm" refers to texture normalization and the "Shape Norm" refers to shape normalization. The best results are highlighted.
Analysis of the Detail-guided Attention
To highlight the plausible manipulated region, we introduce the detail-guided attention module between the fourth and fifth block of the middle flow of the XceptionNet. Based on the two-stream network with halfway fusion, we evaluate some of the network variants by separately deploying the attention module on each stream and exploring whether the supervised signal improves its effectiveness in further discussion. Results in Table 3 demonstrate the improvements of XceptionNet on all datasets with the additional attention module, either implemented on the image stream or the detail stream. The network with attention modules on both streams achieves the best performance. Besides, we train the attention module indirectly from the real/fake labels, ignoring the supervised signal by the facial detail difference, and observe a performance drop, indicating the effectiveness of the supervised signals on the forgery detection.
Ablation Study of Facial Detail in FD²Net
Although Table 1 already presents an extensive ablation study of each 3D component, we further evaluate facial detail when it acts as a complementary clue in the two-stream network. In this section, we decompose the effectiveness of facial detail into shape normalization and texture normalization, where shape normalization refers to warping the facial pixels to the UV space, e.g., Figure 3(e), and texture normalization refers to decomposing and removing the ambient light and common texture from the pixel values, e.g., Figure 3(c). We adopt the two-stream network with both halfway fusion and the supervised attention module and present the results in Table 4.
The first row is a one-stream network which directly detects forgery from original face images without using any facial detail information. Adding the second stream can promote the AUC scores compared with the primary one-stream structure, either implementing shape or texture normalization. Furthermore, the introduction of the facial detail, i.e., adopting the combination of shape and texture normalization, helps the detector achieve the best performance. These progressive improvements validate that the proposed facial detail contributes to the forgery detection and complements the original image.
Comparison with other methods
Some previous works [18,13,25,21] indicate potential generalization failure when detecting unseen manipulation methods or datasets. In this section, we compare our method with previous state-of-the-art methods to explore our performance in both scenarios.
Cross-data Evaluation. Following Khodabakhsh et al. [18], we quantitatively analyze the generalization ability on unseen data and compare it with other methods, including the primary XceptionNet detection method [30] and the ensemble of EfficientNet variants [5]. We train the model on FFpp, test it on DFD (HQ) and DFDC following the ablation study's configuration, and list the results in Table 5. The improvement in generalization on unseen data demonstrates that the additional facial detail enables the detection model to effectively extract more discriminative and general features from fake images, even from a distribution different from that of the training dataset. It is worth noting that the extraction of facial detail is independent of any forgery data, making it the probable reason for better generalization.
Evaluations on Different Manipulation Methods. Following Li et al. [21], we evaluate the robustness of our method on unseen manipulation methods and compare the performance with previous methods. We introduce the data manipulated by different methods, i.e., Face2Face (F2F) and FaceSwap (FS) under the low compression (c23) from FFpp, train our approach on F2F and test it on both F2F and FS, taking the correct prediction accuracy as the evaluation metric. The results are listed in Table 6. The proposed method achieves 98.22% on F2F and 86.54% on FS, a significant improvement compared to the current state-of-the-art. The improvements mainly benefit from the highlighted clues extracted from the facial detail and the plausible forged regions indicated by the attention map. In particular, some compared methods also consider complementary information from various modalities. Nguyen et al. [25] propose sharing knowledge learned simultaneously from images and videos to enhance the detection performance on both types of data. Li et al. [21] explore the estimation of the blending boundary directly from the face image to exploit the possibility of decomposing an image into a mixture of two images from different sources. Besides, we also include signal decomposition methods in the frequency domain, introducing the three base high-pass filters of the FAD stream in [28] to the primary XceptionNet (Xception + HP Filter). Unlike these methods, the proposed FD²Net strips the ambient lighting and the common appearance with 3D decomposition, exploiting the personalized, ambient-free facial detail to extract more robust discriminative features.
Conclusion
This paper proposes a novel face forgery detection method based on the 3D decomposition of the face image. By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find critical forgery clues in the direct light and the identity texture. To utilize this observation, we propose the facial detail, which is constructed by warping image pixels to the UV space and removing the topsoil facial texture, to highlight the subtle forgery patterns. The clues in the facial detail and the original image are fused by a two-stream network, FD²Net, for the final real-or-fake classification. Meanwhile, an attention module supervised by the facial detail is proposed to highlight the plausible manipulated region. Extensive experiments demonstrate the effectiveness and generalization of the proposed FD²Net on the FaceForensics++ dataset. In general, our work provides a novel direction for finding forgery clues by analyzing how an image is generated in physics, following the analysis-by-synthesis idea.
Figure 2. Samples under strong direct light. The first row is the original faces, the second row is the corresponding fake samples, where evident inconsistency exists in the dim region.

Figure 3. The 8 inputs for forgery detection. (a) Face Image, (b) Ambient Light + Common Texture + 3D Shape, (c) Identity Texture + Direct Light + 3D Shape, (d) Identity Texture + 3D Shape, (e) Face Image w/o Shape, (f) Ambient Light + Common Texture, (g) Identity Texture + Direct Light, (h) Identity Texture.

Figure 4. The overview of the Forgery-Detection-with-Facial-Detail Net (FD²Net). We introduce a two-stream structure to combine the clues from original images and facial details. The results of the two streams are fused by three methods: (a) score fusion (SF), (b) feature fusion (FF) and (c) halfway fusion (HF). Besides, we insert a detail-guided attention module, which is supervised by the facial detail difference, in the middle of the backbone network.
Table 5. Performance (%) comparison among previous state-of-the-art methods on the unseen datasets DFD (HQ) and DFDC. The best results are highlighted.

Model                        Training   DFD (HQ)                 DFDC
                             dataset    AP     AUC    EER        AP     AUC    EER
Xception [30]                FFpp       88.07  65.57  38.38      85.60  62.17  39.99
EfficientNetB4 Ensemble [5]  FFpp       89.35  72.82  34.86      85.71  63.03  38.86
FD²Net                       FFpp       89.84  79.08  25.18      87.93  67.70  34.91

Table 6. Detection accuracy comparison (%) with previous methods on F2F and FS in FFpp. We adopt the HQ (c23) version data from FFpp to discuss the robustness on the unseen manipulation technique. The best results are highlighted.

Model                  Training data   Acc (F2F)   Acc (FS)
MesoInception4 [1]     F2F             84.56       56.71
VA-LogReg [24]         F2F             83.62       59.45
LAE [13]               F2F             90.34       62.51
Multi-task [25]        F2F             91.27       55.04
Face X-ray [21]        F2F             97.73       85.69
Xception + HP Filter   F2F             97.98       57.46
FD²Net                 F2F             98.22       86.54
¹ https://www.kaggle.com/c/deepfake-detection-challenge
Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. Mesonet: a compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-7. IEEE, 2018.

BBC Bitesize. Deepfakes: What are they and why would I make one? https://www.bbc.co.uk/bitesize/articles/zfkwcqt, 2020.

Volker Blanz and Thomas Vetter. Face recognition based on fitting a 3d morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063-1074, 2003.

Volker Blanz and Thomas Vetter. Face recognition based on fitting a 3d morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063-1074, 2003.

Nicolò Bonettini, Edoardo Daniele Cannas, Sara Mandelli, Luca Bondi, Paolo Bestagini, and Stefano Tubaro. Video face manipulation detection through ensemble of cnns. arXiv preprint arXiv:2004.07676, 2020.

Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. The deepfake detection challenge dataset, 2020.

Tiago Carvalho, Fabio A. Faria, Helio Pedrini, Ricardo da S. Torres, and Anderson Rocha. Illuminant-based transformed spaces for image forensics. IEEE Transactions on Information Forensics and Security, 11(4):720-733, 2015.

Zehao Chen and Hua Yang. Manipulated face detector: Joint spatial and frequency domain attention network. arXiv preprint arXiv:2005.02958, 2020.

François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251-1258, 2017.

Hao Dang, Feng Liu, Joel Stehouwer, Xiaoming Liu, and Anil K. Jain. On the detection of digital face manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5781-5790, 2020.

Tiago José de Carvalho, Christian Riess, Elli Angelopoulou, Helio Pedrini, and Anderson de Rezende Rocha. Exposing digital image forgeries by illumination color classification. IEEE Transactions on Information Forensics and Security, 8(7):1182-1194, 2013.

Mengnan Du, Shiva Pentyala, Yuening Li, and Xia Hu. Towards generalizable forgery detection with locality-aware autoencoder. arXiv preprint arXiv:1909.05999, 2019.

Nick Dufour and Andrew Gully. Contributing data to deepfake detection research. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html, 2019.

Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3d face reconstruction and dense alignment with position map regression network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 534-551, 2018.

David Güera and Edward J. Delp. Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 1-6. IEEE, 2018.

Jianzhu Guo, Xiangyu Zhu, and Zhen Lei. 3ddfa. https://github.com/cleardusk/3DDFA, 2018.

Ali Khodabakhsh, Raghavendra Ramachandra, Kiran Raja, Pankaj Wasnik, and Christoph Busch. Fake face detection methods: Can they be generalized? In 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1-6. IEEE, 2018.

Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Niessner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. Deep video portraits. ACM Transactions on Graphics (TOG), 37(4):1-14, 2018.

Marek Kowalski. Faceswap github. https://github.com/MarekKowalski/FaceSwap.

Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, and Baining Guo. Face x-ray for more general face forgery detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5001-5010, 2020.

Yuezun Li, Ming-Ching Chang, and Siwei Lyu. In ictu oculi: Exposing ai created fake videos by detecting eye blinking. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-7. IEEE, 2018.

Zhengzhe Liu, Xiaojuan Qi, and Philip H.S. Torr. Global texture enhancement for fake face detection in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8060-8069, 2020.

Falko Matern, Christian Riess, and Marc Stamminger. Exploiting visual artifacts to expose deepfakes and face manipulations. In 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), pages 83-92. IEEE, 2019.

Huy H. Nguyen, Fuming Fang, Junichi Yamagishi, and Isao Echizen. Multi-task learning for detecting and segmenting manipulated facial images and videos. arXiv preprint arXiv:1906.06876, 2019.

Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3D face model for pose and illumination invariant face recognition. In Advanced Video and Signal Based Surveillance, 2009. AVSS'09. Sixth IEEE International Conference on, pages 296-301. IEEE, 2009.

Bo Peng, Wei Wang, Jing Dong, and Tieniu Tan. Optimized 3d lighting environment estimation for image forgery detection. IEEE Transactions on Information Forensics and Security, 12(2):479-494, 2016.

Yuyang Qian, Guojun Yin, Lu Sheng, Zixuan Chen, and Jing Shao. Thinking in frequency: Face forgery detection by mining frequency-aware clues. arXiv preprint arXiv:2007.09355, 2020.

Nicolas Rahmouni, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. Distinguishing computer graphics from natural images using convolution neural networks. In 2017 IEEE Workshop on Information Forensics and Security (WIFS), pages 1-6. IEEE, 2017.

Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. Faceforensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1-11, 2019.

José A. Stuchi, Marcus A. Angeloni, Rodrigo F. Pereira, Levy Boccato, Guilherme Folego, Paulo V.S. Prado, and Romis R.F. Attux. Improving image classification with frequency domain layers for feature extraction. In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1-6. IEEE, 2017.

Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman. Synthesizing obama: learning lip sync from audio. ACM Transactions on Graphics (TOG), 36(4):1-13, 2017.

Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019.

Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger, and Christian Theobalt. Real-time expression transfer for facial reenactment. ACM Trans. Graph., 34(6), 2015.

Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2387-2395, 2016.

Daniel Thomas. Deepfakes: A threat to democracy or just a bit of fun? https://www.bbc.com/news/business-51204954, 2020.

Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales, and Javier Ortega-Garcia. Deepfakes and beyond: A survey of face manipulation and fake detection. arXiv preprint arXiv:2001.00179, 2020.

Luisa Verdoliva. Media forensics and deepfakes: an overview. arXiv preprint arXiv:2001.06564, 2020.

Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A. Efros. Cnn-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.

Lei Zhang and Dimitris Samaras. Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3):351-363, 2006.

Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in gan fake images. In 2019 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6. IEEE, 2019.
Two-stream neural networks for tampered face detection. Peng Zhou, Xintong Han, I Vlad, Larry S Morariu, Davis, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Peng Zhou, Xintong Han, Vlad I Morariu, and Larry S Davis. Two-stream neural networks for tampered face detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1831-1839. IEEE, 2017. 2
Face alignment in full pose range: A 3d total solution. Xiangyu Zhu, Xiaoming Liu, Zhen Lei, Stan Z Li, IEEE transactions on pattern analysis and machine intelligence. 416Xiangyu Zhu, Xiaoming Liu, Zhen Lei, and Stan Z Li. Face alignment in full pose range: A 3d total solution. IEEE transactions on pattern analysis and machine intelligence, 41(1):78-92, 2017. 4, 6
| [
"https://github.com/cleardusk/3DDFA,",
"https://github.com/MarekKowalski/FaceSwap."
] |
[
"30 years in: Quo vadis generalized uncertainty principle?",
"30 years in: Quo vadis generalized uncertainty principle?"
] | [
"Pasquale Bosso \nDipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly\n\nINFN\nSezione di Napoli\nGruppo collegato di Salerno, Via Giovanni Paolo II132 I-84084Fisciano (SA)Italy\n",
"Giuseppe Gaetano Luciano \nApplied Physics Section of Environmental Science Department\nEscola Politècnica Superior\nUniversitat de Lleida\nAv. Jaume II, 6925001LleidaSpain\n",
"Luciano Petruzziello \nDipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly\n\nINFN\nSezione di Napoli\nGruppo collegato di Salerno, Via Giovanni Paolo II132 I-84084Fisciano (SA)Italy\n\nInstitut für Theoretische Physik\nUniversität Ulm\nAlbert-Einstein-Allee 1189069UlmGermany\n",
"Fabian Wagner \nDipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly\n"
] | [
"Dipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly",
"INFN\nSezione di Napoli\nGruppo collegato di Salerno, Via Giovanni Paolo II132 I-84084Fisciano (SA)Italy",
"Applied Physics Section of Environmental Science Department\nEscola Politècnica Superior\nUniversitat de Lleida\nAv. Jaume II, 6925001LleidaSpain",
"Dipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly",
"INFN\nSezione di Napoli\nGruppo collegato di Salerno, Via Giovanni Paolo II132 I-84084Fisciano (SA)Italy",
"Institut für Theoretische Physik\nUniversität Ulm\nAlbert-Einstein-Allee 1189069UlmGermany",
"Dipartimento di Ingegneria Industriale\nUniversità degli Studi di Salerno\nVia Giovanni Paolo II132 I-84084FiscianoSAItaly"
] | [] | According to a number of arguments in quantum gravity, both model-dependent and model-independent, Heisenberg's uncertainty principle is modified when approaching the Planck scale. This deformation is attributed to the existence of a minimal length. The ensuing models have found entry into the literature under the term Generalized Uncertainty Principle (GUP). In this work, we discuss several conceptual shortcomings of the underlying framework and critically review recent developments in the field. In particular, we touch upon the issues of relativistic and field theoretical generalizations, the classical limit and the application to composite systems. Furthermore, we comment on subtleties involving the use of heuristic arguments instead of explicit calculations. Finally, we present an extensive list of constraints on the model parameter β, classifying them on the basis of the degree of rigour in their derivation and reconsidering the ones subject to problems associated with composites. | null | [
"https://export.arxiv.org/pdf/2305.16193v1.pdf"
] | 258,887,459 | 2305.16193 | 935f5c95a95824543f107c2258077f5992a961bd |
30 years in: Quo vadis generalized uncertainty principle?
25 May 2023 (Dated: May 26, 2023)
Pasquale Bosso
Dipartimento di Ingegneria Industriale
Università degli Studi di Salerno
Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy
INFN
Sezione di Napoli
Gruppo collegato di Salerno, Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy
Giuseppe Gaetano Luciano
Applied Physics Section of Environmental Science Department
Escola Politècnica Superior
Universitat de Lleida
Av. Jaume II, 69, 25001 Lleida, Spain
Luciano Petruzziello
Dipartimento di Ingegneria Industriale
Università degli Studi di Salerno
Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy
INFN
Sezione di Napoli
Gruppo collegato di Salerno, Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy
Institut für Theoretische Physik
Universität Ulm
Albert-Einstein-Allee 11, 89069 Ulm, Germany
Fabian Wagner
Dipartimento di Ingegneria Industriale
Università degli Studi di Salerno
Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy
According to a number of arguments in quantum gravity, both model-dependent and model-independent, Heisenberg's uncertainty principle is modified when approaching the Planck scale. This deformation is attributed to the existence of a minimal length. The ensuing models have found entry into the literature under the term Generalized Uncertainty Principle (GUP). In this work, we discuss several conceptual shortcomings of the underlying framework and critically review recent developments in the field. In particular, we touch upon the issues of relativistic and field theoretical generalizations, the classical limit and the application to composite systems. Furthermore, we comment on subtleties involving the use of heuristic arguments instead of explicit calculations. Finally, we present an extensive list of constraints on the model parameter β, classifying them on the basis of the degree of rigour in their derivation and reconsidering the ones subject to problems associated with composites.
I. INTRODUCTION

Quantum theory and General Relativity are the two main pillars of our current understanding of physics. Yet, despite long research efforts and the surge of promising candidates, consensus on a predominant approach towards a unifying theory is still lacking. Besides conceptual issues, the development of theories of quantum gravity (QG) is severely complicated by the seeming experimental elusiveness of the Planck scale, which is where QG effects are expected to be strong. In the absence of direct empirical guidance, tentative progress can be made by concentrating on common predictions of the existing QG models. In this vein, arguments from different perspectives, such as string theory [1][2][3][4][5][6], asymptotically safe quantum gravity [7,8] and loop quantum gravity [9][10][11], as well as gedankenexperiments heuristically combining quantum theory and general relativity [12][13][14][15][16][17][18][19][20][21][22][23][24], indicate that the classical-to-quantum transition of gravity could be marked by the emergence of a minimal length of the order of the Planck scale (l_P ∼ 10^-35 m).
For, by now, 30 years, minimal-length effects have customarily been introduced into quantum mechanics by modifying the Heisenberg Uncertainty Principle (HUP). As a result, the uncertainty of the position operator develops a global minimum when approaching the Planck scale. The resulting uncertainty relation is commonly referred to as the Generalized Uncertainty Principle (GUP). As time has gone by, this effective scheme has proven to be fertile ground for phenomenological explorations on the interplay between quantum and gravitational features. In fact, a variety of applications have hitherto been considered in single-particle quantum mechanics, black hole physics, cosmology, astrophysics and statistical mechanics, among others. On a different note, the idea of analogue systems has been gaining traction recently [282][283][284][285][286][287].
Three decades into minimal-length model building, it is about time that a report on the status of the field and its challenges be given. This is why, in this short review, we highlight some of the difficulties encountered along the way. Note that we do not aim at thoroughly introducing the subject of GUPs; for a self-contained overview, the interested reader may consult the reviews [288,289], or, for a more recent account, Sec. 3 of [46]. We would like to emphasize that our choice of topics is unavoidably idiosyncratic, and we do not claim comprehensiveness. The main intent behind the present work lies in turning the spotlight on some open problems of the field and providing guidelines on how to properly address them. A definitive solution inevitably requires more effort, and goes beyond the scope of the present review.
For instance, it has been recently shown [290] that the uncertainty relation should receive relativistic corrections which are very much reminiscent of the GUP: similarly to QG-deformed wave functions, the single-particle sector of the Fock space in quantum field theory (QFT) inevitably acquires a spread in configuration space. However, as we show in Sec. III A, the two effects can be distinguished.
Another issue that has attracted attention in the last years is the classical limit. As has been shown in [291], the appearance of Planck's constant, if taken at face value, renders the ensuing classical theory trivial. In Sec. III B, we show that the same reasoning applied to the speed of light removes minimal-length effects in the nonrelativistic regime, i. e. for customary applications of the GUP. This indicates that it may be more instructive to understand the Planck mass m_P as fundamental, i. e. to keep it constant while letting ℏ tend to 0, which is known as the relative-locality limit [292]. As a result, the classical counterpart to GUP-deformed quantum theory is nontrivial.
On a different note, the introduction of an absolute minimal-length scale alone does not unambiguously fix the ensuing corrections to the dynamics embodied by the Hamiltonian [293]. Instead, as we demonstrate in Sec. III C, the choice of Hamiltonian is riddled with ambiguities. We also explain how symmetry arguments may help to find theoretically more appealing dynamics. For instance, in the relativistic context, it is well-known that the introduction of an absolute length scale requires a deformation of Lorentz symmetries to retain a relativity principle [294,295]. We highlight that this subtlety may trickle down to the nonrelativistic regime, where Galilean invariance would have to be deformed [296]. Otherwise, Galilean/Lorentz invariance is explicitly broken, with all the severe consequences this entails.
The resolution of this issue inevitably entails a relativistic generalization of the GUP, on which as of yet no consensus exists. As the nonrelativistic limit can be obtained from relativistic dynamics but not vice versa, this is a highly underdetermined problem. As a result, a number of mutually inconsistent approaches have been put forward. Similarly, there are different proposals towards QFTs with deformations of GUP-type. These distinct approaches are collected in Sec. III D.
Tackling quantum fields, in turn, requires the introduction of multiparticle states. Section III E is devoted to clarifying some misconceptions surrounding meso- and macroscopic objects in the context of the GUP. In fact, considering center-of-mass motion, QG effects diminish with the number of elementary constituents contained in an object. If this were not the case, we would see minimal-length deformations on the level of, say, soccer balls [297].
In Sec. III F we compare heuristic and explicit approaches to different GUP-deformed problems, and find a remarkable disparity between the results. These examples should act as a cautionary tale that heuristic reasoning should only be applied with due care and, if possible, be complemented with precise calculations.
Finally, we collect and classify, to our knowledge, all existing bounds on the dimensionless model parameter of the quadratic GUP (usually denoted β) in Sec. III G. In doing so, we account for the misconceptions in the context of multiparticle states stated above, and list other arising problems. Taking those caveats into consideration, we find that the most stringent constraint
β < 10 33 ,(1)
was found more than a decade ago. This clearly indicates that progress on the phenomenological level has stalled.
II. GENERALIZED UNCERTAINTY PRINCIPLES
While there are alternative ways of approaching limits to localization [298][299][300][301][302][303][304][305][306][307], it is customary to incorporate the minimal length into single-particle quantum mechanics as a deformation of the Heisenberg algebra. The modification of the uncertainty relation behind the term "GUP" is then immediately implied by the Robertson-Schrödinger relation [308,309]. The present Section is intended to briefly introduce the reader to the subject and to settle the notation.
In d dimensions, we define a GUP as a nonrelativistic, quantum mechanical model featuring a deformed Heisenberg algebra^1 of the form [111,311]

$$[\hat{x}^a, \hat{p}_b] = i\hbar\left(f(\hat{p}^2)\,\delta^a_b + \bar{f}(\hat{p}^2)\,\frac{\hat{p}^a\hat{p}_b}{\hat{p}^2}\right), \qquad [\hat{x}^a, \hat{x}^b] = 2i\hbar\,\theta(\hat{p}^2)\,\hat{x}^{[b}\hat{p}^{a]}, \qquad [\hat{p}_a, \hat{p}_b] = 0, \qquad (2)$$
where we introduced the analytic functions f, f̄ and θ of the magnitude of the momentum. These are related by the Jacobi identity $[[\hat{x}^a,\hat{x}^b],\hat{p}_c] = 2[[\hat{x}^{[a},\hat{p}_c],\hat{x}^{b]}]$, such that

$$\theta = 2f' - \frac{\bar{f}}{\hat{p}^2}\left[1 - 2(\log f)'\,\hat{p}^2\right], \qquad (3)$$

where primes denote derivatives with respect to $\hat{p}^2$.
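As a quick consistency check, the following sketch (a minimal sympy computation; units ℏ = l_P = 1 are assumed) evaluates Eq. (3) for the frequently used choice f = 1 + βp̂², f̄ = β′p̂² considered below:

```python
import sympy as sp

# Evaluate theta of Eq. (3) for f = 1 + beta*p^2, fbar = beta'*p^2 (units hbar = l_P = 1).
p2, beta, betap = sp.symbols('p2 beta beta_p', positive=True)
f = 1 + beta * p2
fbar = betap * p2

theta = 2 * sp.diff(f, p2) - (fbar / p2) * (1 - 2 * sp.diff(sp.log(f), p2) * p2)
print(sp.simplify(theta))
# equivalent to (2*beta - beta_p + beta*(2*beta + beta_p)*p2)/(1 + beta*p2),
# the well-known noncommutativity of the Kempf-Mangano-Mann model
```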
Given the deformed Heisenberg algebra (2), the Robertson-Schrödinger relation [308,309] (in its simpler but weaker Robertson form) becomes

$$\Delta x^a\,\Delta p_b \ge \frac{\hbar}{2}\left|\left\langle f\,\delta^a_b + \bar{f}\,\frac{\hat{p}^a\hat{p}_b}{\hat{p}^2}\right\rangle\right|. \qquad (4)$$
Considering directions a = b, we may rewrite this relation as

$$\Delta x^a \ge \frac{\hbar}{2\Delta p_a}\left\langle f + \bar{f}\,\frac{\hat{p}_a^2}{\hat{p}^2}\right\rangle. \qquad (5)$$
Here, the index a is not being summed over. We obtain a minimum to localizability if the right-hand side of this relation is bounded from below by a positive constant ℓ, the minimal-length scale. Take for example the often-applied model [48,61,312]

$$f = 1 + \frac{\beta\,l_P^2\,\hat{p}^2}{\hbar^2}, \qquad \bar{f} = \frac{\beta'\,l_P^2\,\hat{p}^2}{\hbar^2}, \qquad (6)$$
where l_P denotes the Planck length and β, β′ are dimensionless model parameters, supposed to be of order 1. If these parameters are positive, we obtain

$$\Delta x^a \ge \frac{\hbar}{2\Delta p_a}\left[1 + \frac{l_P^2}{\hbar^2}\left(\beta\langle\hat{p}^2\rangle + \beta'\langle\hat{p}_a^2\rangle\right)\right] \ge \hbar\,\frac{1 + (\beta + \beta')\,l_P^2\,\Delta p_a^2/\hbar^2}{2\Delta p_a} > \sqrt{\beta + \beta'}\,l_P = \ell, \qquad (7)$$
where the last inequality involves minimization with respect to ∆p a . While for the example given in Eq. (6) the existence of a minimal length could be verified by means of simple algebraic manipulations, this procedure cannot be generalized to other models. Recently, some of the authors of the present review have devised a different method to check generic models for the presence of a minimal uncertainty in the position [293]. As it provides a new viewing angle on the GUP which will prove useful below, we present this approach here.
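Before turning to that method, the minimization behind Eq. (7) can be checked symbolically; the following sketch (units ℏ = 1 assumed) reproduces the minimal length:

```python
import sympy as sp

# Minimize the right-hand side of Eq. (7) over Delta_p (units hbar = 1).
dp, beta, betap, lP = sp.symbols('Delta_p beta beta_p l_P', positive=True)
bound = (1 + (beta + betap) * lP**2 * dp**2) / (2 * dp)

dp_star = sp.solve(sp.diff(bound, dp), dp)[0]   # optimal momentum uncertainty
print(sp.simplify(bound.subs(dp, dp_star)))     # -> sqrt(beta + beta_p)*l_P, the ell of Eq. (7)
```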
Given the operator x̂^a, we can construct its wave-number conjugate k̂_a such that

$$[\hat{x}^a, \hat{k}_a] = i, \qquad (8)$$
where again the index a is not being summed over. The exact form of the commutator [x̂^a, k̂_b] for a ≠ b is irrelevant for the argument; for more details see [293]. The existence of a minimal length ℓ then poses a constraint on the spectrum of the wave-number operator. In particular, the space of wave numbers has to be bounded in such a way that

$$\mathrm{spec}(\hat{k}_a) = \left\{ k_a \;\middle|\; \lim_{k_{b\neq a}\to 0} k_a \in \left[-\frac{\pi}{2\ell}, \frac{\pi}{2\ell}\right] \right\}. \qquad (9)$$
^1 While there are also anisotropic models [310], these are relatively new and have not been dealt with thoroughly in the literature.
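Condition (9) can be probed explicitly; the sketch below (assuming the model of Eq. (6) with β′ = 0, in units ℏ = l_P = 1) obtains k(p) in one dimension from dk/dp = 1/f and recovers the minimal length of Eq. (7):

```python
import sympy as sp

# One-dimensional wave-number map for f = 1 + beta*p^2 (units hbar = l_P = 1).
p, beta = sp.symbols('p beta', positive=True)
f = 1 + beta * p**2

k = sp.integrate(1 / f, p)          # -> atan(sqrt(beta)*p)/sqrt(beta)
k_sup = sp.limit(k, p, sp.oo)       # -> pi/(2*sqrt(beta)): the wave numbers are bounded
ell = sp.pi / (2 * k_sup)           # impose spec(k) = [-pi/(2*ell), pi/(2*ell)] as in Eq. (9)
print(sp.simplify(ell))             # -> sqrt(beta), i.e. ell = sqrt(beta)*l_P, matching Eq. (7)
```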
For example, in one dimension this condition reduces to spec(k̂) = {k : k ∈ [−π/2ℓ, π/2ℓ]}, as the sketch above illustrates. Given a model as in Eq. (2), the operator k̂_a can be obtained as a function of p̂_a, such that this constraint (and with it the presence of a minimal length) can be checked for by analyzing the domain of k̂_a(p̂_b). The presence of a minimal length, in turn, has operational implications for the techniques used in solving problems. In particular, without infinite localization, the position operator ceases to be self-adjoint (while staying symmetric) [313], thereby making it impossible to define physical position eigenstates. Instead, we can define a quasi-position representation [41,313] on the basis of an overcomplete set of maximal-localization states with non-zero overlap.^2

To provide a physical model, the kinematics of the minimal-length deformation (represented by Eq. (2)) has to be complemented by dynamics. It is customary to define the underlying single-particle Hamiltonian as
$$\hat{H} = \frac{\delta^{ab}\,\hat{p}_a\hat{p}_b}{2m} + V(\hat{x}^a), \qquad (10)$$
where m stands for the mass of the particle at hand. This Hamiltonian automatically implies universal corrections [52] to eigenvalues of observables as well as to Heisenberg/Schrödinger dynamics (or Hamiltonian dynamics as the classical counterpart), which allows for a rich phenomenology, as indicated in the introduction.
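As an illustration of such universal corrections, the sketch below evaluates the first-order shift of harmonic-oscillator levels produced by a quartic perturbation δĤ = g p̂⁴; for the one-dimensional model of Eq. (6) the prefactor would be g = βl_P²/(3ℏ²m), an assumption tied to that particular model. Units ℏ = m = ω = 1 and a truncated ladder-operator basis are used:

```python
import numpy as np

# First-order shifts <n|p^4|n> for delta_H = g*p^4 (units hbar = m = omega = 1).
dim = 40
n = np.arange(5)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator, a|n> = sqrt(n)|n-1>
p = 1j * (a.conj().T - a) / np.sqrt(2)          # momentum, p = i*(a^dag - a)/sqrt(2)

p4_diag = np.real(np.diag(np.linalg.matrix_power(p, 4)))
print(p4_diag[:5])                              # 0.75, 3.75, 9.75, 18.75, 30.75
print((6 * n**2 + 6 * n + 3) / 4)               # analytic <n|p^4|n>: the shifts grow like n^2
```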
To put it in a nutshell, GUPs are a precisely defined class of models. Being precisely defined, however, does not preclude the existence of subtleties. This is the matter of the subsequent Section.
III. OPEN PROBLEMS
The present Section is dedicated to presenting some of the open problems and issues plaguing the field of GUPs. In particular, we compare relativistic and minimal-length-induced corrections to uncertainty relations. We further discuss ambiguities in the classical limit. Thereupon, the particular choice of Hamiltonian, Eq. (10), is scrutinized. This Hamiltonian is to emerge as the nonrelativistic limit of an underlying relativistic model. We review approaches towards this relativistic extension, pointing out the lack of consensus on the matter and touching upon QFT. In this context, the issue of multiparticle states, which has been the subject of misconceptions that we highlight, has to be developed. Furthermore, we note that there are disparities in results obtained by heuristic and explicit approaches to minimal-length models, warning against the light use of the former. Finally, we present the aforementioned collection of bounds, thereby correcting some misunderstandings in the literature.
A. Relativistic vs. GUP corrections
It has recently been pointed out that relativistic corrections to quantum mechanics derived from the single-particle sector of scalar quantum field theory may lead to uncertainty relations of the form [290]

$$\Delta x\,\Delta p \gtrsim \frac{\hbar}{2}\left(1 + \frac{3\,\Delta p^2}{4m^2c^2}\right) = \frac{\hbar}{2}\left(1 + \frac{3\,\lambda_C^2\,\Delta p^2}{4\hbar^2}\right), \qquad (11)$$
with the mass of the particle in question m and its (reduced) Compton wavelength λ_C = ℏ/mc. The similarity to Eq. (7) is obvious. Indeed, the effect even allows for an interpretation analogous to the GUP. In contrast to nonrelativistic positions, the eigenstates of the Newton-Wigner position operator [314] are not given by Dirac-delta functions, but have finite width. This is analogous to the quasi-position representation of the GUP [41]. This observation begs the questions: in which way do GUP corrections differ from relativistic ones? And can GUP effects sincerely be deemed nonrelativistic? Considering the expansion of the relativistic energy in one dimension,
$$E = \sqrt{k^2c^2 + m^2c^4} \simeq mc^2 + \frac{k^2}{2m} - \frac{k^4}{8m^3c^2}, \qquad (12)$$
we obtain the relativistic correction to the Hamiltonian $\delta\hat{H}_{\rm Rel} = -\lambda_C^2\,\hat{k}^4/(8\hbar^2 m)$. The corresponding GUP-induced correction to the Hamiltonian reads $\delta\hat{H}_{\rm GUP} = \beta\ell^2\,\hat{k}^4/(\hbar^2 m)$. Clearly, these two corrections are very similar. However, while the Compton wavelength appearing in the relativistic model is particle-specific, the minimal length ℓ ought to be universal. Thus, both can be told apart once different particle species are considered. Yet, this does not imply that the GUP can be understood to be of zeroth order in a 1/c-expansion, i. e. that it is nonrelativistic. This is the subject of the subsequent subsection.
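The distinction is easily quantified; the following sketch (assuming scipy's CODATA constants are available) compares the particle-specific Compton wavelength with the universal Planck length:

```python
import scipy.constants as const

hbar, c, G = const.hbar, const.c, const.G
l_P = (hbar * G / c**3) ** 0.5                   # Planck length, ~1.6e-35 m

for name, m in {"electron": const.m_e, "proton": const.m_p}.items():
    lam_C = hbar / (m * c)                       # reduced Compton wavelength
    print(f"{name}: lambda_C = {lam_C:.2e} m, (lambda_C/l_P)^2 = {(lam_C / l_P)**2:.1e}")
# electron -> ~6e+44, proton -> ~2e+38: the k^4 coefficients differ between species,
# so relativistic and GUP corrections can be told apart by comparing particles
```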
B. Consistent limits
In the GUP literature, it has been common practice for a long time (see for instance [268,312,315]) to investigate aspects of classical dynamics with deformations of GUP-type. However, it has been shown in a recent study [291] that the classical limit of the GUP is more involved than just naïvely applying the transformation

$$\frac{[\hat{x},\hat{p}]}{i\hbar} \to \{x, p\}. \qquad (13)$$
The issue lies in the fact that the length scale ℓ is understood to be proportional to the Planck length l_P = √(ℏG/c³). For reasons of clarity, here we resort to quadratic corrections to the Heisenberg algebra and work in one spatial dimension. As a result, the modified commutator in one dimension may be written as

$$[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta\,\frac{G}{\hbar c^3}\,\hat{p}^2\right), \qquad (14)$$
where β again amounts to a dimensionless model parameter. Classical physics should be recovered in the limit of vanishing ℏ on the level of expectation values,

$$\{x, p\} = \lim_{\hbar\to 0} \frac{\langle[\hat{x},\hat{p}]\rangle}{i\hbar} = 1 + \beta\,\frac{G}{c^3}\,\lim_{\hbar\to 0} \frac{\langle\hat{p}^2\rangle}{\hbar}. \qquad (15)$$
While the latter term depends on the quantum state the system is in, it has been convincingly shown that, due to the additional factor of ℏ^-1, the resulting Poisson brackets either diverge (a meaningless result) or stay unmodified [291]. Thus, it appears that GUP effects necessarily dissolve in the classical limit.^3 There are three important lessons to be drawn on the basis of this calculation:
• First, the classical limit is not the only regime of interest in quantum gravity phenomenology. If we, instead, take the relative-locality limit G → 0, ℏ → 0 with G/ℏ = const,^4 the corrections survive, eventually leading to modified Poisson brackets. As the name suggests, this viewpoint has been the subject of a number of studies in the context of doubly/deformed special relativity and relative locality [317]. Importantly, this makes it impossible to apply the resulting modifications to gravitational physics or dynamics on curved backgrounds of the kind studied, for instance, in [239,240,243,265].
• Second, the corrections to the algebra are of the order c^-3, i. e. relativistic. Therefore, they a priori should vanish when considering nonrelativistic quantum mechanics. More precisely, in the 1/c-expansion of the Hamiltonian the modifying term should appear after the first relativistic corrections. Thus, the same reasoning that trivializes GUP corrections to classical problems would also render them meaningless in the nonrelativistic regime.
• Third, neither is the speed of light generally infinite, nor do Planck's and Newton's constants vanish in reality.
Taking limits in one of the three dimensionful constants without considering the effect of the other two can lead to faulty conclusions. For phenomenological purposes, it can be sensible to consider QG corrections to classical or nonrelativistic dynamics even though they may only appear at higher orders in perturbative expansions of dimensionful fundamental constants as long as the effect can be distinguished from other, possibly much larger contributions.
We thus observe the complication of considering limits in different quantities at the same time. It can be well-motivated to study corrections to classical dynamics from Planck-scale suppressed effects, even though the interpretation is subtler than at the quantum level. However, classical objects are usually macroscopic, thus complicating the application of GUPs (see Sec. III E). Furthermore, due to them being intrinsically relativistic, it is important to try to understand the relativistic completion as well as the symmetries underlying GUP-deformed theories, an area of research which, as of yet, has not been adequately addressed.
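The two inequivalent limits can be made explicit in a few lines; the sketch below (treating ⟨p̂²⟩ as a fixed classical value p2) contrasts the naive classical limit of Eq. (15) with the relative-locality limit:

```python
import sympy as sp

# Naive classical limit vs. relative-locality limit of Eq. (15).
hbar, G, c, beta, p2, kappa = sp.symbols('hbar G c beta p2 kappa', positive=True)
bracket = 1 + beta * G * p2 / (hbar * c**3)      # <[x,p]>/(i*hbar), cf. Eq. (14)

print(sp.limit(bracket, hbar, 0))                # -> oo: the naive limit is meaningless
print(sp.simplify(bracket.subs(G, kappa * hbar)))
# with G/hbar = kappa held fixed: 1 + beta*kappa*p2/c**3, a finite deformation survives
```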
C. Symmetries and the choice of Hamiltonian
In Sec. II, we provided a short introduction into the field of GUPs by first defining a deformed commutator between x̂^a and p̂_b (cf. Eq. (2)), and then providing a model Hamiltonian in Eq. (10), which we repeat here for convenience:

$$\hat{H} = \frac{\delta^{ab}\,\hat{p}_a\hat{p}_b}{2m} + V(\hat{x}^a). \qquad (10)$$
However, we also showed that the mere existence of a minimal length does not fix the definition of the momentum p̂_a. Given a model as in Eq. (2), we can check for a minimal length, but that minimal length is just a byproduct of the deformation. In other words, simply proposing the Hamiltonian in Eq. (10) corresponds to a top-down approach. Yet, an exact intuition on a specific form of Eq. (2) and a subsequent definition of the Hamiltonian, as is usually required from top-down reasoning, is lacking. Given the wave-number conjugate to the position k̂_a introduced in Eq. (8), there is a constructive statement that can be made from the sole presence of the minimal-length scale itself. The boundedness of the ensuing wave-number space is not only a tool to check for the existence of a minimal length, but it can also be used as the basis for a bottom-up treatment of the ensuing models. The existence of an operator p̂_a satisfying Eq. (2) is not required on the level of kinematics; it amounts to an additional structure whose definition is somewhat arbitrary. In other words, kinematically, there is only one model of minimal-length quantum mechanics.
The definition of a momentum p̂_a, in turn, only has physical consequences once it is incorporated into the dynamics through the Hamiltonian given in Eq. (10). However, its definition was arbitrary in the first place, and the Hamiltonian in Eq. (10) inherits this degree of arbitrariness. Why, for example, should we not have chosen

$$\hat{H} = \frac{\hbar^2\,\delta^{ab}\,\hat{k}_a\hat{k}_b}{2m} + V(\hat{x}^a) \qquad (16)$$
instead of Eq. (10)? Why not any other function of k̂_a and x̂^b which reduces to the ordinary quantum mechanical Hamiltonian in the limit of vanishing minimal length? At first sight, there is no physical reason to choose one over the other. What could thus be an informed guess on the form of the Hamiltonian? When constructing dynamics, it is usually helpful to revert to spacetime symmetries. It has been shown that the Hamiltonian given in Eq. (10) violates Galilean invariance [296], the consequences of which have not been worked out yet. Those consequences, for example, may involve breaking Lorentz invariance.
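The physical difference between the two candidate Hamiltonians is easily visualized for a free particle; the sketch below assumes the one-dimensional model [x̂, p̂] = iℏ(1 + p̂²) in units ℏ = m = ℓ = 1, for which p(k) = tan k on the bounded spectrum k ∈ (−π/2, π/2):

```python
import sympy as sp

# Free dispersions implied by Eq. (10) vs. Eq. (16) (units hbar = m = 1, 1D, p(k) = tan(k)).
k = sp.symbols('k', real=True)
E_10 = sp.tan(k)**2 / 2     # Eq. (10): diverges at the wave-number cutoff k -> pi/2
E_16 = k**2 / 2             # Eq. (16): stays finite on the whole bounded spectrum

print(sp.series(E_10 - E_16, k, 0, 6))   # -> k**4/3 + O(k**6): the two choices agree at small
#                                        # wave numbers but differ beyond leading order
```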
Be that as it may, a comprehensive analysis of the symmetries of Eq. (10) is still lacking. Taking inspiration from doubly special relativity, Galilean invariance may be deformed instead of being strictly broken. This property, in turn, could provide constraints on the allowed forms of the Hamiltonian.
Relativistic models of deformed special relativity could then be understood as algebra deformations of GUP models introducing the speed of light as deformation parameter.^5 GUP models, in turn, could be derived from DSR by a nonrelativistic limit.^6 This is one of the possible relativistic generalizations of the GUP which are the subject of the subsequent subsection.
D. Relativistic extension
The Heisenberg algebra relates positions and momenta. While simple analogy with the nonrelativistic case appears to indicate a straightforward extension to Lorentzian spacetime of the form

$$[\hat{x}^\mu, \hat{p}_\nu] = i\hbar\,\delta^\mu_\nu, \qquad (17)$$
there are a number of shortcomings of this approach:
• First, in both relativistic as well as nonrelativistic quantum mechanics time is a parameter, not an operator (for some recent literature on this subject consult [319] and references therein). In particular, as had been first pointed out by Pauli [320] and rigorously shown in [321], it is impossible to define a self-adjoint time operator for physical systems described by a Hamiltonian with spectrum bounded from below.
• Second, the Heisenberg algebra is a hallmark of single-particle quantum theory. After all, the position x^µ represents a single worldline. However, a consistent extension of a quantum theory to the relativistic realm clearly necessitates the possibility of particle creation. It intrinsically has to be a theory of many particles, i. e. a QFT (for an instructive explanation of this fact see chapter one in [322]). In QFTs, positions, not being Poincaré charges, are not described by operators (leaving aside the Newton-Wigner operator representing spatial positions in the single-particle sector only [314]). As a result, it is unclear how to interpret the deformation of an uncertainty relation of the type (17).
However, instead of concentrating on the commutation relations of physical operators, it may be sufficient to consider the non-canonical transformation towards canonical variables. In particular, while the interpretation of noncoincident physical and canonical positions is unclear in QFT, the particular choice of unequal canonical and physical momenta can be made sense of. In other words, as above we may define the canonical conjugates to the positions, the wave numbers

$$\hat{k}_\mu = \hat{k}_\mu(\hat{p}). \qquad (18)$$
Clearly, this procedure is only possible for GUPs which can be represented in this way, i. e. the ones set on a commutative geometry (for more details see [111]).^7 Following the GUP philosophy, the relativistic dispersion relation may be understood in terms of the momentum p̂_µ(k̂), thus reading

$$-\hat{p}^2(\hat{k}) = m^2, \qquad (19)$$
with m the mass of the particle under investigation. This modification may be applied in a covariant way,^8 or break/deform Lorentz invariance [111]. This disparity brings to the surface the fundamental ambiguity from which relativistic extensions of the GUP innately suffer. How should the time component of the physical momentum be expressed in terms of its canonical counterpart? Say, for example, that in the nonrelativistic regime momentum and wave number are related as p̂_a = f(k̂²)k̂_a. Two different relativistic generalisations immediately come to mind. On the one hand, if understood covariantly, a correction of the form p̂_µ = f(k̂_ν k̂_ρ η^{νρ})k̂_µ is to be expected.^9 On the other hand, the energy may as well be chosen to stay unmodified, such that p̂_µ = (k̂_0, f(k̂²)k̂_a), thus breaking (or deforming) Lorentz symmetry. At best, we can thus say that a relativistic extension is inspired by an underlying GUP, but it cannot be derived unambiguously.
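The ambiguity can be made quantitative; the following sketch (units ℏ = c = 1, with f = 1 + βk² to first order in β, spatial wave number denoted K) contrasts the dispersion relations implied by the two choices:

```python
import sympy as sp

# Two inequivalent relativistic extensions of p_a = f(k^2)*k_a (units hbar = c = 1).
k0, K, m, beta = sp.symbols('k0 K m beta', positive=True)

# (i) deform spatial momenta only, p_mu = (k0, (1 + beta*K^2)*K_a); -p^2 = m^2 gives
k0_break = sp.sqrt(m**2 + (1 + beta * K**2)**2 * K**2)
print(sp.series(k0_break, beta, 0, 2))      # sqrt(K**2+m**2) + beta*K**4/sqrt(K**2+m**2) + ...

# (ii) covariant choice p_mu = (1 - beta*(k0^2 - K^2))*k_mu: with u = k0^2 - K^2,
# the perturbative ansatz u = m^2 + beta*c1 in (1 - beta*u)^2 * u = m^2 fixes c1
c1 = sp.symbols('c1')
eq = sp.expand((1 - beta * (m**2 + beta * c1))**2 * (m**2 + beta * c1) - m**2)
print(sp.solve(eq.coeff(beta, 1), c1))      # -> [2*m**4]: only the rest mass is shifted
```

The Lorentz-breaking option (i) modifies the dispersion at large wave numbers, whereas the covariant option (ii) merely renormalizes the mass; the two are physically inequivalent.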
On the conceptual level, it is important to bear in mind that these choices may have deeply differing implications. Do we, for example, want to consider a minimal length only, or a minimal time as well? If these statements are to be covariant, could it be more helpful to consider a minimal spacetime volume?^10 These questions are all the more important because at high energies a relativistic description is without alternative. The scale of the considered interaction (say, the center-of-mass energy in scattering processes), in turn, is the most obvious amplifier of Planck-scale effects. But a relativistic description by itself will not do. To make consistent predictions, it is required to consider deformations of the QFTs comprising the standard model.
Regarding QFTs, up until now, no consensus has been reached on the level of methods or techniques. The following approaches have been worked out in the past.
• A first ansatz made use of the Bargmann-Fock formalism [26,331], while introducing a minimal length as well as a minimal momentum. This idea has the added advantage that it circumvents the issue that there are no position and momentum eigenstates in these backgrounds. Instead, the theory is defined in a deformed but regular Bargmann-Fock basis.
• The same problem has been tackled in the context of canonical quantization. Specifically, using standard techniques related to models with a minimal length, fields are first described in terms of the corresponding modes, thus represented in momentum space [109,332,333]. Again, this avoids the cumbersome use of the quasi-position representation.
• A possible strategy consists in retaining the ordinary form of the position-space field equations while replacing the ordinary operators by their properly modified counterparts [70,82,85,[334][335][336][337][338][339][340][341][342]. However, this approach can at best be perturbatively connected to the GUP. Recall that there is no position representation in this context. As a result, this produces a higher-derivative theory, often raising the issues that such theories ordinarily present, such as Ostrogradsky instabilities and the presence of ghost fields.
• Alternatively, it is possible to apply an adequate generalization of the Fourier transform [41] from momentum space to the maximal-localization states. This ansatz grants a consistent quasi-position representation of the fields and the corresponding equations of motion [343].
• A further alternative is constituted by a path-integral approach, in which a momentum-space description of the Lagrangian density represents the obvious starting point [77,343,344]. Once again, since the density is then integrated over all possible values of the momentum, special care must be dedicated to the ultraviolet sector of the model.
• Several authors have instead preferred a completely different approach: that of imposing specific commutation relations between the field operators and their conjugates or between the field operators and their canonical momenta [345,346]. Typically, such modified commutators are inspired by the position-momentum commutators of nonrelativistic quantum mechanics (2), i.e., considering modifications depending on the canonical field momentum. However, in this approach, it is not always clear whether a minimal length is present.
Summarizing, there are a number of partially mutually inconsistent approaches towards QFT with a minimal length. A comprehensive comparative study of these different ideas, possibly unearthing connections between them, is lacking. Therefore, it is difficult to assess the state of the field regarding relativistic extensions. In order to be able to study QFT with a certain degree of rigour, it is necessary to understand multiparticle states, which are the subject of the subsequent subsection.
E. Fundamental constituents
A consistent theory of mechanics, which the GUP ought to be, must provide a description of composite particles. In typical analyses involving nonrelativistic quantum mechanics in the mesoscopic regime, the GUP is applied to both elementary particles and macroscopic aggregates (such as large molecular compounds) in exactly the same way. However, there are serious arguments against such a naïve application to macroscopic objects [347].
In the framework of doubly special relativity [294,295], loop quantum gravity [348][349][350] as well as three-dimensional gravity coupled to matter [351,352], it has been argued that the composition law for momenta needs to be deformed such that it becomes nonlinear. Such a deformation can always be treated as a small perturbation as long as the characteristic physical quantities of the considered system are not comparable with the Planck scale. Hence, this implies the absence of inconsistencies in the study of elementary particles. However, the above requirement breaks down when considering composite objects made up of a large number of small constituents, for which, e.g., the Planck mass (≃ 10^-5 g) does not represent an out-of-reach threshold. Therefore, it appears that, for such macroscopic systems, the nonlinear terms of the composition law of momenta should play a relevant role, but experimentally this is not the case. Since in our macroscopic world there is no imprint of QG, the said nonlinearity turns out to be an undesirable feature of the model. This issue is typically referred to as the "soccer-ball problem" [297,353,354], and it has been addressed with several proposals [292,353,355,356].
It may be tempting to transfer the reasoning developed so far in the context of the deformed composition of momenta and argue that the GUP is affected by the same ill-defined macroscopic limit. However, a simple translation of this conclusion, while highlighting a related issue, is not possible. On the contrary, QG effects deteriorate with increasing number of elementary constituents of a composite system rather than being enhanced.
To show this argument in a simple scenario, we extend the reasoning already proposed in [239,347] to a general class of deformed uncertainty relations. The starting point consists in introducing the modified commutation relations for the position and momentum operators belonging to the elements of an N-particle composite system, that is
$$[\hat{x}^i_I, \hat{p}_{J,j}] = i\hbar\,\delta_{IJ}\left[f(\hat{p}_I^2)\,\delta^i_j + \bar{f}(\hat{p}_I^2)\,\frac{\hat{p}^i_I\,\hat{p}_{I,j}}{\hat{p}_I^2}\right], \quad [\hat{x}^i_I, \hat{x}^j_J] = 2i\hbar\,\delta_{IJ}\,\theta(\hat{p}_I^2)\,\hat{p}^{[i}_I\hat{x}^{j]}_I, \quad [\hat{p}_{I,i}, \hat{p}_{J,j}] = 0, \qquad (20)$$
where {i, j} = {1, 2, 3} label vector components, whereas {I, J} = {1, 2, ..., N} denote the I-th and J-th particle. Given the dynamical variables of the constituents, we may define center-of-mass coordinates and momenta characterising the composite system as

$$\hat{X}^i = \frac{1}{N}\sum_{I=1}^N \hat{x}^i_I, \qquad \hat{P}_j = \sum_{I=1}^N \hat{p}_{I,j}. \qquad (21)$$

It can be shown that the ensuing deformed commutation relations at the level of center-of-mass positions and momenta read
$$[\hat{X}^i, \hat{P}_j] = \frac{1}{N}\sum_{I=1}^N\sum_{J=1}^N [\hat{x}^i_I, \hat{p}_{J,j}] = \frac{i\hbar}{N}\sum_{I=1}^N\left[f(\hat{p}_I^2)\,\delta^i_j + \bar{f}(\hat{p}_I^2)\,\frac{\hat{p}^i_I\,\hat{p}_{I,j}}{\hat{p}_I^2}\right], \qquad (22)$$

$$[\hat{X}^i, \hat{X}^j] = \frac{1}{N^2}\sum_{I=1}^N\sum_{J=1}^N [\hat{x}^i_I, \hat{x}^j_J] = \frac{2i\hbar}{N^2}\sum_{I=1}^N \theta(\hat{p}_I^2)\,\hat{p}^{[i}_I\hat{x}^{j]}_I. \qquad (23)$$
For simplicity, we impose that the collection of constituents undergoes collective quasi-rigid motion. This assumption is particularly tailored to recent experimental proposals aimed at detecting quantum gravitational signatures via quantum optics and mechanical oscillators [357][358][359]. In other words, consider the momenta to be approximately given by p̂_{I,i} ≃ P̂_i/N, ∀{I, i}. As a result, the modified commutation relations (22) and (23) become
$$[\hat{X}^i, \hat{P}_j] = i\hbar\left[f\!\left(\frac{\hat{P}^2}{N^2}\right)\delta^i_j + \bar{f}\!\left(\frac{\hat{P}^2}{N^2}\right)\frac{\hat{P}^i\hat{P}_j}{\hat{P}^2}\right], \qquad (24)$$

$$[\hat{X}^i, \hat{X}^j] = \frac{2i\hbar}{N^2}\,\theta\!\left(\frac{\hat{P}^2}{N^2}\right)\hat{P}^{[i}\hat{X}^{j]}. \qquad (25)$$

Considering the argument of the functions f, f̄ and θ, it is evident that the relevance of the deformations decreases quadratically with increasing N. Clearly, the thermodynamical limit (i. e., N → ∞) for the above functions is equivalent to the limit P̂ → 0, which yields the standard quantum mechanical picture with f → 1, f̄ → 0 and θ/N² → 0. This implies that macroscopic objects obey ordinary quantum mechanics. While the GUP thus circumvents the soccer-ball problem, this leaves us with a related issue. In removing the problem of nonlinearity, the reasoning creates an "inverse" soccer-ball problem in its place. It is unclear whether the particles deemed as elementary today prove to contain substructure tomorrow. To say that an electron, for example, suffers from Planck-level corrections as introduced here, implies that it marks the final step of the reductionist ladder, a somewhat metaphysical assertion to say the least. This issue appears to sink its roots in the limited nature of single-particle quantum mechanics and is expected to disappear at the level of QFT. Although the above argument does not provide a definite answer to the problem of fundamental constituents, it surely provides an indication towards the settlement of the issue.
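A back-of-the-envelope consequence of Eqs. (24)-(25) is that the center of mass of N constituents behaves like a single particle with an effective parameter β/N²; the sketch below (with purely illustrative numbers) shows how this rescales a macroscopic bound:

```python
# Rescaling of macroscopic bounds implied by the N^2 suppression (illustrative numbers).
def constituent_bound(com_bound, n_constituents):
    """Translate a bound obtained by treating a composite as a single particle
    into a bound on the constituent-level parameter: beta < com_bound * N^2."""
    return com_bound * n_constituents**2

# e.g. an object with ~1e22 constituents and a claimed macroscopic bound beta < 1e7
print(f"{constituent_bound(1e7, 1e22):.0e}")   # -> 1e+51: weakened by N^2 ~ 1e44
```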
F. Heuristic vs. explicit approaches
When browsing the literature on GUPs, it is not uncommon to come across calculations and claims based on heuristic arguments. Indeed, even though the deviations from standard quantum mechanics are deemed small (and thus treated perturbatively), the computational difficulty might require alternative routes for the solution of the considered problem. Issues of this kind motivate streamlined derivations of complex algebraic outcomes, relying on reasonable assumptions that heuristically lead to the same conclusion. A paradigmatic example along this line is given by the GUP-induced corrections to the Casimir effect, which were originally deduced by means of a full-fledged QFT procedure [333], and later on recovered within a simplified heuristic scheme [229,360].
However, heuristic considerations do not always reproduce results stemming from explicit computations. An example of a discrepancy between the two approaches has been recently pointed out in the literature in the context of the Schwinger effect in a (anti-)de Sitter space [361]. In this context, it has been demonstrated that, although the corrections to the production rate of particle-antiparticle pairs in the explicit and heuristic computations exactly match in absolute value, there is a fundamental tension between the two approaches in the predicted sign of the contributions. In a nutshell, before the observation contained in [361], it was widely accepted that the presence of a non-vanishing cosmological constant Λ can be embedded in nonrelativistic quantum mechanics by means of a deformed commutator between position and momentum operators, namely [362]

$$[\hat{x}_i, \hat{p}_j] = i\hbar\,\delta_{ij}\left(1 - \frac{\Lambda}{3}\,\hat{x}^2\right), \qquad (26)$$
with the constant Λ. For Λ > 0 (Λ < 0) the model is supposed to mimic a (anti-)de Sitter background. However, for the corrections to the Schwinger effect based on heuristic arguments to be equivalent to explicit calculations, the interpretation of positive (negative) Λ as modelling (anti-)de Sitter space has to be reversed. This contradicts the existing literature on the subject. Although it does not represent a striking rebuttal of heuristic methods, it certainly casts doubt on their general validity. The previous example is not the only instance that highlights the distinctions between heuristic and explicit results. As a matter of fact, when considering the GUP-corrected Unruh temperature, there is a mismatch between the heuristic [363][364][365] and the explicit [227,366,367] predictions. On the one hand, QFT calculations indicate that GUP corrections, being frequency-dependent, spoil the thermal property of the radiation spectrum.^11 On the other hand, that the radiation spectrum be thermal is a basic assumption of the heuristic approach. Therefore, GUP corrections to the Unruh effect only manifest through a shift of the radiation temperature. The ensuing disparity is not only quantitative but also qualitative.
An analogous conundrum arises in the gravitational context, i. e., when considering the Hawking radiation emitted by black holes: while heuristic derivations [62, 105, 112-123, 125-178, 194, 195, 218, 243, 369, 370] a priori presume a blackbody spectrum (thus only modifying the value of the Hawking temperature), explicit ones [164, 179-191] based on the Parikh-Wilczek [371] formalism lead to frequency- and mass-dependent corrections. This is also implied by the calculations on the Unruh effect.^12 The non-thermal behavior has been argued to possibly solve the information paradox by letting information leak out [179]. This highlights that there is a qualitative difference between heuristic and explicit ansätze.
In a nutshell, we do not suggest to a priori reject results achieved by means of heuristic arguments; instead, we advise great care when applying this kind of analysis. When allowed by the mathematical complexity of the considered problem, it is preferable to rely on the explicit approach, and to apply heuristic methods solely to gain an intuition.
For example, many of the bounds on the GUP parameter are based on heuristic reasoning and require further assessment. These constraints can be found in the next subsection.
G. Bounds on the model parameter

As a phenomenological model, much of the literature on GUPs naturally consists in constraints on model parameters. However, not all bounds were obtained applying equal standards of rigour. Here, we try to give a comprehensive classification of the existing constraints in the literature (see Tabs. I, II and III). Overall, there are four different ways along the lines of which these kinds of studies have proceeded:
• Direct application of the deformed commutation relations to quantum mechanical systems. As such, they allow for the clearest interpretation. These studies, colour-coded green in the tables, mainly concentrate on tabletop experiments (Tab. I).
• Modifications to the black-hole and cosmological spacetimes derived from heuristic corrections to the Hawking temperature in the context of emergent gravity. As explained in Sec. III F, the interpretation of these derivations is still under debate. Indicated in orange, they naturally deal with astrophysical (Tab. II) and cosmological (Tab. III) observations.
• Deformed Poisson brackets as classical counterparts of GUPs. For the reasons outlined in Sec. III B, those are slightly more subtle to interpret, particularly in the case of applications involving the gravitational field. They are indicated in yellow in Tab. III.
• A redefinition of the ADM-mass such that it mimics the mass-dependence of the Compton wavelength at (sub-)Planckian scales in accordance with the black hole uncertainty principle correspondence [373][374][375]. As a reparametrization of a free constant in terms of another one, it is unclear whether the corrections to gravitational observables obtained using this approach are physical. Finding their natural arena in astrophysical observations (Tab. II), these studies are indicated in red.
The careful reader may have realized that Tabs. I and II contain entries with gray background in addition to the coloured ones. These indicate studies based on deformations of the Heisenberg algebra for macroscopic objects, whose (macroscopic) mass serves as an amplifier. Unfortunately, as laid out in Sec. III E and [239,347], this is problematic: the modifications to the canonical commutator should scale with the squared inverse number of constituents. Correcting for this issue, the bounds on β weaken by a factor of at least 10^44, thereby limiting the usefulness of macroscopic bodies in the search for a minimal length.
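To put the numbers of Tabs. I-III into perspective, a bound on β can be translated into a length scale via ℓ = √β l_P (cf. Eq. (7)); the conversion below is purely illustrative:

```python
import math

# Translate bounds on beta into length scales, ell = sqrt(beta)*l_P (illustrative).
l_P = 1.616e-35                                 # Planck length in metres
for beta_bound in (1e33, 1e46, 1e60):
    print(f"beta < {beta_bound:.0e}  =>  ell < {math.sqrt(beta_bound) * l_P:.1e} m")
# beta < 1e33 corresponds to ell < ~5e-19 m, roughly the distances resolved at the LHC
```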
In a nutshell, we see that the best bound on the model parameter of the quadratic GUP stems from scanning tunnelling microscopes [52, 357], reading

$$\beta < 10^{33}. \qquad (27)$$

This finding dates back to 2011. We thus see that, while the constraint proves that low-energy measurements can be competitive in terms of precision (it corresponds to the scales accessed at the LHC [376]), it has not been improved upon in more than 10 years. The road to Planckian precision remains long, and the field is in dire need of new ideas to make progress.^13

IV. DISCUSSION AND CONCLUSIONS

In this review, we have critically discussed some shortcomings and open problems arising within the framework of GUPs, which are often overlooked or naively addressed in the pertinent literature. We summarize our main results and conclusions below:

• Besides formal similarities, relativistic corrections to Heisenberg's uncertainty relation and the GUP are intrinsically different. In fact, while the former lead to a particle-specific deformed Hamiltonian, the latter are expected to be universal. In this context, we have also discussed consistent limits of the GUP, showing that, contrary to recent statements in the literature, it can be well-motivated to analyze corrections to classical dynamics from Planck-scale suppressed effects, although the interpretation may be subtler than in the quantum regime (see Sec. III A and III B).
• Reconsidering the essence of the minimal length, it becomes clear that the customary choice of Hamiltonian is not justified by the existence of the minimal length alone; instead, it introduces additional structure. This raises the question: which choice of Hamiltonian is preferable? In order to make an informed decision on the matter, a detailed symmetry analysis is required which is, as of yet, missing. Details can be found in Sec. III C.
• Even though it is frequently adopted in the literature, the naïve relativistic extension (17) of the Heisenberg algebra to Lorentzian spacetime is vitiated by some conceptual subtleties, such as the impossibility of defining a self-adjoint time operator for physical systems with unbounded energy and the fact that position is no longer an operator in a relativistic QFT. These motivations severely question the meaning of relativistic uncertainty relations of Heisenberg-type (details are contained in Sec. III D).
• A consistent relativistic generalization of the Heisenberg algebra requires deformations of QFT, for example to describe the standard model. Despite the number of approaches introduced so far, a generally accepted solution is still lacking. The discussion on this point has been presented in Sec. III D.
• Quantum-gravitational effects deteriorate as the number of elementary constituents of a given composite system increases. This implies a nontrivial rescaling of GUP corrections. On the one hand, this renders the detection of quantum-gravitational signatures in macroscopic objects challenging. On the other hand, it raises a conceptual question: what are the fundamental degrees of freedom? As it requires information on the content of the universe at all, including unknown scales, assumptions on the matter have a rather metaphysical flavour. This issue (a sort of "inverse" soccer ball problem) has been discussed in Sec. III E.
• Although heuristic considerations within the GUP framework can compensate for computational technicalities, they might be misleading. We have provided several examples for which the ensuing results contradict more rigorous and explicit arguments. This mismatch should not discourage the usage of heuristic reasoning in general, because it frequently helps to gain an intuition on the nature of Planck-scale effects. Rather, the examples are to be taken as a warning not to recline on heuristics, but possibly to use them as a starting point towards reaching the correct solution (see Sec. III F for more details).
• In the 30 years of its existence, the field of GUPs has amassed a great number of constraints on the model parameter. We have presented and classified a comprehensive collection of existing bounds in Tabs. I, II, and III. In this context, it is noteworthy that there are applications of the GUP to macroscopic objects which do not take into consideration that Planck-scale effects deteriorate with increasing number of degrees of freedom. The most stringent constraint dates back to 2011, amounting to β < 10^33. This indicates that progress on the experimental side has come to a halt (see Sec. III G).
30 years into the research on GUPs, even foundational matters such as the inverse soccer ball problem are not entirely resolved. More effort is required for the construction of a well-posed Planck-scale deformation of the Heisenberg uncertainty principle. Rather than just pointing out difficulties, this review is intended to inspire new research directions towards a rigorous understanding of minimal-length phenomenology.
experiment                       | ref.                   | upper bound on β
phonon cavity                    | [378]                  | 10^6 / 10^46
harmonic oscillators             | [379, 380]             | 10^7 / 10^60
scanning tunnelling microscope   | [52, 357]              | 10^33
µ anomalous magnetic moment      | [52, 381]              | 10^33
Hydrogen atom^a                  | [48, 50, 89, 382, 383] | 10^34
Lamb shift                       | [52, 384]              | 10^36
87Rb interferometry              | [385, 386]             | 10^39
Kratzer potential                | [84]                   | 10^46
stimulated emission              | [104]                  | 10^46
Landau levels                    | [52, 54, 384]          | 10^50
quantum noise                    | [106]                  | 10^57

TABLE I. Upper bounds on the quadratic GUP parameter β from tabletop experiments not related to gravity. Tests which are rigorously relatable to modified commutators are marked in green, experiments involving macroscopic quantum objects in gray (for the latter, the second value is the bound after the N² correction of Sec. III E).

^a While there have been claims in the literature that the corrections to the hydrogen atom are nonperturbative, leading to stronger bounds [60, 67, 81], in the respective works the absolute value of the radius appearing in the Coulomb potential was neglected, thus solving for positive and negative radii. Therefore, their findings are still under debate. They have been refuted in [382], which has been corroborated by numerical results in [50, 383].
experiment                           | ref.       | upper bound on β
equivalence principle (pendula)      | [240]      | 10^20 / 10^73
gravitational bar detectors          | [387, 388] | 10^33 / 10^93
equivalence principle (atoms)        | [389]      | 10^45
perihelion precession (solar system) | [123, 155] | 10^69
perihelion precession (pulsars)      | [123]      | 10^71
gravitational redshift               | [155]      | 10^76
black hole quasinormal modes         | [251]      | 10^77
light deflection                     | [123, 155] | 10^78
time delay of light                  | [155]      | 10^81
black hole shadow                    | [247]      | 10^90
black hole shadow                    | [251, 259] | 10^90

TABLE II. Upper bounds on the quadratic GUP parameter β from gravitational experiments and observations. Tests which are rigorously relatable to modified commutators are marked in green, experiments involving macroscopic quantum objects in gray (with the corrected bound as second value), while results with orange background are based on the heuristic application of the GUP to gravitational thermodynamics. The red rows contain instances of the black-hole mass-modification approach.
experiment            | ref.       | upper bound on β
gravitational waves   | [242, 258] | 10^36
cosmology (all data)  | [231]      | 10^59
cosmology (late-time) | [226, 231] | 10^81

TABLE III. Upper bounds on the quadratic GUP parameter β on cosmological scales. Results with orange background are based on the heuristic application of the GUP to gravitational thermodynamics, while the yellow background stands for Poisson-bracket descriptions of the GUP.
^2 For an introduction to the quasi-position representation, consult [41].
^3 The classical and quantum dynamics of the GUP have also been contrasted in the context of Koopman-von Neumann mechanics [316]. As a result, the classical version of the theory necessarily acquires a quantum nature, a contradiction that appears to corroborate the reasoning in [291]. However, this finding rests crucially on a somewhat idiosyncratic choice of modification of the underlying algebra, whose quantum mechanical counterpart does not reproduce the well-known GUP-deformed dynamics. After accounting for this peculiarity, the effect disappears.
^4 In other words, while the Planck mass m_P continues to be a relevant constant, the Planck length l_P goes to zero in the limit.
Deformed special relativity would then be a twofold deformation of Galilean relativity with the deformation parameters ℓ and c.6 In the context of modified dispersion relations, this link has been found in[318].
Noncommutative geometries allow for a definition of wave numbers[293]. Those, however, cannot be represented by gradients because the noncommutativity of the coordinates precludes the existence of a position representation. In this case, other methods such as star-product realizations[323][324][325][326] are required.8 While this may appear trivial at first sight, at the level of many particles it motivates higher-derivative, possibly nonlocal quantum field theories[102].9 Note, however, that this modification does not recover the GUP-deformed Schrödinger equation in the nonrelativistic limit.10 This covariant form of discreteness is the basis for the Causal-Set approach to QG[327,328](see[329,330] for reviews).
Note that this is not in conflict with the results of[368] which state that a thermal spectrum follows from unitarity and relativistic invariance because the latter is violated by the GUP.12 To see how the Hawking effect may be understood as an instance of the Unruh effect, see e. g.[372].
A recent investigation[377] on modifications of the Pauli exclusion principle may be going into this direction. However, it has not been explicitly shown that GUPs necessarily imply such modifications.
ACKNOWLEDGMENTS

The authors acknowledge networking support by the COST Action CA18108. G.G.L. is grateful to the Spanish "Ministerio de Universidades" for the awarded Maria Zambrano fellowship and funding received from the European Union - NextGenerationEU. L.P. acknowledges support by MUR (Ministero dell'Università e della Ricerca) via the project PRIN 2017 "Taming complexity via QUantum Strategies: a Hybrid Integrated Photonic approach" (QUSHIP) Id. 2017SRNBRK, and is grateful to the "Angelo Della Riccia" foundation for the awarded fellowship received to support the study at Universität Ulm.
[1] D. Amati, M. Ciafaloni and G. Veneziano, "Superstring collisions at planckian energies," Phys. Lett. B 197 no. 1-2, (Oct, 1987) 81-88.
[2] D. Amati, M. Ciafaloni and G. Veneziano, "Can spacetime be probed below the string size?," Phys. Lett. B 216 no. 1-2, (Jan, 1989) 41-47.
[3] D. J. Gross and P. F. Mende, "String theory beyond the Planck scale," Nucl. Phys. B 303 no. 3, (Jul, 1988) 407-454.
[4] D. J. Gross and P. F. Mende, "The high-energy behavior of string scattering amplitudes," Phys. Lett. B 197 no. 1-2, (Oct, 1987) 129-134.
[5] K. Konishi, G. Paffuti and P. Provero, "Minimum physical length and the generalized uncertainty principle in string theory," Phys. Lett. B 234 no. 3, (Jan, 1990) 276-284.
[6] T. Yoneya, "On the interpretation of minimal length in string theories," Mod. Phys. Lett. A 04 no. 16, (Aug, 1989) 1587-1595.
[7] O. Lauscher and M. Reuter, "Fractal Spacetime Structure in Asymptotically Safe Gravity," JHEP 10 (Aug, 2005) 50, arXiv:0508202 [hep-th].
[8] R. Ferrero and M. Reuter, "The spectral geometry of de Sitter space in asymptotic safety," JHEP 08 (2022) 040, arXiv:2203.08003 [hep-th].
[9] C. Rovelli and L. Smolin, "Discreteness of area and volume in quantum gravity," Nucl. Phys. B 442 (Nov, 1994) 593-622, arXiv:9411005 [gr-qc].
[10] L. Modesto, "Fractal Structure of Loop Quantum Gravity," Class. Quant. Grav. 26 (Dec, 2008) 242002, arXiv:0812.2214.
[11] F. Girelli, F. Hinterleitner and S. A. Major, "Loop Quantum Gravity Phenomenology: Linking Loops to Observational Physics," SIGMA 8 (Oct, 2012) 98, arXiv:1210.1485.
[12] M. Bronstein, "Quantentheorie schwacher Gravitationsfelder," Phys. Zeitschrift der Sowjetunion 9 (1936) 140-157.
[13] C. A. Mead, "Possible Connection Between Gravitation and Fundamental Length," Phys. Rev. 135 no. 3B, (Aug, 1964) B849-B862.
[14] C. A. Mead, "Observable Consequences of Fundamental-Length Hypotheses," Phys. Rev. 143 no. 4, (Mar, 1966) 990-1005.
[15] L. J. Garay, "Quantum gravity and minimum length," Int. J. Mod. Phys. A 10 (Mar, 1994) 145-166, arXiv:9403008 [gr-qc].
[16] Y. J. Ng and H. van Dam, "Limit to space-time measurement," Mod. Phys. Lett. A 09 no. 04, (Feb, 1994) 335-340.
[17] G. Amelino-Camelia, "Limits on the Measurability of Space-time Distances in (the Semi-classical Approximation of) Quantum Gravity," Mod. Phys. Lett. A 9 (Mar, 1996) 3415-3422, arXiv:9603014 [gr-qc].
[18] R. J. Adler and D. I. Santiago, "On Gravity and the Uncertainty Principle," Mod. Phys. Lett. A 14 no. 20, (Apr, 1999) 1371-1381, arXiv:9904026 [gr-qc].
[19] C. Bambi and F. R. Urban, "Natural extension of the Generalised Uncertainty Principle," Class. Quant. Grav. 25 (Sep, 2007) 95006, arXiv:0709.1965.
[20] T. Padmanabhan, "Limitations on the operational definition of spacetime events and quantum gravity," Class. Quantum Gravity 4 no. 4, (Jul, 1987) L107-L113.
[21] M. Maggiore, "A Generalized Uncertainty Principle in Quantum Gravity," Phys. Lett. B 304 (Jan, 1993) 65-69, arXiv:9301067 [hep-th].
[22] F. Scardigli, "Generalized Uncertainty Principle in Quantum Gravity from Micro-Black Hole Gedanken Experiment," Phys. Lett. B 452 (Apr, 1999) 39-44, arXiv:9904025 [hep-th].
[23] X. Calmet, M. Graesser and S. D. H. Hsu, "Minimum Length from Quantum Mechanics and Classical General Relativity," Phys. Rev. Lett. 93 (May, 2004) 211101, arXiv:0405033 [hep-th].
[24] L. Susskind and J. Lindesay, An Introduction to Black Holes, Information and the String Theory Revolution. World Scientific, Dec, 2004.
[25] M. Maggiore, "The algebraic structure of the generalized uncertainty principle," Phys. Lett. B 319 (Sep, 1993) 83-86, arXiv:9309034 [hep-th].
[26] A. Kempf, "Quantum groups and quantum field theory with nonzero minimal uncertainties in positions and momenta," Czechoslov. J. Phys. 44 no. 11-12, (Nov, 1994) 1041-1048.
[27] P. Pedram, "A Higher Order GUP with Minimal Length Uncertainty and Maximal Momentum," Phys. Lett. B 714 (Oct, 2011) 317-323, arXiv:1110.2999.
[28] K. Nozari and A. Etemadi, "Minimal length, maximal momentum and Hilbert space representation of quantum mechanics," Phys. Rev. D 85 (May, 2012) 104029, arXiv:1205.0158.
[29] J. Y. Bang and M. S. Berger, "Quantum Mechanics and the Generalized Uncertainty Principle," Phys. Rev. D 74 (Oct, 2006) 125012, arXiv:0610056 [gr-qc].
[30] P. Galan and G. A. M. Marugan, "Canonical Realizations of Doubly Special Relativity," Int. J. Mod. Phys. D 16 (Feb, 2007) 1133-1147, arXiv:0702027 [gr-qc].
[31] P. Pedram, "New Approach to Nonperturbative Quantum Mechanics with Minimal Length Uncertainty," Phys. Rev. D 85 (Dec, 2011) 24016, arXiv:1112.2327.
[32] S. Mignemi, "Classical and quantum mechanics of the nonrelativistic Snyder model in curved space," Class. Quantum Gravity 29 no. 21, (Nov, 2012) 215019.
[33] S. Das and S. Pramanik, "Path Integral for non-relativistic Generalized Uncertainty Principle corrected Hamiltonian," Phys. Rev. D 86 (May, 2012) 85004, arXiv:1205.3919.
[34] V. M. Tkachuk, "Effect of the generalized uncertainty principle on Galilean and Lorentz transformations," Found. Phys. 46 no. 12, (Oct, 2013) 1666-1679, arXiv:1310.6243.
[35] S. Pramanik and S. Ghosh, "GUP-based and Snyder Non-Commutative Algebras, Relativistic Particle models and Deformed Symmetries: A Unified Approach," Int. J. Mod. Phys. A 28 no. 27, (Jan, 2013) 1350131, arXiv:1301.4042.
[36] V. Balasubramanian, S. Das and E. C. Vagenas, "Generalized Uncertainty Principle and Self-Adjoint Operators," Ann. Phys. 360 (Apr, 2014) 1-18, arXiv:1404.3962.
[37] S. Dey, A. Fring and V. Hussin, "A squeezed review on coherent states and nonclassicality for non-Hermitian systems with minimal length," Springer Proc. Phys. 205 (Jan, 2018) 209-242, arXiv:1801.01139.
[38] P. Bosso, "Rigorous Hamiltonian and Lagrangian analysis of classical and quantum theories with minimal length," Phys. Rev. D 97 no. 12, (Apr, 2018) 126010, arXiv:1804.08202.
[39] W. S. Chung and H. Hassanabadi, "New generalized uncertainty principle from the doubly special relativity," Phys. Lett. B 785 (Jul, 2018) 127-131, arXiv:1807.11552.
[40] W. S. Chung and H. Hassanabadi, "A new higher order GUP: one dimensional quantum system," Eur. Phys. J. C 79 no. 3, (2019) 213.
[41] P. Bosso, "On the quasi-position representation in theories with a minimal length," Class. Quant. Grav. 38 no. 7, (May, 2020) 75021, arXiv:2005.12258.
[42] J. Giné and G. G. Luciano, "Gravitational effects on the Heisenberg Uncertainty Principle: A geometric approach," Results Phys. 38 (2022) 105594, arXiv:2110.15342 [gr-qc].
[43] P. Bosso, L. Petruzziello and F. Wagner, "The minimal length is physical," Phys. Lett. B 834 (Jun, 2022) 137415, arXiv:2206.05064.
[44] M. Fadel and M. Maggiore, "Revisiting the algebraic structure of the generalized uncertainty principle," Phys. Rev. D 105 no. 10, (May, 2022) 106017, arXiv:2112.09034.
[45] P. Bosso, Generalized Uncertainty Principle and Quantum Gravity Phenomenology. PhD thesis, Lethbridge U., 2017. arXiv:1709.04947 [gr-qc].
[46] F. Wagner, Modified uncertainty relations from classical and quantum gravity. PhD thesis, University of Szczecin, 2022. arXiv:2210.05281 [gr-qc].
[47] A. Kempf, "Nonpointlike Particles in Harmonic Oscillators," J. Phys. A 30 (Apr, 1996) 2093-2102, arXiv:9604045 [hep-th].
[48] F. Brau, "Minimal Length Uncertainty Relation and Hydrogen Atom," J. Phys. A 32 (May, 1999) 7691-7696, arXiv:9905033 [quant-ph].
[49] L. N. Chang et al., "Exact solution of the harmonic oscillator in arbitrary dimensions with minimal length uncertainty relations," Phys. Rev. D 65 no. 12, (Jun, 2002) 125027, arXiv:0111181 [hep-th].
[50] S. Benczik et al., "The Hydrogen atom with minimal length," Phys. Rev. A 72 (2005) 012104, arXiv:hep-th/0502222.
[51] C. Quesne and V. M. Tkachuk, "Lorentz-covariant deformed algebra with minimal length and application to the 1+1-dimensional Dirac oscillator," J. Phys. A 39 (Apr, 2006) 10909-10922, arXiv:0604118 [quant-ph].
[52] S. Das and E. C. Vagenas, "Universality of Quantum Gravity Corrections," Phys. Rev. Lett. 101 (Oct, 2008) 221301, arXiv:0810.5333.
[53] A. F. Ali, S. Das and E. C. Vagenas, "Discreteness of Space from the Generalized Uncertainty Principle," Phys. Lett. B 678 no. 5, (Jun, 2009) 497-499, arXiv:0906.5396.
[54] S. Das and E. C. Vagenas, "Phenomenological Implications of the Generalized Uncertainty Principle," Can. J. Phys. 87 (Jan, 2009) 233-240, arXiv:0901.1768.
[55] S. Das, E. C. Vagenas and A. F. Ali, "Discreteness of Space from GUP II: Relativistic Wave Equations," Phys. Lett. B 690 (May, 2010) 407-412, arXiv:1005.3368.
[56] R. Akhoury and Y. P. Yao, "Minimal Length Uncertainty Relation and the Hydrogen Spectrum," Phys. Lett. B 572 no. 1-2, (Feb, 2003) 37-42, arXiv:0302108 [hep-ph].
[57] K. Nozari and T. Azizi, "Some Aspects of Minimal Length Quantum Mechanics," Gen. Rel. Grav. 38 (Jul, 2005) 735-742, arXiv:0507018 [quant-ph].
[58] K. Nozari and T. Azizi, "Coherent States of Harmonic Oscillator and Generalized Uncertainty Principle," Int. J. Quant. Inf. 3 (Apr, 2005) 623-632, arXiv:0504090 [gr-qc].
[59] K. Nozari and S. H. Mehdipour, "Wave Packets Propagation in Quantum Gravity," Gen. Rel. Grav. 37 (Jul, 2005) 1995-2001, arXiv:0507019 [quant-ph].
[60] T. V. Fityo, I. O. Vakarchuk and V. M. Tkachuk, "One dimensional Coulomb-like problem in deformed space with minimal length," J. Phys. A 39 (2006) 2143-2149, arXiv:quant-ph/0507117.
[61] F. Brau and F. Buisseret, "Minimal Length Uncertainty Relation and gravitational quantum well," Phys. Rev. D 74 (May, 2006) 36002, arXiv:0605183 [hep-th].
[62] Z. Zhen-Hua, L. Yu-Xiao and L. Xi-Guo, "Gravitational Corrections to Energy-Levels of a Hydrogen Atom," Commun. Theor. Phys. 47 no. 4, (Apr, 2007) 658-662, arXiv:0705.1743.
[63] M. Sakhawat Hossain and S. B. Faruque, "Influence of a generalized uncertainty principle on the energy spectrum of (1+1)-dimensional Dirac equation with linear potential," Phys. Scr. 78 no. 3, (Sep, 2008) 035006.
[64] D. Bouaziz and M. Bawin, "Singular inverse square potential in arbitrary dimensions with a minimal length: Application to the motion of a dipole in a cosmic string background," Phys. Rev. A 78 (Sep, 2010) 32110, arXiv:1009.0930.
[65] S. Ghosh and S. Mignemi, "Quantum mechanics in de Sitter space," Int. J. Theor. Phys. 50 no. 6, (Nov, 2009) 1803-1808, arXiv:0911.5695.
[66] K. Nozari and P. Pedram, "Minimal Length and Bouncing Particle Spectrum," EPL 92 no. 5, (Nov, 2010) 50013, arXiv:1011.5673.
[67] D. Bouaziz and N. Ferkous, "Hydrogen atom in momentum space with a minimal length," Phys. Rev. A 82 (Sep, 2010) 22105, arXiv:1009.0935.
[68] A. F. Ali, S. Das and E. C. Vagenas, "The Generalized Uncertainty Principle and Quantum Gravity Phenomenology," Twelfth Marcel Grossmann Meet. (Jan, 2010) 2407-2409, arXiv:1001.2642.
[69] P. Pedram, "On the modification of Hamiltonians' spectrum in gravitational quantum mechanics," EPL 89 no. 5, (Mar, 2010) 50008, arXiv:1003.2769.
[70] Y. Chargui, L. Chetouani and A. Trabelsi, "Exact Solution of D-Dimensional Klein-Gordon Oscillator with Minimal Length," Commun. Theor. Phys. 53 no. 2, (Feb, 2010) 231-236.
[71] P. Pedram, "A class of GUP solutions in deformed quantum mechanics," Int. J. Mod. Phys. D 19 (Mar, 2011) 2003-2009, arXiv:1103.3805.
[72] P. Pedram, K. Nozari and S. H. Taheri, "The effects of minimal length and maximal momentum on the transition rate of ultra cold neutrons in gravitational field," JHEP 03 (Mar, 2011) 93, arXiv:1103.1015.
[73] H. Hassanabadi, S. Zarrinkamar and E. Maghsoodi, "Scattering states of Woods-Saxon interaction in minimal length quantum mechanics," Phys. Lett. B 718 no. 2, (Dec, 2012) 678-682.
[74] P. Pedram, "Coherent States in Gravitational Quantum Mechanics," Int. J. Mod. Phys. D 22 (Apr, 2012) 1350004, arXiv:1204.1524.
[75] J. Vahedi, K. Nozari and P. Pedram, "Generalized Uncertainty Principle and the Ramsauer-Townsend Effect," Grav. Cosmol. 18 (Aug, 2012) 211-215, arXiv:1208.1660.
[76] P. Pedram, "Minimal Length and the Quantum Bouncer: A Nonperturbative Study," Int. J. Theor. Phys. 51 (Jan, 2012) 1901-1910, arXiv:1201.2802.
[77] H. Benzair, T. Boudjedaa and M. Merad, "Path Integral for Dirac oscillator with generalized uncertainty principle," J. Math. Phys. 53 no. 12, (2012) 123516.
[78] P. Pedram, "A Higher Order GUP with Minimal Length Uncertainty and Maximal Momentum II: Applications," Phys. Lett. B 718 (Oct, 2012) 638-645, arXiv:1210.5334.
[79] P. Pedram, "One-dimensional hydrogen atom with minimal length uncertainty and maximal momentum," EPL 101 no. 3, (May, 2012) 30005, arXiv:1205.0937.
[80] P. Pedram, "Nonperturbative effects of the minimal length uncertainty on the relativistic quantum mechanics," Phys. Lett. B 710 no. 3, (Apr, 2012) 478-485.
[81] P. Pedram, "A Note on the one-dimensional hydrogen atom with minimal length uncertainty," J. Phys. A 45 (2012) 505304, arXiv:1203.5478 [quant-ph].
[82] H. Hassanabadi, S. Zarrinkamar and E. Maghsoodi, "Minimal length Dirac equation revisited," Eur. Phys. J. Plus 128 no. 3, (Mar, 2013) 25.
[83] H. Hassanabadi, S. Zarrinkamar and A. Rajabi, "A simple efficient methodology for Dirac equation in minimal length quantum mechanics," Phys. Lett. B 718 no. 3, (Jan, 2013) 1111-1113.
[84] D. Bouaziz, "Kratzer's molecular potential in quantum mechanics with a generalized uncertainty principle," Ann. Phys. 355 (Dec, 2013) 269-281, arXiv:1312.2619.
[85] S. K. Moayedi, M. R. Setare and B. Khosropour, "Lagrangian Formulation of a Magnetostatic Field in the Presence of a Minimal Length Scale Based on the Kempf Algebra," Int. J. Mod. Phys. A 28 (Jun, 2013) 1350142, arXiv:1306.1070.
[86] G. Blado, C. Owens and V. Meyers, "Quantum Wells and the Generalized Uncertainty Principle," Eur. J. Phys. 35 (Dec, 2013) 65011, arXiv:1312.4876.
[87] M. Asghari, P. Pedram and K. Nozari, "Harmonic oscillator with minimal length, minimal momentum, and maximal momentum uncertainties in SUSYQM framework," Phys. Lett. B 725 (Jul, 2013) 451-455, arXiv:1307.7899.
[88] C.-L. Ching and R. R. Parwani, "Effect of maximal momentum on quantum mechanics scattering and bound states," Mod. Phys. Lett. A 28 no. 15, (May, 2013) 1350061.
[89] T. L. A. Oakes et al., "Ground State of the Hydrogen Atom via Dirac Equation in a Minimal Length Scenario," Eur. Phys. J. C 73 (Aug, 2013) 2495, arXiv:1308.3395.
[90] S. Faruque, M. A. Rahman and M. Moniruzzaman, "Upper bound on minimal length from deuteron," Results Phys. 4 (2014) 52-53.
[91] S. Das, M. P. G. Robbins and M. A. Walton, "Generalized Uncertainty Principle Corrections to the Simple Harmonic Oscillator in Phase Space," Can. J. Phys. 94 no. 1, (Dec, 2014) 139-146, arXiv:1412.6467.
[92] M. Faizal, M. M. Khalil and S. Das, "Time Crystals from Minimum Time Uncertainty," Eur. Phys. J. C 76 no. 1, (Dec, 2014) 30, arXiv:1501.03111.
[93] H. Hassanabadi, P. Hooshmand and S. Zarrinkamar, "The Generalized Uncertainty Principle and Harmonic Interaction in Three Spatial Dimensions," Few-Body Syst. 56 no. 1, (Jan, 2015) 19-27.
[94] S. Dey, A. Fring and V. Hussin, "Nonclassicality versus entanglement in a noncommutative space," Int. J. Mod. Phys. B 31 no. 1, (Jun, 2015) 1650248, arXiv:1506.08901.
[95] P. Bosso and S. Das, "Generalized Uncertainty Principle and Angular Momentum," Ann. Phys. 383 (Jul, 2016) 416-438, arXiv:1607.01083.
[96] B.-Q. Wang et al., "Solutions of the Schrödinger equation under topological defects space-times and generalized uncertainty principle," Eur. Phys. J. Plus 131 no. 10, (Oct, 2016) 378.
[97] H. Shababi, P. Pedram and W. S. Chung, "On the quantum mechanical solutions with minimal length uncertainty," Int. J. Mod. Phys. A 31 no. 18, (Jun, 2016) 1650101.
[98] M. A. C. Rossi, T. Giani and M. G. A. Paris, "Probing deformed quantum commutators," Phys. Rev. D 94 no. 2, (Jun, 2016) 24014, arXiv:1606.03420.
[99] S. Deb, S. Das and E. C. Vagenas, "Discreteness of Space from GUP in a Weak Gravitational Field," Phys. Lett. B 755 (Jan, 2016) 17-23, arXiv:1601.07893.
[100] D. Bouaziz and T. Birkandan, "Singular inverse square potential in coordinate space with a minimal length," Ann. Phys. 387 (Nov, 2017) 62-74, arXiv:1711.04158.
[101] P. Bosso, S. Das and R. B. Mann, "Planck scale Corrections to the Harmonic Oscillator, Coherent and Squeezed States," Phys. Rev. D 96 no. 6, (Apr, 2017) 66008, arXiv:1704.08198.
[102] V. Todorinov, P. Bosso and S. Das, "Relativistic Generalized Uncertainty Principle," Ann. Phys. 405 (Oct, 2018) 92-100, arXiv:1810.11761.
[103] D. Park and E. Jung, "GUP and Point Interaction," Phys. Rev. D 101 no. 6, (Jan, 2020) 66007, arXiv:2001.02850.
[104] F. J. Twagirayezu, "Generalized uncertainty principle corrections on atomic excitation," Ann. Phys. (N. Y.) 422 (Nov, 2020) 168294.
[105] L. Petruzziello, "Generalized uncertainty principle with maximal observable momentum and no minimal length indeterminacy," Class. Quant. Grav. 38 no. 13, (Oct, 2020) 135005, arXiv:2010.05896.
[106] P. Girdhar and A. C. Doherty, "Testing Generalised Uncertainty Principles through Quantum Noise," New J. Phys. 22 no. 9, (May, 2020) 93073, arXiv:2005.08984.
[107] S. Aghababaei et al., "Minimal length, Berry phase and spin-orbit interactions," Phys. Scr. 96 no. 5, (May, 2021) 055303.
[108] G. G. Luciano and L. Petruzziello, "Generalized uncertainty principle and its implications on geometric phases in quantum mechanics," Eur. Phys. J. Plus 136 no. 2, (2021) 179.
[109] P. Bosso and G. G. Luciano, "Generalized Uncertainty Principle: from the harmonic oscillator to a QFT toy model," Eur. Phys. J. C 81 no. 11, (Sep, 2021) 982, arXiv:2109.15259.
[110] G. Gubitosi and S. Mignemi, "Diffeomorphisms in momentum space: physical implications of different choices of momentum coordinates in the Galilean Snyder model," Universe 8 no. 2, (Dec, 2021) 108, arXiv:2112.04920.
[111] P. Bosso et al., "Spin operator, Bell nonlocality and Tsirelson bound in quantum-gravity induced minimal-length quantum mechanics," Commun. Phys. 6 no. 1, (May, 2023) 114.
[112] R. J. Adler, P. Chen and D. I. Santiago, "The Generalized Uncertainty Principle and Black Hole Remnants," Gen. Relativ. Gravit. 33 no. 12, (Jun, 2001) 2101-2108, arXiv:0106080 [gr-qc].
[113] P. Chen and R. J. Adler, "Black Hole Remnants and Dark Matter," Nucl. Phys. B Proc. Suppl. 124 (May, 2002) 103-106, arXiv:0205106 [gr-qc].
[114] M. Cavaglia, S. Das and R. Maartens, "Will we observe black holes at LHC?," Class. Quant. Grav. 20 (May, 2003) L205-L212, arXiv:0305223 [hep-ph].
[115] A. J. M. Medved and E. C. Vagenas, "When conceptual worlds collide: The GUP and the BH entropy," Phys. Rev. D 70 (Nov, 2004) 124021, arXiv:0411022 [hep-th].
[116] M. Cavaglia and S. Das, "How classical are TeV-scale black holes?," Class. Quant. Grav. 21 (Apr, 2004) 4511-4522, arXiv:0404050 [hep-th].
[117] Y. S. Myung, Y.-W. Kim and Y.-J. Park, "Black hole thermodynamics with generalized uncertainty principle," Phys. Lett. B 645 (Sep, 2006) 393-397, arXiv:0609031 [gr-qc].
[118] K. Nouicer, "Quantum-corrected black hole thermodynamics to all orders in the Planck length," Phys. Lett. B 646 (Apr, 2007) 63-71, arXiv:0704.1261.
[119] M.-i. Park, "The Generalized Uncertainty Principle in (A)dS Space and the Modification of Hawking Temperature from the Minimal Length," Phys. Lett. B 659 (Sep, 2007) 698-702, arXiv:0709.2307.
[120] L. Xiang and X. Q. Wen, "Black hole thermodynamics with generalized uncertainty principle," JHEP 10 (Jan, 2009) 46, arXiv:0901.0603.
[121] Y. S. Myung, "Thermodynamics of black holes in the deformed Hořava-Lifshitz gravity," Phys. Lett. B 678 (May, 2009) 127-130, arXiv:0905.0957.
[122] P. Jizba, H. Kleinert and F. Scardigli, "Uncertainty Relation on World Crystal and its Applications to Micro Black Holes," Phys. Rev. D 81 (Dec, 2009) 84030, arXiv:0912.2253.
[123] F. Scardigli and R. Casadio, "Gravitational tests of the Generalized Uncertainty Principle," Eur. Phys. J. C 75 no. 9, (Jul, 2014) 425, arXiv:1407.0113.
[124] F. Scardigli, G. Lambiase and E. Vagenas, "GUP parameter from quantum corrections to the Newtonian potential," Phys. Lett. B 767 (Nov, 2016) 242-246, arXiv:1611.01469.
[125] P. S. Custodio and J. E. Horvath, "The Generalized Uncertainty Principle, entropy bounds and black hole (non-)evaporation in a thermal bath," Class. Quant. Grav. 20 (May, 2003) L197-L203, arXiv:0305022 [gr-qc].
[126] M. R. Setare, "Corrections to the Cardy-Verlinde formula from the generalized uncertainty principle," Phys. Rev. D 70 (Oct, 2004) 87501, arXiv:0410044 [hep-th].
[127] M. R. Setare, "The Generalized Uncertainty Principle and Corrections to the Cardy-Verlinde Formula in SAdS5 Black Holes," Int. J. Mod. Phys. A 21 (Apr, 2005) 1325-1332, arXiv:0504179 [hep-th].
[128] K. Nozari and S. H. Mehdipour, "Gravitational Uncertainty and Black Hole Remnants," Mod. Phys. Lett. A 20 (Sep, 2008) 2937-2948, arXiv:0809.3144.
[129] K. Nozari and S. H. Mehdipour, "Quantum-Corrected Black Hole Thermodynamics in Extra Dimensions," Int. J. Mod. Phys. A 21 (Nov, 2005) 4979-4992, arXiv:0511110 [gr-qc].
[130] F. Scardigli, "Hawking temperature for various kinds of black holes from Heisenberg uncertainty principle," Int. J. Geom. Methods Mod. Phys. 17 no. supp01, (Sep, 2020) 2040004.
[131] Z. Ren and Z. Sheng-Li, "Generalized uncertainty principle and black hole entropy," Phys. Lett. B 641 no. 2, (Oct, 2006) 208-211.
[132] Y. Ko, S. Lee and S. Nam, "Tests of Quantum Gravity via Generalized Uncertainty Principle," arXiv:0608016 [hep-th].
[133] L. Xiang, "Dispersion relation, black hole thermodynamics and generalization of uncertainty principle," Phys. Lett. B 638 no. 5-6, (Jul, 2006) 519-522.
[134] K. Nozari and S. H. Mehdipour, "Failure of Standard Thermodynamics in Planck Scale Black Hole System," Chaos Solitons Fractals 39 (Oct, 2006) 956-970, arXiv:0610076 [hep-th].
[135] K. Nozari and B. Fazlpour, "Reissner-Nordström Black Hole Thermodynamics in Noncommutative Spaces," Acta Phys. Pol. B 39 (Aug, 2006) 1363-1374, arXiv:0608077 [gr-qc].
[136] K. Nozari and A. S. Sefiedgar, "On the Existence of the Logarithmic Correction Term in Black Hole Entropy-Area Relation," Gen. Rel. Grav. 39 (Jun, 2006) 501-509, arXiv:0606046 [gr-qc].
[137] K. Nozari and B. Fazlpour, "Thermodynamics of an Evaporating Schwarzschild Black Hole in Noncommutative Space," Mod. Phys. Lett. A 22 (May, 2006) 2917-2930, arXiv:0605109 [hep-th].
[138] W. Kim, E. J. Son and M. Yoon, "Thermodynamics of a black hole based on a generalized uncertainty principle," JHEP 01 (Nov, 2007) 35, arXiv:0711.0786.
[139] K. Nouicer, "Black holes thermodynamics to all orders in the Planck length in extra dimensions," Class. Quant. Grav. 24 (Jun, 2007) 5917-5934, arXiv:0706.2749.
[140] I. Arraut, D. Batic and M. Nowakowski, "Comparing two approaches to Hawking radiation of Schwarzschild-de Sitter black holes," Class. Quant. Grav. 26 (Oct, 2008) 125006, arXiv:0810.5156.
[141] A. Farmany et al., "Tunneling black hole radiation, generalized uncertainty principle and de Sitter-Schwarzschild black hole," Phys. Lett. B 682 no. 1, (Nov, 2009) 114-117.
[142] M. Dehghani and A. Farmany, "Higher dimensional black hole radiation and a generalized uncertainty principle," Phys. Lett. B 675 (2009) 460-462.
[143] R. Banerjee and S. Ghosh, "Generalised Uncertainty Principle, Remnant Mass and Singularity Problem in Black Hole Thermodynamics," Phys. Lett. B 688 (Feb, 2010) 224-229, arXiv:1002.2302.
[144] M. R. Setare, D. Momeni and R. Myrzakulov, "Entropic corrections to Newton's law," Phys. Scr. 85 (Apr, 2010) 65007, arXiv:1004.2794.
[145] J. L. Said and K. Z. Adami, "The Generalized Uncertainty Principle in f(R) Gravity for a Charged Black Hole," Phys. Rev. D 83 (Feb, 2011) 43008, arXiv:1102.3553.
[146] B. Majumder, "Black Hole Entropy and the Modified Uncertainty Principle: A heuristic analysis," Phys. Lett. B 703 (Jun, 2011) 402-405, arXiv:1106.0715.
[147] Y. Sabri and K. Nouicer, "Phase transitions of a GUP-corrected Schwarzschild black hole within isothermal cavities," Class. Quantum Gravity 29 no. 21, (Nov, 2012) 215015.
[148] A. Tawfik, "Impacts of Generalized Uncertainty Principle on Black Hole Thermodynamics and Salecker-Wigner Inequalities," JCAP 07 (Jul, 2013) 40, arXiv:1307.1894.
[149] L. Xiang, Y. Ling and Y. G. Shen, "Singularities and the Finale of Black Hole Evaporation," Int. J. Mod. Phys. D 22 (May, 2013) 1342016, arXiv:1305.3851.
[150] B. Majumder, "The Effects of Minimal Length in Entropic Force Approach," Adv. High Energy Phys. 2013 (Oct, 2013) 296836, arXiv:1310.1165.
[151] S. Gangopadhyay, A. Dutta and A. Saha, "Generalized uncertainty principle and black hole thermodynamics," Gen. Rel. Grav. 46 (Jul, 2013) 1661, arXiv:1307.7045.
[152] P. Chen, Y. C. Ong and D.-h. Yeom, "Generalized Uncertainty Principle: Implications for Black Hole Complementarity," JHEP 12 (Aug, 2014) 21, arXiv:1408.3763.
[153] M. Faizal and M. M. Khalil, "GUP-Corrected Thermodynamics for all Black Objects and the Existence of Remnants," Int. J. Mod. Phys. A 30 no. 22, (Nov, 2014) 1550144, arXiv:1411.4042.
[154] A. N. Tawfik and E. A. E. Dahab, "Corrections to entropy and thermodynamics of charged black hole using generalized uncertainty principle," Int. J. Mod. Phys. A 30 no. 09, (Jan, 2015) 1550030, arXiv:1501.01286.
[155] A. F. Ali, M. M. Khalil and E. C. Vagenas, "Minimal Length in quantum gravity and gravitational measurements," EPL 112 no. 2, (Oct, 2015) 20005, arXiv:1510.06365.
[156] A. N. Tawfik and A. M. Diab, "Black Hole Corrections due to Minimal Length and Modified Dispersion Relation," Int. J. Mod. Phys. A 30 no. 12, (Feb, 2015) 1550059, arXiv:1502.04562.
[157] S. Gangopadhyay, A. Dutta and M. Faizal, "Constraints on the Generalized Uncertainty Principle from Black Hole Thermodynamics," EPL 112 no. 2, (Jan, 2015) 20006, arXiv:1501.01482.
[158] M. A. Anacleto et al., "Quantum-corrected two-dimensional Horava-Lifshitz black hole entropy," Adv. High Energy Phys. 2016 (Dec, 2015) 8465759, arXiv:1512.07886.
[159] M. A. Anacleto et al., "Quantum-corrected finite entropy of noncommutative acoustic black holes," Ann. Phys. 362 (Jan, 2015) 436-448, arXiv:1502.00179.
[160] M. A. Anacleto, F. A. Brito and E. Passos, "Quantum-corrected self-dual black hole entropy in tunneling formalism with GUP," Phys. Lett. B 749 (Apr, 2015) 181-186, arXiv:1504.06295.
[161] M. A. Anacleto et al., "Quantum correction to the entropy of noncommutative BTZ black hole," Gen. Rel. Grav. 50 no. 2, (Oct, 2015) 23, arXiv:1510.08444.
[162] F. Hammad, "f(R)-Modified Gravity, Wald Entropy, and the Generalized Uncertainty Principle," Phys. Rev. D 92 (Aug, 2015) 44004, arXiv:1508.05126.
[163] M. Dehghani, "Corrections to the Hawking tunneling radiation in extra dimensions," Phys. Lett. B 749 (Oct, 2015) 125-129.
[164] I. Sakalli, A. Ovgun and K. Jusufi, "GUP Assisted Hawking Radiation of Rotating Acoustic Black Holes," Astrophys. Sp. Sci. 361 no. 10, (Feb, 2016) 330, arXiv:1602.04304.
[165] G. Lambiase and F. Scardigli, "Lorentz violation and generalized uncertainty principle," Phys. Rev. D 97 no. 7, (Sep, 2017) 75003, arXiv:1709.00637.
[166] Y. C. Ong, "GUP-Corrected Black Hole Thermodynamics and the Maximum Force Conjecture," Phys. Lett. B 785 (Sep, 2018) 217-220, arXiv:1809.00442.
[167] Y. C. Ong, "Zero Mass Remnant as an Asymptotic State of Hawking Evaporation," JHEP 10 (Jun, 2018) 195, arXiv:1806.03691.
[168] R. V. Maluf and J. C. S. Neves, "Thermodynamics of a class of regular black holes with a generalized uncertainty principle," Phys. Rev. D 97 no. 10, (Jan, 2018) 104015, arXiv:1801.02661.
[169] R. V. Maluf and J. C. S. Neves, "Bardeen regular black hole as a quantum-corrected Schwarzschild black hole," Int. J. Mod. Phys. D 28 no. 03, (Jan, 2018) 1950048, arXiv:1801.08872.
[170] A. Alonso-Serrano, M. P. Dabrowski and H. Gohar, "Minimal length and the flow of entropy from black holes," Int. J. Mod. Phys. D 27 no. 14, (May, 2018) 1847028, arXiv:1805.07690.
[171] A. Alonso-Serrano, M. P. Dabrowski and H. Gohar, "Generalized uncertainty principle impact onto the black holes information flux and the sparsity of Hawking radiation," Phys. Rev. D 97 no. 4, (Jan, 2018) 44029, arXiv:1801.09660.
[172] E. Contreras and P. Bargueño, "Scale-dependent Hayward black hole and the generalized uncertainty principle," Mod. Phys. Lett. A 33 no. 32, (Sep, 2018) 1850184, arXiv:1809.00785.
[173] L. Buoninfante, G. G. Luciano and L. Petruzziello, "Generalized Uncertainty Principle and Corpuscular Gravity," Eur. Phys. J. C 79 no. 8, (Mar, 2019) 663, arXiv:1903.01382.
[174] T. Kanazawa et al., "Noncommutative Schwarzschild geometry and generalized uncertainty principle," Eur. Phys. J. C 79 no. 2, (Feb, 2019) 95.
[175] H. Hassanabadi, E. Maghsoodi and W. S. Chung, "Analysis of black hole thermodynamics with a new higher order generalized uncertainty principle," Eur. Phys. J. C 79 no. 4, (Apr, 2019) 358.
[176] A. Alonso-Serrano, M. P. Dabrowski and H. Gohar, "Nonextensive Black Hole Entropy and Quantum Gravity Effects at the Last Stages of Evaporation," Phys. Rev. D 103 no. 2, (Sep, 2020) 26021, arXiv:2009.02129.
[177] X.-D. Du and C.-Y. Long, "The Influence of Approximation in Generalized Uncertainty Principle on Black Hole Evaporation," J. Cosmol. Astropart. Phys. 2022 no. 04, (Aug, 2021) 031, arXiv:2108.07269.
[178] P. Bosso, M. Fridman and G. G. Luciano, "Dark matter as an effect of a minimal length," Front. Astron. Space Sci. 9 (2022) 932276, arXiv:2207.09967 [gr-qc].
[179] K. Nozari and S. H. Mehdipour, "Quantum Gravity and Recovery of Information in Black Hole Evaporation," EPL 84 no. 2, (Apr, 2008) 20008, arXiv:0804.4221.
[180] B. Majumder, "Black Hole Entropy with minimal length in Tunneling formalism," Gen. Rel. Grav. 45 (Dec, 2012) 2403-2414, arXiv:1212.6591.
[181] K. Nozari and S. Saghafi, "Natural Cutoffs and Quantum Tunneling from Black Hole Horizon," JHEP 11 (Jun, 2012) 5, arXiv:1206.5621.
[182] D. Chen, H. Wu and H. Yang, "Fermion's tunnelling with effects of quantum gravity," Adv. High Energy Phys. 2013 (May, 2013) 432412, arXiv:1305.7104.
[183] D. Chen, H. Wu and H. Yang, "Observing remnants by fermions' tunneling," JCAP 03 (Jun, 2013) 36, arXiv:1307.0172.
[184] D. Chen et al., "Remnants, fermions' tunnelling and effects of quantum gravity," JHEP 11 (Dec, 2013) 176, arXiv:1312.3781.
[185] Z. W. Feng et al., "Quantum corrections to the thermodynamics of Schwarzschild-Tangherlini black hole and the generalized uncertainty principle," Eur. Phys. J. C 76 no. 4, (Apr, 2016) 212, arXiv:1604.04702.
[186] A. Övgün, "Entangled Particles Tunneling From a Schwarzschild Black Hole immersed in an Electromagnetic Universe with GUP," Int. J. Theor. Phys. 55 no. 6, (Aug, 2015) 2919-2927, arXiv:1508.04100.
[187] X.-Q. Li, "Massive vector particles tunneling from black holes influenced by the generalized uncertainty principle," Phys. Lett. B 763 (May, 2016) 80-86, arXiv:1605.03248.
[188] A. Övgün and K. Jusufi, "The effect of the GUP on massive vector and scalar particles tunneling from a warped DGP gravity black hole," Eur. Phys. J. Plus 132 no. 7, (Mar, 2017) 298, arXiv:1703.08073.
[189] G. Gecim and Y. Sucu, "Quantum Gravity Effect on the Tunneling Particles from 2+1 dimensional New-type Black Hole," Adv. High Energy Phys. 2018 (Oct, 2017) 8728564, arXiv:1710.09125.
[190] G. Gecim and Y. Sucu, "The GUP effect on Hawking Radiation of the 2+1 dimensional Black Hole," Phys. Lett. B 773 (Apr, 2017) 391-394, arXiv:1704.03536.
[191] S. Kanzi and I. Sakallı, "GUP Modified Hawking Radiation in Bumblebee Gravity," Nucl. Phys. B 946 (May, 2019) 114703, arXiv:1905.00477.
[192] L. Buoninfante et al., "Phenomenology of GUP stars," Eur. Phys. J. C 80 no. 9, (Jan, 2020) 853, arXiv:2001.05825.
[193] L. Buoninfante et al., "Bekenstein bound and uncertainty relations," Phys. Lett. B 824 (Jan, 2022) 136818.
[194] P. Chen, "Generalized Uncertainty Principle and Dark Matter," Int. Symp. Front. Sci. - Celebr. 80th Birthd. Chen Ning Yang (May, 2003), arXiv:0305025 [astro-ph].
[195] P. Chen, "Inflation Induced Planck-Size Black Hole Remnants As Dark Matter," New Astron. Rev. 49 (Jun, 2004) 233-239, arXiv:0406514 [astro-ph].
[196] M. V. Battisti and G. Montani, "The Big-Bang Singularity in the framework of a Generalized Uncertainty Principle," Phys. Lett. B 656 (Mar, 2007) 96-101, arXiv:0703025 [gr-qc].
[197] M. V. Battisti and G. Montani, "Minisuperspace dynamics in a generalized uncertainty principle framework," AIP Conf. Proc. 966 no. 1, (Sep, 2007) 219-226, arXiv:0709.4610.
[198] M. V. Battisti and G. Montani, "Quantum Dynamics of the Taub Universe in a Generalized Uncertainty Principle framework," Phys. Rev. D 77 (Jul, 2007) 23518, arXiv:0707.2726.
[199] A. Bina, K. Atazadeh and S. Jalalzadeh, "Noncommutativity, generalized uncertainty principle and FRW cosmology," Int. J. Theor. Phys. 47 (Sep, 2007) 1354-1362, arXiv:0709.3623.
[200] B. Vakili and H. R. Sepangi, "Generalized uncertainty principle in Bianchi type I quantum cosmology," Phys. Lett. B 651 (Jun, 2007) 79-83, arXiv:0706.0273.
[201] B. Vakili, "Cosmology with minimal length uncertainty relations," Int. J. Mod. Phys. D 18 (Nov, 2008) 1059-1071, arXiv:0811.3481.
[202] B. Vakili, "Dilaton Cosmology, Noncommutativity and Generalized Uncertainty Principle," Phys. Rev. D 77 (Jan, 2008) 44023, arXiv:0801.2438.
[203] T. Zhu, J.-R. Ren and M.-F. Li, "Influence of Generalized and Extended Uncertainty Principle on Thermodynamics of FRW universe," Phys. Lett. B 674 (Nov, 2008) 204-209, arXiv:0811.0212.
Modification of Heisenberg uncertainty relations in non-commutative Snyder space-time geometry. M V Battisti, S Meljanac, arXiv:0812.3755Phys. Rev. D. 79667505M. V. Battisti and S. Meljanac, "Modification of Heisenberg uncertainty relations in non-commutative Snyder space-time geometry," Phys. Rev. D 79 no. 6, (Dec, 2008) 067505, arXiv:0812.3755.
Holographic Cosmology from the First Law of Thermodynamics and the Generalized Uncertainty Principle. J E Lidsey, arXiv:0911.3286Phys. Rev. D. 88103519J. E. Lidsey, "Holographic Cosmology from the First Law of Thermodynamics and the Generalized Uncertainty Principle," Phys. Rev. D 88 (Nov, 2009) 103519, arXiv:0911.3286.
Quantum Gravity Corrections and Entropy at the Planck time. S Basilakos, S Das, E C Vagenas, arXiv:1009.0365JCAP. 0927S. Basilakos, S. Das and E. C. Vagenas, "Quantum Gravity Corrections and Entropy at the Planck time," JCAP 09 (Sep, 2010) 27, arXiv:1009.0365.
Entropy of the FRW universe based on the generalized uncertainty principle. W Kim, Y.-J Park, M Yoon, arXiv:1003.3287Mod. Phys. Lett. A. 25W. Kim, Y.-J. Park and M. Yoon, "Entropy of the FRW universe based on the generalized uncertainty principle," Mod. Phys. Lett. A 25 (Mar, 2010) 1267-1274, arXiv:1003.3287.
Background independent quantization and the uncertainty principle. G M Hossain, V Husain, S S Seahra, arXiv:1003.2207Class. Quant. Grav. 27165013G. M. Hossain, V. Husain and S. S. Seahra, "Background independent quantization and the uncertainty principle," Class. Quant. Grav. 27 (Mar, 2010) 165013, arXiv:1003.2207.
Effect of the Generalized Uncertainty Principle on Post-Inflation Preheating. W Chemissany, arXiv:1111.7288JCAP. 1217W. Chemissany et al., "Effect of the Generalized Uncertainty Principle on Post-Inflation Preheating," JCAP 12 (Nov, 2011) 17, arXiv:1111.7288.
Dilaton cosmology and the Modified Uncertainty Principle. B Majumder, arXiv:1106.4494Phys. Rev. D. 8464031B. Majumder, "Dilaton cosmology and the Modified Uncertainty Principle," Phys. Rev. D 84 (Jun, 2011) 64031, arXiv:1106.4494.
The Generalized Uncertainty Principle and the Friedmann equations. B Majumder, arXiv:1105.2425Astrophys. Sp. Sci. 336B. Majumder, "The Generalized Uncertainty Principle and the Friedmann equations," Astrophys. Sp. Sci. 336 (May, 2011) 331-335, arXiv:1105.2425.
Effects of the Modified Uncertainty Principle on the Inflation Parameters. B Majumder, arXiv:1202.1226Phys. Lett. B. 709B. Majumder, "Effects of the Modified Uncertainty Principle on the Inflation Parameters," Phys. Lett. B 709 (Feb, 2012) 133-136, arXiv:1202.1226.
Emergence of Cosmic Space and Minimal Length in Quantum Gravity. A F Ali, arXiv:1310.1790Phys. Lett. B. 732A. F. Ali, "Emergence of Cosmic Space and Minimal Length in Quantum Gravity," Phys. Lett. B 732 (Oct, 2013) 335-342, arXiv:1310.1790.
Effects of the Generalized Uncertainty Principle on Compact Stars. A F Ali, A Tawfik, arXiv:1301.6133Int. J. Mod. Phys. D. 221350020A. F. Ali and A. Tawfik, "Effects of the Generalized Uncertainty Principle on Compact Stars," Int. J. Mod. Phys. D 22 (Jan, 2013) 1350020, arXiv:1301.6133.
Deviation from the Standard Uncertainty Principle and the Dark Energy Problem. S Jalalzadeh, M A Gorji, K Nozari, arXiv:1310.8065Gen. Rel. Grav. 461632S. Jalalzadeh, M. A. Gorji and K. Nozari, "Deviation from the Standard Uncertainty Principle and the Dark Energy Problem," Gen. Rel. Grav. 46 (Oct, 2013) 1632, arXiv:1310.8065.
Towards a Cosmology with Minimal Length and Maximal Energy. A F Ali, B Majumder, arXiv:1402.5104Class. Quant. Grav. 3121215007A. F. Ali and B. Majumder, "Towards a Cosmology with Minimal Length and Maximal Energy," Class. Quant. Grav. 31 no. 21, (Feb, 2014) 215007, arXiv:1402.5104.
Minimal Length, Friedmann Equations and Maximum Density. A Awad, A F Ali, arXiv:1404.7825JHEP. 0693A. Awad and A. F. Ali, "Minimal Length, Friedmann Equations and Maximum Density," JHEP 06 (Apr, 2014) 93, arXiv:1404.7825.
Planck-Scale Corrections to Friedmann Equation. A Awad, A F Ali, arXiv:1403.5319Cent. Eur. J. Phys. 124A. Awad and A. F. Ali, "Planck-Scale Corrections to Friedmann Equation," Cent. Eur. J. Phys. 12 no. 4, (Mar, 2014) 245-255, arXiv:1403.5319.
Scalar field cosmology modified by the Generalized Uncertainty Principle. A Paliathanasis, S Pan, S Pramanik, arXiv:1508.06543Class. Quant. Grav. 3224245006A. Paliathanasis, S. Pan and S. Pramanik, "Scalar field cosmology modified by the Generalized Uncertainty Principle," Class. Quant. Grav. 32 no. 24, (Aug, 2015) 245006, arXiv:1508.06543.
Cosmological Constant from a Deformation of the Wheeler-DeWitt Equation. R Garattini, M , arXiv:1510.04423Nucl. Phys. B. 905R. Garattini and M. Faizal, "Cosmological Constant from a Deformation of the Wheeler-DeWitt Equation," Nucl. Phys. B 905 (Oct, 2015) 313-326, arXiv:1510.04423.
Effect of Generalized Uncertainty Principle on Main-Sequence Stars and White Dwarfs. M Moussa, arXiv:1512.04337Adv. High Energy Phys. 343284M. Moussa, "Effect of Generalized Uncertainty Principle on Main-Sequence Stars and White Dwarfs," Adv. High Energy Phys. 2015 (Dec, 2015) 343284, arXiv:1512.04337.
Short Distance Physics of the Inflationary de Sitter Universe. A F Ali, M Faizal, M M Khalil, arXiv:1505.06963JCAP. 0925A. F. Ali, M. Faizal and M. M. Khalil, "Short Distance Physics of the Inflationary de Sitter Universe," JCAP 09 (May, 2015) 25, arXiv:1505.06963.
Einstein static universe from GUP. K Atazadeh, F Darabi, arXiv:1701.00060Phys. Dark Univ. 16K. Atazadeh and F. Darabi, "Einstein static universe from GUP," Phys. Dark Univ. 16 (Dec, 2016) 87-93, arXiv:1701.00060.
Non-singular and Cyclic Universe from the Modified GUP. M Salah, arXiv:1608.00560JCAP. 0235M. Salah et al., "Non-singular and Cyclic Universe from the Modified GUP," JCAP 02 (Jul, 2016) 35, arXiv:1608.00560.
Planck scale effects on the stochastic gravitational wave background generated from cosmological hadronization transition: A qualitative study. M Khodadi, arXiv:1805.11310Phys. Lett. B. 783M. Khodadi et al., "Planck scale effects on the stochastic gravitational wave background generated from cosmological hadronization transition: A qualitative study," Phys. Lett. B 783 (May, 2018) 326-333, arXiv:1805.11310.
Implications of Minimum and Maximum Length Scales in Cosmology. S Kouwn, arXiv:1805.07278Phys. Dark Univ. 21S. Kouwn, "Implications of Minimum and Maximum Length Scales in Cosmology," Phys. Dark Univ. 21 (May, 2018) 76-81, arXiv:1805.07278.
Modified Unruh effect from Generalized Uncertainty Principle. F Scardigli, arXiv:1804.05282Eur. Phys. J. C. 789728F. Scardigli et al., "Modified Unruh effect from Generalized Uncertainty Principle," Eur. Phys. J. C 78 no. 9, (Apr, 2018) 728, arXiv:1804.05282.
Minimal Length Effects on Quantum Cosmology and Quantum Black Hole Models. P Bosso, O Obregón, arXiv:1904.06343Class. Quant. Grav. 37445003P. Bosso and O. Obregón, "Minimal Length Effects on Quantum Cosmology and Quantum Black Hole Models," Class. Quant. Grav. 37 no. 4, (Apr, 2019) 45003, arXiv:1904.06343.
Heuristic derivation of Casimir effect in minimal length theories. M Blasone, arXiv:1912.00241Int. J. Mod. Phys. D. 2902M. Blasone et al., "Heuristic derivation of Casimir effect in minimal length theories," Int. J. Mod. Phys. D 29 no. 02, (Nov, 2019) 2050011, arXiv:1912.00241.
Quantum Cosmology with Dynamical Vacuum in a Minimal-Length Scenario. M F Gusson, arXiv:2012.09158Eur. Phys. J. C. 814336M. F. Gusson et al., "Quantum Cosmology with Dynamical Vacuum in a Minimal-Length Scenario," Eur. Phys. J. C 81 no. 4, (Dec, 2020) 336, arXiv:2012.09158.
Cosmological constraints on the Generalized Uncertainty Principle from modified Friedmann equations. S Giardino, V Salzano, arXiv:2006.01580Eur. Phys. J. C. 812110S. Giardino and V. Salzano, "Cosmological constraints on the Generalized Uncertainty Principle from modified Friedmann equations," Eur. Phys. J. C 81 no. 2, (Jun, 2020) 110, arXiv:2006.01580.
Dynamics of Quintessence in Generalized Uncertainty Principle. A Giacomini, arXiv:2008.01395Eur. Phys. J. C. 8010931A. Giacomini et al., "Dynamics of Quintessence in Generalized Uncertainty Principle," Eur. Phys. J. C 80 no. 10, (Aug, 2020) 931, arXiv:2008.01395.
Interacting quintessence in light of Generalized Uncertainty Principle: Cosmological perturbations and dynamics. A Paliathanasis, arXiv:2104.06097Eur. Phys. J. C. 817607A. Paliathanasis et al., "Interacting quintessence in light of Generalized Uncertainty Principle: Cosmological perturbations and dynamics," Eur. Phys. J. C 81 no. 7, (Apr, 2021) 607, arXiv:2104.06097.
Primordial big bang nucleosynthesis and generalized uncertainty principle. G G Luciano, arXiv:2111.06000Eur. Phys. J. C. 81121086astro-ph.COG. G. Luciano, "Primordial big bang nucleosynthesis and generalized uncertainty principle," Eur. Phys. J. C 81 no. 12, (2021) 1086, arXiv:2111.06000 [astro-ph.CO].
Minimal length, maximal momentum and stochastic gravitational waves spectrum generated from cosmological QCD phase transition. M Moussa, arXiv:2107.08641Phys. Lett. B. 820136488M. Moussa et al., "Minimal length, maximal momentum and stochastic gravitational waves spectrum generated from cosmological QCD phase transition," Phys. Lett. B 820 (Jul, 2021) 136488, arXiv:2107.08641.
Maximal momentum GUP leads to quadratic gravity. V Nenmeli, arXiv:2106.04141Phys. Lett. B. 821136621V. Nenmeli et al., "Maximal momentum GUP leads to quadratic gravity," Phys. Lett. B 821 (Oct, 2021) 136621, arXiv:2106.04141.
On the Stability of Planetary Circular Orbits in Noncommutative Spaces. K Nozari, S Akhshabi, arXiv:0608076Chaos Solitons Fractals. 37gr-qcK. Nozari and S. Akhshabi, "On the Stability of Planetary Circular Orbits in Noncommutative Spaces," Chaos Solitons Fractals 37 (Aug, 2006) 324-331, arXiv:0608076 [gr-qc].
Point-like sources and the scale of quantum gravity. R Casadio, R Garattini, F Scardigli, arXiv:0904.3406Phys. Lett. B. 679R. Casadio, R. Garattini and F. Scardigli, "Point-like sources and the scale of quantum gravity," Phys. Lett. B 679 (Apr, 2009) 156-159, arXiv:0904.3406.
Deformed Heisenberg algebra with minimal length and equivalence principle. V M Tkachuk, arXiv:1301.1891Phys. Rev. A. 8662112V. M. Tkachuk, "Deformed Heisenberg algebra with minimal length and equivalence principle," Phys. Rev. A 86 (Jan, 2013) 62112, arXiv:1301.1891.
Quantum Gravity Effects in Geodesic Motion and Predictions of Equivalence Principle Violation. S Ghosh, arXiv:1303.1256Class. Quant. Grav. 3125025S. Ghosh, "Quantum Gravity Effects in Geodesic Motion and Predictions of Equivalence Principle Violation," Class. Quant. Grav. 31 (Mar, 2013) 25025, arXiv:1303.1256.
Effect of GUP on the Kepler problem and a variable minimal length. F Ahmadi, J Khodagholizadeh, arXiv:1411.0241Can. J. Phys. 926F. Ahmadi and J. Khodagholizadeh, "Effect of GUP on the Kepler problem and a variable minimal length," Can. J. Phys. 92 no. 6, (Nov, 2014) 484-487, arXiv:1411.0241.
Constraining the generalized uncertainty principle with the gravitational wave event GW150914. Z.-W Feng, arXiv:1610.08549Phys. Lett. B. 768Z.-W. Feng et al., "Constraining the generalized uncertainty principle with the gravitational wave event GW150914," Phys. Lett. B 768 (Oct, 2016) 81-85, arXiv:1610.08549.
The GUP and quantum Raychaudhuri equation. E C Vagenas, arXiv:1706.06502Nucl. Phys. B. 931E. C. Vagenas et al., "The GUP and quantum Raychaudhuri equation," Nucl. Phys. B 931 (Jun, 2017) 72-78, arXiv:1706.06502.
Generalized Uncertainty Principle, Black Holes, and White Dwarfs: A Tale of Two Infinities. Y C Ong, arXiv:1804.05176JCAP. 0915Y. C. Ong, "Generalized Uncertainty Principle, Black Holes, and White Dwarfs: A Tale of Two Infinities," JCAP 09 (Apr, 2018) 15, arXiv:1804.05176.
Generalized Uncertainty Principle and White Dwarfs Redux: How Cosmological Constant Protects Chandrasekhar Limit. Y C Ong, Y Yao, arXiv:1809.06348Phys. Rev. D. 9812126018Y. C. Ong and Y. Yao, "Generalized Uncertainty Principle and White Dwarfs Redux: How Cosmological Constant Protects Chandrasekhar Limit," Phys. Rev. D 98 no. 12, (Sep, 2018) 126018, arXiv:1809.06348.
Potential tests of the Generalized Uncertainty Principle in the advanced LIGO experiment. P Bosso, S Das, R B Mann, arXiv:1804.03620Phys. Lett. B. 785P. Bosso, S. Das and R. B. Mann, "Potential tests of the Generalized Uncertainty Principle in the advanced LIGO experiment," Phys. Lett. B 785 (Apr, 2018) 498-505, arXiv:1804.03620.
Upper bound on the GUP parameter using the black hole shadow. J C S Neves, arXiv:1906.11735Eur. Phys. J. C. 804343J. C. S. Neves, "Upper bound on the GUP parameter using the black hole shadow," Eur. Phys. J. C 80 no. 4, (Jun, 2019) 343, arXiv:1906.11735.
The generalized and extended uncertainty principles and their implications on the Jeans mass. H Moradpour, arXiv:1907.12940Mon. Not. Roy. Astron. Soc. 4881H. Moradpour et al., "The generalized and extended uncertainty principles and their implications on the Jeans mass," Mon. Not. Roy. Astron. Soc. 488 no. 1, (Jul, 2019) L69-L74, arXiv:1907.12940.
A proposal for Heisenberg uncertainty principle and STUR for curved backgrounds: an application to white dwarf, neutron stars and black holes. S Viaggiu, arXiv:2012.10103Class. Quant. Grav. 38225017S. Viaggiu, "A proposal for Heisenberg uncertainty principle and STUR for curved backgrounds: an application to white dwarf, neutron stars and black holes," Class. Quant. Grav. 38 no. 2, (Dec, 2020) 25017, arXiv:2012.10103.
R A El-Nabulsi, Generalized Uncertainty Principle in Astrophysics from Fermi Statistical Physics Arguments. 59R. A. El-Nabulsi, "Generalized Uncertainty Principle in Astrophysics from Fermi Statistical Physics Arguments," Int. J. Theor. Phys. 59 no. 7, (Jul, 2020) 2083-2090.
Constraining the Generalized Uncertainty Principle Through Black Hole Shadow and Quasiperiodic Oscillations. K Jusufi, arXiv:2008.09115Int. J. Geom. Methods Mod. Phys. 1905K. Jusufi et al., "Constraining the Generalized Uncertainty Principle Through Black Hole Shadow and Quasiperiodic Oscillations," Int. J. Geom. Methods Mod. Phys. 19 no. 05, (Aug, 2020) , arXiv:2008.09115.
Existence of Chandrasekhar's limit in GUP white dwarfs. A Mathew, M K Nandy, arXiv:2002.08360R. Soc. Open Sci. 86210301A. Mathew and M. K. Nandy, "Existence of Chandrasekhar's limit in GUP white dwarfs," R. Soc. Open Sci. 8 no. 6, (Feb, 2020) 210301, arXiv:2002.08360.
Modified structure equations and mass-radius relations of white dwarfs arising from the linear generalized uncertainty principle. A G Abac, J P H Esguerra, R E S Otadoy, Int. J. Mod. Phys. D. 3012150005A. G. Abac, J. P. H. Esguerra and R. E. S. Otadoy, "Modified structure equations and mass-radius relations of white dwarfs arising from the linear generalized uncertainty principle," Int. J. Mod. Phys. D 30 no. 1, (Jan, 2021) 2150005.
White dwarfs and generalized uncertainty principle. I H Belfaqih, H Maulana, A Sulaksono, arXiv:2104.11774Int. J. Mod. Phys. D. 30092150064I. H. Belfaqih, H. Maulana and A. Sulaksono, "White dwarfs and generalized uncertainty principle," Int. J. Mod. Phys. D 30 no. 09, (Apr, 2021) 2150064, arXiv:2104.11774.
Quasinormal modes and shadow of a Schwarzschild black hole with GUP. M Anacleto, Ann. Phys. (N. Y). 434168662M. Anacleto et al., "Quasinormal modes and shadow of a Schwarzschild black hole with GUP," Ann. Phys. (N. Y). 434 (Nov, 2021) 168662.
Implications of the generalized uncertainty principle on the Walecka model equation of state and neutron star structure. A G Abac, J P H Esguerra, Int. J. Mod. Phys. D. 30082150055A. G. Abac and J. P. H. Esguerra, "Implications of the generalized uncertainty principle on the Walecka model equation of state and neutron star structure," Int. J. Mod. Phys. D 30 no. 08, (Jun, 2021) 2150055.
Effective GUP-modified Raychaudhuri equation and black hole singularity: four models. K Blanchette, S Das, S Rastgoo, J. High Energy Phys. 962K. Blanchette, S. Das and S. Rastgoo, "Effective GUP-modified Raychaudhuri equation and black hole singularity: four models," J. High Energy Phys. 2021 no. 9, (Sep, 2021) 62.
Bounds on GUP parameters from GW150914 and GW190521. A Das, arXiv:2101.03746Phys. Lett. B. 819136429A. Das et al., "Bounds on GUP parameters from GW150914 and GW190521," Phys. Lett. B 819 (Jan, 2021) 136429, arXiv:2101.03746.
Constraining the Generalized Uncertainty Principle with the light twisted by rotating black holes and M87*. F Tamburini, F Feleppa, B Thidé, arXiv:2103.13750Phys. Lett. B. 826136894F. Tamburini, F. Feleppa and B. Thidé, "Constraining the Generalized Uncertainty Principle with the light twisted by rotating black holes and M87*," Phys. Lett. B 826 (Mar, 2022) 136894, arXiv:2103.13750.
Gravitational bending angle with finite distances by Casimir wormholes. Í D D Carvalho, G Alencar, C R Muniz, Int. J. Mod. Phys. D. 3103Í. D. D. Carvalho, G. Alencar and C. R. Muniz, "Gravitational bending angle with finite distances by Casimir wormholes," Int. J. Mod. Phys. D 31 no. 03, (Feb, 2022) .
Generalized Uncertainty Principle, Modified Dispersion Relations and Early Universe Thermodynamics. K Nozari, B Fazlpour, arXiv:0601092Gen. Rel. Grav. 38gr-qcK. Nozari and B. Fazlpour, "Generalized Uncertainty Principle, Modified Dispersion Relations and Early Universe Thermodynamics," Gen. Rel. Grav. 38 (Jan, 2006) 1661-1679, arXiv:0601092 [gr-qc].
Effects of quantum gravity on the inflationary parameters and thermodynamics of the early universe. A Tawfik, H Magdy, A F Ali, arXiv:1208.5655Gen. Rel. Grav. 45A. Tawfik, H. Magdy and A. F. Ali, "Effects of quantum gravity on the inflationary parameters and thermodynamics of the early universe," Gen. Rel. Grav. 45 (Aug, 2012) 1227-1246, arXiv:1208.5655.
Energy distribution of massless particles on black hole backgrounds with generalized uncertainty principle. Z.-H Li, Phys. Rev. D. 80884013Z.-H. Li, "Energy distribution of massless particles on black hole backgrounds with generalized uncertainty principle," Phys. Rev. D 80 no. 8, (Oct, 2009) 084013.
GUP and the no-cloning theorem. E C Vagenas, A F Ali, H , arXiv:1811.06614Eur. Phys. J. C. 793276E. C. Vagenas, A. F. Ali and H. Alshal, "GUP and the no-cloning theorem," Eur. Phys. J. C 79 no. 3, (Nov, 2018) 276, arXiv:1811.06614.
Quantum gravity effects on statistics and compact star configurations. P Wang, H Yang, X Zhang, arXiv:1006.5362JHEP. 0843P. Wang, H. Yang and X. Zhang, "Quantum gravity effects on statistics and compact star configurations," JHEP 08 (Jun, 2010) 43, arXiv:1006.5362.
Quantum gravity effects on compact star cores. P Wang, H Yang, X Zhang, arXiv:1110.5550Phys. Lett. B. 718P. Wang, H. Yang and X. Zhang, "Quantum gravity effects on compact star cores," Phys. Lett. B 718 (Oct, 2011) 265-269, arXiv:1110.5550.
Generalized Uncertainty Principle Removes The Chandrasekhar Limit. R Rashidi, arXiv:1512.06356Ann. Phys. 374R. Rashidi, "Generalized Uncertainty Principle Removes The Chandrasekhar Limit," Ann. Phys. 374 (Dec, 2015) 434-443, arXiv:1512.06356.
Some Consequences of the Generalised Uncertainty Principle: Statistical Mechanical, Cosmological, and Varying Speed of Light. S K Rama, arXiv:0107255Phys. Lett. B. 519hep-thS. K. Rama, "Some Consequences of the Generalised Uncertainty Principle: Statistical Mechanical, Cosmological, and Varying Speed of Light," Phys. Lett. B 519 (Jul, 2001) 103-110, arXiv:0107255 [hep-th].
Implications of Minimal Length Scale on the Statistical Mechanics of Ideal Gas. K Nozari, S H Mehdipour, arXiv:0601096Chaos Solitons Fractals. 32hep-thK. Nozari and S. H. Mehdipour, "Implications of Minimal Length Scale on the Statistical Mechanics of Ideal Gas," Chaos Solitons Fractals 32 (Jan, 2006) 1637-1644, arXiv:0601096 [hep-th].
Minimal Length in Quantum Gravity, Equivalence Principle and Holographic Entropy Bound. A F Ali, arXiv:1101.4181Class. Quant. Grav. 2865013A. F. Ali, "Minimal Length in Quantum Gravity, Equivalence Principle and Holographic Entropy Bound," Class. Quant. Grav. 28 (Jan, 2011) 65013, arXiv:1101.4181.
Thermostatistics with minimal length uncertainty relation. B Vakili, M A Gorji, arXiv:1207.1049J. Stat. Mech. 121010013B. Vakili and M. A. Gorji, "Thermostatistics with minimal length uncertainty relation," J. Stat. Mech. 1210 (Jul, 2012) P10013, arXiv:1207.1049.
The Minimal Length and the Quantum Partition Functions. M Abbasiyan-Motlaq, P Pedram, arXiv:1406.3189J. Stat. Mech. Theory Exp. 88002M. Abbasiyan-Motlaq and P. Pedram, "The Minimal Length and the Quantum Partition Functions," J. Stat. Mech. Theory Exp. 2014 no. 8, (Jun, 2014) P08002, arXiv:1406.3189.
Towards Thermodynamics with Generalized Uncertainty Principle. A Farag Ali, M Moussa, Adv. High Energy Phys. 2014A. Farag Ali and M. Moussa, "Towards Thermodynamics with Generalized Uncertainty Principle," Adv. High Energy Phys. 2014 (2014) 1-7.
Effect of minimal length uncertainty on the mass-radius relation of white dwarfs. A Mathew, M K Nandy, arXiv:1712.03953Ann. Phys. 393A. Mathew and M. K. Nandy, "Effect of minimal length uncertainty on the mass-radius relation of white dwarfs," Ann. Phys. 393 (Dec, 2017) 184-205, arXiv:1712.03953.
Linear and Quadratic GUP, Liouville Theorem, Cosmological Constant, and Brick Wall Entropy. E C Vagenas, arXiv:1903.08494Eur. Phys. J. C. 795398E. C. Vagenas et al., "Linear and Quadratic GUP, Liouville Theorem, Cosmological Constant, and Brick Wall Entropy," Eur. Phys. J. C 79 no. 5, (Mar, 2019) 398, arXiv:1903.08494.
Non-Gaussian statistics from the generalized uncertainty principle. H Shababi, K Ourabah, Eur. Phys. J. Plus. 1359697H. Shababi and K. Ourabah, "Non-Gaussian statistics from the generalized uncertainty principle," Eur. Phys. J. Plus 135 no. 9, (Sep, 2020) 697.
New higher-order generalized uncertainty principle: Applications. B Hamil, B C Lütfüoglu, arXiv:2009.13838Int. J. Theor. Phys. 608B. Hamil and B. C. Lütfüoglu, "New higher-order generalized uncertainty principle: Applications," Int. J. Theor. Phys. 60 no. 8, (Sep, 2020) 2790-2803, arXiv:2009.13838.
Tsallis statistics and generalized uncertainty principle. G G Luciano, Eur. Phys. J. C. 817672G. G. Luciano, "Tsallis statistics and generalized uncertainty principle," Eur. Phys. J. C 81 no. 7, (2021) 672.
A Note on Effects of Generalized and Extended Uncertainty Principles on Jüttner Gas. H Moradpour, S Aghababaei, A H Ziaie, arXiv:2102.00916Symmetry (Basel). 132213H. Moradpour, S. Aghababaei and A. H. Ziaie, "A Note on Effects of Generalized and Extended Uncertainty Principles on Jüttner Gas," Symmetry (Basel). 13 no. 2, (Jan, 2021) 213, arXiv:2102.00916.
Baryogenesis in non-extensive Tsallis Cosmology. G G Luciano, J Giné, arXiv:2204.02723Phys. Lett. B. 833137352gr-qcG. G. Luciano and J. Giné, "Baryogenesis in non-extensive Tsallis Cosmology," Phys. Lett. B 833 (2022) 137352, arXiv:2204.02723 [gr-qc].
Decoherence limit of quantum systems obeying generalized uncertainty principle: New paradigm for Tsallis thermostatistics. P Jizba, arXiv:2201.07919Phys. Rev. D. 10512121501hep-thP. Jizba et al., "Decoherence limit of quantum systems obeying generalized uncertainty principle: New paradigm for Tsallis thermostatistics," Phys. Rev. D 105 no. 12, (2022) L121501, arXiv:2201.07919 [hep-th].
Quantum gravity simulation by nonparaxial nonlinear optics. C Conti, arXiv:1406.6677Phys. Rev. A. 89661801physics.opticsC. Conti, "Quantum gravity simulation by nonparaxial nonlinear optics," Phys. Rev. A 89 no. 6, (2014) 061801, arXiv:1406.6677 [physics.optics].
Generalized Uncertainty Principle and Analogue of Quantum Gravity in Optics. M C Braidotti, Z H Musslimani, C Conti, arXiv:1604.03405Phys. D. 338M. C. Braidotti, Z. H. Musslimani and C. Conti, "Generalized Uncertainty Principle and Analogue of Quantum Gravity in Optics," Phys. D 338 (Apr, 2016) 34-41, arXiv:1604.03405.
Generalized Dirac structure beyond the linear regime in graphene. A Iorio, arXiv:1706.01332Int. J. Mod. Phys. D. 27081850080A. Iorio et al., "Generalized Dirac structure beyond the linear regime in graphene," Int. J. Mod. Phys. D 27 no. 08, (May, 2017) 1850080, arXiv:1706.01332.
Analog hep-th, on Dirac materials and in general. A Iorio, arXiv:2005.11514PoS. 2019203hep-thA. Iorio, "Analog hep-th, on Dirac materials and in general," PoS CORFU2019 (2020) 203, arXiv:2005.11514 [hep-th].
Three "layers" of graphene monolayer and their analog generalized uncertainty principles. A Iorio, arXiv:2208.02237Phys. Rev. D. 10611116011gr-qcA. Iorio et al., "Three "layers" of graphene monolayer and their analog generalized uncertainty principles," Phys. Rev. D 106 no. 11, (2022) 116011, arXiv:2208.02237 [gr-qc].
A Iorio, arXiv:2302.00677Shadows of new physics on Dirac materials, analog GUPs and other amusements. Spacetime, Matter, Quantum Mechanics. 1, 2023. gr-qcA. Iorio et al., "Shadows of new physics on Dirac materials, analog GUPs and other amusements," in Spacetime, Matter, Quantum Mechanics. 1, 2023. arXiv:2302.00677 [gr-qc].
Minimal Length Scale Scenarios for Quantum Gravity. S Hossenfelder, arXiv:1203.6191Living Rev. Rel. 16S. Hossenfelder, "Minimal Length Scale Scenarios for Quantum Gravity," Living Rev. Rel. 16 (Mar, 2012) 2, arXiv:1203.6191.
Generalized Uncertainty Principle: Approaches and Applications. A N Tawfik, A M Diab, arXiv:1410.0206Int. J. Mod. Phys. D. 23121430025A. N. Tawfik and A. M. Diab, "Generalized Uncertainty Principle: Approaches and Applications," Int. J. Mod. Phys. D 23 no. 12, (Sep, 2014) 1430025, arXiv:1410.0206.
Theory and phenomenology of relativistic corrections to the Heisenberg principle. G Amelino-Camelia, V Astuti, arXiv:2209.04350G. Amelino-Camelia and V. Astuti, "Theory and phenomenology of relativistic corrections to the Heisenberg principle," arXiv:2209.04350.
Generalized Uncertainty Principle, Classical Mechanics, and General Relativity. R Casadio, F Scardigli, arXiv:2004.04076Phys. Lett. B. 807135558R. Casadio and F. Scardigli, "Generalized Uncertainty Principle, Classical Mechanics, and General Relativity," Phys. Lett. B 807 (Apr, 2020) 135558, arXiv:2004.04076.
Relative locality and the soccer ball problem. G Amelino-Camelia, arXiv:1104.2019Phys. Rev. D. 84887702G. Amelino-Camelia et al., "Relative locality and the soccer ball problem," Phys. Rev. D 84 no. 8, (Apr, 2011) 087702, arXiv:1104.2019.
The minimal length: a cut-off in disguise?. P Bosso, L Petruzziello, F Wagner, arXiv:2302.04564hep-thP. Bosso, L. Petruzziello and F. Wagner, "The minimal length: a cut-off in disguise?," arXiv:2302.04564 [hep-th].
Relativity in space-times with short distance structure governed by an observer independent (Planckian) length scale. G Amelino-Camelia, arXiv:0012051Int. J. Mod. Phys. D. 11gr-qcG. Amelino-Camelia, "Relativity in space-times with short distance structure governed by an observer independent (Planckian) length scale," Int. J. Mod. Phys. D 11 (2002) 35-60, arXiv:0012051 [gr-qc].
Lorentz invariance with an invariant energy scale. J Magueijo, L Smolin, arXiv:0112090Phys. Rev. Lett. 88hep-thJ. Magueijo and L. Smolin, "Lorentz invariance with an invariant energy scale," Phys. Rev. Lett. 88 (Dec, 2001) 190403, arXiv:0112090 [hep-th].
Space and time transformations with a minimal length. P Bosso, arXiv:2206.15422Class. Quant. Grav. 40555001gr-qcP. Bosso, "Space and time transformations with a minimal length," Class. Quant. Grav. 40 no. 5, (2023) 055001, arXiv:2206.15422 [gr-qc].
The Soccer-Ball Problem. S Hossenfelder, arXiv:1403.2080SIGMA 10. 74S. Hossenfelder, "The Soccer-Ball Problem," SIGMA 10 (Mar, 2014) 74, arXiv:1403.2080.
Minimal Length and Small Scale Structure of Spacetime. D Kothawala, arXiv:1307.5618Phys. Rev. D. 8810104029D. Kothawala, "Minimal Length and Small Scale Structure of Spacetime," Phys. Rev. D 88 no. 10, (Jul, 2013) 104029, arXiv:1307.5618.
Grin of the Cheshire cat: Entropy density of spacetime as a relic from quantum gravity. D Kothawala, T Padmanabhan, arXiv:1405.4967Phys. Rev. D. 9012124060D. Kothawala and T. Padmanabhan, "Grin of the Cheshire cat: Entropy density of spacetime as a relic from quantum gravity," Phys. Rev. D 90 no. 12, (May, 2014) 124060, arXiv:1405.4967.
Spacetime with zero point length is two-dimensional at the Planck scale. T Padmanabhan, S Chakraborty, D Kothawala, arXiv:1507.05669Gen. Rel. Grav. 48555T. Padmanabhan, S. Chakraborty and D. Kothawala, "Spacetime with zero point length is two-dimensional at the Planck scale," Gen. Rel. Grav. 48 no. 5, (Jul, 2015) 55, arXiv:1507.05669.
Generalised uncertainty relations from superpositions of geometries. M J Lake, arXiv:1812.10045Class. Quant. Grav. 3615155012M. J. Lake et al., "Generalised uncertainty relations from superpositions of geometries," Class. Quant. Grav. 36 no. 15, (Dec, 2018) 155012, arXiv:1812.10045.
Generalised uncertainty relations for angular momentum and spin in quantum geometry. M J Lake, M Miller, S.-D Liang, arXiv:1912.0709456M. J. Lake, M. Miller and S.-D. Liang, "Generalised uncertainty relations for angular momentum and spin in quantum geometry," Universe 6 no. 4, (Dec, 2019) 56, arXiv:1912.07094.
A solution to the soccer ball problem for generalised uncertainty relations. M J Lake, arXiv:1912.07093Ukr. J. Phys. 6411M. J. Lake, "A solution to the soccer ball problem for generalised uncertainty relations," Ukr. J. Phys. 64 no. 11, (Dec, 2019) 1036-1041, arXiv:1912.07093.
A New Approach to Generalised Uncertainty Relations. M J Lake, arXiv:2008.13183M. J. Lake, "A New Approach to Generalised Uncertainty Relations," arXiv:2008.13183.
Asymptotic Generalized Extended Uncertainty Principle. M P Dabrowski, F Wagner, arXiv:2006.02188Eur. Phys. J. C. 807676M. P. Dabrowski and F. Wagner, "Asymptotic Generalized Extended Uncertainty Principle," Eur. Phys. J. C 80 no. 7, (Jun, 2020) 676, arXiv:2006.02188.
Quantum gravitational decoherence from fluctuating minimal length and deformation parameter at the Planck scale. L Petruzziello, F Illuminati, arXiv:2011.01255Nat. Commun. 1214449L. Petruzziello and F. Illuminati, "Quantum gravitational decoherence from fluctuating minimal length and deformation parameter at the Planck scale," Nat. Commun. 12 no. 1, (Nov, 2020) 4449, arXiv:2011.01255.
Relativistic extended uncertainty principle from spacetime curvature. F Wagner, arXiv:2111.15583Phys. Rev. D. 105225005F. Wagner, "Relativistic extended uncertainty principle from spacetime curvature," Phys. Rev. D 105 no. 2, (Jan, 2022) 025005, arXiv:2111.15583.
Zum Heisenbergschen Unschärfeprinzip. E Schrödinger, Sitzungsberichte der Preuss. Akad. der Wissenschaften. Phys. Klasse. E. Schrödinger, "Zum Heisenbergschen Unschärfeprinzip," Sitzungsberichte der Preuss. Akad. der Wissenschaften. Phys. Klasse (1930) 296-303.
The Uncertainty Principle. H P Robertson, Phys. Rev. 341H. P. Robertson, "The Uncertainty Principle," Phys. Rev. 34 no. 1, (Jul, 1929) 163-164.
Constraining GUP models using limits on SME coefficients. A H Gomes, arXiv:2205.02044Class. Quant. Grav. 3922225017hep-thA. H. Gomes, "Constraining GUP models using limits on SME coefficients," Class. Quant. Grav. 39 no. 22, (2022) 225017, arXiv:2205.02044 [hep-th].
Generalized uncertainty principle or curved momentum space?. F Wagner, Phys. Rev. D. 10412126010F. Wagner, "Generalized uncertainty principle or curved momentum space?," Phys. Rev. D 104 no. 12, (Dec, 2021) 126010.
The Effect of the Minimal Length Uncertainty Relation on the Density of States and the Cosmological Constant Problem. L N Chang, arXiv:0201017Phys. Rev. D. 65125028hep-thL. N. Chang et al., "The Effect of the Minimal Length Uncertainty Relation on the Density of States and the Cosmological Constant Problem," Phys. Rev. D 65 (Jan, 2002) 125028, arXiv:0201017 [hep-th].
Hilbert Space Representation of the Minimal Length Uncertainty Relation. A Kempf, G Mangano, R B Mann, arXiv:9412167Phys. Rev. D. 52hep-thA. Kempf, G. Mangano and R. B. Mann, "Hilbert Space Representation of the Minimal Length Uncertainty Relation," Phys. Rev. D 52 (Dec, 1994) 1108-1118, arXiv:9412167 [hep-th].
Localized States for Elementary Systems. T D Newton, E P Wigner, Rev. Mod. Phys. 213T. D. Newton and E. P. Wigner, "Localized States for Elementary Systems," Rev. Mod. Phys. 21 no. 3, (Jul, 1949) 400-406.
Short Distance vs. Long Distance Physics: The Classical Limit of the Minimal Length Uncertainty Relation. S Benczik, arXiv:0204049Phys. Rev. D. 6626003hep-thS. Benczik et al., "Short Distance vs. Long Distance Physics: The Classical Limit of the Minimal Length Uncertainty Relation," Phys. Rev. D 66 (Apr, 2002) 26003, arXiv:0204049 [hep-th].
On deformations of classical mechanics due to Planck-scale physics. O I Chashchina, A Sen, Z K Silagadze, arXiv:1902.09728Int. J. Mod. Phys. D. 2910O. I. Chashchina, A. Sen and Z. K. Silagadze, "On deformations of classical mechanics due to Planck-scale physics," Int. J. Mod. Phys. D 29 no. 10, (Feb, 2019) 2050070, arXiv:1902.09728.
The principle of relative locality. G Amelino-Camelia, arXiv:1101.0931Phys. Rev. D. 8484010G. Amelino-Camelia et al., "The principle of relative locality," Phys. Rev. D 84 (Jan, 2011) 84010, arXiv:1101.0931.
Nonrelativistic quantum dynamics on planck-scale deformed cotangent bundles. F Wagner, in preparationF. Wagner et al., "Nonrelativistic quantum dynamics on planck-scale deformed cotangent bundles,". in preparation.
Time in Quantum Mechanics. J. Muga, R. S. Mayato andÍ. Egusquiza734J. Muga, R. S. Mayato andÍ. Egusquiza, eds., Time in Quantum Mechanics, vol. 734 of Lecture Notes in Physics.
. Heidelberg Springer Berlin, Berlin, HeidelbergSpringer Berlin Heidelberg, Berlin, Heidelberg, 2008.
Die allgemeinen Prinzipien der Wellenmechanik. W Pauli, Quantentheorie, H. Bethe et al.SpringerBerlin Heidelberg; Berlin, HeidelbergW. Pauli, "Die allgemeinen Prinzipien der Wellenmechanik," in Quantentheorie, H. Bethe et al., eds., pp. 83-272. Springer Berlin Heidelberg, Berlin, Heidelberg, 1933.
The 'time of occurrence' in quantum mechanics. M D Srinivas, R Vijayalakshmi, Pramana. 163M. D. Srinivas and R. Vijayalakshmi, "The 'time of occurrence' in quantum mechanics," Pramana 16 no. 3, (Mar, 1981) 173-199.
T Padmanabhan, Quantum Field Theory. ChamSpringer International PublishingT. Padmanabhan, Quantum Field Theory. Springer International Publishing, Cham, 2016.
Kappa-deformed Snyder spacetime. S Meljanac, arXiv:0912.5087Mod. Phys. Lett. A. 25hep-thS. MELJANAC et al., "Kappa-deformed Snyder spacetime," Mod. Phys. Lett. A 25 (2010) 579-590, arXiv:0912.5087 [hep-th].
Scalar Field Theory on Non-commutative Snyder Space-Time. M V Battisti, S Meljanac, arXiv:1003.2108Phys. Rev. D. 8224028hep-thM. V. Battisti and S. Meljanac, "Scalar Field Theory on Non-commutative Snyder Space-Time," Phys. Rev. D 82 (2010) 024028, arXiv:1003.2108 [hep-th].
Kappa Snyder deformations of Minkowski spacetime, realizations and Hopf algebra. S Meljanac, arXiv:1102.1655Phys. Rev. D. 8365009math-phS. Meljanac et al., "Kappa Snyder deformations of Minkowski spacetime, realizations and Hopf algebra," Phys. Rev. D 83 (2011) 065009, arXiv:1102.1655 [math-ph].
Twist for Snyder space. D Meljanac, arXiv:1711.02941Eur. Phys. J. C. 783194hep-thD. Meljanac et al., "Twist for Snyder space," Eur. Phys. J. C 78 no. 3, (2018) 194, arXiv:1711.02941 [hep-th].
Space-Time as a Causal Set. L Bombelli, Phys. Rev. Lett. 59L. Bombelli et al., "Space-Time as a Causal Set," Phys. Rev. Lett. 59 (1987) 521-524.
A Classical sequential growth dynamics for causal sets. D P Rideout, R D Sorkin, arXiv:gr-qc/9904062Phys. Rev. D. 6124002D. P. Rideout and R. D. Sorkin, "A Classical sequential growth dynamics for causal sets," Phys. Rev. D 61 (2000) 024002, arXiv:gr-qc/9904062.
Causal sets: Discrete gravity. R D Sorkin, arXiv:gr-qc/0309009School on Quantum Gravity. 9R. D. Sorkin, "Causal sets: Discrete gravity," in School on Quantum Gravity, pp. 305-327. 9, 2003. arXiv:gr-qc/0309009.
The causal set approach to quantum gravity. S Surya, arXiv:1903.11544Living Rev. Rel. 2215gr-qcS. Surya, "The causal set approach to quantum gravity," Living Rev. Rel. 22 no. 1, (2019) 5, arXiv:1903.11544 [gr-qc].
On Quantum Field Theory with Nonzero Minimal Uncertainties in Positions and Momenta. A Kempf, arXiv:9602085J. Math. Phys. 38hep-thA. Kempf, "On Quantum Field Theory with Nonzero Minimal Uncertainties in Positions and Momenta," J. Math. Phys. 38 (Feb, 1996) 1347-1372, arXiv:9602085 [hep-th].
Casimir Effect in the Presence of Minimal Lengths. K Nouicer, arXiv:0512027J. Phys. A. 38hep-thK. Nouicer, "Casimir Effect in the Presence of Minimal Lengths," J. Phys. A 38 (Dec, 2005) 10027-10035, arXiv:0512027 [hep-th].
The Casimir Effect in Minimal Length Theories based on a Generalized Uncertainity Principle. A M Frassino, O Panella, arXiv:1112.2924Phys. Rev. D. 8545030A. M. Frassino and O. Panella, "The Casimir Effect in Minimal Length Theories based on a Generalized Uncertainity Principle," Phys. Rev. D 85 (Dec, 2011) 45030, arXiv:1112.2924.
On a Generalization in Quantum Theory: Is Constant?. R J Adler, D I Santiago, arXiv:9908073hep-thR. J. Adler and D. I. Santiago, "On a Generalization in Quantum Theory: Is Constant?," arXiv:9908073 [hep-th].
Minimal Length and Generalized Dirac Equation. K Nozari, M Karami, arXiv:0507028Mod. Phys. Lett. A. 20hep-thK. Nozari and M. Karami, "Minimal Length and Generalized Dirac Equation," Mod. Phys. Lett. A 20 (Jul, 2005) 3095-3104, arXiv:0507028 [hep-th].
Gauge Theories under Incorporation of a Generalized Uncertainty Principle. M Kober, arXiv:1008.0154Phys. Rev. D. 8285017M. Kober, "Gauge Theories under Incorporation of a Generalized Uncertainty Principle," Phys. Rev. D 82 (Aug, 2010) 85017, arXiv:1008.0154.
Electroweak Theory with a Minimal Length. M Kober, arXiv:1104.2319Int. J. Mod. Phys. A. 26M. Kober, "Electroweak Theory with a Minimal Length," Int. J. Mod. Phys. A 26 (Apr, 2011) 4251-4285, arXiv:1104.2319.
Dirac particle in gravitational quantum mechanics. P Pedram, Phys. Lett. B. 7024P. Pedram, "Dirac particle in gravitational quantum mechanics," Phys. Lett. B 702 no. 4, (Aug, 2011) 295-298.
Formulation of the Spinor Field in the Presence of a Minimal Length Based on the Quesne-Tkachuk Algebra. S K Moayedi, M R Setare, H Moayeri, arXiv:1105.1900Int. J. Mod. Phys. A. 26S. K. Moayedi, M. R. Setare and H. Moayeri, "Formulation of the Spinor Field in the Presence of a Minimal Length Based on the Quesne-Tkachuk Algebra," Int. J. Mod. Phys. A 26 (May, 2011) 4981-4990, arXiv:1105.1900.
Formulation of Electrodynamics with an External Source in the Presence of a Minimal Measurable Length. S K Moayedi, M R Setare, B Khosropour, arXiv:1303.0100Adv. High Energy Phys. 657870S. K. Moayedi, M. R. Setare and B. Khosropour, "Formulation of Electrodynamics with an External Source in the Presence of a Minimal Measurable Length," Adv. High Energy Phys. 2013 (Mar, 2013) 657870, arXiv:1303.0100.
Incorporation of Generalized Uncertainty Principle into Lifshitz Field Theories. M Faizal, B Majumder, arXiv:1408.3795Ann. Phys. 357M. Faizal and B. Majumder, "Incorporation of Generalized Uncertainty Principle into Lifshitz Field Theories," Ann. Phys. 357 (Aug, 2014) 49-58, arXiv:1408.3795.
Quantum field theory with the generalized uncertainty principle II: Quantum Electrodynamics. P Bosso, S Das, V Todorinov, arXiv:2005.03772Ann. Phys. 424168350P. Bosso, S. Das and V. Todorinov, "Quantum field theory with the generalized uncertainty principle II: Quantum Electrodynamics," Ann. Phys. 424 (May, 2020) 168350, arXiv:2005.03772.
Quantization of fields based on Generalized Uncertainty Principle. T Matsuo, Y Shibusa, arXiv:0511031Mod. Phys. Lett. A. 21hep-thT. Matsuo and Y. Shibusa, "Quantization of fields based on Generalized Uncertainty Principle," Mod. Phys. Lett. A 21 (Nov, 2005) 1285-1296, arXiv:0511031 [hep-th].
On Path Integration on Noncommutative Geometries. A Kempf, arXiv:9603115Minisemester on Quantum Groups and Quantum Spaces. hep-thA. Kempf, "On Path Integration on Noncommutative Geometries," in Minisemester on Quantum Groups and Quantum Spaces. Mar, 1996. arXiv:9603115 [hep-th].
Generalized Quantization Principle in Canonical Quantum Gravity and Application to Quantum Cosmology. M Kober, arXiv:1109.4629Int. J. Mod. Phys. A. 271250106M. Kober, "Generalized Quantization Principle in Canonical Quantum Gravity and Application to Quantum Cosmology," Int. J. Mod. Phys. A 27 (Sep, 2011) 1250106, arXiv:1109.4629.
Generalized uncertainty principles and quantum field theory. V Husain, D Kothawala, S S Seahra, arXiv:1208.5761Phys. Rev. D. 87225014V. Husain, D. Kothawala and S. S. Seahra, "Generalized uncertainty principles and quantum field theory," Phys. Rev. D 87 no. 2, (Aug, 2012) 25014, arXiv:1208.5761.
Challenge to Macroscopic Probes of Quantum Spacetime Based on Noncommutative Geometry. G Amelino-Camelia, arXiv:1304.7271Phys. Rev. Lett. 111101301G. Amelino-Camelia, "Challenge to Macroscopic Probes of Quantum Spacetime Based on Noncommutative Geometry," Phys. Rev. Lett. 111 (Apr, 2013) 101301, arXiv:1304.7271.
Deformed General Relativity. M Bojowald, G M Paily, arXiv:1212.4773Phys. Rev. D. 87444044gr-qcM. Bojowald and G. M. Paily, "Deformed General Relativity," Phys. Rev. D 87 no. 4, (2013) 044044, arXiv:1212.4773 [gr-qc].
Rainbow metric from quantum gravity. M Assanioussi, A Dapor, J Lewandowski, arXiv:1412.6000Phys. Lett. B. 751gr-qcM. Assanioussi, A. Dapor and J. Lewandowski, "Rainbow metric from quantum gravity," Phys. Lett. B 751 (2015) 302-305, arXiv:1412.6000 [gr-qc].
Constraining the loop quantum gravity parameter space from phenomenology. S Brahma, M Ronco, arXiv:1801.09417Phys. Lett. B. 778hep-thS. Brahma and M. Ronco, "Constraining the loop quantum gravity parameter space from phenomenology," Phys. Lett. B 778 (2018) 184-189, arXiv:1801.09417 [hep-th].
3D Quantum Gravity and Effective Noncommutative Quantum Field Theory. L Freidel, E R Livine, arXiv:hep-th/0512113Phys. Rev. Lett. 96221301L. Freidel and E. R. Livine, "3D Quantum Gravity and Effective Noncommutative Quantum Field Theory," Phys. Rev. Lett. 96 (2006) 221301, arXiv:hep-th/0512113.
Ponzano-Regge model revisited III: Feynman diagrams and effective field theory. L Freidel, E R Livine, arXiv:hep-th/0502106Class. Quant. Grav. 23L. Freidel and E. R. Livine, "Ponzano-Regge model revisited III: Feynman diagrams and effective field theory," Class. Quant. Grav. 23 (2006) 2021-2062, arXiv:hep-th/0502106.
Planck-scale soccer-ball problem: a case of mistaken identity. G Amelino-Camelia, arXiv:1407.7891Entropy. 198400G. Amelino-Camelia, "Planck-scale soccer-ball problem: a case of mistaken identity," Entropy 19 no. 8, (Jul, 2014) 400, arXiv:1407.7891.
Quantum symmetry, the cosmological constant and Planck scale phenomenology. G Amelino-Camelia, L Smolin, A Starodubtsev, arXiv:0306134Class. Quant. Grav. 21hep-thG. Amelino-Camelia, L. Smolin and A. Starodubtsev, "Quantum symmetry, the cosmological constant and Planck scale phenomenology," Class. Quant. Grav. 21 (Jun, 2003) 3095-3110, arXiv:0306134 [hep-th].
Multi-Particle States in Deformed Special Relativity. S Hossenfelder, arXiv:0702016Phys. Rev. D. 75105005hep-thS. Hossenfelder, "Multi-Particle States in Deformed Special Relativity," Phys. Rev. D 75 (Feb, 2007) 105005, arXiv:0702016 [hep-th].
Multi-particle systems in quantum spacetime and a novel challenge for center-of-mass motion. G Amelino-Camelia, arXiv:2012.04397Int. J. Mod. Phys. D. 30062150046G. Amelino-Camelia et al., "Multi-particle systems in quantum spacetime and a novel challenge for center-of-mass motion," Int. J. Mod. Phys. D 30 no. 06, (Dec, 2020) 2150046, arXiv:2012.04397.
Probing Planck-scale physics with quantum optics. I Pikovski, arXiv:1111.1979Nat. Phys. 8I. Pikovski et al., "Probing Planck-scale physics with quantum optics," Nat. Phys. 8 (Nov, 2011) 393-397, arXiv:1111.1979.
Is a tabletop search for Planck scale signals feasible?. J D Bekenstein, arXiv:1211.3816Phys. Rev. D. 8612124040J. D. Bekenstein, "Is a tabletop search for Planck scale signals feasible?," Phys. Rev. D 86 no. 12, (Nov, 2012) 124040, arXiv:1211.3816.
On Quantum Gravity Tests with Composite Particles. S P Kumar, M B Plenio, arXiv:1908.11164Nat. Commun. 1113900S. P. Kumar and M. B. Plenio, "On Quantum Gravity Tests with Composite Particles," Nat. Commun. 11 no. 1, (Aug, 2019) 3900, arXiv:1908.11164.
Heuristic derivation of the Casimir effect from Generalized Uncertainty Principle. M Blasone, arXiv:1902.02414J. Phys. Conf. Ser. 112024hep-thM. Blasone et al., "Heuristic derivation of the Casimir effect from Generalized Uncertainty Principle," J. Phys. Conf. Ser. 1275 no. 1, (2019) 012024, arXiv:1902.02414 [hep-th].
Schwinger Pair Production and the Extended Uncertainty Principle: Can Heuristic Derivations Be Trusted?. Y C Ong, arXiv:2005.12075Eur. Phys. J. C. 808777Y. C. Ong, "Schwinger Pair Production and the Extended Uncertainty Principle: Can Heuristic Derivations Be Trusted?," Eur. Phys. J. C 80 no. 8, (May, 2020) 777, arXiv:2005.12075.
Extended uncertainty principle and the geometry of (anti)-de Sitter space. S Mignemi, arXiv:0909.1202Mod. Phys. Lett. A. 25S. Mignemi, "Extended uncertainty principle and the geometry of (anti)-de Sitter space," Mod. Phys. Lett. A 25 (Sep, 2009) 1697-1703, arXiv:0909.1202.
Minimal length, maximal momentum and the entropic force law. K Nozari, P Pedram, M Molkara, arXiv:1111.2204Int. J. Theor. Phys. 51K. Nozari, P. Pedram and M. Molkara, "Minimal length, maximal momentum and the entropic force law," Int. J. Theor. Phys. 51 (Nov, 2011) 1268-1275, arXiv:1111.2204.
Extended Uncertainty Principle for Rindler and cosmological horizons. M P Dabrowski, F Wagner, arXiv:1905.09713Eur. Phys. J. C. 798716M. P. Dabrowski and F. Wagner, "Extended Uncertainty Principle for Rindler and cosmological horizons," Eur. Phys. J. C 79 no. 8, (May, 2019) 716, arXiv:1905.09713.
GUP parameter from Maximal Acceleration. G G Luciano, L Petruzziello, arXiv:1902.07059Eur. Phys. J. C. 793283G. G. Luciano and L. Petruzziello, "GUP parameter from Maximal Acceleration," Eur. Phys. J. C 79 no. 3, (Feb, 2019) 283, arXiv:1902.07059.
A Minimal length versus the Unruh effect. P Nicolini, M Rinaldi, arXiv:0910.2860Phys. Lett. B. 695hep-thP. Nicolini and M. Rinaldi, "A Minimal length versus the Unruh effect," Phys. Lett. B 695 (2011) 303-306, arXiv:0910.2860 [hep-th].
Modified Dispersion Relation, Photon's Velocity, and Unruh Effect. B R Majhi, E C Vagenas, arXiv:1307.4195Phys. Lett. B. 725B. R. Majhi and E. C. Vagenas, "Modified Dispersion Relation, Photon's Velocity, and Unruh Effect," Phys. Lett. B 725 (Jul, 2013) 477-480, arXiv:1307.4195.
On the Duality Condition for Quantum Fields. J J Bisognano, E H Wichmann, J. Math. Phys. 17J. J. Bisognano and E. H. Wichmann, "On the Duality Condition for Quantum Fields," J. Math. Phys. 17 (1976) 303-321.
Some heuristic semi-classical derivations of the Planck length, the Hawking effect and the Unruh effect. F Scardigli, Nuovo Cim. B Ser. 119F. Scardigli, "Some heuristic semi-classical derivations of the Planck length, the Hawking effect and the Unruh effect," Nuovo Cim. B Ser. 11 110 no. 9, (Sep, 1995) 1029-1034.
Generalized uncertainty principle and correction value to the black hole entropy. Z Hai-Xia, arXiv:0608023Commun. Theor. Phys. 48gr-qcZ. Hai-Xia et al., "Generalized uncertainty principle and correction value to the black hole entropy," Commun. Theor. Phys. 48 (Aug, 2006) 465-468, arXiv:0608023 [gr-qc].
Hawking radiation as tunneling. M K Parikh, F Wilczek, arXiv:hep-th/9907001Phys. Rev. Lett. 85M. K. Parikh and F. Wilczek, "Hawking radiation as tunneling," Phys. Rev. Lett. 85 (2000) 5042-5045, arXiv:hep-th/9907001.
Thermodynamical Aspects of Gravity: New insights. T Padmanabhan, arXiv:0911.5004Rept. Prog. Phys. 7346901gr-qcT. Padmanabhan, "Thermodynamical Aspects of Gravity: New insights," Rept. Prog. Phys. 73 (2010) 046901, arXiv:0911.5004 [gr-qc].
B Carr, L Modesto, I Prémont-Schwarz, arXiv:1107.0708Generalized Uncertainty Principle and Self-dual Black Holes. B. Carr, L. Modesto and I. Prémont-Schwarz, "Generalized Uncertainty Principle and Self-dual Black Holes," arXiv:1107.0708.
The Black Hole Uncertainty Principle Correspondence. B J Carr, arXiv:1402.1427Springer Proc. Phys. 170B. J. Carr, "The Black Hole Uncertainty Principle Correspondence," Springer Proc. Phys. 170 (Feb, 2014) 159-167, arXiv:1402.1427.
Sub-Planckian black holes and the Generalized Uncertainty Principle. B J Carr, J Mureika, P Nicolini, arXiv:1504.07637JHEP. 0752B. J. Carr, J. Mureika and P. Nicolini, "Sub-Planckian black holes and the Generalized Uncertainty Principle," JHEP 07 (Apr, 2015) 52, arXiv:1504.07637.
LHC Machine. L Evans, P Bryant, J. Instrum. 308L. Evans and P. Bryant, "LHC Machine," J. Instrum. 3 no. 08, (Aug, 2008) S08001-S08001.
Phenomenology of the Pauli exclusion principle violations due to the non-perturbative generalized uncertainty principle. A Addazi, Eur. Phys. J. C. 808795A. Addazi et al., "Phenomenology of the Pauli exclusion principle violations due to the non-perturbative generalized uncertainty principle," Eur. Phys. J. C 80 no. 8, (Aug, 2020) 795.
Improved Constraints on the Minimum Length with a Macroscopic Low Loss Phonon Cavity. W M Campbell, arXiv:2304.00688gr-qcW. M. Campbell et al., "Improved Constraints on the Minimum Length with a Macroscopic Low Loss Phonon Cavity," arXiv:2304.00688 [gr-qc].
Testing of Generalized Uncertainty Principle With Macroscopic Mechanical Oscillators and Pendulums. P A Bushev, arXiv:1903.03346Phys. Rev. D. 100666020P. A. Bushev et al., "Testing of Generalized Uncertainty Principle With Macroscopic Mechanical Oscillators and Pendulums," Phys. Rev. D 100 no. 6, (Mar, 2019) 66020, arXiv:1903.03346.
Probing deformed commutators with macroscopic harmonic oscillators. M Bawaj, arXiv:1411.6410Nat. Commun. 67503M. Bawaj et al., "Probing deformed commutators with macroscopic harmonic oscillators," Nat. Commun. 6 (Nov, 2014) 7503, arXiv:1411.6410.
Planck scale effects on some low energy quantum phenomena. S Das, R B Mann, arXiv:1109.3258Phys. Lett. B. 704S. Das and R. B. Mann, "Planck scale effects on some low energy quantum phenomena," Phys. Lett. B 704 (Sep, 2011) 596-599, arXiv:1109.3258.
Bound states of hydrogen atom in a theory with minimal length uncertainty relations. J Slawny, J. Math. Phys. 48553515J. Slawny, "Bound states of hydrogen atom in a theory with minimal length uncertainty relations," J. Math. Phys. 48 no. 5, (2007) 053515.
On the Minimal Length Uncertainty Relation and the Foundations of String Theory. L N Chang, arXiv:1106.0068Adv. High Energy Phys. 493514L. N. Chang et al., "On the Minimal Length Uncertainty Relation and the Foundations of String Theory," Adv. High Energy Phys. 2011 (May, 2011) 493514, arXiv:1106.0068.
A proposal for testing Quantum Gravity in the lab. A F Ali, S Das, E C Vagenas, arXiv:1107.3164Phys. Rev. D. 84444013A. F. Ali, S. Das and E. C. Vagenas, "A proposal for testing Quantum Gravity in the lab," Phys. Rev. D 84 no. 4, (Jul, 2011) 044013, arXiv:1107.3164.
Constraining the generalized uncertainty principle with cold atoms. D Gao, M Zhan, arXiv:1607.04353Phys. Rev. A. 94113607D. Gao and M. Zhan, "Constraining the generalized uncertainty principle with cold atoms," Phys. Rev. A 94 no. 1, (Jul, 2016) 13607, arXiv:1607.04353.
Probing Planck Scale Spacetime By Cavity Opto-Atomic 87 Rb Interferometry. M Khodadi, arXiv:1804.06389PTEP. 5M. Khodadi et al., "Probing Planck Scale Spacetime By Cavity Opto-Atomic 87 Rb Interferometry," PTEP 2019 no. 5, (Apr, 2018) 053E03, arXiv:1804.06389.
Gravitational bar detectors set limits to Planck-scale physics on macroscopic variables. F Marin, Nat. Phys. 92F. Marin et al., "Gravitational bar detectors set limits to Planck-scale physics on macroscopic variables," Nat. Phys. 9 no. 2, (Feb, 2013) 71-73.
Investigation on Planck scale physics by the AURIGA gravitational bar detector. F Marin, New J. Phys. 16885012F. Marin et al., "Investigation on Planck scale physics by the AURIGA gravitational bar detector," New J. Phys. 16 no. 8, (Aug, 2014) 085012.
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test. D Gao, J Wang, M Zhan, arXiv:1704.02037Phys. Rev. A. 95442106D. Gao, J. Wang and M. Zhan, "Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test," Phys. Rev. A 95 no. 4, (Apr, 2017) 42106, arXiv:1704.02037.
| [] |
[] | [
"Paolo Leonetti \nDepartment of Economics\nUniversitá degli Studi dell'Insubria\nvia Monte Generoso 7121100VareseItaly\n"
] | [
"Department of Economics\nUniversitá degli Studi dell'Insubria\nvia Monte Generoso 7121100VareseItaly"
] | [] | We define the notion of ideal convergence for sequences (x_n) with values in topological spaces X with respect to a family {F_η : η ∈ X} of subsets of X with η ∈ F_η. Each set F_η quantifies the degree of accuracy of the convergence toward η. After proving that this is really a new notion, we provide some properties of the set of limit points and characterize the latter through the ideal cluster points and the ideal core of (x_n). | null | [
"https://export.arxiv.org/pdf/2305.15928v1.pdf"
] | 258,887,682 | 2305.15928 | 5e0b0e9f60d2a6a9520aa3186e91be43dc1e2ab8 |
25 May 2023
Paolo Leonetti
Department of Economics
Università degli Studi dell'Insubria
via Monte Generoso 71, 21100 Varese, Italy
ROUGH FAMILIES, CLUSTER POINTS, AND CORES
We define the notion of ideal convergence for sequences (xn) with values in topological spaces X with respect to a family {Fη : η ∈ X} of subsets of X with η ∈ Fη. Each set Fη quantifies the degree of accuracy of the convergence toward η. After proving that this is really a new notion, we provide some properties of the set of limit points and characterize the latter through the ideal cluster points and the ideal core of (xn).
Introduction and Main Results
Let I ⊆ P(ω) be an ideal on the nonnegative integers ω, that is, a family closed under subsets and finite unions. It is also assumed that the family of finite subsets of ω, denoted by Fin, is contained in I and that ω ∉ I. Let also x = (x n ) be a sequence taking values in a topological space (X, τ ) (note that it is not assumed to be Hausdorff). Lastly, let F := {F η : η ∈ X} be a rough family, that is, a collection of subsets of X with the property that η ∈ F η for all η ∈ X. Rough families, as it will be clear from the following definition, quantify the "degree of accuracy" of sequences taking values in X toward their limits η. In particular, they can change depending on η: smaller sets F η can be interpreted as smaller oscillations of the tail of the sequence around its limit η.
Definition 1.1. Given an ideal I on ω, a rough family F , and a topology τ on X, we say that x is I-convergent to η ∈ X with roughness F , shortly (I, F , τ )-lim n x n = η, if {n ∈ ω : x n ∉ U } ∈ I for all open sets U ⊆ X containing F η . The set of all such limit points η is denoted by L x (I, F , τ ). In particular: (iii) if F η = {η} for all η ∈ X and, in addition, I = Fin, then I-convergence with roughness F corresponds to ordinary τ -convergence; (iv) special instances where X is a normed vector space and each F η is chosen as the closed ball with center η and fixed radius r ∈ [0, ∞) have been studied in several works, see e.g. [1, 2, 13] and references therein. It is remarkable that Definition 1.1 may not correspond to (J , ν)-convergence, for every ideal J on ω and for every topology ν on X: Proposition 1.2. Suppose that X = R is endowed with the standard Euclidean topology τ . Then there exists a rough family F such that, for each ideal I on ω, there is no ideal J on ω and no topology ν on R for which the equivalence
(I, F , τ )-lim n x n = η if and only if (J , ν)-lim n x n = η    (1)
holds for all real sequences (x n ) and all η ∈ R.
This proves that the type of convergence stated in Definition 1.1 defines a new notion which is not included in the classical one. Note that such a preliminary result is necessary to avoid unnecessary repetitions of known facts, as it already happened in the literature with other variants of ideal convergence, see for instance the case of "ideal statistical convergence" in [3, Theorem 2.3]. Hereafter, the dependence on the underlying topology τ will be made implicit whenever it is clear from the context, so that we will simply write (I, F )-lim n x n = η or L x (I, F ).
The aim of this note is to prove some characterizations of I-convergence with roughness F . For, we need to recall some definitions. A point η ∈ X is said to be an I-cluster point of a sequence x if {n ∈ ω : x n ∈ U } ∉ I for all open sets U containing η. The set of I-cluster points of x is denoted by Γ x (I). It is known that Γ x (I) is a closed subset of X, and it is nonempty provided that {n ∈ ω : x n ∉ K} ∈ I for some compact K ⊆ X. Moreover, it follows readily from the definitions that
L x (I, F ) ⊆ Γ x (I).
We refer to [10] for basic properties and characterizations of I-cluster points. Theorem 1.3. Let x be a sequence taking values in a regular topological space X such that {n ∈ ω : x n ∉ K} ∈ I for some compact set K ⊆ X. Also, let I be an ideal on ω, let F be a rough family, and pick η ∈ X such that F η is closed. Then
(I, F )-lim n x n = η if and only if Γ x (I) ⊆ F η .
Note that the hypothesis on x includes the case of relatively compact sequences (which corresponds to the case I = Fin). In addition, the claim does not hold without any restriction on F η : for, suppose that X = R, F η = (η − 1/2, η + 1/2) for all η ∈ R, I = Fin and x is an enumeration of the rationals in [0, 1]. Then it is readily checked that L x (I, F ) = {1/2} and, on the other hand, there are no η for which
[0, 1] = Γ x (I) ⊆ F η .
The following corollary is immediate:
Corollary 1.4.
Suppose, in addition to the hypotheses of Theorem 1.3, that every F η is closed.
Then
L x (I, F ) = {η ∈ X : Γ x (I) ⊆ F η } .
Hereafter, if X is a metric space with metric d, we denote the closed ball with center η ∈ X and radius r ∈ [0, ∞] by B r (η) := {x ∈ X : d(x, η) ≤ r}.
In particular, B 0 (η) = {η} and B ∞ (η) = X.
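For a concrete illustration of Corollary 1.4 in this notation, one can check the following minimal worked example (our own sketch, assuming the standard Euclidean topology on R and using only Definition 1.1 and Corollary 1.4):

```latex
% Worked example of Corollary 1.4 (illustrative, not from the original text).
% Take X = R, I = Fin, x_n = (-1)^n and F_eta = B_1(eta) = [eta - 1, eta + 1].
% The sequence is relatively compact and its only cluster points are -1 and 1:
%   Gamma_x(Fin) = {-1, 1}.
% Corollary 1.4 then yields
\[
  L_x(\mathrm{Fin},\mathcal{F})
    = \{\eta \in \mathbb{R} : \{-1,1\} \subseteq [\eta-1,\,\eta+1]\}
    = \{\eta : \eta \le 0 \ \text{and} \ \eta \ge 0\}
    = \{0\}.
\]
```

In particular, a sequence with no ordinary limit can still have a nonempty (here, singleton) rough limit set.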
As a [non-]linear property of (I, F )-convergence, we obtain the following:
Proposition 1.5. Let X be a normed vector space, let I be a nonmaximal ideal on ω, and fix a rough family F for which the sets F η are uniformly bounded. Then the family of (I, F )-convergent sequences is a vector space if and only if F η = {η} for all η ∈ X.
We remark that, if I is maximal (that is, if its dual filter I ⋆ := {S ⊆ ω : ω \ S ∈ I} is a free ultrafilter on ω), then all relatively compact sequences are I-convergent (hence also (I, F )-convergent, for each rough family F ).
Given a topological space X, we endow the hyperspace
H(X) := {F ⊆ X : F nonempty closed}.
with the upper Vietoris topology τ⁺, that is, the topology generated by the base of sets
{F ∈ H(X) : F ⊆ U }, with U ∈ τ open.
Moreover, we recall that a metric space X is said to have the UC-property if nonempty closed sets are at a positive distance apart, that is, for all F, F ′ ∈ H(X) with F ∩ F ′ = ∅, there exists ε > 0 such that d(x, x ′ ) > ε for all x ∈ F and x ′ ∈ F ′ , where d is the metric on X. See [11, 12] and references therein. (It is remarkable that a metric space X has the UC-property if and only if the ordinary Vietoris topology is weaker than the Hausdorff topology on H(X). In addition, standard Euclidean spaces R k have the UC-property.) Theorem 1.6. Let x be a sequence taking values in a topological space X, let I be an ideal on ω, and pick a rough family F made by closed sets. Also, suppose that the map η → F η is τ⁺-continuous. Then L x (I, F ) is closed.
The result above does not hold, similarly, without any restriction on the rough family F . Indeed, suppose that X = R, F η = (η − 3, η + 3) for all η ∈ R, I = Fin, and that x is defined by x n = (−1) n for all n ∈ ω. Then L x (I, F ) = (−2, 2). In particular, together with the example given after Theorem 1.3, if every F η is open then L x (I, F ) is not necessarily closed, nor open.
In the special case where X is a metric space with the UC-property and each F η is a closed ball B r(η) (η), for some function r(·), we obtain the following: Corollary 1.7. Let x be a sequence taking values in a metric space X with the UC-property, let I be an ideal on ω, and fix an upper semicontinuous function r :
X → [0, ∞) such that F η = B r(η) (η) for all η ∈ X. Then L x (I, F ) is closed.
When X has a linear structure, we can show that L x (I, F ) is convex:
Theorem 1.8. Let
x be a sequence taking values in a normed vector space X with the UC-property, let I be an ideal on ω, and fix a concave function r :
X → [0, ∞) such that F η = B r(η) (η) for all η ∈ X. Then L x (I, F ) is convex.
Using the above results, we provide a relationship between (I, F )-convergence and the I-core of a sequence x, see [6,8]. For, given an ideal I on ω and a sequence x taking values in a topological vector space X, we define
core x (I) := ⋂ E∈I ⋆ co({x n : n ∈ E}).
In other words, the I-core of x is the least closed convex set containing {x n : n ∈ E} for all E ∈ I ⋆ (where co stands for the closed convex hull operator). In the case where I = Fin, we obtain the so-called Knopp core, see [4, 5, 7] and references therein. Theorem 1.9. Let x be a sequence taking values in a locally convex space X such that {n ∈ ω : x n ∉ K} ∈ I for some compact K ⊆ X. Also, let I be an ideal on ω and pick a rough family F such that every F η is closed and convex. Then
L x (I, F ) = {η ∈ X : core x (I) ⊆ F η } .    (2)
We remark that the hypothesis on x cannot be removed. Indeed, if X = ℓ ∞ is the Banach space of bounded real sequences, endowed with the supremum norm, F η = B 1 (η) for all η ∈ ℓ ∞ and x = (e 0 , −e 0 , e 1 , −e 1 , . . .), where e k stands for the kth unit vector (so that x is not relatively compact), then it is readily seen that core x (Fin) = {0} and L x (Fin, F ) = c 00 .
(Here, c 00 represents the linear subspace of eventually zero sequences.)
(1, 1, . . .) ∈ {x ∈ ℓ ∞ : 0 ∈ B 1 (x)} \ c 00 .
To sum up, x is a nonconvergent bounded sequence, its Knopp core is a singleton, it is (Fin, F )-convergent to every sequence η ∈ c 00 , and the claimed equality (2) fails.
Lastly, we prove that the above properties coincide for relatively compact sequences:
Corollary 1.10. Let x be a sequence taking values in a metric vector space X such that {n ∈ ω : x n ∉ K} ∈ I for some compact K ⊆ X. Also, let I be an ideal on ω, fix r ∈ [0, ∞), and define F by F η = B r (η) for all η ∈ X.
Then the following are equivalent: (i) x is I-convergent to some η ∈ X;
(ii) core x (I) contains a unique vector η ′ ∈ X;
(iii) L x (Fin, F ) = B r (η ′′ ) for some η ′′ ∈ X. In addition, in such a case, η = η ′ = η ′′ .
The proofs follow in the next section.
Proofs
Proof of Proposition 1.2. Let F be the rough family defined by F η := [η − 1, η + 1] for all η ∈ R. Fix also an ideal J on ω and a topology ν on R, and suppose for the sake of contradiction that equivalence (1) holds for all x and η. We divide the proof in two cases.
Case (i): I not maximal. If I is not maximal, there exists a partition {A, B} of ω such that A, B ∉ I. For each r ∈ R and h ∈ (0, 1], let x (r,h) be the real sequence defined by x (r,h) n = r if n ∈ A and x (r,h) n = r + h otherwise. Define similarly y (r,h) such that y (r,h) n = r + h if n ∈ A and y (r,h) n = r otherwise. It follows that L x (r,h) (I, F , τ ) = L y (r,h) (I, F , τ ) = [r + h − 1, r + 1] for each r ∈ R and h ∈ (0, 1]. Now, let V be a nonempty ν-open set and fix a point r ∈ V . Fix also h ∈ (0, 1]. By the equivalence (1), we obtain that {n ∈ ω : x (r,h) n ∉ V } ∈ J and {n ∈ ω : y (r,h) n ∉ V } ∈ J . If r + h ∉ V , this can be rewritten as B ∈ J and A ∈ J , respectively, which is impossible because it would imply ω = A ∪ B ∈ J . Hence r + h ∈ V . By the arbitrariness of h, we obtain [r, r + 1] ⊆ V . However, since r is arbitrary, we conclude that V = R, therefore ν stands for the trivial topology τ 0 . This is a contradiction because L x (r,h) (I, F , τ ) ≠ R.
Case (ii): I maximal. If I is maximal, then either A := {2n : n ∈ ω} or B := {2n + 1 : n ∈ ω} belongs to I. Suppose without loss of generality B ∈ I (the remaining case is symmetric). With the same notations above, it follows that
L x (r,h) (I, F , τ ) = [r − 1, r + 1] and L y (r,h) (I, F , τ ) = [r + h − 1, r + h + 1]
for each r ∈ R and h ∈ (0, 1]. Since r ∈ L x (r,h) (I, F , τ ) ∩ L y (r,h) (I, F , τ ), we obtain by the equivalence (1) that (J , ν)-lim n x (r,h) n = (J , ν)-lim n y (r,h) n = r. Hence the sequence x (r,h) + y (r,h) , which is constantly equal to 2r + h, is (J , ν)-convergent to 2r. Since r and h are arbitrary, it follows that ν is the trivial topology τ 0 , reaching the same contradiction as in the nonmaximal case above.
Proof of Theorem 1.3. If part. Suppose that every I-cluster point belongs to F η . Fix an open set U ⊆ X containing Γ x (I). We need to show that S := {n ∈ ω : x n ∉ U } belongs to I. For, notice that Γ x (I) is a nonempty compact subset of K, see [10, Lemma 3.1]. Suppose by contradiction that S ∉ I. Considering that {n ∈ ω : x n ∈ K \ U } = S \ {n ∈ ω : x n ∉ K} ∉ I and that K \ U is compact, we conclude, again by [10, Lemma 3.1], that Γ x (I) ∩ (K \ U ) ≠ ∅, which contradicts the fact that U contains Γ x (I). Therefore S ∈ I and (I, F )-lim n x n = η.
Only If part. Suppose for the sake of contradiction that there exists an I-cluster point η 0 ∈ X \ F η . Since X is regular, one can pick disjoint open sets U 0 , U η ⊆ X such that η 0 ∈ U 0 and F η ⊆ U η . However, this is impossible because {n ∈ ω : x n ∈ U 0 }, which does not belong to I since η 0 is an I-cluster point of x, is contained in {n ∈ ω : x n ∉ U η }, which belongs to I because (I, F )-lim n x n = η.
Remark 2.1. It is clear from the proof above that the If part of Theorem 1.3 holds for arbitrary topological spaces X and arbitrary rough families F .
On the same line, the Only If part holds also for arbitrary sequences.
Proof of Proposition 1.5. If part. This is a folklore fact, by the linearity of the I-limit.
Only If part. Since I is not maximal, there exists a partition {A, B} of ω such that A, B ∉ I. Suppose also that there exists η ∈ X such that F η ≠ {η}, hence it is possible to fix a point η ′ ∈ F η \ {η}. Let x be the sequence such that x n = η if n ∈ A and x n = η ′ otherwise. It follows by the definition of (I, F )-convergence that {η, η ′ } ⊆ L x (I, F ). Pick also r ∈ (0, ∞) such that diam(F η ) ≤ r for all η ∈ X. If the claim were false, the sequence kx would be (I, F )-convergent for all k ∈ ω. Notice that Γ kx (I) = kΓ x (I) = {kη, kη ′ }, see e.g. [9, Proposition 3.2]. However, the distance between the latter two I-cluster points can be made arbitrarily large as k → ∞, which contradicts the hypothesis that the sets F η are uniformly bounded.
Proof of Theorem 1.6. If L x (I, F ) = ∅, the claim is obvious. Otherwise, pick a τ -convergent net (η i ) i∈I with values in L x (I, F ) and define η := lim i η i . Since the map η → F η is τ⁺-continuous, the net (F η i ) i∈I is τ⁺-convergent to F η . Fix an arbitrary open set U ⊆ X which contains F η and define the τ⁺-open set 𝒰 := {F ∈ H(X) : F ⊆ U }. By the convergence of (F η i ) i∈I , there exists j ∈ I such that F η j ∈ 𝒰, i.e., F η j ⊆ U . Since η j ∈ L x (I, F ), it follows that {n ∈ ω : x n ∉ U } ∈ I. Therefore (I, F )-lim n x n = η.
Proof of Corollary 1.7. Denote by d the metric on X. Thanks to Theorem 1.6, it is sufficient to show that the map η → F η is τ⁺-continuous. Pick a convergent net (η i ) i∈I with limit η ∈ X, hence lim i d(η i , η) = 0. We claim that the net of closed balls (B r(η i ) (η i )) i∈I is τ⁺-convergent to B r(η) (η). For, pick an open set U containing B r(η) (η). In particular, η ∈ U . As in the previous proof, set 𝒰 := {F ∈ H(X) : F ⊆ U }. If U = X, then trivially F η i ∈ 𝒰 for every i ∈ I. Otherwise U is a proper subset of X, hence X \ U is a nonempty closed set disjoint from F η . Since X has the UC-property, it follows that there exists ε > 0 such that d(x, y) ≥ ε for all x ∈ F η and y ∈ X \ U . At this point, set
G := {x ∈ X : d(x, η) < r(η) + ε}.
It follows by construction that F η ⊆ G ⊆ U . By the upper semicontinuity of r and the convergence of (η i ) i∈I , there exists an index i 0 ∈ I such that r(η i ) < r(η) + ε/2 and d(η i , η) < ε/2 for all i ≥ i 0 . Therefore
∀i ≥ i 0 , ∀x ∈ F η i , d(x, η) ≤ d(x, η i ) + d(η i , η) ≤ r(η i ) + ε/2 < r(η) + ε.
This shows that F η i ⊆ G ⊆ U (hence, F η i ∈ 𝒰) for all i ≥ i 0 , concluding the proof.
Proof of Theorem 1.8. If L x (I, F ) = ∅, the claim is obvious. Otherwise, fix two vectors η, η ′ ∈ L x (I, F ), a weight α ∈ (0, 1), and define γ := αη + (1 − α)η ′ . We claim that γ ∈ L x (I, F ). For, pick an open set U containing F γ . If U = X then {n ∈ ω : x n ∉ U } = ∅ ∈ I. If U ≠ X, then by the UC-property of X, there exists ε > 0 such that F γ ⊆ V ⊆ U , where V is the open ball with center γ and radius r(γ) + ε. Lastly, set S := {n ∈ ω : ‖x n − η‖ ≥ r(η) + ε or ‖x n − η ′ ‖ ≥ r(η ′ ) + ε}.
Since η, η ′ ∈ L x (I, F ), it follows that S ∈ I. At this point, for each n ∈ ω \ S,
‖x n − γ‖ = ‖α(x n − η) + (1 − α)(x n − η ′ )‖
≤ α‖x n − η‖ + (1 − α)‖x n − η ′ ‖ < α(r(η) + ε) + (1 − α)(r(η ′ ) + ε) ≤ r(γ) + ε.
Therefore {n ∈ ω : x n ∉ U } ⊆ {n ∈ ω : x n ∉ V } ⊆ S ∈ I. Since U is arbitrary, we conclude that γ ∈ L x (I, F ).
Remark 2.2. It is worth noting that the analogue of Theorem 1.8 holds in metrizable vector spaces X with a compatible metric d which is translation invariant and for which d(αx, 0) ≤ αd(x, 0) for all x ∈ X and α ∈ (0, 1).
Proof of Theorem 1.9. Since every topological vector space is regular, it follows by Corollary 1.4 and the hypothesis that each F η is closed and convex that L x (I, F ) = {η ∈ X : co(Γ x (I)) ⊆ F η } .
The conclusion follows by [8, Theorem 2.2] and [6, Theorem 3.4], which state that core x (I) coincides with co(Γ x (I)).
Proof of Corollary 1.10. (i) ⇐⇒ (ii). See [8, Proposition 3.2].
(i) =⇒ (iii). Suppose that I-lim n x n = η. Pick γ ∈ B r (η) and an open set U containing B r (γ). Since η ∈ U , it follows that {n ∈ ω : x n ∉ U } ∈ I. Therefore x is (I, F )-convergent to γ. Conversely, if γ ∉ B r (η), then η ∉ B r (γ). Since X is regular, there exist disjoint open sets U, U γ such that η ∈ U and B r (γ) ⊆ U γ . However, this implies that {n ∈ ω : x n ∉ U γ } ⊇ {n ∈ ω : x n ∈ U } ∈ I ⋆ . By the arbitrariness of γ in both cases, we conclude that L x (Fin, F ) = B r (η).
(iii) =⇒ (ii). Thanks to Theorem 1.9, we obtain necessarily that core x (I) = {η}.
References
[1] S. Aytar, The rough limit set and the core of a real sequence, Numer. Funct. Anal. Optim. 29 (2008), no. 3-4, 283-290.
[2] S. Aytar, Rough statistical convergence, Numer. Funct. Anal. Optim. 29 (2008), no. 3-4, 291-303.
[3] M. Balcerzak and P. Leonetti, A Tauberian theorem for ideal statistical convergence, Indag. Math. (N.S.) 31 (2020), no. 1, 83-95.
[4] J. Connor, J. A. Fridy, and C. Orhan, Core equality results for sequences, J. Math. Anal. Appl. 321 (2006), no. 2, 515-523.
[5] J. A. Fridy and C. Orhan, Statistical core theorems, J. Math. Anal. Appl. 208 (1997), no. 2, 520-527.
[6] V. Kadets and D. Seliutin, On relation between the ideal core and ideal cluster points, J. Math. Anal. Appl. 492 (2020), no. 1, 124430, 7 pp.
[7] G. Laush and S. Park, Knopp's core theorem and subsequences of a bounded sequence, Proc. Amer. Math. Soc. 13 (1962), 971-974.
[8] P. Leonetti, Characterizations of the ideal core, J. Math. Anal. Appl. 477 (2019), no. 2, 1063-1071.
[9] P. Leonetti and M. Caprio, Turnpike in infinite dimension, Canad. Math. Bull. 65 (2022), no. 2, 416-430.
[10] P. Leonetti and F. Maccheroni, Characterizations of ideal cluster points, Analysis (Berlin) 39 (2019), no. 1, 19-26.
[11] S. Levi, R. Lucchetti, and J. Pelant, On the infimum of the Hausdorff and Vietoris topologies, Proc. Amer. Math. Soc. 118 (1993), no. 3, 971-978.
[12] E. Michael, Topologies on spaces of subsets, Trans. Amer. Math. Soc. 71 (1951), 152-182.
[13] H. X. Phu, Rough convergence in infinite-dimensional normed spaces, Numer. Funct. Anal. Optim. 24 (2003), no. 3-4, 285-301.
| [] |
[
"The role of magnetic fields in the formation of protostars, disks, and outflows",
"The role of magnetic fields in the formation of protostars, disks, and outflows"
] | [
"Yusuke Tsukamoto ",
"Anaëlle Maury ",
"Felipe O Alves ",
"Erin G Cox ",
"Nami Sakai ",
"Tom Ray ",
"Bo Zhao ",
"Masahiro N Machida ",
"\nDepartment of Physics and Astronomy Graduate School of Science and Engineering\nCEA/DRF/IRFU Astrophysics\nKaghoshima University\nKorimotoKagoshimaJapan\n",
"\nUMR AIM\nUniversité Paris-Saclay\nF-91191Gif-sur-YvetteFrance Benoît\n",
"\nEns de Lyon\nMax-Planck-Institut für extraterrestrische Physik\nCentre de Recherche Astrophysique de Lyon UMR5574\nCommerçon Univ Lyon\nUniv Lyon1\nCNRS\nGießenbachstraße 1F-69007, 85748Lyon, GarchingFrance, Germany\n",
"\nRIKEN Cluster for Pioneering Research\nAstronomy and Astrophysics Section\nSchool of Cosmic Physics\nDublin Institute for Advanced Studies\nMax-Planck-Institut für extraterrestrische Physik\nDepartment of Earth and Planetary Sciences, Motoka\nCenter for Interdisciplinary Exploration and Research in Astrophysics (CIERA)\nKyushu University\n1800 Sherman Avenue, 2-1 Hirosawa, Gießenbachstraße 160201, 85748Evanston, Wako, GarchingIL, Saitama, FukuokaUSA, Japan, Ireland, Germany, Japan\n"
] | [
"Department of Physics and Astronomy Graduate School of Science and Engineering\nCEA/DRF/IRFU Astrophysics\nKaghoshima University\nKorimotoKagoshimaJapan",
"UMR AIM\nUniversité Paris-Saclay\nF-91191Gif-sur-YvetteFrance Benoît",
"Ens de Lyon\nMax-Planck-Institut für extraterrestrische Physik\nCentre de Recherche Astrophysique de Lyon UMR5574\nCommerçon Univ Lyon\nUniv Lyon1\nCNRS\nGießenbachstraße 1F-69007, 85748Lyon, GarchingFrance, Germany",
"RIKEN Cluster for Pioneering Research\nAstronomy and Astrophysics Section\nSchool of Cosmic Physics\nDublin Institute for Advanced Studies\nMax-Planck-Institut für extraterrestrische Physik\nDepartment of Earth and Planetary Sciences, Motoka\nCenter for Interdisciplinary Exploration and Research in Astrophysics (CIERA)\nKyushu University\n1800 Sherman Avenue, 2-1 Hirosawa, Gießenbachstraße 160201, 85748Evanston, Wako, GarchingIL, Saitama, FukuokaUSA, Japan, Ireland, Germany, Japan"
] | [] | We present our current understanding of the formation and early evolution of protostars, protoplanetary disks, and the driving of outflows as dictated by the interplay of magnetic fields and partially ionized gas in molecular cloud cores. In recent years, the field has witnessed enormous development through sub-millimeter observations which in turn have constrained models of protostar formation. As a result of these observations the state-of-the-art theoretical understanding of the formation and evolution of young stellar objects is described. In particular, we emphasize the importance of the coupling, decoupling, and re-coupling between weakly ionized gas and the magnetic field on appropriate scales. This highlights the complex and intimate relationship between gravitational collapse and magnetic fields in young protostars. | null | [
"https://export.arxiv.org/pdf/2209.13765v1.pdf"
] | 252,567,945 | 2209.13765 | df1a5ea4d2faaf3a255cd4a18481df683e329e0b |
The role of magnetic fields in the formation of protostars, disks, and outflows
Yusuke Tsukamoto
Anaëlle Maury
Felipe O Alves
Erin G Cox
Nami Sakai
Tom Ray
Bo Zhao
Masahiro N Machida
Department of Physics and Astronomy Graduate School of Science and Engineering
CEA/DRF/IRFU Astrophysics
Kaghoshima University
KorimotoKagoshimaJapan
UMR AIM
Université Paris-Saclay
F-91191Gif-sur-YvetteFrance Benoît
Ens de Lyon
Max-Planck-Institut für extraterrestrische Physik
Centre de Recherche Astrophysique de Lyon UMR5574
Commerçon Univ Lyon
Univ Lyon1
CNRS
Gießenbachstraße 1F-69007, 85748Lyon, GarchingFrance, Germany
RIKEN Cluster for Pioneering Research
Astronomy and Astrophysics Section
School of Cosmic Physics
Dublin Institute for Advanced Studies
Max-Planck-Institut für extraterrestrische Physik
Department of Earth and Planetary Sciences, Motoka
Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA)
Kyushu University
1800 Sherman Avenue, 2-1 Hirosawa, Gießenbachstraße 160201, 85748Evanston, Wako, GarchingIL, Saitama, FukuokaUSA, Japan, Ireland, Germany, Japan
The role of magnetic fields in the formation of protostars, disks, and outflows
We present our current understanding of the formation and early evolution of protostars, protoplanetary disks, and the driving of outflows as dictated by the interplay of magnetic fields and partially ionized gas in molecular cloud cores. In recent years, the field has witnessed enormous development through sub-millimeter observations which in turn have constrained models of protostar formation. As a result of these observations the state-of-the-art theoretical understanding of the formation and evolution of young stellar objects is described. In particular, we emphasize the importance of the coupling, decoupling, and re-coupling between weakly ionized gas and the magnetic field on appropriate scales. This highlights the complex and intimate relationship between gravitational collapse and magnetic fields in young protostars.
Introduction
The evolution of solar-type protostars is classified in stages following an empirical sequence. It starts from the most embedded objects, Class 0 protostars, where the envelope dominates the mass: this main accretion stage is believed to last less than 0.1 Myr (e.g. Evans et al. 2009; Maury et al. 2011). Once the envelope mass and the mass accumulated in the central embryo are roughly equal, protostars enter the Class I phase, which sees most of the second half of the final stellar mass accreted, over a fraction of a Myr. Finally, the young stellar object is no longer embedded and becomes visible during the T-Tauri phase that precedes its arrival onto the Main Sequence. In this review, we focus on the physics at work during the early stages of star formation, in embedded protostars.
Recent years have witnessed the major development of realistic magneto-hydrodynamics simulations modelling the physics of protostellar formation, as well as a cascade of detailed, sensitive observations of large samples of protostellar objects. These advances have transformed our understanding and highlighted the paramount role of the magnetic field during the early evolution of young stellar objects. Indeed, radio interferometers sensitive to the cold and dense material typical of embedded protostars, e.g., the Atacama Large Millimeter Array (ALMA), the Northern Extended Millimeter Array (NOEMA), and the Very Large Array (VLA), have significantly extended their capabilities since Protostars and Planets VI (henceforth PP VI). Exquisite maps probing the gas, dust, and magnetic field properties on the very small scales of protostellar cores, where material is accreted into a stellar embryo, stored in a disk, and partially ejected via outflows and jets, have placed unprecedented constraints on theoretical models.
In parallel, state-of-the-art computations now provide a comprehensive theoretical landscape thanks to recent progress in 3-D magneto-hydrodynamic simulations which are able to follow the time evolution of the physics from the large spatial scales typical of dense cores (∼ 10 4 au) to the protostar-disk system (∼ 10 2 au), over long timescales from the onset of prestellar collapse to the end of the Class I phase (∼ 10 5 years after protostar formation). They revealed the essential physics in the complex processes involved in the formation of protostars, disks, and outflows, i.e., the coupling, decoupling, and re-coupling between the weakly ionized gas and the magnetic field on appropriate scales. In these processes, microscopic ion chemistry, especially the adsorption of charged particles by dust, plays an essential role.
In this chapter, we aim to present a synthesis of the observational and theoretical progress made in the past decade to enrich our understanding of the formation of protostars, their protostellar disks, outflows, and jets.
This chapter is organized as follows: Section 2 presents the state-of-the-art observational characterization of magnetic fields, gas and dust properties in protostellar environments. Section 3 describes our current understanding of the formation and early evolution of protostars, protoplanetary disks, outflows, and jets based on recent theoretical studies. We present theories that are in the main successful in reproducing current observations, but also highlight some remaining discrepancies and key challenges for both observations and modelling to tackle in the near future. In Section 4, we describe a promising avenue for future research regarding the evolution of dust in magnetized protostellar environments, and its relevance to understanding not only the formation of planetesimals but also the pristine properties of the disks and stars that host them.
2. Observations of embedded protostars: envelopes, young disks, outflows, and magnetic fields
2.1. Observing magnetic fields: techniques
Since Protostars and Planets VI (see chapter by Li et al. 2014a), our view of interstellar magnetic fields as a key ingredient threading the interstellar medium from clouds to filaments and their embedded cores has been firmly established by multi-wavelength analysis of polarized light. Several observational techniques have been developed or improved. We summarise them below, together with the constraints they allow us to put on the strength and topology of magnetic fields in star-forming structures.
The linear polarization of dust thermal emission and absorption is widely used to trace the magnetic field topology from the plane-of-the-sky component B pos of the magnetic vector B in dense ISM structures. The polarized dust thermal emission largely maps the field topology at mm/sub-mm wavelengths: assuming dust grains populating the star-forming structures are elongated, they are expected to produce polarized dust emission if they are aligned along B-field lines in an anisotropic radiation field (radiative torques alignment, Lazarian and Hoang 2007). Optical and near-IR observations of the dichroic polarization of background stars are also used to probe magnetic fields along highly extincted lines-of-sight, such as those typical of prestellar cores (A v > 10). The magnetic field strength can be estimated from these polarimetric observations by performing a statistical analysis of the dispersion of polarization angles compared to the velocity dispersion of the gas, such as in the Davis-Chandrasekhar-Fermi method (DCF, Davis 1951; Chandrasekhar and Fermi 1953). Although this method relies heavily on the assumed physical properties of the medium and how the latter couples to the magnetic field (Falceta-Gonçalves et al. 2008; Liu et al. 2021, 2022), it has been widely used in studies from cloud to core scales.
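As a minimal numerical sketch of the DCF estimate just described, the snippet below evaluates B pos in cgs units; the correction factor ξ ≈ 0.5 (from the turbulence simulations of Ostriker et al. 2001) and all input values are illustrative assumptions, not numbers from the text above.

```python
import numpy as np

# Davis-Chandrasekhar-Fermi estimate (cgs):
#   B_pos ~ xi * sqrt(4 * pi * rho) * sigma_v / sigma_theta,
# with xi ~ 0.5 a commonly used correction factor (assumed here).

M_H = 1.6735e-24          # hydrogen atom mass [g]
MU_H2 = 2.8               # mean molecular weight per H2 (includes He)

def dcf_bpos(n_h2_cm3, sigma_v_kms, sigma_theta_deg, xi=0.5):
    """Plane-of-sky field strength [Gauss] from the DCF method."""
    rho = MU_H2 * M_H * n_h2_cm3                  # gas mass density [g/cm^3]
    sigma_v = sigma_v_kms * 1.0e5                 # velocity dispersion [cm/s]
    sigma_theta = np.deg2rad(sigma_theta_deg)     # angle dispersion [rad]
    return xi * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta

# Made-up example: dense core with n(H2) = 1e5 cm^-3, sigma_v = 0.2 km/s,
# and a 10-degree dispersion of polarization angles.
b_gauss = dcf_bpos(1.0e5, 0.2, 10.0)
print(f"B_pos ~ {b_gauss * 1e6:.0f} microGauss")   # ~140 microGauss
```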
The only direct technique to estimate the strength of the magnetic field in star-forming structures is through utilizing the Zeeman effect via spectro-polarimetry. Observations of molecular species with significant magnetic susceptibility, i.e., those with an unpaired electron, can be used to trace the magnitude of the line-of-sight component B los of the magnetic vector B in star-forming clouds, dense cores, and disks (Crutcher 2012;Crutcher and Kemball 2019). Spectral-line polarization can also arise from the so-called Goldreich-Kylafis (GK) effect (Goldreich and Kylafis 1982), because individual molecular rotational levels, in the presence of a magnetic field, split into sub-levels and the resulting line emission can be polarized either parallel or perpendicular to the magnetic field depending on the sign of n u − n ± , where n u and n ± are the populations of an upper energy level and its magnetic sub-levels, respectively. It is worth pointing out however that the GK effect requires optical depths τ ≈ 1 to be observed, i.e. the radiation field has to be anisotropic. It is not usable as a means of measuring the magnetic field if the optical depth is large.
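The weak-splitting Zeeman analysis can be sketched as below: when the splitting is much smaller than the line width, the Stokes V spectrum is commonly written as V(ν) ≈ (Z B los /2) dI/dν, with Z the Zeeman factor of the transition. The synthetic spectra, the value of Z, and the sign convention used here are illustrative assumptions only.

```python
import numpy as np

def fit_blos(freq_hz, stokes_i, stokes_v, zeeman_hz_per_ug):
    """Least-squares estimate of B_los [microGauss] from I and V spectra,
    assuming the weak-splitting relation V ~ (Z * B_los / 2) * dI/dnu."""
    didnu = np.gradient(stokes_i, freq_hz)            # dI/dnu on the same grid
    coeff = np.sum(stokes_v * didnu) / np.sum(didnu**2)
    return 2.0 * coeff / zeeman_hz_per_ug

# Synthetic Gaussian line with an injected 100 microGauss field; Z = 2 Hz/uG
# is roughly the order of magnitude of some CN hyperfine components (assumed).
nu = np.linspace(-2e5, 2e5, 401)                      # offset frequency [Hz]
I = np.exp(-0.5 * (nu / 4e4) ** 2)
B_true, Z = 100.0, 2.0
V = 0.5 * Z * B_true * np.gradient(I, nu)
print(f"recovered B_los ~ {fit_blos(nu, I, V, Z):.1f} microGauss")
```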
When magnetic field strengths can be estimated, they are typically used to compute two physical quantities that characterize the importance of the magnetic field against the other key ingredients, turbulence and gravity. The Alfvénic Mach number captures the relative importance of magnetic and turbulent energies: it is the velocity dispersion in the gas flow divided by the Alfvén wave velocity, which is directly computed from the B-field strength. In the densest star-forming structures, observations are used to estimate the ratio of gravitational to magnetic energies, a.k.a. the mass-to-flux ratio µ, expressed in units of the critical value for which the magnetic energy equals the gravitational energy (Nakano and Nakamura 1978). Gravitational collapse to form a stellar system requires µ > 1, otherwise the magnetic forces will dominate over gravity and stabilize the structure. In the latter case, evolution out of equilibrium can only happen over long timescales, by diffusive processes that remove magnetic flux from the central region of the core, for example ambipolar diffusion (see, e.g. Shu 1983).
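The two diagnostics introduced in this paragraph reduce to simple arithmetic; the sketch below uses the observer's shortcut for µ from Crutcher (2012) and a simple one-dimensional definition of the Alfvénic Mach number (some works include an extra √3 factor). All input values are made up for illustration.

```python
import numpy as np

# Mass-to-flux ratio in units of the critical value (Nakano & Nakamura 1978),
# via the commonly used shortcut mu = 7.6e-21 * N(H2)[cm^-2] / B[microGauss].
# Alfvenic Mach number: M_A = sigma_v / v_A, with v_A = B / sqrt(4 pi rho).

M_H, MU_H2 = 1.6735e-24, 2.8

def mass_to_flux(n_h2_coldens_cm2, b_tot_ug):
    return 7.6e-21 * n_h2_coldens_cm2 / b_tot_ug

def alfven_mach(sigma_v_kms, b_gauss, n_h2_cm3):
    rho = MU_H2 * M_H * n_h2_cm3
    v_alfven = b_gauss / np.sqrt(4.0 * np.pi * rho)   # [cm/s]
    return sigma_v_kms * 1.0e5 / v_alfven

# A core with N(H2) = 5e22 cm^-2 threaded by a 150 microGauss field:
print(f"mu  ~ {mass_to_flux(5e22, 150.0):.1f}")       # ~2.5, cf. mu ~ 2-3
# Gas at n(H2) = 1e5 cm^-3 with sigma_v = 0.9 km/s in the same field:
print(f"M_A ~ {alfven_mach(0.9, 150e-6, 1e5):.1f}")   # ~1.5
```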
While many interesting constraints have been obtained regarding the magnetic field in star-forming clouds (see for example the reviews by Pattle and Fissel 2019; Hennebelle and Inutsuka 2019), in the following we focus on observations that probe the magnetic properties of dense cores, protostars, their disks and outflows.
Magnetic fields in prestellar cores and protostars
In prestellar cores, both near-IR starlight polarization and sub-mm polarized dust emission have revealed a large fraction of cores with highly organized magnetic field topologies, at the typical ∼ 0.1 pc core scales (see e.g. Kirk et al. 2006;Jones et al. 2015;Kandori et al. 2018, and references therein). In the few cores where a magnetic field strength can be estimated, values are typically between 10 and 100 µG. A comparison of gravitational and magnetic energies then suggests a dynamical state in which the mass is close to the magnetic critical mass, and this could significantly delay gravitational collapse (Stephens et al. 2013;Girart et al. 2006;Kandori et al. 2017Kandori et al. , 2020bMyers and Basu 2021). The orientations of B-field lines mapped with optical/near-IR polarimetry are sometimes smoothly connected to the ones recovered with sub-mm singledish observations on core scales (Soler et al. 2016;Soam et al. 2019), but are sometimes significantly different (Alves et al. 2014;Clemens et al. 2016). Detailed analyses using statistics from large single-dish surveys, such as the JCMT BISTRO survey (Ward-Thompson et al. 2017;Liu et al. 2019), and starlight polarimetry, may give more insights into the role and structure of magnetic fields in prestellar cores.
Thanks to the Zeeman effect, the OH molecule is used to probe the strength of magnetic fields in molecular clouds (Troland and Crutcher 2008). However, OH cannot be used in the same way to investigate magnetic field strength in cores, due to their high gas densities. Here other tracers, such as CN, must be employed. Only the most massive protostellar cores have detections of the Zeeman effect in CN single-dish surveys: the values found for the mean total magnetic field, B tot , are typically around a few hundred µG (Falgarone et al. 2008). They seem in broad agreement with statistical estimates derived from polarized dust emission in the mm, which suggest values up to 1 mG towards massive cores. Observed mass-to-magnetic flux ratios derived from Zeeman measurements are found to be super-critical, with typical values µ ∼ 2 − 3. The average Alfvénic Mach number in star-forming gas at densities ∼ 10 3 − 10 6 cm −3 inferred from these observations is then found to be M A ∼ 1.5 (Falgarone et al. 2008; Girart et al. 2009). This suggests that from cloud to core scales, the turbulent and magnetic energies may be in approximate equipartition. The magnetic field strengths observed are systematically higher in the CN cores than in the lower density clouds (typical magnetic field strengths of ∼ 10 to 30 µG for densities of a few 10 3 cm −3 , and on ∼ 0.1 pc scales, see e.g., Figure 1 in Crutcher et al. 2010). This suggests intrinsically stronger magnetic fields in denser gas, but there could also be a bias through the specific conditions necessary to form the most massive cores. Note that it is still unclear however whether the switch in Zeeman molecular tracer is responsible for the steep increase in the observed B tot between clouds and protostellar cores. Finally, the total magnetic pressure of a core with a complex magnetic field could be higher than what is inferred from the Zeeman effect, due to the cancelling effect of field vectors with opposite directions along the line of sight. With all these caveats in mind, these measurements nevertheless suggest magnetic fields are relatively strong in most massive star-forming cores.
Single-dish observations of the polarized dust emission have allowed us to map the magnetic field line spatial distribution in several low-mass protostellar cores, particularly in recent times as a result of the extensive JCMT BISTRO survey (see, e.g. Eswaraiah et al. 2021). In the inner envelope, on typical scales ∼ 100 − 5000 au, polarized dust emission can be probed using (sub-)millimeter interferometry and has been detected at the few percent level (see e.g. Cox et al. 2018). The wealth of recent interferometric observations of magnetic field lines has suggested that the early detections of the "hourglass" pattern (e.g. Girart et al. 2006; Rao et al. 2009) may actually be a common feature in low-mass cores (Maury et al. 2018; Kwon et al. 2019; Yen et al. 2019; Kandori et al. 2020b). An example can be found in the left panel of Figure 1, showing the B-field topology reconstructed from ALMA observations of the millimeter polarized dust emission in a Class 0 protostar. Multi-scale observations and comparison of the magnetic field topology with the gas kinematics in the inner envelope suggest this hourglass shape may be a natural result of an initially quasi-poloidal configuration being affected by the collapse of the inner envelope: magnetic field lines efficiently coupled to the infalling gas are pulled towards the core center as the collapse proceeds. However, this morphology is not seen as ubiquitously as one might expect from a simple strongly magnetized scenario for low-mass star formation (Hull and Zhang 2019), and this may stem from several causes. First, identifying hourglass fields is not simple due to uncertainties caused by projection effects and limited instrument sensitivity; thus a robust survey probing the magnetic field in star-forming cores with high spatial dynamic range still needs to be carried out. Second, the simple view that protostellar accretion proceeds largely in a purely symmetric fashion, with gas flowing down to the central embryo along the equatorial plane and creating a distinguishable hourglass pattern, has received much criticism, and contradicts some recent observations (see the PPVII review chapter by Pineda et al., and references in §2.5 of the present chapter). The early development of non-isotropic infall and differences in gas ionization levels affecting the coupling at different locations in the protostellar cores may lead to the complex field geometries observed. Finally, local irradiation conditions may enhance polarized dust emission towards specific locations, making it difficult to assess the global 3-D B-field topology. Observations of the magnetic field on core scales have suggested in a few cases that the magnetic field lines could correlate with density and/or kinematic structures such as (gravo-)turbulence (e.g., Hull et al. 2017b), accretion streamers (e.g., Alves et al. 2018; Le Gouellec et al. 2019), irradiated layers (Le Gouellec et al. 2020) and protostellar outflows (e.g. Hull et al. 2017a).
Protostars are actively accreting envelope material onto the central stellar embryo: as such they are expected to be primarily super-critical (µ > 1). Unfortunately, magnetic field strengths in solar-type cores have not been widely explored using the Zeeman effect, because it is a subtle effect in relatively weak spectral lines. Only recent sensitive observations have allowed us to detect large numbers of independent B-field orientations within single low-mass cores, and use the DCF method to estimate B-field strengths. Measured values suffer from large uncertainties due to the gravitational energies associated with these collapsing objects, but typical field strengths are around a few tens of µG (Hull and Zhang 2019). Finally, since the polarized flux is a cumulative quantity that is prone to cancellation if the polarization angle is highly disorganized along the line of sight, observations of a few percent of polarization suggest the magnetic field may be strong enough to remain at least partly organized inside star-forming cores (Le Gouellec et al. 2020).
Magnetic fields observed in protostellar disks and outflows
Polarization observations at high angular resolution should be able to differentiate the properties (i.e., polarization fraction and position angle) between components from the disk and the envelope (e.g. Cox et al. 2018; Sadavoy et al. 2019). However, although a small number of embedded disks show a few detectable polarization measurements, most observations at these small scales are still limited by sensitivity (e.g., Ophemb-1, Sadavoy et al. 2019). Moreover, in relevant surveys the disk emission is often optically thick, and B-field aligned grains are not the only plausible origin for the polarized emission, complicating the interpretation. Indeed, the scattering opacity of large grains (a max > 100 µm) is predicted to prevail over their absorption opacity (Kataoka et al. 2015) in most conditions typical of disks. This results in polarized emission due to self-scattering of the dust grains, up to a few percent at millimeter wavelengths in the most extreme cases. In edge-on disks such as HH 212 or L1527, the impact of large optical depth is clear on the polarization maps, where polarization angles are predominantly parallel to the disk minor axis (Segura-Cox et al. 2015; Harris et al. 2018; Lee et al. 2021b). Hence, only very few measurements of dust polarization from disks in young protostars can be interpreted as magnetically aligned grains rather than as polarized emission from dust self-scattering (this is usually probed by large polarization fractions and the combined analysis of the polarization properties at varying wavelengths). For example, ALMA observations of the Class I protostar TMC-1A show polarization components from the disk poles that can be interpreted as arising from toroidal magnetic fields associated with the disk or the outflow, or from magnetic accretion flows within the disk (Aso et al. 2021). Another example is the disk around the Class 0/I [BHB2007] 11 protostar, the magnetic field of which is modelled as a combination of poloidal and toroidal components produced by disk rotation and infalling material from the envelope, respectively (see Fig. 1 and Alves et al. 2018). However, if the dust population in this source is dominated by mm-size grains, the polarization observed at mm-wavelengths (i.e., in the Mie regime) is expected to be negative, which in practical terms means a 90-degree flip in the polarization direction (Guillet et al. 2020b). In this scenario, the disk polarization is parallel to the magnetic fields, potentially following the accretion streamers seen within the circumbinary disk (Alves et al. 2019). A poloidal hour-glass magnetic field was also interpreted as causing the polarized dust emission around the disk of VLA 1623A, at scales of 200 au (Sadavoy et al. 2018). The non-detection of a toroidal component in the dust polarization signal in this protostar could be due to the predominance of scattering polarization from large grains at disk scales, non-ideal MHD effects decoupling the magnetic field from the rotating gas in the disk at small scales, or a combination of an intrinsically weak magnetic field and a weakly ionized medium on disk scales (Vaytet et al. 2018; Hennebelle et al. 2020).
Spectro-polarimetry may become a powerful tool to measure the magnetic field strength in embedded disks. The high abundance of the Zeeman-sensitive CN molecule in their midplane (Chapillon et al. 2012), for example, makes this molecule a potential tracer of magnetic fields in disks as well (Brauer et al. 2017). Several attempts to carry out Zeeman measurements in mm lines with ALMA have provided stringent upper limits of a few mG for the line-of-sight magnetic field intensity in a couple of evolved disks (Harrison et al. 2021), while polarized emission from spectral lines has been tentatively linked to the magnetic field morphology in the TW-Hya disk (Teague et al. 2021). In the future, it may be possible to extend this technique to embedded disks, and hopefully lift the current challenge of measuring their B-fields. Maser emission has also proved to be a powerful tool to study magnetic fields in the densest portions of star-forming regions (Lankhaar et al. 2018). As masers are often very bright, polarimetric observations can detect the line-of-sight component of the magnetic field, reliably determining its strength (and direction, if VLBI techniques are employed) in high-mass (e.g., using 6.7 GHz methanol masers, Surcis et al. 2013) and low-mass (e.g., using 22 GHz water masers, Alves et al. 2012) disks. Despite these observations being sensitive to very specific conditions such as shocks, where magnetic fields may be amplified to several mG, comparison to models may allow for a finer characterization of maser emission over a wide range of pumping and excitation conditions, using sources such as SiO and H 2 O masers, in the near future (Lankhaar and Vlemmings 2019).
The physical and chemical conditions in inner envelopes: observational characterization
The rotating/infalling envelopes of embedded protostars are the cradles where the most pristine disks form and evolve, at the same time as the accretion of envelope material onto the protostellar embryo. Both the gas kinematics and the chemical processes at work on disk-forming scales can directly affect the disk properties. We refer the reader to the dedicated PPVII chapters devoted to the gas kinematics in star-forming structures (Pineda et al.) and the chemical conditions in protostellar environments (Ceccarelli et al.) for detailed reviews. We briefly describe here the gas properties in the inner envelopes, which have important consequences for how efficiently magnetic fields couple to the protostellar gas, the transport and redistribution of angular momentum, and ultimately the properties of disk-forming material.
The recent development of observational capabilities (increased bandwidth of the Sub-millimeter Array (SMA), the upgrade of the NOEMA interferometer, and the development of the ALMA observatory) has allowed us to observe a wide range of molecular lines as probes, not only to study the pristine chemistry in star-forming cores, but also to selectively study the physical processes at work in protostellar interiors, down to scales where disks form (Jørgensen et al. 2020; Öberg and Bergin 2021).
[Fig. 1 caption (Alves et al. 2017, 2018): Both panels show the ALMA dust continuum emission at sub-millimeter wavelength as a background image, and contours of integrated CO (2 → 1) emission tracing the outflow at blue/red velocities. The B-field lines, obtained from the polarized dust emission, are shown towards both objects as green line segments. The synthesized beams of the observations are shown in the lower right corners.]
The warm inner envelope, apart from showing emission from complex organic molecules (COMs), also presents compact emission from small molecules like H 2 S, SO, OCS and H 13 CN, most likely related to ice sublimation and high-temperature chemistry (Tychoniec et al. 2021).
Molecular line observations are also used to characterize the angular momentum contained in the star-forming gas, ultimately responsible for the formation of the protostellar disks. Recent observations of Class 0 protostellar envelopes at large scales (>1000 au) suggest the specific angular momentum scales with envelope radii, following a power-law relation j ∝ r 1.8 between 1000 and 10000 au, with values ranging from 10 −5 to a few 10 −3 km s −1 pc at 1000 au (Yen et al. 2015b;Pineda et al. 2019;Heimsoth et al. 2021). In the CALYPSO survey, molecular line observations at sub-arcsecond resolution have allowed characterization of protostellar gas motions down to ∼ 50 au scales. Gaudel et al. (2020) finds a break in the angular momentum evolution with radius, from a steep profile j ∝ r 1.6 at radii r > 1600 au, to a quasi-flat profile at radii 50-1600 au. These profiles are shown in the right panel of Figure 2, together with the SMA MASSES measurements from Heimsoth et al. (2021) in the left panel.
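A schematic of how such j ∝ r^α power laws are extracted is sketched below: an ordinary least-squares fit of log j against log r. The synthetic data merely mimic the slope and normalization quoted above; the actual fitting procedures of the cited surveys are more elaborate.

```python
import numpy as np

# Synthetic j(r) points drawn around a slope of 1.8 in the >1000 au regime,
# with 0.1 dex of scatter (all values invented for illustration).
rng = np.random.default_rng(0)
r_au = np.logspace(3.0, 4.0, 25)                       # radii, 1e3-1e4 au
j_true = 1.0e-3 * (r_au / 1.0e4) ** 1.8                # [km/s pc]
j_obs = j_true * 10 ** rng.normal(0.0, 0.1, r_au.size)

# Power-law fit in log-log space.
slope, intercept = np.polyfit(np.log10(r_au), np.log10(j_obs), 1)
print(f"fitted power-law index: {slope:.2f}")          # expect ~1.8
print(f"j(1000 au) ~ {10**(intercept + 3.0*slope):.1e} km/s pc")
```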
Envelope mass infall rates are estimated by modelling molecular lines showing inverse P-Cygni profiles (Pineda et al. 2012; Mottram et al. 2013; Evans et al. 2015), but these are rare due to the intrinsic complexity of the velocity field of the gas in the inner envelopes. Typical values measured on scales of a few hundred au suggest a gas mass accretion rate ∼ 10 −5 M ⊙ per year in Class 0 protostars. In the Class I protostars of the ALMA Ophiuchus survey, typical mass accretion rates are a few 10 −7 M ⊙ per year (Artur de la Villarmois et al. 2019). The rather high rates found in Class 0 protostars at first sight seem inconsistent with their observed protostellar luminosities, because one would expect typical accretion luminosities of a few tens of L ⊙ while the median luminosity is observed to be around a few L ⊙ (with large variations from object to object). This conundrum, known as the luminosity problem (Dunham and Vorobyov 2012; Dunham et al. 2014), may be solved by invoking episodic accretion, an idea that is also supported by observations of molecular species that are found at radii larger than expected based on their gas temperature. Assuming the chemical timescale is considerably longer than the cooling timescale of the gas, sudden temperature changes due to short but vigorous accretion bursts would explain why CO is observed where the current envelope temperature predicts it should be frozen onto dust grains (< 30 K), for example (Anderl et al. 2016; Cieza et al. 2016; Frimann et al. 2017). A recent ALMA survey of N 2 H + and HCO + toward 39 Class 0 and I protostars in Perseus suggests almost all sources in the sample show evidence for post-burst signatures in N 2 H + , while the frequency of the bursts may decrease with protostellar evolution, from a burst every 2400 yr in the Class 0 stage to 8000 yr in the Class I stage (Hsieh et al. 2019). Moreover, observations show that the velocity field of the gas at radii r < 5000 au is not organized as expected from the collapse of axisymmetric rotating cores, with many envelopes exhibiting reversal of their velocity fields or multiple velocity components (Gaudel et al. 2020; Maureira et al. 2017), and well-developed asymmetric features which may trace preferred pathways that funnel accretion, such as streamers (Pineda et al. 2020) or supersonic infall along outflow cavity walls (Cabedo et al. 2021).
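The mismatch behind the luminosity problem can be reproduced with a one-line estimate of the accretion luminosity L acc = GMṀ/R; the stellar mass and radius adopted below are illustrative assumptions for a typical Class 0 embryo.

```python
# Back-of-the-envelope accretion luminosity, L_acc = G * M * Mdot / R,
# assuming (illustrative values) M = 0.2 Msun, R = 2 Rsun, and the
# envelope-scale infall rate Mdot = 1e-5 Msun/yr quoted in the text.

G = 6.674e-8                                  # [cm^3 g^-1 s^-2]
MSUN, RSUN, LSUN = 1.989e33, 6.957e10, 3.828e33
YR = 3.156e7

M = 0.2 * MSUN
R = 2.0 * RSUN
Mdot = 1.0e-5 * MSUN / YR

L_acc = G * M * Mdot / R
print(f"L_acc ~ {L_acc / LSUN:.0f} Lsun")     # a few tens of Lsun, versus the
                                              # few-Lsun median actually observed
```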
The study of the molecular line emission at the disk surface and at the interface between the rotationally-supported disk and the infalling-rotating envelope may reveal the physical conditions of the accreted gas, at scales where the kinetic energy of the infalling gas is converted into thermal energy and rotational motions. The gas from the infalling envelope landing onto the disk can create shocks, for example by friction between the rotating material and the infalling material, raising the temperatures of the gas and dust at the disk-envelope interface to values much higher than from heating by stellar photons alone. These accretion shocks may therefore be crucial in setting the pristine chemical composition of young disks (Sakai et al. 2014b). Observational signatures of accretion shocks have been reported, on scales between the centrifugal barrier (where most of the gas kinetic energy contained in infalling motion is converted to rotational motion, producing an azimuthal velocity, v θ , larger than the Keplerian velocity) and the centrifugal radius (inside which the circular gas motions are nearly Keplerian), in a handful of young protostars. For instance, SO and other sulfur-containing species were found to be enhanced at a radius ∼ 100 au around the L1527 disk as a result of sublimation of grain mantles due to a weak accretion shock (Sakai et al. 2014a; Miura et al. 2017). Gas temperatures are also found to be enhanced in the shocked region, from ∼ 30 K pre-shock to > 60 K inside the shocked region (Sakai et al. 2017). In IRAS16293-2422A, for comparison, it is enhanced from 80-120 K to 130-160 K (Oya et al. 2016). Equivalently, warm SO 2 emission, possibly related to an accretion shock, was observed toward the B335 Class 0 protostar (Bjerkeli et al. 2019) and a few Class I sources (Oya et al. 2019; Artur de la Villarmois et al. 2019). The observed abundances of SO can be reproduced either with low-density shocks (< 10 6 cm −3 ) in a weak UV field, if the SO ice is not thermally sublimated, or high density shocks in strong UV fields (van Gelder et al. 2021). This second hypothesis seems to be supported by tentative evidence of a correlation between the amount of warm SO 2 and the bolometric luminosity, as well as observed SO 2 /SO column density ratios in the Class I source Elias 29 (Artur de la Villarmois et al. 2019). However, high-angular resolution observations (∼ 30 au) of the Class I source TMC1-A show narrow SO emission lines coming from a ring-shaped morphology which may be linked to the warm inner envelope (Harsono et al. 2021), and SO is also found to trace molecular outflows and jets in some objects (Podio et al. 2016; Tabone et al. 2017; Podio et al. 2021): emission from warm SO and SO 2 is thus not an unambiguous tracer of accretion shocks. Observations of mid-infrared shock tracers such as H 2 O, high-J CO, [S I], and [O I] with the James Webb Space Telescope may allow a finer characterization of shock conditions and put constraints on both the chemical composition of the gas incorporated into the protostellar disks and the magnetic field coupling at the disk/envelope interface.
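For reference, in the ballistic infalling-rotating-envelope picture used in these studies (e.g., Sakai et al. 2014a), the two characteristic radii mentioned above are r c = j²/(GM*) and r cb = j²/(2GM*) = r c /2. The numbers below are illustrative assumptions chosen to land near the ∼100 au barrier quoted for L1527.

```python
# Centrifugal radius r_c and centrifugal barrier r_cb for a ballistic
# infalling-rotating envelope (illustrative inputs, cgs units).

G = 6.674e-8
MSUN, AU, PC = 1.989e33, 1.496e13, 3.086e18

def centrifugal_radii_au(j_kms_pc, mstar_msun):
    """Return (r_c, r_cb) in au for specific angular momentum j [km/s pc]."""
    j = j_kms_pc * 1.0e5 * PC                 # [cm^2/s]
    r_c = j**2 / (G * mstar_msun * MSUN)
    return r_c / AU, 0.5 * r_c / AU

# Assumed j = 9e-4 km/s pc around a 0.2 Msun protostar:
r_c, r_cb = centrifugal_radii_au(9.0e-4, 0.2)
print(f"r_c ~ {r_c:.0f} au, r_cb ~ {r_cb:.0f} au")   # ~194 au and ~97 au
```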
Protostellar disks: observed properties
In the standard paradigm of star and planet formation, circumstellar disks have long been described as a natural, almost unavoidable, outcome of the envelope's angular momentum conservation during its collapse, ultimately leading to the formation of a young star (see e.g. the reviews by Bodenheimer 1991; Larson 2010). As such, they were a sound and convenient solution to the so-called "angular momentum problem" in star formation (see e.g., Mestel 1965; Prentice and Ter Haar 1971, for early reviews), as buffer structures preventing the transfer of all the envelope's angular momentum to the forming star. In the traditional analytical description of the "inside-out" collapse of a singular isothermal sphere (SIS) in solid-body rotation (Terebey et al. 1984), the centrifugal radius grows very quickly with time, as t 3 , as a result of incoming material with increasingly larger specific angular momentum, producing disk radii of a few hundred au in a few thousand years (Stahler et al. 1994). In a magnetized scenario, however, disk growth is mitigated by magnetic braking and diffusive processes, such as ambipolar diffusion, regulating the magnetic flux evolution, and the centrifugal radius grows more slowly, as t (Basu 1997, 1998). Although different initial conditions may better reproduce observations of prestellar and protostellar cores (such as, e.g., Bonnor-Ebert spheres, see for example Banerjee et al. 2004; Keto et al. 2015), and affect the details of disk formation during the main accretion phase, it is a common feature of all protostellar formation models that the introduction of magnetic fields will produce somewhat smaller rotationally-supported disks. While models and their main features are described in the following section 3, we outline here how observations of the youngest embedded protostars have been used to discriminate between them, and ultimately shed light on the role of key physical ingredients in disk formation, and in determining the pristine properties of planet-forming disks.
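A minimal sketch of the t³ growth mentioned above uses the expression commonly quoted for the rotating inside-out collapse solution, r c (t) ≈ Ω 0 ²G³M(t)³/(16c s ⁸) with M(t) = m 0 c s ³t/G and m 0 ≈ 0.975; the prefactor convention and the input values below are assumptions for illustration only.

```python
# Centrifugal radius growth for a slowly rotating singular isothermal sphere
# (inside-out collapse scaling; coefficients assumed as described above).

G = 6.674e-8                     # [cgs]
AU, YR = 1.496e13, 3.156e7
M0 = 0.975

def r_c_au(omega0_s, c_s_kms, t_yr):
    """Centrifugal radius [au]; r_c = Omega0^2 * m0^3 * c_s * t^3 / 16."""
    c_s = c_s_kms * 1.0e5
    t = t_yr * YR
    return (omega0_s**2) * (M0**3) * c_s * t**3 / 16.0 / AU

# Assumed core rotation Omega0 = 1e-13 s^-1 and sound speed 0.2 km/s:
for t in (1.0e5, 2.0e5):         # years after collapse onset
    print(f"t = {t:.0e} yr: r_c ~ {r_c_au(1.0e-13, 0.2, t):.0f} au")
```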
Disk radii
Disentangling the disk from the envelope contribution on scales where protostellar disks are expected to form has long been an observational challenge (Looney et al. 2000;Jørgensen et al. 2009). Since PP VI, many sensitive high-resolution interferometer surveys of protostellar populations have been carried out. Observations of the dust continuum emission at small envelope radii, and comparison with protostellar models of dust emission, has allowed us to determine which protostars likely host a disk, and to estimate dusty disk radii down to the observational limit (typically a few tens of au). We review here of some of the largest recent surveys, and their conclusions. Figure 3 presents all dust disk sizes reported in the literature at short wavelengths λ < 2.7mm, associated with embedded protostars.
The CALYPSO survey (Maury et al. 2010, 2019) used the Plateau de Bure Interferometer, at 1.3 mm and 2.7 mm, to characterize the disk properties of 26 Class 0 and Class I protostars.
Modeling the visibility profiles of the millimeter dust continuum emission with a combination of envelope and disk contributions, they found an average disk size of < 50 ± 10 au in the Class 0 objects, and 115 ± 15 au in the Class I objects. While this survey had the advantage of sampling several different star-forming regions, it is likely biased towards the more luminous sources that were easily observed in the pre-ALMA era.
The VANDAM survey produced VLA observations of the 8 mm dust continuum emission which were used to measure the radii of all observed Class 0 and Class I disks in Perseus using a power-law intensity model. Segura-Cox et al. (2018) find that 14/43 (33%) Class 0 and 4/37 (11%) Class I sources have resolved disks with r > 12 au, while 62/80 (78%) of Class 0 and I protostars are not associated with resolved disk emission at r > 8 au. Cox et al. (2017) observed a sample consisting largely of Class II YSOs, Flat and Class I objects in Ophiuchus with ALMA in the 870 µm dust continuum emission and found a median radius of 12.6 au for the Class I/Flat sources by fitting a 2D Gaussian to the images. A complementary study of Ophiuchus by Encalada et al. (2021) found a median radius of 23.5 au for Class I protostars. The VANDAM survey also looked at Orion protostars at 870 µm using ALMA (Tobin et al. 2020): from Gaussian fitting of the dust continuum emission, they find median dust disk radii of ∼ 48 au for the 69 Orion Class 0 protostars and ∼ 38 au for the 110 Class I sources. While these results confirm that Class 0 disks are typically compact, they reveal median dust disk sizes smaller than the median radii computed by using all the literature available (see Figure 3). This difference may stem from the statistical completeness of the VANDAM survey in one single region, while other Class I disk size estimates, such as the ones reported in Maury et al. (2019), may be biased by the mostly bright Class I protostars in the sample. Note however that Orion is a unique cloud due to its very active environment as well as the number of massive stars forming therein: this could be the cause of the peculiar decrease in disk sizes from Class 0 to Class I protostars, which is not routinely observed in other star-forming regions.
We stress that characterising disk sizes depends markedly on the choice of observations performed and the analysis carried out. For example, in some cases disk radii are estimated from the additional thermal component which cannot be accounted for by small-radii envelope emission. Alternatively, some studies assume the envelope emission is completely filtered out on sub-arcsecond scales and attribute the entire emission obtained from long-baseline maps exclusively to disk emission. While simple Gaussian models measure the spatial extent of the dust thermal emission, they ignore the potential contribution from envelope emission, which may differ, statistically, from Class 0 to Class I with the envelope dispersal/accretion. This method may also overlook complex structures (streamers, outflow cavities, etc.) which are often reported when carefully examining the spatial distribution in sensitive dust emission maps. Finally, the choice of wavelength at which to observe the thermal emission of the dust may also influence the dusty disk sizes observed. Using long wavelengths may be more sensitive to the spatial distribution of large grains, especially at more advanced protostellar stages when dust settling may have had time to operate. This was noted by Tobin et al. (2020), who find systematically more extended 0.8 mm emission than 8 mm emission in their sample of Orion disks. The use of a shorter wavelength may bias the sizes of more evolved sources because optically-thick emission is better modeled by flatter Gaussian sources with larger FWHMs.
Fig. 3 (caption, continued): The two horizontal lines show the median disk radii of 61 au and 81 au for each Class (0 in blue, I in orange) respectively, from these distributions. We stress that upper-limit radii from (yet?) unresolved disks are not included in those statistics, so these median values should certainly be interpreted as upper bounds.
These aforementioned studies rely mainly on the analysis of the thermal dust emission, but protostellar disks are gaseous structures, whose sizes should also be evaluated using the gas emission. This seems even more important as recent observations towards Class II YSOs have revealed a large population of compact dusty disks but suggest as well that the gaseous disks may be on average a factor of two larger than the dust extent (see, e.g., Barenfeld et al. 2016; Sanchis et al. 2021; Ansdell et al. 2016, 2018, as well as the PP VII review chapter by Miotello et al.). This could suggest the existence of a potential bias in the youngest protostellar disk size distribution, which is mostly obtained from dust emission. The dusty disk sizes could be different from gaseous disk sizes, although at the high densities typical of Class 0 inner envelopes, no significant gas/dust decoupling can be expected if the grain sizes remain relatively small. While the small sample of protostellar disks characterized in dust and molecular line emission appears to show gaseous radii similar to dust radii, only a larger sample can confirm this.
Embedded disks are expected to be largely rotationally supported and as such their gaseous sizes could be measured with the detection of (nearly) Keplerian gas motions around the central stellar embryo. A few studies have been able to detect Keplerian rotation from protostellar dusty disks, finding gaseous radii close to the dust radii in Class 0 protostars (Ohashi et al. 2014;Yen et al. 2017;Bjerkeli et al. 2019;Maret et al. 2020) and Class I protostars (Harsono et al. 2014;Chou et al. 2014;Yen et al. 2014;Aso et al. 2015). Observations may also be able to trace the location of the centrifugal radius and centrifugal barrier of the gas, thanks to shock tracers, as detailed in the previous section 2.4 (Sakai et al. 2014a;Alves et al. 2017;Lee et al. 2017b;Imai et al. 2019).
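To make this kinematic criterion concrete, the following minimal Python sketch contrasts the two rotation profiles used to identify a disk in position-velocity data; the stellar mass and break radius are hypothetical values for illustration, not fits to any of the sources cited above.

```python
import numpy as np

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # g
AU = 1.496e13      # cm

def keplerian_velocity(r_au, m_star_msun):
    """Keplerian rotation speed v = sqrt(G M_* / r), returned in km/s."""
    r = np.asarray(r_au) * AU
    return np.sqrt(G * m_star_msun * M_SUN / r) / 1e5

# Inside the disk the gas follows v ∝ r^(-1/2); an infalling-rotating
# envelope conserving specific angular momentum instead follows v ∝ r^(-1).
# The break between the two power laws marks the gaseous disk radius.
r = np.logspace(0, 2, 50)               # radii from 1 to 100 au
r_disk = 30.0                           # hypothetical disk radius, au
v_disk = keplerian_velocity(r, 0.5)     # hypothetical 0.5 M_sun protostar
v_env = keplerian_velocity(r_disk, 0.5) * (r_disk / r)
```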
The small number of such detections is due to the complexity of the velocity field in embedded protostars on the smallest scales. Deciphering the kinematic signature of disks in embedded protostars is hampered by the intrinsic complexity of the protostellar environment, which naturally produces entangled contributions from, e.g., envelope infall, rotation, and outflowing gas, on overlapping scales and mixed along the lines of sight. For example, in the HH 212 Class 0/I protostar, the sequential analysis of ALMA observations providing increased spatial resolution and sensitivity has led to a downward revision of the disk radius from ∼ 100 au (Codella et al. 2014; Lee et al. 2014) to a smaller < 44 au radius (Lee et al. 2017b), thanks to the characterization of complex kinematic structures contributing at intermediate radii 50-200 au (Podio et al. 2015). Similarly, in L483, Oya et al. (2017) argued that the centrifugal barrier is at radii of 30-200 au, while Jacobsen et al. (2019) suggest molecular lines trace infalling gas down to radii of 10-15 au, rather than Keplerian rotation.
Moreover, the kinematic signature of embedded and heavily accreting disk structures may present significant deviations from purely axisymmetric Keplerian rotation curves (e.g. because of the generation of relatively massive tidal streams and spiral arms, Takakuwa et al. 2014; Pérez et al. 2016; Tobin et al. 2016; Takakuwa et al. 2017; Alves et al. 2019), making it difficult to characterize young embedded disks as easily as their older counterparts. An example is found in the L1527 Class 0/I protostar, where the ∼ 60 au disk, although one of the first resolved kinematically and seen in a favorable edge-on configuration, was recently proposed to be warped because of either anisotropic accretion of gas with different rotational axes, or misalignment of the rotation axis of the disk with the magnetic field direction (Sakai et al. 2019).
Confirming the rotationally-supported nature of candidate protostellar disks observed in the dust continuum emission is also observationally expensive as it requires sensitive observations of faint molecular emission, from species which remain optically thin at high densities. Future observational studies should not only rely on observations of multiple molecular species, but also model them jointly to characterize the gaseous disks in the embedded phases.
Disk masses
Several observational studies have measured disk masses in embedded protostars, using the emission due to dust, which is then converted to a gas mass using an assumed gas-to-dust ratio. In Orion, Tobin et al. (2020) use the 870 µm ALMA dust continuum fluxes to measure the disk masses, finding a median dust mass of 52.5 M⊕ for the 69 Class 0 single protostars, and 15.2 M⊕ for the 110 Class I protostars. The choice of wavelength(s) for the dust observations is crucial not only for estimating disk sizes but also their masses. For example, in Perseus, Tychoniec et al. (2020) measure median dust masses of the embedded disks of ∼ 158 M⊕ for Class 0 and 52 M⊕ for Class I from the VLA dust continuum emission at centimeter wavelengths. In comparison, the lower limits on the median dust disk masses extrapolated from sub-millimeter dust emission in ALMA bands are in better agreement with the masses found in Orion: 47 M⊕ and 12 M⊕ for 38 Class 0 and 39 Class I disks, respectively, in Perseus.
These disk dust masses are however subject to several caveats, due to the possible presence of optically thick dust emission and to the assumptions made regarding dust opacity and temperature in these largely unconstrained environments. First, since most studies are based on a single sub-millimeter flux measurement, the mass estimates assume optically thin dust emission: if the dust emission is not optically thin (as shown in several cases, see Ko et al. 2020 and discussions therein), then the dust masses will be lower limits. We stress that this could cause a bias leading to an underestimate of the masses of the most evolved protostellar disks, which could be optically thick as collapse proceeds (Galván-Madrid et al. 2018; Zhu et al. 2019). Regarding mass estimates based on dust emission at longer (cm) wavelengths, the contribution of large dust grains to the observed emission could be enormously variable from object to object, and may trace mostly the compact emission from the inner disk; obtaining a dust mass from cm dust emission is thus also highly uncertain. (Sub-)millimeter dust emission is also, unfortunately, an uncertain probe of dust masses, as it suffers from uncertainties on dust emissivity properties, which have been found to be very different from those of ISM dust (see section 4). Finally, the considerable unknowns regarding the typical gas and dust temperatures of these young disks, actively accreting and bombarded by envelope material, also limit the ability of dust continuum observations to provide accurate disk masses. That said, a few clues have recently come to light suggesting these disks are warmer than their older counterparts, although these conclusions still suffer from poor statistics (van't Hoff et al. 2020).
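For reference, the standard optically-thin estimator underlying the dust masses quoted above is M_dust = F_ν d² / (κ_ν B_ν(T_dust)). The Python sketch below implements it with illustrative (not source-specific) values for the flux, distance, opacity, and temperature — the latter two being the systematics the text flags.

```python
import numpy as np

H = 6.626e-27        # Planck constant, erg s
C = 2.998e10         # speed of light, cm/s
K_B = 1.381e-16      # Boltzmann constant, erg/K
M_EARTH = 5.972e27   # g
PC = 3.086e18        # cm

def planck(nu, temp):
    """Planck function B_nu(T), in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * temp))

def dust_mass_mearth(flux_mjy, dist_pc, nu_ghz, kappa=1.8, t_dust=30.0):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T)), in M_earth.

    kappa: dust opacity in cm^2/g at the observing frequency (uncertain);
    t_dust: assumed dust temperature in K. Both choices move the answer
    by factors of a few, which is the caveat discussed in the text.
    """
    flux_cgs = flux_mjy * 1e-26          # mJy -> erg s^-1 cm^-2 Hz^-1
    d = dist_pc * PC
    return flux_cgs * d**2 / (kappa * planck(nu_ghz * 1e9, t_dust)) / M_EARTH

# A colder assumed disk yields a larger inferred mass from the same flux:
print(dust_mass_mearth(100.0, 300.0, 345.0, t_dust=30.0))   # ~97 M_earth
print(dust_mass_mearth(100.0, 300.0, 345.0, t_dust=15.0))   # ~2.7x larger
```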
Demographics of older, Class II, disks appear to imply that most do not contain enough material to form the known census of exoplanets (Ansdell et al. 2016;Manara et al. 2018). Both dust and gas mass estimates (usually from CO emission) suffer from large uncertainties and their reliability is still widely debated by the community (e.g., Aikawa et al. 1996;Yoneda et al. 2016;Thi et al. 2001;Dodson-Robinson et al. 2018;Powell et al. 2019;Bergin et al. 2013;Favre et al. 2013;Kama et al. 2016;McClure et al. 2016). Details are beyond the scope of this review as we focus on the youngest disks. Nevertheless, the resultant uncertainties highlight the importance of determining disk masses from the earliest phases, as such masses determine how early planets might form. Recent observations largely suggest the dust masses of young Class 0 and I disks are larger, by at least a factor of a few, compared to more evolved Class II disks. In Ophiuchus, for example, the mean Class I disk dust mass is found to be significantly lower than in Perseus and Orion, ∼ 2. Nevertheless it is still about 5 times greater than the mean Class II disk mass in the same region, although the dispersion in each class is so high that there is a large overlap between the two distributions. Interestingly, some annular structures in the dust continuum emission of Class I protostars have recently been observed (Harsono et al. 2018;Segura-Cox et al. 2020;Sheehan et al. 2020). While their origin is still debated, it is possible they are carved by the early planetary seeds. Observations in the coming years of the gas kinematics in young disks should help constrain whether this is the case. Moreover, these preliminary findings, regarding both dust masses and their spatial distributions, still need to be validated through the use of large, high spatial resolution, multi-wavelength samples that allow one to remove current degeneracies. If confirmed, they would make embedded protostars the likely cradles of planetary formation.
Outflows and jets in embedded protostars
Jets from young stars have been observed from the X-ray regime right down in frequency to the radio band. Historically the emphasis has been on studying them at optical and near-infrared wavelengths, but with the advent of high spatial and spectral resolution millimeter arrays such as ALMA, the focus has increasingly shifted to molecular emission lines and more embedded sources. Whether we are looking at the ionized, neutral atomic, or molecular component, the presence of line emission ensures we can determine fundamental outflow parameters such as velocity, density, temperature, etc. Of course these various species need not be co-located, nor are they expected to be, and so no single set of waveband-limited derived parameters can characterise an outflow. Outflow emission arises primarily from the cooling zones of shocks, shocks that are generated by higher velocity gas catching up with slower material ahead of it. This means the strength of the shock is determined by velocity differences rather than the actual outflow velocity with respect to the source.
There is insufficient space here to provide an in-depth review of the properties of outflows generated by the wide variety of YSOs, or even proto-brown dwarfs, from which they are observed; the reader is instead referred to Frank et al. (2014), Bally (2016) and Ray and Ferreira (2021). Here we will concentrate on a few salient properties of those from the most embedded (Class 0/I) sources and describe in general terms how observations can give us clues to their launching mechanism. Note that as Class 0/I sources are highly embedded, observations have largely, although not exclusively, been confined to millimeter wavelengths, i.e. their molecular emission.
Of course molecular outflows from young stars have been known for several decades (e.g., Arce et al. 2007), but the realisation that such outflows can appear (e.g. in SiO, SO or even CO) just as jet-like as their ionized/neutral atomic counterparts was slower in coming (e.g., Plunkett et al. 2015; Lee et al. 2017a; Jhan and Lee 2021). Studies using mm interferometers of Class 0/I sources show their outflows can be traced right back to the protostar, in some cases with very narrow opening angles (Bjerkeli et al. 2019) and velocities comparable to the young star's escape velocity (Hirano et al. 2010). All of this suggests such outflows, or at least their most highly collimated component, i.e. the jet, arise from a region no greater than ≈ 1 au centred on the YSO. A classic example, and something of a poster child, is HH 212 (e.g., Lee 2020, and see Fig. 4). While extended emission in H2 is observed on pc scales, SiO can be traced back to at least 10 au from the source. Certainly the presence of a disk is a necessary, although not necessarily sufficient, condition for jet launching. The other ingredient needed, as in almost all astrophysical models for the generation of jets, is a magnetic field (e.g., Livio 2011). In §2.7.2, we will examine the evidence for magnetic fields in outflows, but here we mention some plausibility arguments as to why we think they are present and why they are thought to be dynamically important in transferring angular momentum from infalling/rotating motions to outflowing motions, at least close to a YSO. While there is very little data on magnetic fields threading the base of the jets and protostellar disk surfaces where disk winds are launched in embedded sources, we know that the surface magnetic fields of Class II/III young stars have strengths of several kG (e.g., Donati et al. 2020), as derived from Zeeman splitting measurements. Data on field strengths in the innermost region of protostellar disks (i.e. a few stellar radii) are sparse, but observations obtained from spectro-polarimetry in the disk of FU Ori (Donati et al. 2005) lead to values similar to those derived for the parent star. In addition, studies of paleomagnetism in chondrules from the early period in the formation of the Solar System (Weiss et al. 2021) suggest field strengths of ≈ 1 G at 1 au. Such values, in combination with the expected density, temperature and velocity of a stellar/disk wind, are clearly capable of collimating a wind into a jet, at least very close to the star. The most efficient means of achieving this is for the ionised gas (dragging associated atomic/neutral material with it through collisional coupling) to be centrifugally ejected along the magnetic field lines. These field lines are either anchored to the star (stellar winds), at the star-disk interface (so-called X-winds), or alternatively in the disk itself (Disk or D-winds). For a review of these various options the reader is referred to Pudritz and Ray (2019), but also see §3.1.5. In each case the magnetic field associated with the wind/outflow, while starting off largely poloidal, increasingly becomes toroidal with distance, particularly in the case of D-winds, as a result of the material carrying more and more angular momentum (see below). Tension in these wound-up toroidal fields then generates forces towards the axis, known as "hoop stresses", that focus the outflow further into a jet (Spruit et al. 1997).
As explained elsewhere in this chapter (§3.1.5), in order for accretion to occur in a protostellar disk, angular momentum must either be removed poloidally, for example through a magnetic tower flow (§3.1.5), a disk wind, or an outflow, or alternatively redistributed radially outwards within the disk (for example through gravitational torques at large disk radii, Vorobyov and Basu 2006, 2010, 2015). In any event, in recent years it has become increasingly clear that angular momentum in a disk is removed through the former mechanisms, as possible sources of disk viscosity, e.g. MRI-driven turbulence, seem unlikely to work due to non-ideal MHD effects (e.g., for Class 0 disks see Kawasaki et al. 2021). While winds/outflows from the star, the star-disk interface and the disk itself are all likely to play a role in angular momentum transport (Pudritz and Ray 2019), it is clear that a significant fraction of the angular momentum must be removed through a disk wind/outflow for material to reach the inner disk radius and be accreted onto the star along magnetic field lines, as suggested by observations (Gravity Collaboration et al. 2020). Depending on where the wind/outflow is launched, it is expected to carry varying amounts of angular momentum per unit mass, and rotation is therefore expected to be a key signature of disk-generated outflows (Pudritz and Ray 2019). The search for rotation in ionized/neutral atomic jets has been challenging from an observational perspective, as it is difficult to spatially resolve such jets transversely, the maximum rotation (toroidal) velocity is only a small fraction of the poloidal velocity, and the signal-to-noise ratio is often not high (e.g. Frank et al. 2014). All of these issues are less of a problem with mm-interferometers such as ALMA, and even radio interferometers such as the Jansky VLA, which not only have the high spatial resolution needed to resolve the outflow transversely, but also the necessary high spectral resolution. Thus, for example, at the start of the ALMA era, Hirota et al. (2017) observed rotation in the outflow from Orion Source I. In turn, using basic magnetocentrifugal wind theory, including angular momentum conservation (for details, see Anderson et al. 2003), they could estimate from where in the disk the outflow was launched and discovered it to be far (≳ 10 au) from the protostar (see also Bjerkeli et al. 2016). That the magnetic field has the configuration and strength expected to collimate this outflow has also recently been found (Hirota et al. 2020). Rotation has now been detected in many molecular outflows, including TMC1A (Bjerkeli et al. 2016), HH 211 (Lee et al. 2018a), HH 212 (Lee et al. 2017a), several outflows in NGC 1333 (Choi et al. 2011; Chen et al. 2016b; Zhang et al. 2018), HH 30 (Louvet et al. 2018) and OMC2/FIR6b (Matsushita et al. 2021). In the latter, it was found that the high-velocity jet axis is significantly inclined from the low-velocity outflow axis, indicating that the launching radii of the high- and low-velocity components differ in an inclined disk. In all cases a disk origin for the jet/outflow is indicated and, where observed, the direction of disk rotation matches that of the outflow (see Fig. 4), as would be expected. It would also appear that all of these observations support the direct disk wind expected from core collapse simulations; a worked example of the launch-radius estimate is sketched below.
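As an illustration of this angular momentum argument, the sketch below implements the launch-radius estimate of Anderson et al. (2003): the footpoint angular velocity follows from the asymptotic energy and angular momentum of the wind, Ω₀ ≈ v_p²/(2 r v_φ), and the footpoint radius from Keplerian rotation, r₀ = (G M_*/Ω₀²)^(1/3). The numerical inputs are purely illustrative, not measurements of any of the sources above.

```python
import numpy as np

G = 6.674e-8       # cm^3 g^-1 s^-2
M_SUN = 1.989e33   # g
AU = 1.496e13      # cm
KMS = 1e5          # cm/s

def launch_radius_au(r_obs_au, v_rot_kms, v_pol_kms, m_star_msun):
    """Wind footpoint radius from observed outflow rotation (Anderson et al. 2003)."""
    j = (r_obs_au * AU) * (v_rot_kms * KMS)       # observed specific angular momentum
    omega0 = (v_pol_kms * KMS) ** 2 / (2.0 * j)   # footpoint angular velocity
    r0 = (G * m_star_msun * M_SUN / omega0**2) ** (1.0 / 3.0)
    return r0 / AU

# Illustrative numbers: 10 km/s of rotation at 10 au from the axis in a
# 100 km/s jet around a 1 M_sun star points to launching from ~0.7 au,
# i.e. from the disk rather than from the stellar surface.
print(launch_radius_au(10.0, 10.0, 100.0, 1.0))
```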
2.7. The role of magnetic fields in shaping protostellar disks and outflows: observational keys
Observations of magnetic fields and disks
Here we discuss the main clues that observations provide regarding the role played by magnetic fields in setting disk sizes and masses during the embedded phase.
In a simple scenario for the collapse of a rotating core, even a small difference in initial specific angular momentum on large scales is expected to result in a large change in the radius of the resultant rotationally-supported disk. Observations of the angular momentum on scales where the gas should contribute to the building of the disk (Sai et al. 2022) suggest most protostars should give rise to disks that are larger, by a factor of a few, than those observed (Gaudel et al. 2020). Here it is assumed that the angular momentum contained in rotation is entirely transmitted to the disk scale. Naively such observations potentially support models with efficient magnetic braking. Note however, that several theoretical studies have shown that the observations of angular momentum in protostellar envelopes can be satisfactorily reproduced without large scale core rotation, questioning whether observed angular momentum values really stem from rotational motion (Verliat et al. 2020;Xu and Kunz 2021;Lee et al. 2021c).
From an observational perspective, the relationship between the outflow axis and the mean magnetic field direction has received a lot of recent attention. This is because it is thought that the efficiency of magnetic braking in redistributing angular momentum (and hence preventing the growth of disks to large radii) when a core collapses depends on the configuration of the core's magnetic field with respect to its rotation axis (Joos et al. 2012; Hirano et al. 2020). Starlight polarization observations show large-scale magnetic fields that are misaligned with outflows powered by evolved (Class II) young stellar objects (e.g., Ménard and Duchêne 2004), but the fields are better aligned with outflows from younger protostars (e.g., Vrba et al. 1986; Targon et al. 2011; Soam et al. 2015). Single-dish observations examining the relative orientation of the B-field on core scales with the outflow axis are inconclusive: some studies suggest cores exhibit a mean B-field that is not randomly distributed with respect to the large-scale outflow axis (Soam et al. 2015; Yen et al. 2021b), while others suggest no correlation (Doi et al. 2020). Finally, some sub-mm polarization observations of dense cores have found that the direction of the core's minor axis correlates with the magnetic field orientation (Chapman et al. 2013). A flattening in this preferential direction could be explained by the development of large magnetic pseudo-disks (equatorially flattened gas structures which are not supported by rotation; Galli and Shu 1993).
Likewise, observations using interferometers have not reached any firm conclusions regarding correlations between outflow, disk, and magnetic field axes. On very small envelope scales, the mean B-field direction is observed to be randomly aligned with respect to the outflow axis (Hull et al. 2013). However, as reported by Hull et al. (2014), there is evidence that cores with lower fractional polarization tend to have their outflows perpendicular to the mean B-field. This suggests the existence of a non-negligible toroidal field (caused by the core/disk rotation) in addition to the poloidal one coming from envelope-disk accretion. An SMA survey of 20 low-mass protostars (Galametz et al. 2020) has found that protostellar envelopes tend to have a higher angular momentum on 1000 au scales if the mean envelope magnetic field measured on similar scales is misaligned with respect to the rotational axis of the core (assumed to coincide with the outflow axis). In contrast, observations analyzed in Yen et al. (2021a) have shown no correlation of dust continuum protostellar disk radii with misalignment of the magnetic fields and outflow axes in Orion A cores. Both results may be consistent with the magnetic field being efficient at reducing the amount of angular momentum transmitted to the inner envelope scales (< 1000 au), inhibiting the formation of large hydro-like disks. However, they may also support the hypothesis that the angular momentum responsible for the formation and growth in size of protostellar disks has a more local origin, on scales of a few hundred au, as discussed previously.
Comparison of observed properties with models of disk formation, in both specific objects and statistical samples, may also help our understanding. Maury et al. (2018) have shown that the geometry of the B field, the small disk size, and the kinematics of the B335 inner envelope can only be explained by a family of MHD models where the initially poloidal field is pulled in the direction of collapse, while at the same time being strong enough (µ ∼ 6) to partially counteract the transfer of angular momentum inwards. In B335, they show the magnetic field likely regulates the formation of the disk, constraining its size to < 20 au (see Figure 5). The role of the magnetic field in determining the disk size in B335 has also been proposed in follow-on studies (Yen et al. 2019; Bjerkeli et al. 2019; Imai et al. 2019). Detailed analyses of the gas kinematics, disk sizes, and B field configuration in other objects are needed to ascertain whether the magnetically-regulated disk growth in B335 is common, or a special case.
Since Protostars and Planets VI, and despite their complexity, observations have successfully explored the properties of Class 0 disks, in large part thanks to the increased spatial resolution and sensitivity of new millimeter surveys. As shown in Figure 3, the vast majority of observations now point towards a population of young protostellar disks that are primarily compact. From a statistical point of view this, and the confirmed presence of magnetic fields on the scales where disks form, may naively point to magnetic fields having a key dynamical role in regulating the early growth of stellar embryos and their surrounding disks. Indeed, such disk sizes are difficult to reconcile with purely hydrodynamical models of disk formation and, if the B-field is well coupled, magnetic braking naturally produces such small disks (see, e.g. Lebreuilly et al. 2021, for a statistical comparison of observed disk sizes and protostellar disk sizes in magnetized models). However, observations are still largely unable to characterize how effective magnetic fields are at coupling on the smallest scales where protostellar disks form. This coupling is mostly due to small grains and the ionized species in the gas (see Section 3). Measuring the amount of these two key ingredients in and around embedded disks is thus fundamental to setting constraints on the role of the magnetic field, and on the importance of diffusive processes, such as ambipolar diffusion, in counteracting the outward transport of angular momentum from magnetic braking. In embedded sources, the ionization rate from cosmic rays is inferred using various chemical signatures, often HCO+ and N2H+ (Bron et al. 2021). Measurements suggest typical cosmic ray ionization rates ζ ∼ 10^-17 - 10^-15 s^-1, with large uncertainties (Ceccarelli et al. 2014; Podio et al. 2014; Favre et al. 2017, 2018). Such measurements however remain rare, not only because they are difficult to make from an observational perspective but also because of chemical model degeneracy. In any event such high values seem inconsistent with ionization from external radiation only: galactic cosmic rays, mostly relativistic protons, are the dominant source of ionization in dense molecular gas where ultraviolet radiation cannot penetrate, but their flux should be severely attenuated inside dense cores (Grenier et al. 2015; Padovani et al. 2018). Several recent studies (Padovani et al. 2015; Silsbee et al. 2018; Padovani et al. 2021; Fitz Axen et al. 2021; Gaches and Offner 2018) have investigated the role of shocks at the protostellar surface and of magnetic acceleration within jets as efficient forges to locally accelerate low-energy cosmic rays, and their role in increasing the ionization rate of the shielded protostellar material. While the ionization can be increased locally, for example at the accretion shock, the typical gas densities in these regions should lead to a significant decrease in the ionization fraction. Observational studies of the chemical composition of the dense gas, and of other properties, could shed light on the small-scale conditions that determine the coupling of the magnetic field to the disk gas. Note that, in a recent study of the TMC1-A Class I protostar, Harsono et al. (2021) estimate an ionization rate from cosmic rays ζ ∼ 10^-17 s^-1 at the disk surface. This low ionization suggests the B-field is weakly coupled and accretion through the disk may not be driven by the magneto-rotational instability.
Fig. 5. - Observations and best model matching the properties of the B335 protostar, from Maury et al. (2018). These authors have demonstrated that the observed B field geometry shown in the left panel, the small disk size, and the kinematics of the B335 inner envelope can only be explained by a family of MHD models where the initially poloidal field is pulled in the dominant direction of the collapse, yet is strong enough (µ ∼ 6) to partially counteract the transfer of angular momentum inwards and set the disk size in this Class 0 protostar; the best model is shown in the right panel.
Future radio observations will undoubtedly bring more constraints on the dynamical role of magnetic fields in regulating protostellar collapse and in setting pristine disk properties. Future large surveys will allow us to statistically investigate the dynamical role magnetic fields play by careful examination of their relationship with the angular momentum and mass transport processes from envelopes to disks (Galametz et al. 2020;Yen et al. 2021a). The true spatial distribution of magnetic fields on disk-forming scales may become routinely observed thanks to the development of polarimetric capabilities and modeling tools, e.g. in connection with polarized dust emission. Finally, observations with, for example, the next generation of mid-infrared facilities will help characterize the small scale structures of these young disks, still largely unresolved spatially. These studies may allow us to put observational constraints on the models of magnetic field properties and coupling efficiency inside protostellar disks (Lee et al. 2021d), and test how mass accretion onto a protostar might be mediated through its disk.
Observations of Magnetic Fields in Outflows Versus Prediction
Perhaps the most challenging physical quantity to determine in an outflow from a young star is the strength and direction of its magnetic field. Measuring the Zeeman effect for example using optical/near-infrared atomic lines is not practical as any splitting is so weak. Interestingly, however, in the near future it may be possible to use molecular lines, e.g., those of sulphur monoxide SO (Cazzoli et al. 2017). This of course might be particularly pertinent in the case of molecular outflows from less evolved sources. In general, however, indirect methods have to be employed to assess field strengths. For example, as pointed out many years ago, magnetic fields can cushion an outflow shock, altering the degree of compression in the postshock radiative cooling zone and changing observed emission line ratios (Hartigan et al. 1994). Just measuring line ratios however is not sufficient as the shock solutions are degenerate. Instead additional parameters, such as the spatial extent of the cooling zone are needed, to break the degeneracy and this requires very high spatial resolution, for example as afforded by HST (Hartigan and Wright 2015).
An alternative approach, which can at least be applied in some outflows, is to measure their non-thermal radio emission. Of course most of the radio flux from outflows is thermal in origin and is simply the free-free emission from the collimated ionized jet. That said, a number of outflows show pockets of non-thermal emission, usually well away from the parent YSO (Anglada et al. 2018). A classic example is HH 80/81, which has been shown through VLA observations to display polarized synchrotron emission (Carrasco-González et al. 2010). This indicates, perhaps somewhat surprisingly, the presence of relativistic particles which have presumably been energised through diffusive shock acceleration even though the shock velocities are low (Rodríguez-Kamenetzky et al. 2016). In any event, minimum energy requirements, including an equipartition field, in conjunction with the known non-thermal flux, can lead us to estimates of the magnetic field (for example following Longair 2011). These values, typically 20-200 µG, appear consistent with what we might expect in the regions observed far away from the collimation zone. More recently it has emerged that low-frequency radio observations of non-thermal emission can offer us a novel method of deriving at least the field strength, if not its direction. This technique relies on the fact that in an outflow there can be a mixture of relativistic particles and thermal electrons. The latter, depending on the electron density, de-collimate the forward-beamed radiation from the non-thermal electrons, giving rise to a decrease in flux, or effectively a low-frequency turnover in the synchrotron spectrum. If the thermal electron density is known independently, for example from optical emission line diagnostics, then the magnetic field can be derived. This so-called Razin effect may have been seen in an outflow for the first time. Finally, it is worth noting that the GK effect (see §2.1) has been used to probe the magnetic field in protostellar outflows. In particular, Lee et al. (2018c) have detected this effect in SiO observations of the HH 211 outflow using ALMA. Despite the inherent uncertainty of 90° in the magnetic field orientation, the implied magnetic field geometry may well be largely toroidal, as expected for a collimating field.
3. A unified scenario for the formation of protostars, protoplanetary disks, jets, and outflows
As described in §2, the rapid progress of observational research since PP VI has provided strong constraints on theoretical models. In §3, we describe the theoretical progress and present a comprehensive scenario for the formation and early evolution of protostars, disks, and outflows that is consistent with observational constraints. The key mechanism of this scenario is the coupling, decoupling, and recoupling of the gas and magnetic field at appropriate scales.
3.1. From cloud cores to protostars: formation and evolution of outflows, jets, and protoplanetary disks

3.1.1. Essential microscopic physics for the formation of protostars, disks, and outflows
In this section, as a basis for the ones that follow, we will introduce the physical processes that are crucial for the formation and evolution of protostars and protoplanetary disks.
Molecular cloud cores, from which protostars, disks, and outflows are formed, are generally weakly ionized (with typical electron abundances on the order of 10^-7; Bergin and Tafalla 2007). Applying Ohm's law, the current density j is tied to the electric field in the neutral frame, E′, as
\[
\mathbf{j} = \sigma_\parallel \mathbf{E}'_\parallel + \sigma_H \frac{\mathbf{B}}{|\mathbf{B}|} \times \mathbf{E}'_\perp + \sigma_P \mathbf{E}'_\perp, \tag{1}
\]
where σ_∥, σ_P, and σ_H are the parallel, Pedersen, and Hall conductivities, respectively, and E′_∥ and E′_⊥ are the components of E′ parallel and perpendicular to the magnetic field B.
The three components of the conductivity tensor can be expressed as (Norman and Heyvaerts 1985;Wardle and Ng 1999;Wardle 2007),
\[
\sigma_\parallel = \frac{ec}{B}\sum_i Z_i n_i \beta_{i,\mathrm{H_2}}, \tag{2}
\]
\[
\sigma_P = \frac{ec}{B}\sum_i \frac{Z_i n_i \beta_{i,\mathrm{H_2}}}{1+\beta_{i,\mathrm{H_2}}^2}, \tag{3}
\]
\[
\sigma_H = -\frac{ec}{B}\sum_i \frac{Z_i n_i \beta_{i,\mathrm{H_2}}^2}{1+\beta_{i,\mathrm{H_2}}^2}; \tag{4}
\]
here, e and c are the elementary charge and the speed of light, and Z_i, n_i, and β_{i,H2} are the charge number, number density, and Hall parameter of the charged species i, respectively. The Hall parameter β_{i,H2}, the ratio of the gyro-frequency to the neutral collision frequency, determines the relative importance of the Lorentz and drag forces in balancing the electric force. By solving equation (1) for the electric field and substituting it into Faraday's law, the induction equation can be formulated as (e.g., Kunz and Mouschovias 2009),
\[
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}) - \nabla\times\left[ \eta_{\rm Ohm}\,\nabla\times\mathbf{B} + \eta_{\rm Hall}\,(\nabla\times\mathbf{B})\times\frac{\mathbf{B}}{|\mathbf{B}|} + \eta_{\rm AD}\,\frac{\mathbf{B}}{|\mathbf{B}|}\times\left((\nabla\times\mathbf{B})\times\frac{\mathbf{B}}{|\mathbf{B}|}\right) \right], \tag{5}
\]
which departs from the ideal MHD limit by the introduction of the three non-ideal MHD terms, characterized by the Ohmic (η Ohm ), Hall (η Hall ), and ambipolar (η AD ) resistivities. The resistivities are related to the conductivities as
\[
\eta_{\rm AD} = \frac{c^2}{4\pi}\left( \frac{\sigma_P}{\sigma_P^2+\sigma_H^2} - \frac{1}{\sigma_\parallel} \right), \tag{6}
\]
\[
\eta_{\rm Ohm} = \frac{c^2}{4\pi\sigma_\parallel}, \tag{7}
\]
\[
\eta_{\rm Hall} = \frac{c^2}{4\pi}\,\frac{\sigma_H}{\sigma_P^2+\sigma_H^2}. \tag{8}
\]
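As a concrete illustration of equations (2)-(8), the short Python sketch below evaluates the conductivities and resistivities for a toy mixture of charged species; the species list (densities, charges, Hall parameters) and the field strength are illustrative assumptions, not the output of a real ionization network. Note the convention in which the Hall parameter carries the sign of the charge. The toy mixture also reproduces the negative σ_H discussed next.

```python
import numpy as np

E_CHARGE = 4.803e-10   # elementary charge, esu
C_LIGHT = 2.998e10     # speed of light, cm/s

def conductivities(B, species):
    """Parallel, Pedersen, and Hall conductivities, eqs. (2)-(4).

    `species` is a list of (Z_i, n_i, beta_i): charge number, number
    density [cm^-3], and signed Hall parameter of each charged species.
    """
    pref = E_CHARGE * C_LIGHT / B
    s_par = pref * sum(Z * n * b for Z, n, b in species)
    s_ped = pref * sum(Z * n * b / (1.0 + b**2) for Z, n, b in species)
    s_hall = -pref * sum(Z * n * b**2 / (1.0 + b**2) for Z, n, b in species)
    return s_par, s_ped, s_hall

def resistivities(s_par, s_ped, s_hall):
    """Ambipolar, Ohmic, and Hall resistivities, eqs. (6)-(8)."""
    c2 = C_LIGHT**2 / (4.0 * np.pi)
    perp2 = s_ped**2 + s_hall**2
    return c2 * (s_ped / perp2 - 1.0 / s_par), c2 / s_par, c2 * s_hall / perp2

# Toy mixture: well-coupled ions and electrons (|beta| >> 1) plus decoupled,
# negatively charged grains (|beta| << 1), with n_e < n_i as required by
# charge neutrality when grains carry negative charge.
species = [(+1, 1.00e-3, 1.0e3), (-1, 0.99e-3, -1.0e4), (-1, 1.0e-5, -1.0e-3)]
s_par, s_ped, s_hall = conductivities(10e-6, species)   # B = 10 microgauss
print(s_hall < 0)                                       # True: negative sigma_H
print(resistivities(s_par, s_ped, s_hall))              # eta_Hall also negative
```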
The microscopic origin of these three non-ideal MHD effects lies in how the particles interact in each case: Ohmic dissipation arises from collisions between neutral and charged particles, ambipolar diffusion from charged particles slipping through the neutrals, and the Hall effect from the velocity difference between positively and negatively charged particles. In the cloud core and disk, η_Hall (or equivalently σ_H) is negative over almost the entire density range (see Figure 6). This may be counter-intuitive, so we discuss the origin of the negative σ_H here. The key is the total dust charge, which is negative in low-density regions due to the large thermal velocity of the electrons. In this case, charge neutrality requires the total charge of the ions to be slightly larger in magnitude than that of the electrons. Consider the regime in which the Hall parameters of ions (β_i), electrons (β_e), and dust (β_d) obey β_i ≫ 1, β_e ≫ 1, β_d ≪ 1, i.e., ions and electrons are tied to the magnetic field but dust is not. Since β²/(1+β²) → 1 for the well-coupled ions and electrons, equation (4) then gives σ_H → (ec/B)(n_e − n_i) (the contribution of dust can be neglected because β_d ≪ 1). Therefore, σ_H is negative because n_e < n_i. Intuitively, this can be understood as follows. When negatively charged dust is coupled to the neutral gas while the ions and electrons are coupled to the magnetic field, the relative motion of the ion-electron mixture (which is slightly positively charged) and the negatively charged dust creates the electric current. The roles of the positive and negative charge carriers are thus reversed compared to the ordinary Hall effect, which makes σ_H negative.
In general, an ionization chemistry network (Oppenheimer and Dalgarno 1974; Umebayashi and Nakano 1990; Nakano et al. 2002; Marchand et al. 2016; Wurster 2016; Zhao et al. 2018b; Guillet et al. 2020a) should be properly constructed in order to obtain the resistivities in the induction equation (5). Cosmic rays (CRs) are the main source of ionization triggering the chain of ionization chemistry, as most of the interstellar UV radiation is attenuated in dense molecular cores (visual extinction A_V > 4 mag; McKee 1989). On the other hand, dust grains act as charge absorbers and can also be major contributors to the conductivity. As an example, Figure 6 shows the resistivity values for various dust distributions. Note that the Ohmic resistivity is many orders of magnitude smaller than the ambipolar resistivity at ρ_g ≲ 10^-10 g cm^-3 (or n_g ≲ 10^13 cm^-3).
In this section, we use the following approximate formulae for the Ohmic and ambipolar resistivities for a quantitative discussion. Ohmic dissipation, η_Ohm, is approximated as (Nakano et al. 2002; Machida et al. 2007, see also Figure 6)
\[
\eta_{\rm Ohm} \sim 1.6\times10^{13}\,\rho_{g,10^{-16}\,{\rm g\,cm^{-3}}}\;T_{10\,{\rm K}}\ \ {\rm cm^2\,s^{-1}}, \tag{9}
\]
where ρ_g and T denote the gas density and temperature, and the notation f_X means f_X = (f/X). For ambipolar diffusion, the diffusion rate, η_AD, is approximated as (Shu 1983; Zhao et al. 2018b, see also Figure 6)
\[
\eta_{\rm AD} \sim 2\times10^{18}\ {\rm cm^2\,s^{-1}} \times
\begin{cases}
\rho_{g,10^{-16}\,{\rm g\,cm^{-3}}}^{-1/2} & (\rho_g\,[{\rm g\,cm^{-3}}] < 10^{-16}) \\[2pt]
1 & (10^{-16} < \rho_g\,[{\rm g\,cm^{-3}}] < 10^{-13}) \\[2pt]
\rho_{g,10^{-13}\,{\rm g\,cm^{-3}}} & (10^{-13} < \rho_g\,[{\rm g\,cm^{-3}}] < 10^{-9}).
\end{cases} \tag{10}
\]
Here, we assume B = 0.2 n_H^{1/2} µG (Nakano et al. 2002) and flux freezing during the evolution. This is valid for a supercritical core and its isothermal collapse phase (ρ_g ≲ 10^-13 g cm^-3 or n_g ≲ 10^10 cm^-3), as we will discuss in a subsequent section.
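A minimal Python transcription of these fits (equations 9 and 10), using the same normalizations; it only reproduces the approximate formulae, not a full chemical-network calculation.

```python
def eta_ohm(rho_g, temp=10.0):
    """Approximate Ohmic resistivity of eq. (9), in cm^2/s."""
    return 1.6e13 * (rho_g / 1e-16) * (temp / 10.0)

def eta_ad(rho_g):
    """Approximate ambipolar resistivity of eq. (10), in cm^2/s.

    rho_g is the gas density in g/cm^3; the fit is quoted up to ~1e-9.
    """
    if rho_g < 1e-16:
        return 2e18 * (rho_g / 1e-16) ** -0.5
    elif rho_g < 1e-13:
        return 2e18
    else:
        return 2e18 * (rho_g / 1e-13)

# Ambipolar diffusion dominates Ohmic dissipation throughout the
# isothermal collapse phase (rho_g < ~1e-13 g/cm^3):
for rho in (1e-18, 1e-16, 1e-14, 1e-13):
    print(f"rho={rho:.0e}: eta_AD/eta_Ohm = {eta_ad(rho) / eta_ohm(rho):.1e}")
```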
Another potentially important magnetic diffusion process is turbulent (or reconnection) diffusion. On small scales, reconnection from turbulent vortices causes effective magnetic field diffusion. The effective resistivity of this process is given as (Lazarian 2005;Santos-Lima et al. 2021),
\[
\eta_{\rm turb} = L_{\rm turb}\,\delta v_{\rm turb}\,\min(1, M_A^3), \tag{11}
\]
where L_turb and δv_turb are the injection scale and the turbulent velocity at that scale, and M_A = δv_turb/v_A is the Alfvén Mach number, typically M_A ≲ 1 in cloud cores.
3.1.2. Isothermal collapse phase: magnetic diffusion timescale vs free-fall timescale
The gravitational collapse of a molecular cloud core initially proceeds isothermally. The compressional heating from gravitational contraction is immediately radiated away via dust thermal emission, and the gas keeps its temperature approximately constant at T = 10 K. In this phase, the effective polytropic index Γ_eff (defined through P_gas = K_s ρ^{Γ_eff}, with K_s a constant) is Γ_eff = 1, smaller than the critical value of 4/3 for spherical gravitational collapse, and the pressure gradient force never overtakes gravity. As observations have shown (Crutcher 2012), most cloud cores are supercritical, so magnetic pressure is also not enough to support the core against gravity. As a result, the gas contracts on the free-fall timescale.
In general, the evolution of the magnetic field is determined by the balance between the gas dynamical timescale and the magnetic diffusion timescale t_diff = L²/η, where L is the length scale of the system and η is the magnetic resistivity. In the isothermal collapse phase, the gas dynamical timescale is the free-fall timescale: 3D simulations have shown that, even in the presence of a magnetic field, once gravitational collapse begins, evolution proceeds on the free-fall timescale (e.g., Bate et al. 2014), and observationally, the lifetime of cores with n_g > 10^6 cm^-3 is about the free-fall time (Könyves et al. 2015). Therefore, by comparing the diffusion timescales of ambipolar diffusion, Ohmic dissipation, and turbulent diffusion with the free-fall time, we can understand the behavior of the magnetic field in the isothermal collapse phase.
As shown in equations (9) and (10), ambipolar diffusion dominates over Ohmic dissipation during the isothermal collapse phase (ρ_g ≲ 10^-13 g cm^-3 or n_g ≲ 10^10 cm^-3).
Thus, let us compare the free-fall timescale and the ambipolar diffusion timescale using equation (10),
\[
\frac{t_{\rm ff}}{t_{\rm AD}} = \frac{\eta_{\rm AD}\,t_{\rm ff}}{\lambda_J^2} =
\begin{cases}
2.2\times10^{-3} & (\rho_g\,[{\rm g\,cm^{-3}}] < 10^{-16}) \\[2pt]
7.1\times10^{-2}\,\rho_{g,10^{-13}\,{\rm g\,cm^{-3}}}^{1/2} & (10^{-16} < \rho_g\,[{\rm g\,cm^{-3}}] < 10^{-13}),
\end{cases} \tag{12}
\]
where t_ff = √(3π/(32 G ρ_g)) and we take the length scale to be the Jeans length, L = λ_J = √(π c_s²/(G ρ_g)). This estimate shows that t_ff < 0.1 t_AD even at the density where ambipolar diffusion is most efficient in the isothermal collapse phase, i.e., the free-fall timescale is much shorter than the ambipolar diffusion timescale. More quantitatively, diffusing the magnetic field over a spatial scale of the Jeans length, λ_J = 8.7×10² ρ_{g,10^-16 g cm^-3}^{-1/2} au, takes about 3.4×10^11 ρ_{g,10^-16 g cm^-3}^{-2} yr with Ohmic dissipation and 2.7×10^6 ρ_{g,10^-16 g cm^-3}^{-1} yr with ambipolar diffusion (we assume ρ_g > 10^-16 g cm^-3 here). This estimate is consistent with previous studies with more realistic η_Ohm and η_AD (Nakano and Umebayashi 1986; Umebayashi and Nakano 1990). Hence, this estimate shows that ambipolar diffusion (as well as Ohmic dissipation) may not play a role during the isothermal collapse phase.
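The numbers in equation (12) are easy to verify. The sketch below redoes the estimate, assuming a 10 K isothermal sound speed of 1.9×10^4 cm/s (so the prefactors agree with the quoted ones to within tens of percent, depending on the exact sound speed adopted) and restating the eta_ad() fit from the previous sketch for self-containedness.

```python
import numpy as np

G = 6.674e-8    # cm^3 g^-1 s^-2
C_S = 1.9e4     # isothermal sound speed at 10 K, cm/s (assumed)
YR = 3.156e7    # s

def eta_ad(rho):
    """Piecewise ambipolar resistivity of eq. (10), cm^2/s."""
    if rho < 1e-16:
        return 2e18 * (rho / 1e-16) ** -0.5
    return 2e18 if rho < 1e-13 else 2e18 * (rho / 1e-13)

def t_ff(rho):
    """Free-fall time sqrt(3*pi / (32*G*rho)), in s."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def jeans_length(rho):
    """Jeans length sqrt(pi * c_s^2 / (G*rho)), in cm."""
    return np.sqrt(np.pi * C_S**2 / (G * rho))

# t_ff / t_AD over a Jeans length, cf. eq. (12):
for rho in (1e-17, 1e-14, 1e-13):
    ratio = eta_ad(rho) * t_ff(rho) / jeans_length(rho) ** 2
    print(f"rho={rho:.0e}: t_ff/t_AD = {ratio:.1e}")   # ~2.5e-3 ... 7.8e-2

# Ambipolar diffusion time over a Jeans length at rho = 1e-16 g/cm^3:
print(jeans_length(1e-16) ** 2 / eta_ad(1e-16) / YR)   # ~2.7e6 yr
```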
The estimate is also consistent with three-dimensional simulations. Tsukamoto et al. (2015b) and Masson et al. (2016) compared the magnetic field evolution between ideal and non-ideal MHD simulations and showed that, in the isothermal collapse phase, the magnetic field evolution is identical in the two cases, confirming that, even with Ohmic dissipation and ambipolar diffusion, the gas and magnetic field are completely coupled during the isothermal collapse phase.
Note that equation (10), and hence equation (12), assumes flux freezing. This assumption is justified and our discussion is self-consistent because we conclude that ambipolar diffusion does not operate during the isothermal collapse phase.
Can turbulent (or reconnection) diffusion, then, play a role during the isothermal collapse phase? The following inequality holds for the turbulent resistivity,
\[
\eta_{\rm turb} = L_{\rm turb}\,\delta v_{\rm turb}\,\min(1, M_A^3) \le L_{\rm turb}\,\delta v_{\rm turb}. \tag{13}
\]
Here, we assume that the injection length is of the order of the Jeans length, L_turb ∼ λ_J. As shown in Ward-Thompson et al. (2007), the turbulent velocity is subsonic in cloud cores. Thus, we have the inequality δv_turb < c_s, and the ratio of the free-fall timescale to the magnetic diffusion timescale due to turbulent reconnection diffusion satisfies
\[
\frac{t_{\rm ff}}{t_{\rm turb}} = \frac{\eta_{\rm turb}\,t_{\rm ff}}{\lambda_J^2} \le \frac{\delta v_{\rm turb}\,t_{\rm ff}}{\lambda_J} \le \frac{c_s\,t_{\rm ff}}{\lambda_J} \sim \frac{t_{\rm ff}}{t_{\rm sound}} < 1, \tag{14}
\]
where t_sound = λ_J/c_s is the sound-crossing timescale. The last inequality follows from the fact that the sound-crossing time is longer than the free-fall time in a gravitationally collapsing core; indeed, for a Jeans-length region the ratio is density-independent, t_ff/t_sound = √(3/32) ≈ 0.31. Hence, turbulent reconnection diffusion also may not play a role in the isothermal collapse phase. Note, however, that since supersonic turbulence exists in massive cores, turbulent diffusion could be an important effect there.
In summary, no magnetic diffusion process is expected to play a role during the isothermal collapse phase, which is consistent with previous studies (Nakano and Umebayashi 1986; Umebayashi and Nakano 1990; Tomida et al. 2015; Masson et al. 2016). The estimates made in this subsection show that the magnetic field structure in the cloud core and envelope reflects the dynamics of the gas. Therefore, it should be possible to study the dynamics of the gas from the structure of the magnetic field, and vice versa.
3.1.3. Formation of hydrostatic first core and magnetic diffusion
As gravitational collapse proceeds and the central density increases, drastic changes in the gas thermal evolution and magnetic diffusion occur. Once the density reaches ρ_g ∼ 10^-13 g cm^-3 (or n_g ∼ 10^10 cm^-3), the gas becomes optically thick to the dust thermal emission, and compressional heating overtakes radiative cooling. The effective polytropic index increases to Γ_eff = 5/3, larger than the critical value of 4/3, and the pressure gradient force begins to balance the gravitational force. As a result, the pressure-supported first hydrostatic core (or first core) forms (Larson 1969; Masunaga et al. 1998; Commerçon et al. 2011; Commerçon et al. 2012; Tomida et al. 2013; Vaytet and Haugbølle 2017; Bhandare et al. 2018), whose size and mass are determined by the Jeans length, ∼ 1 to 10 au, and Jeans mass, ∼ 10^-2 to 10^-1 M_⊙ (Larson 1969; Masunaga et al. 1998).
In the first core, a dramatic increase in resistivity occurs, which is crucial for the magnetic field evolution. As shown in Figure 6 (and equation 10), the ambipolar resistivity begins to increase around a density of ρ_g ≈ 10^-13 g cm^-3 (or n_g ≈ 10^10 cm^-3), as ions and electrons are adsorbed onto dust grains with increasing density. Hence, in the first core,
\[
\frac{t_{\rm ff}}{t_{\rm AD}} \sim 2.2\,\rho_{g,10^{-12}\,{\rm g\,cm^{-3}}}^{3/2},
\]
and the diffusion timescale becomes shorter than the free-fall timescale at ρ_g ≳ 10^-12 g cm^-3 (or n_g ≳ 10^11 cm^-3). Furthermore, the gas is supported by the pressure gradient force, and the first core has a much longer lifetime than the free-fall time. Thus, the magnetic diffusion timescale becomes significantly shorter than the lifetime of the first core; its magnetic field weakens and its mass-to-flux ratio increases, mainly due to ambipolar diffusion (η_AD is still larger than η_Ohm at ρ_g ≲ 10^-10 g cm^-3 or n_g ≲ 10^13 cm^-3; Tomida et al. 2015; Tsukamoto et al. 2015b; Masson et al. 2016; Tsukamoto et al. 2017b). Simultaneously, decoupling between the gas and magnetic field in the first core suppresses angular momentum removal by magnetic tension and allows the first core to retain its rotation, which is essential for the formation of a circumstellar disk at the protostar formation epoch (Machida and Matsumoto 2011; Dapp et al. 2012; Tsukamoto et al. 2015b; Wurster et al. 2018b; Vaytet et al. 2018). Figure 7 shows the density and plasma β (= P_gas/P_mag, where P_gas and P_mag are the gas and magnetic pressures) maps of the first core formed in an ideal MHD simulation and in a non-ideal MHD simulation (with Ohmic dissipation and ambipolar diffusion). We can see a striking difference in the plasma β of the first core between the two simulations. Without magnetic diffusion, the plasma β inside the first core is β ∼ 10, and a strong magnetic field is retained in the first core. On the other hand, the plasma β at the center is β ∼ 10^4 in the non-ideal MHD simulation. This significant difference induced by magnetic diffusion significantly alters the fate of the first core and the formation process of the protostar and circumstellar disk.
Recent simulations also show that the magnetic field saturates in the first core at a universal value of B ∼ 0.1 G (Masson et al. 2016; Tsukamoto et al. 2017b; Hennebelle et al. 2020; Xu and Kunz 2021). Figure 8 compares the magnetic field strength of the first core in ideal and non-ideal MHD simulations (the latter with ambipolar diffusion). Ambipolar diffusion becomes stronger as magnetic flux is brought into the first core, which provides negative feedback on the magnetic field amplification. As a result, the magnetic field saturates at the specific value of B ∼ 0.1 G.
3.1.4. Second collapse and birth of protostar and circumstellar disk
The seeds of the protostar and jet begin to sprout in the inner part of the first core as it evolves. The first core evolves adiabatically and its central temperature increases as T ∝ ρ^(Γ_eff − 1) (Larson 1969; Masunaga et al. 1998; Vaytet et al. 2012; Tomida et al. 2015). When the central temperature reaches T ∼ 10^3 K at ρ_g ∼ 10^-8 g cm^-3 (or n_g ∼ 10^15 cm^-3), the magnetic field and gas recouple due to thermal ionization of potassium and return to the ideal MHD regime, but with a weak magnetic field (plasma β ≫ 1) because of the magnetic diffusion that occurred at the first-core scale. Subsequently, molecular hydrogen begins to dissociate, an endothermic reaction, at ∼ 2×10^3 K, decreasing the effective polytropic index to Γ_eff ∼ 1.1, smaller than the critical value. As a result, gravitational collapse resumes within the first core; this is called the "second collapse". The second collapse lasts until the density reaches ρ_g ∼ 10^-2 g cm^-3 (or n_g ∼ 10^21 cm^-3) and molecular hydrogen has wholly dissociated. Then the effective polytropic index becomes Γ_eff ∼ 5/3, and the pressure-supported (second) core, or protostar, forms, whose mass is determined by the Jeans mass, ∼ 10^-3 to 10^-2 M_⊙ (Larson 1969; Masunaga and Inutsuka 2000; Vaytet and Haugbølle 2017). This is the birth of a protostar.
The magnetic field amplification during the second collapse determines the magnetic field strength of the newly born protostar. Due to the conservation of magnetic flux, the magnetic field in the ideal MHD regime evolves as B ∝ R^-2. Thus, during the second collapse, the central part of the first core (∼ 1 au) collapses to the protostellar size of ∼ 0.01 au, and the magnetic field strength increases from the saturation value in the first core, |B| ∼ 10^-1 G, to ∼ 10^3 G, which is consistent with protostellar observations (Johns-Krull et al. 2009). On the other hand, when ideal MHD simulations of the collapsing cloud core are conducted, the magnetic field strength of the central protostar is more than ten times larger than in the non-ideal MHD simulations (Tsukamoto et al. 2015b; Vaytet et al. 2018; Wurster et al. 2018a; Machida and Basu 2019). Therefore, Ohmic dissipation and ambipolar diffusion are the key to solving the classical "magnetic flux problem" (Mouschovias et al. 1985) in star formation.
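For reference, the quoted amplification follows immediately from the flux-freezing scaling above (the subscripts fc and ps, for first core and protostar, are our shorthand):
\[
B_{\rm ps} \sim B_{\rm fc}\left(\frac{R_{\rm fc}}{R_{\rm ps}}\right)^{2} \sim 0.1\ {\rm G}\times\left(\frac{1\ {\rm au}}{0.01\ {\rm au}}\right)^{2} = 10^{3}\ {\rm G}.
\]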
During the second collapse, most of the gas in the first core is distributed around the protostar with angular momentum sufficient for the centrifugal force to balance gravity. Therefore, most of the gas in the first core quickly transforms into a rotationally supported disk. This is the birth of the circumstellar disk. Figure 9 shows this transformation. The left panel shows a bird's-eye view of the forming protoplanetary disk (red region) inside the first core (orange region) shortly after the formation of the protostar. The right panel shows the infall velocity inside the first core shortly after protostar formation. In the ideal MHD simulation (red line), gas is directly accreted onto the protostar (∼ 0.01 au), whereas in the non-ideal MHD simulation, gas accretion stops at 1 au, indicating that the centrifugal force balances gravity at ∼ 1 au and the protoplanetary disk is forming. The size of the newly born circumstellar disk is approximately 1 to 10 au, which corresponds to the size of the first core.
Fig. 9. - The left panel shows the first core (orange iso-density surface) and inner disc (red iso-density surface) at t = 1.7 yr after the formation of the protostar (from Machida and Matsumoto 2011). The density distribution on the x = 0, y = 0 and z = 0 planes is projected onto each wall surface. The velocity vectors on the z = 0 plane are also projected onto the bottom wall surface. The right panel shows the radial velocity just after the formation of the protostar (from Tsukamoto et al. 2015b). The red, orange, and blue lines show the radial velocity in ideal MHD, non-ideal MHD with Ohmic dissipation, and non-ideal MHD with Ohmic dissipation and ambipolar diffusion, respectively.
3.1.5. Outflow/jet launching from the newly born disk and protostar
The formation process of protostars and circumstellar disks described in the previous section is also closely related to the formation of outflows and jets. Protostellar outflows and jets have been ubiquitously observed in star-forming regions. Wide-angle, slow flows are called 'molecular outflows' or 'low-velocity outflows' in molecular line emission. On the other hand, well-collimated, high-velocity flows are called 'collimated jets' or 'optical jets' and are observed in molecular and atomic line emission from radio to optical wavelengths. Hereafter, we refer to the former as 'the outflow' and the latter as 'the jet.' Magnetically driven wind scenarios have been proposed as the driving mechanism of these flows (e.g., Pudritz and Norman 1986). These winds have been investigated in past theoretical and simulation studies, in which the outflow and jet are magnetocentrifugally driven (Blandford and Payne 1982) or driven by the magnetic pressure gradient force (Uchida and Shibata 1985). However, in these past studies, the settings for driving the outflow and jet were highly idealized: considerably idealized (or even unrealistic) circumstellar disk and magnetic field configurations were set up for the emergence of winds. For example, only poloidal fields (or dipole magnetic fields) were assumed within a thin disk as the initial state. The problem is that such idealized conditions (artificial distributions of density or magnetic field) are not actually realized in the star formation process.
Tomisaka (1998, 2000, 2002) was the first to successfully reproduce the outflow in star formation simulations, using a two-dimensional ideal MHD code. These studies simulated the gravitational collapse of star-forming cores, spatially resolving both the prestellar core and the protostar. In these simulations, the outflow and jet naturally appear in gravitationally collapsing clouds. The driving of the outflow and jet has since been comprehensively investigated in many star-formation simulations (Machida et al. 2004, 2006; Banerjee and Pudritz 2006; Machida et al. 2008; Hennebelle and Fromang 2008; Ciardi and Hennebelle 2010; Bate 2011; Price et al. 2012; Joos et al. 2012; Seifried et al. 2012b; Tomida et al. 2013; Bate et al. 2014; Lewis and Bate 2017; Kölligan and Kuiper 2018; Wurster et al. 2018b; Xu and Kunz 2021). Through these studies, the driving mechanisms of the outflow and jet in the star formation process have been established.
In a first core, the gravitational collapse slows and the collapse timescale becomes longer than the rotation timescale. Thus, magnetic field lines are twisted around the first core, and the low-velocity outflow emerges due to the magnetocentrifugal force before the formation of the protostar. The outflow driven by the first core continues to be produced by the rotationally supported disk after the protostar has formed.
During the main accretion phase, the disk gas intermittently falls onto the protostar due to episodic accretion, inducing episodic mass ejection, i.e., a time-variable outflow (and jet) (Machida and Basu 2019). The outflow physical parameters, such as mass, momentum, momentum flux, kinetic energy, and kinetic luminosity, are in good agreement with observations, suggesting that the molecular outflow is directly driven by the disk (e.g., Machida and Hosokawa 2013). A wire-like, relatively strong, radially distributed magnetic field guides the outflow, which is mainly driven by the magnetocentrifugal mechanism and creates a flow with a wide opening angle.
On the other hand, a weak magnetic field around the protostar, due to Ohmic dissipation and ambipolar diffusion, is passively twisted by the rotation of the protostar and disk inner edge and generates a strong magnetic pressure gradient force to drive the jet. Figure 10 illustrates this evolution. The left panel shows the magnetic field configuration around the protostar just after its formation in the ideal MHD simulation, in which an hourglass-shaped magnetic field is realized. This indicates that the strong magnetic field suppresses the gas rotation and that the gas motion cannot twist the magnetic field. Conversely, the weak magnetic field resulting from magnetic diffusion is easily twisted during the second collapse. The middle panel shows the magnetic field configuration in the non-ideal MHD simulation, where the magnetic field is passively twisted by the rotation. This passive twisting of the magnetic field drives the jets from the vicinity of the protostar, as shown in the right panel. Therefore, magnetic diffusion also plays an essential role in driving the protostellar jet. The mass-loss rate of the jet obtained from the 3D simulation is ∼ 10⁻⁵ M_⊙ yr⁻¹ (Machida and Basu 2019), in agreement with the value observed in a protostar, ∼ 3 × 10⁻⁶ M_⊙ yr⁻¹ (Lee 2020).
Lastly, we comment on the classical scenario for driving the outflow. It had long been considered that the low-velocity outflow is entrained by the high-velocity (primary) jet (the so-called entrainment scenario). In this scenario, the jet appears near the protostar and provides linear momentum to the ambient gas through some mechanism (Arce et al. 2007). The ambient gas entrained by the jet is then observed as the low-velocity outflow. It had been considered that the outflow driving mechanism would be unveiled if the outflow rotation could be measured; recently, observations of outflow rotation have been able to resolve this long-standing debate on what drives the outflow. In the entrainment scenario, the primary jet, which provides both linear and angular momentum to the outflow (or the entrained gas), has only a small amount of angular momentum, because the jet driving region is located near the protostar. Thus, the outflowing gas is expected to receive a very small amount of angular momentum from the primary jet. On the other hand, in the direct disk-driven scenario proposed from the core collapse simulations, the outflow should have a large amount of angular momentum, because the outflow driving region is much farther from the protostar, where the rotation velocity and angular momentum are large (Lee et al. 2021a).
Evolution of protoplanetary disks from Class 0 to the end of the Class I phase in YSOs
At the time of PP VI, many researchers were seriously considering the possibility that highly efficient magnetic braking completely suppresses disk formation in the early phase of protostellar evolution (the so-called magnetic braking catastrophe, MBC; see the PP VI review by Li et al. 2014b). The MBC is, in a strict sense, resolved by the scenario described in §3.1.4, because a circumstellar disk with a size of ∼ 1 au forms at the formation epoch of the protostar (note that a disk with a size of ∼ 1 au is out of the scope of the MBC studies of Mellon and Li 2008, 2009 and Li et al. 2011, because the inner boundary is at r ∼ 6 au in their simulations).
However, magnetic braking is theoretically estimated to be very efficient, and the discovery of large disks with sizes of several tens to 100 au in the embedded phases by recent observational surveys (see §2.5.1) forces researchers to revisit the disk size evolution more carefully.
In this subsection, we review the physical mechanisms which weaken the magnetic braking and may play key roles in determining the disk size evolution during the early phases of star formation.
Misalignment between magnetic field and rotation vector
In many theoretical studies, it is assumed for simplicity that the angular momentum of the parent cloud core is aligned with its magnetic field. However, in real molecular cloud cores, the magnetic field and rotation vector are often observed to be misaligned (Hull et al. 2013, 2014; Yen et al. 2021b). Hennebelle and Ciardi (2009) first reported that a rotationally supported disk forms more easily in cores with misalignment between the magnetic field and the angular momentum vector. More quantitatively, Joos et al. (2012) showed that the mean specific angular momentum of the central region in a perpendicular core (in which the initial magnetic field and angular momentum are perpendicular) is about two times larger than that in a parallel core (in which the initial magnetic field and angular momentum are parallel). The influence of misalignment was also investigated by Li et al. (2013). The authors obtained results consistent with Hennebelle and Ciardi (2009), and concluded that disk formation becomes possible when µ ≳ 4, where µ is the mass-to-flux ratio of the core normalized by its critical value.
By examining the angular momentum evolution of fluid elements during the cloud core collapse, Tsukamoto et al. (2018) elucidated three mechanisms by which misalignment affects the angular momentum evolution (and ultimately the size) of the disk:
1. the selective accretion of fluid elements with large (small) angular momentum to the central perpendicular (parallel) core;
2. magnetic braking in the isothermal-collapse phase;
3. magnetic braking in the disk;
The selective accretion is a process in which the magnetic tension suppresses mass accretion from the direction perpendicular to the magnetic field and selectively enhances mass accretion parallel to the field. As a result, fluid elements with larger (smaller) angular momentum accrete onto the central region in a perpendicular (parallel) core. This effect may play a role until the envelope disappears. Thus, on timescales shorter than the envelope dissipation timescale, this effect can lead to a misinterpretation of the impact of misalignment on the magnetic braking efficiency, making the magnetic braking appear stronger (weaker) in parallel (perpendicular) cores (in reality, it is just that the angular momentum of the accreted gas is smaller (larger)).

Tsukamoto et al. (2018) showed that the magnetic braking during the isothermal collapse phase is stronger in perpendicular cores than in parallel cores. This result is opposite to that expected from previous studies such as Hennebelle and Ciardi (2009) and Joos et al. (2012), but is consistent with the classical discussion of magnetic braking by Mouschovias (1985) and with simulation results for the prestellar collapse phase (Matsumoto and Tomisaka 2004). The inconsistency between the trend of magnetic braking in the isothermal collapse phase and the results of Hennebelle and Ciardi (2009) and Joos et al. (2012), which investigated the disk evolution after protostar formation, may suggest that magnetic braking during the prestellar collapse phase (or in the envelope) is not the dominant mechanism determining the evolution of the disk size; otherwise, the disk sizes should be reversed between the parallel and perpendicular cores.
Of the three mechanisms, magnetic braking in the disk seems to play the dominant role in disk size evolution during the main accretion phase, because the disk is supported by the centrifugal force and the magnetic field therefore has much longer than the local free-fall time to extract the angular momentum of the gas. Furthermore, because the magnetic field fans out around the central region and has an hourglass-like geometry, the magnetic braking in a disk with a parallel magnetic field can be enhanced (Mouschovias 1985; Hennebelle and Ciardi 2009; Tsukamoto et al. 2018). Note also that Li et al. (2013) (see also Hirano et al. 2020) point out that angular momentum removal by disk outflows also plays an important role in the parallel core, which further favors the growth of the disk in the misaligned core, in which outflow formation is suppressed (e.g., Joos et al. 2012; Li et al. 2013; Wurster et al. 2016; Tsukamoto et al. 2017b). Note, however, that Marchand et al. (2020) demonstrated that magnetic braking is primary and outflows are secondary in angular momentum transport.
In summary, because mechanisms 1 and 3 work effectively during the (early) main accretion phase, misalignment seems to promote disk growth. On the other hand, in the prestellar collapse phase, the presence of misalignment seems to enhance the angular momentum removal from the central region due to mechanism 2. Note, however, that it has been pointed out that the impact of misalignment is not so prominent once ambipolar diffusion is considered (Masson et al. 2016), probably because of the suppressed magnetic braking inside the disk. To observationally confirm that misalignment plays a significant role in disk evolution, it is important to investigate a correlation between outflow activity (rather than direction), magnetic field orientation, and disk size. Several studies suggest that misalignment suppresses outflow formation, and that there is a correlation among the disk size, outflow activity, and misalignment (Li et al. 2013; Hirano et al. 2020). If such a correlation is observed, it provides evidence that misalignment affects disk evolution.

Turbulence

Santos-Lima et al. (2012) suggested that turbulence in the cloud core weakens magnetic braking. They compared the simulation results for a coherently rotating core and a turbulent core and found that a rotationally supported disk is formed only in the turbulent cloud core. Similar results were obtained by Seifried et al. (2012a, 2013). Santos-Lima et al. (2012) suggested that random motions due to turbulence cause small-scale magnetic reconnection events and provide an effective magnetic diffusion that efficiently removes magnetic flux, and disks with a size of r ∼ 100 au are formed even in the ideal MHD limit. However, their results were obtained with a relatively coarse numerical resolution of r ≳ 1 au. With such coarse resolutions in ideal MHD, numerical magnetic diffusion can promote the magnetic flux loss and can artificially suppress the magnetic braking. Joos et al. (2013) investigated the impact of turbulence with a higher numerical resolution of ∼ 0.4 au. Their results suggest that the impact of turbulence is limited. For example, they compared the mass-to-flux ratio of the central region (e.g., r ∼ 100 au) between cores with subsonic turbulence and without turbulence, with a realistic magnetic field strength of µ ≤ 5. Their results show that the difference is only about a factor of two and is not so significant. Furthermore, they showed that the mass of the disk decreases as the numerical resolution increases. In other words, they point out that the problem of numerical convergence cannot be resolved even with resolutions below 1 au in the case of ideal MHD simulations with turbulence.
Furthermore, it is pointed out that the impact of turbulence becomes less important once the non-ideal MHD effect is considered (Lam et al. 2019;Wurster and Lewis 2020). These results, as well as the analytic estimate of turbulent (reconnection) diffusion rate in §3.1.2, suggest that the turbulent diffusion may not play a significant role for the disk size evolution.
Ohmic dissipation and ambipolar diffusion
So far, we have examined the mechanisms that weaken magnetic braking in the ideal MHD limit. However, the ideal MHD approximation is not valid in real YSOs because of their low ionization degree. Non-ideal MHD effects can decouple the magnetic field and gas, and can decrease the magnetic braking efficiency in the disk. Thus, non-ideal effects may affect the size evolution of circumstellar disks. In this section, we review the influence of Ohmic dissipation and ambipolar diffusion on early disk evolution.
In the context of disk size evolution, Ohmic dissipation becomes effective in the dense inner region of the disk at ρ_g > 10⁻¹⁰ g cm⁻³ (or n_g > 10¹³ cm⁻³). Machida et al. (2011) used 3D simulations with Ohmic dissipation and found that the disk size is r ≲ 10 au at t ≲ 10⁴ yr after the formation of the protostar (i.e., in the Class 0 phase, where the envelope is very massive). This result suggests that it is still difficult to form disks with sizes of several tens of au in the Class 0 phase by Ohmic dissipation alone.
Ambipolar diffusion is a more efficient magnetic diffusion process in the disk and envelope. It has two important properties. One is that the ambipolar resistivity η_AD is much larger than the Ohmic resistivity η_Ohm in almost the entire region of the disk. The ambipolar resistivity becomes an increasing function of density at ρ_g ≳ 10⁻¹² g cm⁻³ (or n_g ≳ 10¹¹ cm⁻³) and dominates the Ohmic resistivity at ρ_g ≲ 10⁻⁹ g cm⁻³ (or n_g ≲ 10¹⁴ cm⁻³) (e.g., Tomida et al. 2015; Tsukamoto et al. 2015b; Marchand et al. 2016; Zhao et al. 2018b; see also Figure 6 and equations (9) and (10)). 3D simulations have shown that ambipolar diffusion allows the formation of a relatively extended disk with a size of r ≳ 10 au even in the early disk evolution phase (t ≲ 10⁴ yr after protostar formation; Masson et al. 2016; Zhao et al. 2018a; Tsukamoto et al. 2020). They also showed that the disk is massive and that spiral arms induced by gravitational instability form despite a relatively strong magnetic field.
Since ambipolar diffusion is also effective in the envelope, the magnetic flux brought into the central region by accretion is distributed around the disk. It has been proposed that the accretion towards the magnetic flux tube results in the formation of a shock (the ambipolar diffusion shock; Li and McKee 1996; Ciolek and Königl 1998).
The resistivities (η_Ohm, η_AD, and η_Hall) strongly depend on the dust properties as well as on the cosmic-ray (CR) ionization rate, and any change of these parameters is expected to affect the disk size evolution (Zhao et al. 2016; Wurster et al. 2018c; Dzyurkevich et al. 2017; Koga et al. 2019a; Guillet et al. 2020a). The CR ionization rate (ζ_CR) generally gauges the overall abundances of charged species. The typical level of ζ_CR in dense cores (inferred from chemical analysis) ranges from a few times 10⁻¹⁸ s⁻¹ to a few times 10⁻¹⁶ s⁻¹ (Caselli et al. 1998; Padovani et al. 2009; Ivlev et al. 2019), with ζ_CR ≈ 10⁻¹⁷ s⁻¹ as the "standard" rate. However, the current methods for constraining ζ_CR rely on the measured abundance of molecular ions such as HCO⁺, DCO⁺ and N₂H⁺ (Caselli et al. 1998; van der Tak and van Dishoeck 2000; Doty et al. 2002), which can be largely uncertain due to the freeze-out of molecules in cloud cores. Nevertheless, recent non-ideal MHD simulations show that disk formation is possible for such a range of ζ_CR. If only ambipolar diffusion and Ohmic dissipation are considered, a slightly higher CR ionization rate (ζ_CR ≳ a few times 10⁻¹⁷ s⁻¹) than the standard rate at core scales can suppress the formation of disks with sizes of several tens of au (Zhao et al. 2016; Kuffmeier et al. 2020).
Dust plays an important role both in determining the abundance of ions and electrons and in determining the conductivity. Dust adsorbs charged particles and thereby determines the ionization degree of the gas phase. On the other hand, a large population of small charged grains (≲ 100 Å) with a Hall parameter close to unity can determine the conductivities (Li 1999; Padovani et al. 2014; Zhao et al. 2016; Dzyurkevich et al. 2017), and decrease the ambipolar and Hall resistivities at envelope and disk densities (Zhao et al. 2018b; Koga et al. 2019b; Marchand et al. 2020). The main reason for the suppression of disk formation in early non-ideal MHD work (Mellon and Li 2009; Li et al. 2011) is largely attributed to the use of ionization chemistry assuming the presence of a large number of small charged grains. As shown in analytical work, the smallest grains are rapidly depleted in cold dense environments (Ossenkopf 1993; Hirashita 2012; Köhler et al. 2012; Guillet et al. 2020a; Silsbee et al. 2020), which is supported by the non-detection of spinning dust grain emission (produced by small dust grains of ≲ 100 Å) in recent Galactic cold core surveys (Tibbs et al. 2016). Therefore, given the observational and chemical constraints on the microphysical parameters ζ_CR and the minimum grain size in cloud cores, the formation of circumstellar disks in low-mass protostellar cores should be relatively universal.
Although the microscopic physics that determines the magnetic diffusion rate, and hence the disk evolution, looks complicated, there is a simple essence that determines the disk size evolution under a strong magnetic field with magnetic diffusion: the balance of coupling and decoupling between the magnetic field and the gas in the disk. Hennebelle et al. (2016) showed that ambipolar diffusion limits the disk size to a few tens of au during the Class 0 phase, by assuming that the generation timescale of the toroidal magnetic field equals the ambipolar diffusion timescale and that the magnetic braking timescale equals the rotation timescale. This analytic estimate reproduces well the disk sizes found in many simulations of disk evolution during the Class 0 phase (Hennebelle et al. 2020; Mignon-Risse et al. 2021a).
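The flavor of this timescale balance can be conveyed with a short numerical sketch. The script below is not the Hennebelle et al. (2016) derivation itself: it simply compares the ambipolar diffusion time t_AD = r²/η_AD with the local rotation period 2π/Ω(r), for an assumed, representative diffusivity and central mass, and finds the radius where they cross (inside of which diffusion can keep up with the winding of the field):

```python
import math

# Toy version of the coupling/decoupling balance: compare the ambipolar
# diffusion time t_AD = r^2 / eta_AD with the rotation period 2*pi/Omega(r)
# and find the crossing radius. eta_AD and M are assumed, representative
# values; this is an illustration, not the Hennebelle et al. (2016) result.

G = 6.674e-8            # cm^3 g^-1 s^-2
M_sun = 1.989e33        # g
au = 1.496e13           # cm
yr = 3.156e7            # s

eta_AD = 1e19           # cm^2 s^-1, assumed ambipolar diffusivity
M = 0.1 * M_sun         # assumed central (star + disk) mass

t_AD = lambda r: r**2 / eta_AD
t_rot = lambda r: 2 * math.pi * math.sqrt(r**3 / (G * M))

# Crossing: r^2/eta = 2*pi*sqrt(r^3/(G*M))  =>  r = (2*pi*eta)^2 / (G*M)
r_eq = (2 * math.pi * eta_AD) ** 2 / (G * M)
print(f"t_AD = t_rot at r ~ {r_eq/au:.0f} au "      # ~20 au for these inputs
      f"(timescale ~ {t_AD(r_eq)/yr:.0f} yr)")
```

The crossing lands at a few tens of au for these inputs; the full derivation additionally balances toroidal field generation against diffusion, which gives the few-tens-of-au disk sizes quoted above.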
The microscopic origin of ambipolar diffusion is ion-neutral drift. In other words, if ambipolar diffusion works effectively, a velocity difference between ions (e.g., H₃⁺, HCO⁺; Tassis et al. 2012; Zhao et al. 2018b) and neutral gas (e.g., CO) is expected to occur. Thus, observations of ion-neutral drift are essential to test the importance of ambipolar diffusion. Theoretical studies have suggested that the velocity difference is possibly observable (Zhao et al. 2018a; Tsukamoto et al. 2020; Marchand et al. 2020) in the disk and inner envelope (i.e., at 100 to 1000 au scales) of late Class 0 to Class I YSOs. Recent attempts to observe ion-neutral drift have resulted in a non-detection for B335 (Yen et al. 2018), possibly because it is too young (as shown in Tsukamoto et al. 2020, ion-neutral drift is more likely observable in more evolved objects) or because it is highly ionized. Future observational attempts to detect ion-neutral drift are extremely important.
Hall effect
Unlike other non-ideal effects, which are just magnetic diffusion, the Hall effect has a unique feature. The Hall effect can actively induce a clockwise rotation (when η Hall < 0) around a magnetic field by generating a toroidal magnetic field from the poloidal magnetic field (Wardle and Ng 1999;Braiding and Wardle 2012a,b).
During gravitational collapse, the magnetic field is dragged toward the center and an hourglass-shaped magnetic field structure is formed. At the "neck" of the hourglass-shaped magnetic field, a toroidal current exists. The Hall effect then drags the magnetic field in the azimuthal direction, as if the gas rotated with the velocity v_H = −η_Hall (∇ × B)/B (see equation (5)). The generated toroidal magnetic field exerts a toroidal magnetic tension on the gas and induces rotation. The combination of the rotation induced by the Hall effect and the inherent rotation of the cloud core sometimes assists the disk size growth, and causes various interesting velocity structures in YSOs. Krasnopolsky et al. (2011) used two-dimensional simulations to investigate the impact of the Hall effect in the context of disk formation for the first time. They focused on the dynamical behavior induced by the Hall effect and showed that a circumstellar disk with a size of r ∼ 10 au can be formed with a sufficiently high η_Hall. Another finding of Krasnopolsky et al. (2011) is the formation of a counter-rotating envelope, rotating opposite to the disk. As a back-reaction to the Hall-induced rotation in the midplane, angular momentum conservation causes the upper layer to rotate in the direction opposite to the induced rotation. As a result, a counter-rotating (upper) envelope emerges. Li et al. (2011) investigated the Hall effect in two-dimensional simulations that included all the non-ideal MHD effects using realistic magnetic diffusion coefficients. They confirmed that Hall-induced rotation occurs even when the other non-ideal effects are considered. They also showed the formation of a region counter-rotating with respect to the initial rotation in the upper part of the envelope at ∼ 100 au. Tsukamoto et al. (2015a) conducted three-dimensional simulations of disk formation which included all the non-ideal effects as well as radiative transfer for the first time. They followed the evolution until shortly after the formation of the protostar. They showed that a disk ∼ 20 au in size is formed at the protostellar formation epoch when the magnetic field and the initial rotation vector of the core are anti-parallel (in this case, the Hall-induced rotation and the initial rotation have the same direction). On the other hand, a disk ≲ 1 au in size forms in the parallel configuration (in which the Hall-induced rotation and the initial rotation have opposite directions). Because the Hall effect strengthens or weakens magnetic braking depending on whether the angle between the magnetic field and the angular momentum of the cloud core is acute or obtuse, they suggest that the disk size could be bimodal in the Class 0 phase when the Hall effect operates.
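An order-of-magnitude sketch of this induced rotation uses v_H ∼ η_Hall/L for a field curvature scale L; the diffusivity, length scale and central mass below are assumed, illustrative values, not taken from the simulations cited here:

```python
import math

# Order-of-magnitude sketch of the Hall-induced azimuthal drift,
# v_H = -eta_Hall (curl B)/B  ~  eta_Hall / L  for field curvature scale L.
# eta_Hall, L, and M are assumed, representative values.

G = 6.674e-8            # cm^3 g^-1 s^-2
M_sun = 1.989e33        # g
au = 1.496e13           # cm

eta_Hall = 1e18         # cm^2 s^-1, assumed Hall diffusivity near the "neck"
L = 10 * au             # cm, assumed curvature scale of the hourglass field
M = 0.1 * M_sun         # assumed central mass

v_H = eta_Hall / L                      # induced rotation speed estimate
v_K = math.sqrt(G * M / L)              # Keplerian speed at the same radius

print(f"v_H ~ {v_H/1e5:.2f} km/s, v_K ~ {v_K/1e5:.1f} km/s")
# Even a modest v_H, acting over many free-fall times, can add or remove a
# non-negligible fraction of the angular momentum, hence the bimodality.
```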
More recently, Wurster et al. (2016) investigated the impact of the Hall effect several thousand years after the formation of the protostar with three-dimensional simulations. They also confirmed that the disk size can be bimodal. Marchand et al. (2018, 2019) also reproduced the evolution induced by the Hall effect described above. On the other hand, they pointed out that the Hall effect is difficult to handle numerically, and that attention must be paid to whether the conservation of angular momentum is satisfied in the implementation. Tsukamoto et al. (2017b) investigated the impact of the Hall effect in misaligned cores. They showed that, in a core with an acute-angle misalignment between the magnetic field and the angular momentum, the disk size growth is suppressed, while in a core with an obtuse angle, the disk size growth is promoted. Zhao et al. (2020, 2021) investigated the impact of dust models on the Hall effect and the resulting evolution of the disk. When the Hall effect is included, the upper limit of ζ_CR for disk formation increases to a few times 10⁻¹⁶ s⁻¹, matching the upper range of the ζ_CR estimated in cloud cores. They also showed that the Hall effect can flip the rotation direction at ∼ 100 au from the initial forward rotation to counter-rotation, producing a counter-rotating disk. This kind of counter-rotating disk has recently been observed (Takakuwa et al. 2018). If future observations reveal that counter-rotating structures are relatively common, it will be evidence of the importance of the Hall effect.
Disappearance of the envelope
As discussed in §3.2.3, the balance between magnetic braking and magnetic diffusion essentially limits the size of the disk to a few tens of au when a relatively massive protostellar envelope is present. However, this envelope is progressively accreted and dissipated while the star is forming.

Fig. 11.- Time evolution of the disk radius in non-ideal MHD simulations with µ = 1 (black) and µ = 3 (red), in an ideal MHD simulation with µ = 1 (blue), and in a hydrodynamical simulation (green), against the elapsed time after circumstellar disk formation, where µ is the mass-to-flux ratio normalized by its critical value. The free-fall timescale at the center of the cloud, t_ff,c, and at the cloud boundary, t_ff,b (t_ff,b roughly corresponds to the epoch at which the disk mass becomes larger than the envelope mass), are plotted in each panel (from Machida et al. 2011).
Magnetic braking is a process that transfers angular momentum from the disk to the envelope. Thus, once the envelope loses its ability to receive angular momentum, the magnetic braking becomes less efficient (e.g., Hennebelle et al. 2020). The long-term simulations of Machida et al. (2011), which cover from the prestellar core to the end of the Class I phase (∼ 10⁵ yr after protostar formation), have provided some predictions on how disk sizes evolve during the late embedded stages, as a consequence of the envelope losing an order of magnitude of mass. Figure 11 shows the long-term evolution of the disk size (up to t ∼ 10⁵ yr after protostar formation) in ideal and non-ideal MHD simulations. In the early stage of protostar formation (t < 10⁴ yr), in which the envelope is massive, the disk size is only about 10 au. On the other hand, as the system evolves to t > 10⁴ yr, which corresponds to the epoch when the envelope mass becomes smaller than that of the protostar and disk, the disk size begins to increase and reaches ∼ 100 au. This disk size evolution process may explain the 100 au scale disks in some Class 0/I YSOs, even with a relatively strong magnetic field in the natal cloud cores.
Similarities between low- and high-mass star formation
Young massive protostars differ from their low-mass counterparts essentially because of their intense luminosity throughout their evolution. In chronological order, the accretion luminosity (up to stellar masses of ∼ 10 M_⊙, e.g., Hosokawa and Omukai 2009), the internal luminosity, and eventually the ionising radiation (for masses ≳ 30 M_⊙, Kuiper and Hosokawa 2018) shape the evolution, the accretion, and the ejection of young massive protostars. It has been shown that magnetic fields regulate disk and outflow formation within massive protostars as well (e.g., Banerjee et al. 2006; Seifried et al. 2013). It is thus of prime importance to estimate the relative balance of the radiative and magnetic field effects throughout the early evolution of young massive protostars in order to characterise the accretion and ejection processes. Since PP VI, a lot of progress has been made in studying the combined effects of magnetic fields and radiative feedback on the formation of disks and outflows in young massive protostars. In the following, we review state-of-the-art simulations, focusing on the small scales of collapsing massive dense cores (typically < 1000 au) leading to the formation of isolated star-disk-outflow systems. We do not review the recent work focusing on the large-scale fragmentation of collapsing massive dense cores, as this goes beyond the scope of this chapter.
Disk formation and properties
Various processes that shape disk formation in young protostars have been investigated in recent years. In a first class of models, magnetic fields have been neglected. Rather, these models focus on the accurate treatment of the radiative protostellar feedback, using hybrid irradiation schemes (Kuiper et al. 2010; Klassen et al. 2014; Rosen et al. 2017; Mignon-Risse et al. 2020). Rosen et al. (2016) and Rosen et al. (2019) report 3D radiation-hydrodynamics models of the collapse of 150 M_⊙ massive dense cores. They show that accretion disks form regardless of the core's initial virial state and that the disk supplies material to the star, especially at late times (M_* > 25 M_⊙). The typical disk they report has a radius < 1000 AU and eventually becomes gravitationally unstable, leading to disk fragmentation at late times. Similar results on disk properties have been reported by Klassen et al. (2016) and Mignon-Risse et al. (2020) for clouds without initial turbulence, except that the latter studies do not report disk fragmentation. This discrepancy might originate from the choice of initial rotation level as well as from the numerical resolution (Meyer et al. 2018). Lastly, Kuiper and Hosokawa (2018) also report disk formation and late evolution in 2D axisymmetric models including photoionization. In this case, the disk is destroyed in the late evolutionary stages, as the outflow broadens when photoionization starts operating.
The picture drawn above changes dramatically once magnetic fields are taken into account. In a first series of studies, ideal MHD was used. As with low-mass protostars, magnetic braking operates and disk formation is suppressed at high magnetization (µ < 10, Seifried et al. 2011). With initial turbulence, Seifried et al. (2013) and Myers et al. (2013) report the formation of 50-150 AU Keplerian disks. Similar to low-mass star formation, magnetic fields greatly affect the disk size. However, these models suffer from convergence issues, in particular regarding the disk properties, since diffusion is mainly controlled by the numerical resolution (see §3.2.2). In the past five years, the non-ideal MHD framework developed for low-mass star formation has been applied to massive protostars. Matsushita et al. (2017) present 3D simulations of collapsing massive dense cores in the aligned rotator configuration (the angular momentum of the parent cloud core is aligned with its initial magnetic field) which include the effects of Ohmic resistivity and mimic the gas thermal behaviour with a barotropic EOS. Disks form in all cases, with sizes of 50-300 AU and masses < 10 M_⊙; their exact properties depend on the initial conditions (mass, magnetisation) as well as on the resistivity (larger disks form with larger resistivity). Only the most massive case, with an accretion rate Ṁ > 10⁻² M_⊙ yr⁻¹, exhibits fragmentation. Similarly, Kölligan and Kuiper (2018) report the formation of 100-1000 AU disks in 2D axisymmetric simulations of the collapse of 100 M_⊙ dense cores using Ohmic diffusion and an isothermal equation of state. In these models, the disks do not fragment. Given the differences between the numerical methods and setups of the two latter studies, it is difficult to draw a firm conclusion on the effect of Ohmic diffusion on disk fragmentation around massive protostars.
More recently, Mignon-Risse et al. (2021a) and Commerçon et al. (2022) present 100 M_⊙ dense core collapse simulations including the effects of radiative transfer (continuum and protostellar feedback) and magnetic fields with ambipolar diffusion. Commerçon et al. (2022) perform a comparison between hydrodynamical, ideal MHD, and ambipolar diffusion cases, and report properties of disk formation and early evolution similar to those of low-mass star formation in the aligned rotator case. Disk fragmentation occurs only in the hydrodynamical case. The disk formed in the non-ideal MHD case is characterized by plasma β > 1 and Keplerian rotation, and has an essentially vertical magnetic field in the inner regions (R < 200 AU), while the toroidal magnetic pressure dominates in ideal MHD with β < 1. Regarding the magnetic field evolution, the central accumulation of magnetic flux is limited when ambipolar diffusion operates, with a plateau similar to the one found in the low-mass regime (Masson et al. 2016), but with a larger amplitude, around 1 G (Commerçon et al. 2022). Finally, the disk radius evolution is consistent with the ambipolar-diffusion-regulated scenario (Hennebelle et al. 2016). Mignon-Risse et al. (2021a) further confirm the disk properties regulated by ambipolar diffusion in similar models, but include initial turbulence and a more accurate radiative transfer scheme for protostellar feedback (hybrid irradiation instead of grey FLD, Mignon-Risse et al. 2020). In the case of initial super-Alfvénic turbulence, Mignon-Risse et al. (2021a) report disk fragmentation events that lead to the formation of stable binary systems with separations of 300-700 AU. Interestingly, individual disks form around the secondary fragments, with properties consistent with magnetic regulation as well.
Outflow mechanisms
Outflows are ubiquitous in star-forming regions. Contrary to the case of low-mass star formation, various mechanisms can be at play to accelerate outflowing gas in young massive protostars: magnetic fields (e.g., Banerjee and Pudritz 2007), the radiative force (e.g., Yorke and Sonnhalter 2002) and ionization (e.g., Dale et al. 2005). While the radiative force and the ionization are regulated by the luminosity emerging from the young massive protostar that has already entered pre-main-sequence evolution (M_* ≳ 20 M_⊙), the magnetic outflows are expected to be launched earlier. Banerjee et al. (2006) showed in ideal MHD models that magnetic outflows are launched in massive protostars similarly to the low-mass case. Interestingly, these outflows produce cavities out of which radiation pressure can be released (Krumholz et al. 2005). Seifried et al. (2012b) studied in more detail the mechanisms at the origin of magnetic outflows and jets (magneto-centrifugal acceleration versus magnetic pressure gradients, see section 3.1.5). They report that both mechanisms contribute to the acceleration of the outflowing material: the outflows are launched from the disks via centrifugal acceleration, and the toroidal magnetic pressure then progressively contributes to the further acceleration away from the disks. These ideal MHD models have been revised in recent works which include resistive effects. Matsushita et al. (2017) report that the outflow is driven by the outer disk region, i.e., where the ionization is sufficient for the magnetic field and the neutral gas to be well coupled. They also measure a ratio between the mass outflow rate and the mass accretion rate that is nearly constant, from 10 to 50%, throughout the stellar mass spectrum they consider (30-1500 M_⊙). Commerçon et al. (2022) and Mignon-Risse et al. (2021b) report the first 3D models of outflow formation which account for both radiative feedback and ambipolar diffusion. They show that magnetic processes dominate the outflow launching at early stages, up to masses of ∼ 20 M_⊙. Mignon-Risse et al. (2021b) report results similar to Seifried et al. (2012b) on the origin of the magnetic outflows: both the centrifugal acceleration and the magnetic pressure gradients contribute to the outflow. Interestingly, Mignon-Risse et al. (2021b) find that initial turbulence greatly affects the outflow launching, and report monopolar outflows in their most (supersonic) turbulent case. Lastly, outflows are launched perpendicularly to the disk, but with no correlation to the parent cloud magnetic field orientation.
Nonetheless, further work is required to draw firm conclusions on the origin of magnetic outflows and their impact on massive star formation, since both the numerical resolution and the time integration remain insufficient. From their 2D resistive models, Kölligan and Kuiper (2018) conclude that sub-au resolution is required to obtain converged results on the magneto-centrifugal jets. Additionally, longer time integration is needed to establish the fate of the magnetic acceleration once the radiation pressure and, later, ionization begin to dominate the acceleration (M_* > 20 M_⊙).
Altogether, these results strongly suggest that the same mechanisms are at play in the protostellar disk and outflow formation process for both low- and high-mass star formation. The scenario we present in this chapter might thus also apply to the early evolution of young massive protostars.
4. Dust evolution: a key feature to interpret observed disk properties and MHD models

4.1. The "Missing mass problem in Class 0/I YSOs"
As we have described in the previous sections, recent theoretical studies predict disk sizes in relatively good agreement with the observed sizes of embedded disks. However, there is still a discrepancy between the observations and the predictions of the theoretical studies. The most salient issue is that of disk masses. Observational estimates of embedded disk masses find typical median gas masses of 0.001-0.01 M_⊙ (see the detailed discussion in §2.5.2). Taken at face value, Class 0/I disks are thus on average an order of magnitude lighter than suggested by non-ideal MHD simulations. In this section, we discuss the recent studies regarding disk masses and the discrepancy between observations and simulations. We explore one possible solution to this discrepancy: dust evolution and its impact on both disk evolution models and their comparison to observations. Non-ideal MHD simulations have shown that, even with a relatively strong magnetic field, disks tend to become massive and gravitationally unstable (i.e., Toomre's Q value is Q ∼ 1) (Tsukamoto et al. 2015a,b; Tomida et al. 2015; Masson et al. 2016; Wurster et al. 2016; Kuffmeier et al. 2017; Tomida et al. 2017; Zhao et al. 2018a; Machida and Basu 2019; Xu and Kunz 2021). Figure 12 shows examples of simulation results in which a gravitationally unstable disk is formed. As shown in this figure, the disk becomes massive and spiral arms induced by gravitational instability form. The simulations employed different numerical methods/codes, inner boundaries, and initial conditions, yet obtained consistent results. Note that the spiral arms are transient and gravitationally unstable disks are not always non-axisymmetric (Tomida et al. 2017; Tsukamoto et al. 2017a). Thus, whether a disk is massive or not cannot be strictly determined from the fact that its morphology is axisymmetric (although non-axisymmetric structures that can be interpreted as the result of gravitational instability have been observationally found, e.g., in Elias 2-27; Pérez et al. 2016; Tomida et al. 2017; Paneque-Carreño et al. 2021), and obtaining a mass estimate from the radiative flux is necessary.
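The link between disk mass and the Toomre parameter can be made concrete with the standard relation Q = c_sΩ/(πGΣ); taking Σ ≈ M_disk/(πr²) and H = c_s/Ω gives Q ≈ (M_*/M_disk)(H/r). A minimal sketch (the aspect ratio and masses are assumed, illustrative values):

```python
# Quick Toomre-Q estimate linking disk mass to gravitational stability.
# Uses Q = c_s * Omega / (pi * G * Sigma) with Sigma ~ M_disk / (pi r^2),
# which gives Q ~ (M_star / M_disk) * (H / r). The aspect ratio H/r = 0.1
# and the masses below are assumed, illustrative values.

def toomre_Q(M_star, M_disk, aspect_ratio=0.1):
    return (M_star / M_disk) * aspect_ratio

# Simulation-like massive disk: M_disk comparable to M_star
print(toomre_Q(M_star=0.2, M_disk=0.2))    # Q ~ 0.1 -> strongly unstable
print(toomre_Q(M_star=0.2, M_disk=0.02))   # Q ~ 1   -> marginally unstable
# Observed median Class 0/I gas mass ~ 0.001-0.01 Msun:
print(toomre_Q(M_star=0.2, M_disk=0.002))  # Q ~ 10  -> stable
```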
The formation of massive disks is also supported by the formation scenario in which disks are formed from first cores. As mentioned above, most of the gas in the first core does not directly accrete onto the protostar, but transforms into a disk around the protostar. Since the mass of the first core (∼ 10⁻² to 10⁻¹ M_⊙) is about 10 times larger than the mass of the protostar at its formation epoch (∼ 10⁻³ to 10⁻² M_⊙), the disk is expected to be very massive immediately after the formation of the protostar (Inutsuka et al. 2010; Inutsuka 2012).
The massive disk does not seem to be short-lived (e.g., ≲ 10⁴ yr). Machida et al. (2011) and Machida and Hosokawa (2013) investigated the very long-term evolution of circumstellar disks (until 10⁵ years after the formation of the protostar, corresponding to the end of the Class 0 phase or into the Class I phase) and showed that gravitationally unstable disks may frequently form even in strongly magnetized cloud cores, once the disk grows to several tens of au. Figure 13 shows the long-term (t ∼ 10⁵ yr) mass evolution of the protostar, disk, and envelope. The figure shows that the disk has a relatively large mass (≳ 0.1 M_⊙) even ∼ 10⁵ yr after the formation of the protostar. Therefore, from a theoretical point of view, it is expected that gravitationally unstable disks should be frequently observed in the Class 0/I phase. Thus, there is a discrepancy between the disk masses obtained from simulations and those from observations, dubbed the "missing mass problem in the Class 0/I phase". A simple theoretical estimate is useful to understand why the disk mass in the simulations tends to be so large. Considering the current estimates of protostellar lifetimes (≲ 10⁶ yr), large mass accretion rates in the disk, Ṁ_disk ∼ 10⁻⁶ M_⊙ yr⁻¹ on average, similar to the accretion rates estimated from observations (e.g., Dunham et al. 2014), are required to grow the protostar to ∼ 1 M_⊙ within this timescale. Using this fact and considering a viscous accretion disk model (Shakura and Sunyaev 1973), one can estimate the time- and vertically-averaged viscous α value of the disk (see Tsukamoto et al. 2017a for more details),
\[ \alpha = 1.2 \left( \frac{\dot{M}_{\rm disk}}{3 \times 10^{-6}\, M_\odot\,{\rm yr}^{-1}} \right) \left( \frac{Q}{10} \right) \left( \frac{T}{30\,{\rm K}} \right)^{-3/2}, \tag{16} \]
where we assume c_s = √(k_B T/m_g) = 1.9 × 10⁴ (T/10 K)^{1/2} cm s⁻¹. Equation (16) shows that there is a trade-off between α and Q of the disk: for a given mass accretion rate, a large α is required to achieve a large Q value (i.e., a small disk mass).
This estimate shows that an extremely efficient angular momentum transport of α ∼ 1 on average is necessary over the entire period of evolution to achieve Ṁ_disk ∼ 10⁻⁶ M_⊙ yr⁻¹ in a disk with Q ∼ 10. Note that the mass supplied by envelope accretion to the disk is of the order of Ṁ_envelope ∼ 10⁻⁶ M_⊙ yr⁻¹ in Class 0/I YSOs, and that Ṁ_disk ∼ 10⁻⁶ M_⊙ yr⁻¹ is also necessary to keep the disk mass small; otherwise, mass accumulates in the disk and its mass inevitably increases. However, such a highly efficient mechanism of angular momentum transport is not realized in the above-mentioned simulations because of the non-ideal MHD effects. As a result, Q tends to become small (Q ∼ 1) in the simulations, which reduces α to a realistic level (α ∼ 0.1). Note that this estimate neglects the temporal variation of the mass accretion rate (episodic accretion), which may lead to a realistic α even for low-mass disks during quiescent periods, but requires ultra-efficient angular momentum transport during burst periods.
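Equation (16) follows from the steady-state viscous disk relations Ṁ = 3πνΣ, ν = αc_s²/Ω and Q = c_sΩ/(πGΣ), which combine to α = GQṀ/(3c_s³). A short numerical check (a sketch; the input values are those quoted in the text):

```python
import math

# Numerical check of equation (16): alpha = G * Q * Mdot / (3 c_s^3),
# obtained from Mdot = 3 pi nu Sigma, nu = alpha c_s^2 / Omega,
# and Q = c_s Omega / (pi G Sigma).

G = 6.674e-8                 # cm^3 g^-1 s^-2
M_sun = 1.989e33             # g
yr = 3.156e7                 # s

def alpha_required(Mdot_msun_yr, Q, T):
    c_s = 1.9e4 * math.sqrt(T / 10.0)       # cm/s, as assumed in the text
    Mdot = Mdot_msun_yr * M_sun / yr        # g/s
    return G * Q * Mdot / (3 * c_s**3)

print(alpha_required(3e-6, Q=10, T=30))     # ~1.2, reproducing eq. (16)
print(alpha_required(3e-6, Q=1,  T=30))     # ~0.12: a realistic alpha needs Q ~ 1
```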
A possible mechanism that has been overlooked in previous theoretical studies and that can significantly change the mass evolution of the disk is the effect of dust evolution on the magnetic resistivity. Most numerical simulations of disk evolution consider sub-micron-sized dust to compute the resistivities: this results in highly efficient Ohmic dissipation and ambipolar diffusion in the disk. However, if the dust grains are significantly larger than the typical ISM dust size, the magnetic resistivity can become many orders of magnitude smaller, causing the recoupling between the magnetic field and the gas inside the disk.
Note also that the disk mass in the simulations is sensitive to the treatment of the inner boundary or sink (Machida et al. 2014, 2016; Hennebelle et al. 2020; Xu and Kunz 2021), and one study reported the formation of a long-lived lightweight disk with a sink radius of ∼ 4 au (Lee et al. 2021c). Thus, further research on appropriate inner boundary conditions is needed.
On the other hand, disk mass estimates from observations of the dust thermal emission also rely on a number of assumptions about dust properties, subjecting them to large errors (such as the dust opacity and the dust-to-gas mass ratio, see §2.5 for more detail). We stress that an issue with dust properties may also hamper our understanding of Class II disk masses, for which observations suggest that the dust mass is smaller than the Minimum Mass Solar Nebula and is insufficient for planets to form (the so-called "missing mass problem", see the PP VII chapter by Miotello et al., and the discussion in §2.5.2).
Note also that the disk mass estimate from CO, another widely used tracer of disk mass, also suffers from possible CO depletion and from the uncertainty in the CO-to-H₂ ratio (e.g., Aikawa et al. 1996; Thi et al. 2001; Yoneda et al. 2016; Dodson-Robinson et al. 2018; Powell et al. 2019), which is highlighted by the discrepancy between mass estimates using HD and using CO (Bergin et al. 2013; Favre et al. 2013; Kama et al. 2016; McClure et al. 2016).
In the following §4.2, we describe the tantalizing thread of observational evidence that may suggest that the dust properties in protostellar envelopes and disks differ significantly from the assumed dust properties implemented in models, possibly because of dust evolution. We then describe more quantitatively the important consequences caused by dust growth, in §4.3.
Observations of dust grain properties in embedded protostars
In the last decade, observations have attempted to put constraints on the dust properties in star-forming structures by measuring the spectral indices of the thermal dust emission at (sub-)millimeter wavelengths (Martin et al. 2012; Chen et al. 2016a; Sadavoy et al. 2016; Chacón-Tanarro et al. 2017). In the regime of optically thin dust emission, and in the Rayleigh-Jeans regime (dust temperatures high compared to hν/k), the dust emissivity scales as a power law with frequency, and thus the flux density of the thermal dust emission F(ν) depends on the frequency as ν^(2+β), with β the dust emissivity index. Dual-frequency observations in the submm and mm regimes are thus routinely used to measure the dust emissivity index β. Observational studies show that star-forming clouds and cores present significant variations in β at ∼ 0.1 pc scales. Moreover, a fair fraction of the dense material involved in the star formation process exhibits β values much lower than the typical values of ∼ 1.7 observed in the diffuse ISM, although the flattening of the spectral energy distribution may depend on the choice of wavelengths used to measure the spectral index underlying the determination of β. Studies in the infrared also suggest that dust grains are significantly larger than the typical 0.1 µm sizes in the ISM, with indications of micrometer-size grains in the outer layers of molecular cores (Steinacker et al. 2015; Saajasto et al. 2021). At the other end of the star formation process, observations of T-Tauri circumstellar disks reveal β from 2 to 0, compatible with the presence of millimeter grains (see e.g. the review by Testi et al. 2014 and the recent survey by Tazzari et al. 2021). ALMA observations suggest that some 1-2 Myr old disks may already be hosting planets (Jin et al. 2016; Manara et al. 2018; Clarke et al. 2018), which would mean that the formation of planetesimal seeds already occurs in the first million years after the onset of collapse, while the disk and the star are being built concomitantly. Observations have also suggested that partial grain growth up to sizes of 10-100 µm could have already occurred in a few objects during the Class I phase (see e.g. Miotello et al. 2014; Harsono et al. 2018; Lee et al. 2020; Zhang et al. 2021).
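Since F_ν ∝ ν^(2+β) in this optically thin Rayleigh-Jeans limit, two flux measurements suffice to estimate β. A minimal sketch (the flux values below are invented purely for illustration):

```python
import math

# Dual-frequency estimate of the dust emissivity index beta, assuming
# optically thin emission in the Rayleigh-Jeans regime: F_nu ∝ nu^(2+beta).
# The flux values below are invented for illustration only.

def beta_from_fluxes(F1, nu1, F2, nu2):
    spectral_index = math.log(F1 / F2) / math.log(nu1 / nu2)  # alpha in F ∝ nu^alpha
    return spectral_index - 2.0

# e.g. fluxes at 1.3 mm (231 GHz) and 3.2 mm (94 GHz), as in CALYPSO
F_1p3mm, F_3p2mm = 120.0, 18.0          # mJy, hypothetical values
print(beta_from_fluxes(F_1p3mm, 231e9, F_3p2mm, 94e9))   # ~0.1, a "low beta" case
```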
We have shown, earlier in this review chapter, that the progenitors of planet-forming disks form at the Class 0 stage, but start very compact in size. Thus, Class 0 protostars are ideal candidates to study the pristine properties of the dust grains building these disks. Recent studies analyzing the dust properties in Class 0 protostars at different scales have suggested that the grain properties may already be significantly different from those of typical ISM dust, in these objects less than a million years old. At the small radii probed by interferometric observations, where the dust emission is a combination of envelope and disk emission, Tychoniec et al. (2020) find a mean β ∼ 0.4 in the compact dusty components of Perseus protostars, believed to trace mostly the dust enclosed in young protostellar disks. Interferometric observations have also found low spectral indices at millimeter wavelengths in the dusty ∼ 60 au disk surrounding the Class 0/I protostar L1527, and in the Class 0 protostar L1157 (Nakatani et al. 2020; Chiang et al. 2012). In the Orion region, recent ALMA observations confirmed the low spectral indices of dust emission in 16 protostars at scales < 100 au (Bouvier et al. 2021). Analyzing the dust properties of 12 nearby Class 0 protostars observed at 1.3 and 3.2 mm with the PdBI (CALYPSO survey), Galametz et al. (2019) found that most Class 0 protostars, sampled from different star-forming regions, show significantly shallow dust emissivities at envelope radii ∼ 100-500 au, with β < 1 (see left panel of Figure 14). These observations have also uncovered a radial evolution of the dust emissivity, with many objects showing a decrease of β toward small envelope radii. Their analysis of the dust emissivity in the u-v domain limits the biases due to interferometric mapping reconstruction (i.e., filtering effects depending on wavelength) and also rules out contamination from small-scale protostellar disks as the cause of the decreasing dust emissivity index in inner envelopes; these results hence suggest that the dust contained in young protostellar envelopes has a different emissivity from that of the typical dust observed in the diffuse ISM.
A mixing of dust temperatures along the line of sight could artificially produce a decrease in β, but the typical temperature range in embedded protostars at the scales probed by these interferometric observations (envelope radii 50-1000 au) is not large enough to produce such a significant flattening of the spectral index at millimeter wavelengths. An origin of these low spectral indices in a grain composition different from that of typical ISM grains is another possibility, although it is very challenging for models to significantly change the grain composition during the ∼ 1 Myr that the prestellar phase lasts. However, such low emissivities are expected from models of fluffy aggregates and grain coagulation in dense clouds: recent studies of various interstellar dust analogues suggest that mm β values < 1, like those observed in these young protostars, can only be produced by grains larger than 100 µm (Ysard et al. 2019). Significant grain growth in Class 0 envelopes is also consistent with the theoretical predictions of polarized radiative transfer models (Valdivia et al. 2019, see right panel of Fig. 14), showing that large > 100 µm grains are required to produce millimeter polarized dust emission at polarization fractions similar to those routinely observed in protostellar envelopes.

Fig. 14.- Left: dust emissivity index β in Class 0 protostellar envelopes (Galametz et al. 2019); all protostellar envelopes probed at radii ∼ 500 au show lower β values than the progenitor diffuse ISM. Right: radial profile of the polarization fraction of the 1.3 mm dust emission predicted from magnetically aligned grains (RATs) in a typical low-mass protostellar envelope (Valdivia et al. 2019); large (> 10 µm) dust grains are required to match the observations of the polarized emission (grey area).
If these low spectral indices measured at millimeter wavelengths do indeed trace a population of large dust grains in the inner layers of protostellar envelopes, such a presence of large grains has many implications. Indeed, these embedded protostars are less than 0.1 Myr old, so the timescales to grow grains up to sizes > 1 µm in star-forming envelopes at typical gas densities n_H2 ∼ 10⁶ cm⁻³ may need to be revised. As an illustration, current models predict that at such gas densities, it takes a bit less than a Myr to grow grains up to a few microns (Ormel et al. 2009). While considering porous dust grains may help speed up the process, taken at face value the observations of millimeter-size grains are not consistent with the dust evolution models. If small grains are coupled not only to the gas but also to the magnetic field, a magnetized collapse may also help segregate grains with different charges, and ultimately shorten the grain coagulation timescale (Guillet et al. 2020a). A more likely explanation involves the high-density conditions that reign in protostellar disks, making them efficient forges to build up big dust grains that are re-injected into envelopes by protostellar winds and outflows (Wong et al. 2016; Bate and Lorén-Aguilar 2017; Lebreuilly et al. 2020; Ohashi et al. 2021; Tsukamoto et al. 2021a). Such a scenario may also explain the apparent lack of millimeter grains in the L1544 prestellar core (Chacón-Tanarro et al. 2017), since a disk would be required to build big grains on short timescales.
Dust properties also heavily influence the coupling to the magnetic field, as the small grains are the main charge carriers, and their disappearance into bigger grains significantly changes the efficiency of magnetic braking (Guillet et al. 2020a; Zhao et al. 2021). Moreover, the size of the dust grains affects both the physical structure of the accretion shock and the associated chemistry (Guillet et al. 2007; Miura et al. 2017): the increase in grain size leads to weaker coupling between gas and dust, lower dust temperatures, and dust being less efficient at cooling the shocked material. Therefore, characterizing the dust properties and their early evolution in embedded protostars is crucial to understand the physics at work in forming the end products, i.e., the stars, disks, and planets: it will undoubtedly be one of the main focuses of research in the field in the coming years. The following section describes some key consequences of the early dust evolution in protostellar environments, as expected from numerical models of protostellar formation.
4.3. Impact of dust growth on magnetic field evolution and dust opacity: possible solutions to the missing mass problem
As discussed in the previous section, observations of the dust thermal emission in Class 0/I YSOs point to possible dust growth. Theoretically, the dust is also expected to grow in the disks of Class 0/I YSOs, with a growth timescale of several 10³ yr, which is about 100 times shorter than the age of Class 0/I YSOs (see, e.g., Okuzumi et al. 2012; Tsukamoto et al. 2017a).
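That order of magnitude can be recovered from the common coagulation scaling t_grow ≈ (Σ_g/Σ_d) Ω⁻¹ (a standard disk dust-evolution estimate, not a result quoted in the text; the radius and stellar mass below are assumed, illustrative values):

```python
import math

# Rough dust growth timescale in a young disk, using the common scaling
# t_grow ~ (Sigma_gas / Sigma_dust) / Omega. The radius and stellar mass
# are assumed, illustrative values consistent with the disks discussed here.

G = 6.674e-8
M_sun = 1.989e33
au = 1.496e13
yr = 3.156e7

r = 10 * au                 # cm
M_star = 0.3 * M_sun        # g
dust_to_gas = 0.01

Omega = math.sqrt(G * M_star / r**3)       # Keplerian angular frequency
t_grow = (1.0 / dust_to_gas) / Omega       # s

print(f"t_grow ~ {t_grow/yr:.0f} yr")      # ~ 10^3 yr at 10 au
```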
Impact of dust growth on charge state and resistivity
Dust growth can change the magnetic resistivities, which are fundamental parameters for the formation and evolution of disks. As we discussed above, dust plays two different roles in determining the resistivities. The first is that small dust is responsible for the conductivity. The second is that it is an absorber of charged particles. Zhao et al. (2018a) showed that the omission of tiny dust increases the resistivity by decreasing the dust conductivity, hence promoting disk growth (§3.2). However, when the dust grows to a_d > 1 µm and the maximum dust size determines the total surface area, the loss of the second role becomes important in determining the resistivities. In this case, dust growth reduces the total surface area and the grains lose their ability to absorb ions and electrons. As a result, dust growth is expected to drive the resistivity toward what it would be in the absence of dust. Here, we quantitatively investigate the effect of dust growth on the resistivities.
In Figure 15, we calculate the resistivities with large (single-sized) dust at the midplane of a disk (using the method of Tsukamoto et al. 2021b, outlined in their Appendix A). The figure shows that the resistivity in the disk decreases as the dust size increases (especially in the inner dense region) and converges to a single power law which is solely determined by the equilibrium between cosmic-ray ionization and gas-phase recombination. The interpretation of this result is straightforward: large dust with a_d ≳ 100 µm does not contribute to the determination of the resistivity (even though the dust is significantly charged for a_d > 10 µm, which is properly calculated in our plot; for example, Z ∼ −10⁴ for 1 mm grains at T = 100 K).
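The grain-free limit that the curves converge to can be sketched with the textbook ionization balance. In the snippet below, the recombination rate coefficient is an assumed representative value and the η_Ohm expression is the standard electron-fraction formula (e.g., Blaes and Balbus 1994); this illustrates the scaling only, and is not a reproduction of Figure 15:

```python
import math

# Grain-free ionization balance: zeta * n_g = beta_rec * n_e * n_i with
# n_e = n_i gives x_e = n_e/n_g = sqrt(zeta / (beta_rec * n_g)).
# Ohmic resistivity then follows the standard electron-neutral expression
# eta_Ohm ~ 234 * sqrt(T) / x_e cm^2 s^-1 (Blaes & Balbus 1994).
# beta_rec is an assumed, representative molecular-ion recombination rate.

zeta = 1e-17                        # s^-1, cosmic-ray ionization rate
T = 100.0                           # K
beta_rec = 3e-6 / math.sqrt(T)      # cm^3 s^-1, assumed recombination rate

for n_g in (1e10, 1e12, 1e14):      # cm^-3, midplane number densities
    x_e = math.sqrt(zeta / (beta_rec * n_g))
    eta_Ohm = 234.0 * math.sqrt(T) / x_e
    print(f"n_g = {n_g:.0e}: x_e = {x_e:.1e}, eta_Ohm = {eta_Ohm:.1e} cm^2/s")

# eta_Ohm ∝ sqrt(n_g): a single power law set only by zeta and beta_rec,
# i.e. the limit the curves in Figure 15 converge to for large grains.
```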
Although the result is rather simple, its impact on disk evolution is possibly significant. For example, at r ∼ 10 au, η_Ohm and η_AD are reduced by a factor of 10² to 10⁴. With such a small resistivity, it is expected that the system essentially behaves as in ideal MHD. Therefore, the disk evolution may be significantly affected by dust growth. Such a "co-evolution of disk and dust" has not been investigated with multi-dimensional simulations and is a promising future research area.
In particular, this could be a possible solution to the missing mass problem in Class 0/I YSOs by promoting mass accretion in the disk. Once the disk returns to the ideal MHD regime, strong magnetic braking in the disk and the development of magneto-rotational instability (MRI) over a large area of the disk are expected. This will significantly enhance the mass accretion from the disk onto the protostar. As a result, the mass of the disk is likely to be reduced. Therefore, dust growth and the subsequent decrease in resistivity may solve the missing mass problem in Class 0/I YSOs.
Note that, in Figure 15, the effect of dust growth is possibly exaggerated compared to the realistic case in which dust grains have a size distribution. When the dust grains have a size distribution and there is a certain amount of small dust, we expect the small grains to increase the resistivity. This would be especially true if such small grains determine the total surface area.

Fig. 15.- η_AD (solid), η_Ohm (dashed), and η_Hall (dotted) at the midplane of a disk. The black, blue, red, and orange lines show the resistivities for a_d = 1 µm, 10 µm, 100 µm, and 1000 µm, respectively. The disk surface density is assumed to be Σ_g = 45.6 r_{50au}^{-12/7} g cm^{-2}, which corresponds to a gravitationally unstable disk with M_* = 0.3 M_⊙. The disk temperature and midplane density are given as T = 150 r_{au}^{-3/7} K and ρ_g = Σ(r)/H_scale, respectively, where H_scale = c_s/Ω is the disk scale height. The magnetic field strength is assumed to be B = 30 mG. We consider a cosmic-ray ionization rate of ξ_CR = 10^{-17} s^{-1} as the only ionization source. The dust-to-gas mass ratio is fixed to 10^{-2}.
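For reference, the disk model quoted in the caption above can be evaluated directly; this sketch assumes a mean molecular weight µ = 2.34, which is not stated in the caption.

import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs
M_sun, au = 1.989e33, 1.496e13

def midplane_state(r_au, M_star=0.3 * M_sun, mu=2.34):
    r = r_au * au
    Sigma = 45.6 * (r_au / 50.0) ** (-12.0 / 7.0)   # surface density [g cm^-2]
    T = 150.0 * r_au ** (-3.0 / 7.0)                # T = 150 r_au^(-3/7) [K]
    c_s = np.sqrt(k_B * T / (mu * m_H))             # isothermal sound speed
    Omega = np.sqrt(G * M_star / r**3)              # Keplerian frequency
    H = c_s / Omega                                 # scale height H = c_s / Omega
    rho = Sigma / H                                 # caption's midplane density
    return Sigma, T, rho

for r_au in [1.0, 10.0, 50.0]:
    Sigma, T, rho = midplane_state(r_au)
    print(f"r = {r_au:5.1f} au: Sigma = {Sigma:9.1f} g/cm^2, T = {T:6.1f} K, rho = {rho:.2e} g/cm^3")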
Impact of dust growth on the dust thermal emission
The masses of most disks in Class 0/I YSOs are estimated from their dust thermal emission by assuming a dust-to-gas mass ratio, a dust opacity, and that the disk is optically thin. However, recent studies have suggested that these assumptions may break down in at least some disks (Galván-Madrid et al. 2018; Liu et al. 2018). For convenience, we estimate the optical depth of the circumstellar disk. Assuming a dust absorption opacity at (sub-)mm wavelengths of κ_mm ∼ 2 cm² g^{-1} (e.g., Beckwith et al. 1990; Ossenkopf and Henning 1994; Birnstiel et al. 2018) and a dust-to-gas mass ratio of 0.01, the optical depth of a disk with constant Toomre Q value is estimated as:
τ_mm = 0.9 Q_1^{-1} M_{*,0.3 M_⊙}^{1/2} r_{50au}^{-12/7} ,    (17)
where M_* is the protostar mass and r is the disk radius, normalized in Eq. (17) to 0.3 M_⊙ and 50 au, respectively (and Q_1 = Q/1). We assume T = 150 r_{1au}^{-3/7} K (Chiang and Goldreich 1997). This indicates that a massive disk with Q ∼ 1 and a size of 50 au is optically thick at (sub-)mm wavelengths over almost its entire extent, so the optically-thin assumption is not valid. Note that this estimate is for a face-on disk; an inclined disk has a larger optical depth.
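Equation (17) is straightforward to evaluate; the sketch below reproduces the statement that a Q ∼ 1 disk around a 0.3 M_⊙ protostar is optically thick at (sub-)mm wavelengths out to almost 50 au.

def tau_mm(r_au, Q=1.0, M_star_03Msun=1.0):
    """Eq. (17): tau_mm = 0.9 Q^-1 (M*/0.3 Msun)^(1/2) (r/50 au)^(-12/7)."""
    return 0.9 / Q * M_star_03Msun**0.5 * (r_au / 50.0) ** (-12.0 / 7.0)

for r_au in [5.0, 10.0, 25.0, 50.0]:
    print(f"r = {r_au:4.0f} au -> tau_mm = {tau_mm(r_au):.2f}")
# tau_mm > 1 everywhere inside ~47 au for Q = 1 and M* = 0.3 Msun.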
This estimate is reminiscent of the inconsistency between the Orion disk masses derived from ALMA 0.87 mm data and from VLA 9 mm data (Tobin et al. 2020). As shown in Figure 71 of Tobin et al. (2020), the Class 0/I disk masses in Orion estimated from ALMA 0.87 mm observations are typically about 10 times smaller than those from VLA 9 mm observations, possibly suggesting that the disks tend to be optically thick at sub-mm wavelengths.
One might naturally expect that, since the dust (absorption) opacity at 9 mm is highly uncertain, extrapolating the opacity from mm to 9 mm with a dust spectral index β = 0 (due to dust growth) should yield a consistent mass estimate. However, this idea does not work. The reason is that the decrease in β due to dust growth (e.g., to a_d > 1 cm) is not caused by an increase in the dust opacity at the longer wavelength, but by a decrease in the dust opacity at the shorter wavelength (see Miyake and Nakagawa (1993) or Figure 4 of Birnstiel et al. (2018) for example); equivalently, the (mono-sized) dust opacity obeys κ_λ = const for a_d/λ ≪ 1 and κ_λ ∝ a_d^{-1} for a_d/λ ≫ 1 (Bohren and Huffman 1998; Krügel 2007). Therefore, as the dust grows from micron sizes, the grains first reach mm sizes and the dust opacity at mm wavelengths begins to decrease; then, when the dust grains reach cm sizes, the dust opacity at cm wavelengths begins to decrease. In other words, dust growth primarily causes an underestimation of the disk mass at (sub-)mm wavelengths, not an overestimation at cm wavelengths.

Furthermore, it has been shown that dust scattering in an optically thick disk causes an underestimate of the optical depth, so that the disk appears optically thin and lightweight. Dust scattering at (sub-)mm wavelengths has recently been widely recognized as new evidence of dust growth (Kataoka et al. 2015; Yang et al. 2016). For example, the change of the polarization pattern in the Class I YSO HL Tau shows that dust scattering is effective and that the dust may have grown to ∼ 100 µm (Kataoka et al. 2016, 2017) (note, however, that recent laboratory experiments have shown that detailed size estimates should be made with caution because of the strong influence of the dust shape; Muñoz et al. 2021). Dust scattering decreases the dust thermal emission from an optically thick region of the disk to I_ν ∼ √(1 − ω_ν) B_ν(T), where ω_ν is the single-scattering albedo (Miyake and Nakagawa 1993; Zhu et al. 2019), because scattering reduces the depth from which photons can escape. Zhu et al. (2019) show that the disk mass estimated from (sub-)mm dust thermal emission for a disk with Q = 1 and a radius of r = 50 au is about a tenth of the actual mass (see their Figure 7). Thus, an optically thick, heavy disk with scattering can be misidentified as an optically thin, lightweight disk if dust scattering is ignored. Hence, it is possible that a significant amount of disk mass is hidden from (sub-)mm observations if the dust grows to ∼ 100 µm.
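Both effects invoked here can be illustrated with toy formulas: a broken power-law opacity that is flat for a_d ≪ λ and falls as a_d^{-1} for a_d ≫ λ, and the √(1 − ω_ν) dimming of an optically thick, scattering surface. The opacity normalization below is arbitrary; only the scalings matter.

import numpy as np

def kappa_toy(a_d, lam, kappa0=2.0):
    """Toy single-size opacity: flat for a_d <= lambda, falls as 1/a_d above."""
    return kappa0 if a_d <= lam else kappa0 * lam / a_d

for a_mm in [0.001, 0.1, 1.0, 10.0]:  # grain radius in mm
    print(f"a_d = {a_mm:6.3f} mm: kappa(1 mm) = {kappa_toy(a_mm, 1.0):.2f}, "
          f"kappa(9 mm) = {kappa_toy(a_mm, 9.0):.2f}  [cm^2/g, arbitrary norm]")
# The mm opacity drops before the cm opacity as grains grow: growth causes
# an underestimate at (sub-)mm, not an overestimate at cm.

# Scattering dims an optically thick surface: I_nu ~ sqrt(1 - omega) * B_nu(T)
for omega in [0.0, 0.5, 0.9, 0.99]:
    print(f"albedo = {omega:.2f} -> I/B = {np.sqrt(1.0 - omega):.2f}")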
If the dust grows further to a_d ≳ 1 cm, as expected from the short growth timescale in the disk, it inevitably undergoes radial drift, which also leads to an underestimate of the disk mass due to dust depletion. It has been recognized that dust disks evolve on much shorter timescales than gas disks, and that dust can be depleted at a relatively early stage of disk evolution when dust growth and radial migration are taken into account (Takeuchi et al. 2005; Birnstiel et al. 2012). Tsukamoto et al. (2017a) investigated the evolution of the dust-to-gas mass ratio in a disk undergoing envelope mass accretion. They showed that the dust-to-gas mass ratio of the entire disk could become 3-10 times smaller than the ISM value as the dust grows. Furthermore, once the dust size exceeds the observation wavelength, the opacity begins to decrease, and the dust thermal emission decreases further. This dust depletion and decrease of the opacity may also cause an underestimate of the disk mass.
We believe that these recent studies on the changes in resistivity and dust thermal emission caused by dust growth have the potential to ease the tension between observations and simulations and to reconcile the disk-mass discrepancy.
Conclusions
In this chapter, we have examined the latest observational results and theoretical scenarios on the formation and evolution processes of protostars, protoplanetary disks, and outflows. Many observed properties of protostar/disk systems are in agreement with magnetized models, and could be a consequence of the coupling, decoupling, and recoupling processes of the magnetic field and gas at different scales and density regimes.
For example, the interplay of magnetic braking and magnetic diffusion naturally explains the radii of observed Class 0/I disks. The scenario in which outflows are ejected directly from the disk via magnetic torques, rather than by entrainment from the protostellar jet, is also consistent with recent observations of outflow rotation and with evidence of outflow ejection around the edge of the disk from high-resolution observations of launching regions. Furthermore, the relatively weak magnetic field of the protostar (∼ 1 kG) could be due to non-ideal MHD effects causing a decoupling in the vicinity (< 1 au) of the protostar. According to the models, the weak magnetic field in and near the protostar could also be key for jet formation.
However, some salient issues remain to be addressed in the future. The relatively massive protostellar disks predicted by non-ideal MHD simulations are not observed, although these findings could be reconciled if dust properties need to be revised, as suggested by recent observational works. It also remains uncertain how protostellar disk masses evolve with time, and which observational tracers best characterize disks at different stages as they evolve. Finally, the importance of dust evolution for both the magnetic processes and the mechanisms of planet formation, which may already have begun in these early protostellar evolutionary stages, poses challenging questions yet to be addressed by future theoretical and observational investigations.
Fig. 1.- Illustrative examples of protostellar magnetic fields mapped with millimeter/sub-millimeter polarized dust continuum emission on similar envelope/disk scales in the L1448-2A Class 0 protostar (left panel, from Kwon et al. 2019) and the [BHB2007]11 Class 0/I protostar (right panel, data published in
Fig. 2.- Specific angular momentum of the gas measured at cloud and core scales (left panel, from Heimsoth et al. 2021) and inside protostellar envelopes (right panel, from Gaudel et al. 2020).
Fig. 3.- Compilation of Class 0 (in blue) and Class I (in orange) protostellar disk radii from the literature, observed from the dust continuum emission at (sub-)millimeter wavelengths (λ < 2.7 mm). From Oya et al. (2014); Lee et al. (2018b); Yen et al. (2015a, 2017); Aso et al. (2015); Ohashi et al. (2014); Brinch and Jørgensen (2013); Harsono et al. (2014); Miotello et al. (2014); Tobin et al. (2020); Sheehan and Eisner (2017); Maury et al. (2019)
8 − 3.8 M ⊕ (Williams et al. 2019; Encalada et al. 2021).
Fig. 4.- ALMA SiO and continuum observations of the rotating outflow from the Class 0 protostar HH 212. While the outflow itself extends to much larger scales, this shows the region within ≈ 120 au of the central source, at a resolution of ≈ 8 au, on top of the continuum map of the disk. The maps show the intensity (in units of K km s^{-1}) integrated over the outflow velocity range. (a) A chain of SiO knots traces the primary jet emanating from the disk, testifying to the episodic nature of the outflow. (b) Blueshifted and redshifted SiO emission of the jet plotted with the continuum (disk) emission. The direction of rotation of the disk is shown with blue and red arrows and is the same as the direction of rotation of the red- and blueshifted jets. From Lee et al. (2017a), reproduced with permission © Springer Nature.
(Feeney-Johansson et al. 2019). The derived magnetic field strengths are in agreement with what might be expected based on the expansion of the outflow while allowing for some amplification due to shock compression (Feeney-Johansson et al. 2019).
Fig. 6.- The magnetic resistivities computed for different dust-grain size distributions (top left: η_Ohm; top right: η_AD; bottom left: η_Hall; bottom right: η_Ohm/η_AD) with a_min = 0.005 µm and a_max = 0.25 µm (MRN: black), a_min = 0.03 µm and a_max = 0.25 µm (blue), a_min = 0.1 µm and a_max = 0.25 µm (red), and a = 1.0 µm (cyan), from Zhao et al. (2018b). The reference level of the ambipolar resistivity of Shu (1991) is shown as a purple dashed line (top right).
Fig. 7.- Density (top panels) and plasma β (bottom panels) maps of the first core in the ideal MHD (left panels) and non-ideal MHD (right panels) simulations (Tsukamoto et al. 2015b), viewed edge-on. Red arrows show the velocity field.
Fig. 8.- Magnetic field strength as a function of the density in ideal MHD (red) and non-ideal MHD (blue) simulations, 2000 years after first core formation. The solid and dashed black lines show B ∝ ρ^{2/3} and B ∝ ρ^{1/2}, respectively (from Masson et al. 2016).
Fig. 10.- Left and middle panels show density isosurfaces around the newly born protostar in the ideal MHD (left) and non-ideal MHD (middle) simulations. The magnetic field morphology is shown as lines, colored according to the magnitude of the magnetic field (from Vaytet et al. 2018). The right panel shows the magnetic field profile and jet launching after protostar formation (from Tomida et al. 2013). The edge of the right panel is ∼ 0.27 au. The high-density region (ρ > 10^{-5} g cm^{-3}) is visualized with the orange surface. White arrows denote the direction of the fluid motion and white lines show the magnetic field. Fast outflowing gas (v_z > 3 km s^{-1}) is volume-rendered in pale yellow.
Fig. 12.- Examples of the disks formed in non-ideal MHD simulations (from Machida et al. 2011; Tsukamoto et al. 2015a; Masson et al. 2016; Zhao et al. 2018a). The spiral arms are caused by gravitational instability.
Fig. 13.- Long-term evolution (∼ 10^5 yr after the formation of the protostar) of the mass in disks, protostars, and envelopes (from Machida and Hosokawa 2013). The red, solid black, dashed black, and blue lines represent the mass of the disk, protostar, envelope, and outflow, respectively.
Fig. 14.- Left: Observed millimeter dust emissivity index in a sample of young protostars
References

Aikawa Y. et al., 1996 Astrophys. J., 467, 684.
Alves F. O. et al., 2012 Astron. Astrophys., 542, A14.
Alves F. O. et al., 2014 Astron. Astrophys., 569, L1.
Alves F. O. et al., 2017 Astron. Astrophys., 603, L3.
Alves F. O. et al., 2018 Astron. Astrophys., 616, A56.
Alves F. O. et al., 2019 Science, 366, 6461, 90.
Anderl S. et al., 2016 Astron. Astrophys., 591, A3.
Anderson J. M. et al., 2003 Astrophys. J. Lett., 590, 2, L107.
Anglada G. et al., 2018 Astron. Astrophys. Rev., 26, 1, 3.
Ansdell M. et al., 2016 Astrophys. J., 828, 1, 46.
Ansdell M. et al., 2018 Astrophys. J., 859, 1, 21.
Arce H. G. et al., 2007 Protostars and Planets V (B. Reipurth, D. Jewitt, and K. Keil), p. 245.
Artur de la Villarmois E. et al., 2019 Astron. Astrophys., 626, A71.
Aso Y. et al., 2015 Astrophys. J., 812, 1, 27.
Aso Y. et al., 2021 arXiv e-prints, arXiv:2107.10646.
Bally J., 2016 Annu. Rev. Astron. Astrophys., 54, 491.
Banerjee R. and Pudritz R. E., 2006 Astrophys. J., 641, 2, 949.
Banerjee R. and Pudritz R. E., 2007 Astrophys. J., 660, 1, 479.
Banerjee R. et al., 2004 Mon. Not. R. Astron. Soc., 355, 1, 248.
Banerjee R. et al., 2006 Mon. Not. R. Astron. Soc., 373, 3, 1091.
Barenfeld S. A. et al., 2016 Astrophys. J., 827, 2, 142.
Basu S., 1997 Astrophys. J., 485, 1, 240.
Basu S., 1998 Astrophys. J., 509, 1, 229.
Bate M. R., 2011 Mon. Not. R. Astron. Soc., 417, 3, 2036.
Bate M. R. and Lorén-Aguilar P., 2017 Mon. Not. R. Astron. Soc., 465, 1, 1089.
Bate M. R. et al., 2014 Mon. Not. R. Astron. Soc., 437, 1, 77.
Beckwith S. V. W. et al., 1990 Astron. J., 99, 924.
Bergin E. A. and Tafalla M., 2007 Annu. Rev. Astron. Astrophys., 45, 1, 339.
Bergin E. A. et al., 2013 Nature, 493, 644.
Bhandare A. et al., 2018 Astron. Astrophys., 618, A95.
Birnstiel T. et al., 2012 Astron. Astrophys., 539, A148.
Birnstiel T. et al., 2018 Astrophys. J. Lett., 869, 2, L45.
Bjerkeli P. et al., 2016 Nature, 540, 7633, 406.
Bjerkeli P. et al., 2019 Astron. Astrophys., 631, A64.
Blandford R. D. and Payne D. G., 1982 Mon. Not. R. Astron. Soc., 199, 883.
Bodenheimer P., 1991 Angular Momentum Evolution of Young Stars, vol. 340 of NATO Advanced Study Institute (ASI) Series C (S. Catalano and J. R. Stauffer), p. 1.
Bohren C. F. and Huffman D. R., 1998 Absorption and Scattering of Light by Small Particles.
Bouvier M. et al., 2021 arXiv e-prints, arXiv:2107.10743.
Braiding C. R. and Wardle M., 2012a Mon. Not. R. Astron. Soc., 427, 3188.
Braiding C. R. and Wardle M., 2012b Mon. Not. R. Astron. Soc., 422, 261.
Brauer R. et al., 2017 Astron. Astrophys., 607, A104.
Brinch C. and Jørgensen J. K., 2013 Astron. Astrophys., 559, A82.
Bron E. et al., 2021 Astron. Astrophys., 645, A28.
Cabedo V. et al., 2021 Astron. Astrophys., 653, A166.
Carrasco-González C. et al., 2010 Science, 330, 6008, 1209.
Caselli P. et al., 1998 Astrophys. J., 499, 1, 234.
Cazzoli G. et al., 2017 Astron. Astrophys., 605, A20.
Ceccarelli C. et al., 2014 Astrophys. J. Lett., 790, 1, L1.
Chacón-Tanarro A. et al., 2017 Astron. Astrophys., 606, A142.
Chandrasekhar S. and Fermi E., 1953 Astrophys. J., 118, 113.
Chapillon E. et al., 2012 Astron. Astrophys., 537, A60.
Chapman N. L. et al., 2013 Astrophys. J., 770, 2, 151.
Chen M. C.-Y. et al., 2016a Astrophys. J., 826, 1, 95.
Chen X. et al., 2016b Astrophys. J., 824, 2, 72.
Chiang E. I. and Goldreich P., 1997 Astrophys. J., 490, 368.
Chiang H.-F. et al., 2012 Astrophys. J., 756, 2, 168.
Choi M. et al., 2011 Astrophys. J. Lett., 728, 2, L34.
Chou T.-L. et al., 2014 Astrophys. J., 796, 1, 70.
Ciardi A. and Hennebelle P., 2010 Mon. Not. R. Astron. Soc., 409, 1, L39.
Cieza L. A. et al., 2016 Nature, 535, 7611, 258.
Ciolek G. E. and Königl A., 1998 Astrophys. J., 504, 1, 257.
Clarke C. J. et al., 2018 Astrophys. J. Lett., 866, 1, L6.
Clemens D. P. et al., 2016 Astrophys. J., 833, 2, 176.
Codella C. et al., 2014 Astron. Astrophys., 568, L5.
Commerçon B. et al., 2011 Astron. Astrophys., 530, A13.
Commerçon B. et al., 2012 Astron. Astrophys., 545, A98.
Commerçon B. et al., 2022 Astron. Astrophys., 658, A52.
Cox E. G. et al., 2017 Astrophys. J., 851, 2, 83.
Cox E. G. et al., 2018 Astrophys. J., 855, 2, 92.
Crutcher R. M., 2012 Annu. Rev. Astron. Astrophys., 50, 29.
Crutcher R. M. and Kemball A. J., 2019 Frontiers in Astronomy and Space Sciences, 6, 66.
Crutcher R. M. et al., 2010 Astrophys. J., 725, 1, 466.
Dale J. E. et al., 2005 Mon. Not. R. Astron. Soc., 358, 1, 291.
Dapp W. B. et al., 2012 Astron. Astrophys., 541, A35.
Davis L., 1951 Physical Review, 81, 5, 890.
Dodson-Robinson S. E. et al., 2018 Astrophys. J. Lett., 868, 2, L37.
Doi Y. et al., 2020 Astrophys. J., 899, 1, 28.
Donati J.-F. et al., 2005 Nature, 438, 7067, 466.
Donati J. F. et al., 2020 Mon. Not. R. Astron. Soc., 491, 4, 5660.
Doty S. D. et al., 2002 Astron. Astrophys., 389, 446.
Dunham M. M. and Vorobyov E. I., 2012 Astrophys. J., 747, 1, 52.
Dunham M. M. et al., 2014 Protostars and Planets VI (H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning), p. 195.
Dzyurkevich N. et al., 2017 Astron. Astrophys., 603, A105.
Encalada F. J. et al., 2021 Astrophys. J., 913, 2, 149.
Eswaraiah C. et al., 2021 Astrophys. J. Lett., 912, 2, L27.
Evans N. J. et al., 2009 Astrophys. J. Suppl., 181, 2, 321.
Evans N. J. et al., 2015 Astrophys. J., 814, 1, 22.
Falceta-Gonçalves D. et al., 2008 Astrophys. J., 679, 1, 537.
Falgarone E. et al., 2008 Astron. Astrophys., 487, 1, 247.
Favre C. et al., 2013 Astrophys. J. Lett., 776, 2, L38.
Favre C. et al., 2017 Astron. Astrophys., 608, A82.
Favre C. et al., 2018 Astrophys. J., 859, 2, 136.
Feeney-Johansson A. et al., 2019 Astrophys. J. Lett., 885, 1, L7.
Fitz Axen M. et al., 2021 Astrophys. J., 915, 1, 43.
Frank A. et al., 2014 Protostars and Planets VI (H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning), p. 451.
Frimann S. et al., 2017 Astron. Astrophys., 602, A120.
Gaches B. A. L. and Offner S. S. R., 2018 Astrophys. J., 861, 2, 87.
Galametz M. et al., 2018 Astron. Astrophys., 616, A139.
Galametz M. et al., 2019 Astron. Astrophys., 632, A5.
Galametz M. et al., 2020 Astron. Astrophys., 644, A47.
Galli D. and Shu F. H., 1993 Astrophys. J., 417, 220.
Galván-Madrid R. et al., 2018 Astrophys. J., 868, 1, 39.
Gaudel M. et al., 2020 Astron. Astrophys., 637, A92.
Girart J. M. et al., 2006 Science, 313, 5788, 812.
Girart J. M. et al., 2009 Science, 324, 5933, 1408.
Goldreich P. and Kylafis N. D., 1982 Astrophys. J., 253, 606.
Gravity Collaboration et al., 2020 Nature, 584, 7822, 547.
Grenier I. A. et al., 2015 Annu. Rev. Astron. Astrophys., 53, 199.
Guillet V. et al., 2007 Astron. Astrophys., 476, 1, 263.
Guillet V. et al., 2020a Astron. Astrophys., 643, A17.
Guillet V. et al., 2020b Astron. Astrophys., 634, L15.
Harris R. J. et al., 2018 Astrophys. J., 861, 2, 91.
Harrison R. E. et al., 2021 Astrophys. J., 908, 2, 141.
Harsono D. et al., 2014 Astron. Astrophys., 562, A77.
Harsono D. et al., 2018 Nature Astronomy, 2, 646.
Harsono D. et al., 2021 Astron. Astrophys., 646, A72.
Hartigan P. and Wright A., 2015 Astrophys. J., 811, 1, 12.
Hartigan P. et al., 1994 Astrophys. J., 436, 125.
Heimsoth D. J. et al., 2021 arXiv e-prints, arXiv:2112.09848.
Hennebelle P. and Ciardi A., 2009 Astron. Astrophys., 506, L29.
Hennebelle P. and Fromang S., 2008 Astron. Astrophys., 477, 1, 9.
Hennebelle P. and Inutsuka S.-i., 2019 Frontiers in Astronomy and Space Sciences, 6, 5.
Hennebelle P. et al., 2016 Astrophys. J. Lett., 830, L8.
Hennebelle P. et al., 2020 Astron. Astrophys., 635, A67.
Hirano N. et al., 2010 Astrophys. J., 717, 1, 58.
Hirano S. et al., 2020 Astrophys. J., 898, 2, 118.
Hirashita H., 2012 Mon. Not. R. Astron. Soc., 422, 2, 1263.
Hirota T. et al., 2017 Nature Astronomy, 1, 0146.
Hirota T. et al., 2020 Astrophys. J., 896, 2, 157.
Hosokawa T. and Omukai K., 2009 Astrophys. J., 691, 1, 823.
Hsieh T.-H. et al., 2019 Astrophys. J., 884, 2, 149.
Hull C. L. H. and Zhang Q., 2019 Frontiers in Astronomy and Space Sciences, 6, 3.
Hull C. L. H. et al., 2013 Astrophys. J., 768, 2, 159.
Hull C. L. H. et al., 2014 Astrophys. J. Suppl., 213, 1, 13.
Hull C. L. H. et al., 2017a Astrophys. J., 847, 2, 92.
Hull C. L. H. et al., 2017b Astrophys. J. Lett., 842, 2, L9.
Imai M. et al., 2019 Astrophys. J. Lett., 873, 2, L21.
Inutsuka S., 2012 Progress of Theoretical and Experimental Physics, 2012, 1, 01A307.
Inutsuka S.-i. et al., 2010 Astrophys. J. Lett., 718, 2, L58.
Ivlev A. V. et al., 2019 Astrophys. J., 884, 2, 176.
Jacobsen S. K. et al., 2019 Astron. Astrophys., 629, A29.
Jhan K.-S. and Lee C.-F., 2021 Astrophys. J., 909, 1, 11.
Jin S. et al., 2016 Astrophys. J., 818, 1, 76.
Johns-Krull C. M. et al., 2009 Astrophys. J., 700, 2, 1440.
Jones T. J. et al., 2015 Astron. J., 149, 1, 31.
Joos M. et al., 2012 Astron. Astrophys., 543, A128.
Joos M. et al., 2013 Astron. Astrophys., 554, A17.
Jørgensen J. K. et al., 2009 Astron. Astrophys., 507, 2, 861.
Jørgensen J. K. et al., 2020 Annu. Rev. Astron. Astrophys., 58, 727.
Kama M. et al., 2016 Astron. Astrophys., 592, A83.
Kandori R. et al., 2017 Astrophys. J., 848, 2, 110.
Kandori R. et al., 2018 Astrophys. J., 857, 2, 100.
Kandori R. et al., 2020a Astrophys. J., 891, 1, 55.
Kandori R. et al., 2020b PASJ, 72, 1, 8.
Kataoka A. et al., 2015 Astrophys. J., 809, 1, 78.
Kataoka A. et al., 2016 Astrophys. J., 820, 1, 54.
Kataoka A. et al., 2017 Astrophys. J. Lett., 844, 1, L5.
Kawasaki Y. et al., 2021 Mon. Not. R. Astron. Soc., 504, 4, 5588.
Keto E. et al., 2015 Mon. Not. R. Astron. Soc., 446, 4, 3731.
Kirk J. M. et al., 2006 Mon. Not. R. Astron. Soc., 369, 3, 1445.
Klassen M. et al., 2014 Astrophys. J., 797, 4.
Klassen M. et al., 2016 Astrophys. J., 823, 28.
Ko C.-L. et al., 2020 Astrophys. J., 889, 2, 172.
Koga S. et al., 2019a Mon. Not. R. Astron. Soc., 484, 2, 2119.
Koga S. et al., 2019b Mon. Not. R. Astron. Soc., 484, 2, 2119.
Köhler M. et al., 2012 Astron. Astrophys., 548, A61.
Kölligan A. and Kuiper R., 2018 Astron. Astrophys., 620, A182.
Könyves V. et al., 2015 Astron. Astrophys., 584, A91.
Krasnopolsky R. et al., 2011 Astrophys. J., 733, 54.
Krügel E., 2007 An Introduction to the Physics of Interstellar Dust.
Krumholz M. R. et al., 2005 Astrophys. J. Lett., 618, 1, L33.
Kuffmeier M. et al., 2017 Astrophys. J., 846, 1, 7.
Kuffmeier M. et al., 2020 Astron. Astrophys., 639, A86.
Kuiper R. and Hosokawa T., 2018 Astron. Astrophys., 616, A101.
Kuiper R. et al., 2010 Astron. Astrophys., 511, A81.
Kunz M. W. and Mouschovias T. C., 2009 Astrophys. J., 693, 2, 1895.
Kwon W. et al., 2019 Astrophys. J., 879, 1, 25.
Lam K. H. et al., 2019 Mon. Not. R. Astron. Soc., 489, 4, 5326.
Lankhaar B. and Vlemmings W., 2019 Astron. Astrophys., 628, A14.
Lankhaar B. et al., 2018 Nature Astronomy, 2, 145.
Larson R. B., 1969 Mon. Not. R. Astron. Soc., 145, 271.
Larson R. B., 2010 Reports on Progress in Physics, 73, 1, 014901.
Lazarian A., 2005 Magnetic Fields in the Universe: From Laboratory and Stars to Primordial Structures, vol. 784 of American Institute of Physics Conference Series (E. M. de Gouveia dal Pino, G. Lugones, and A. Lazarian), pp. 42-53.
Lazarian A. and Hoang T., 2007 Mon. Not. R. Astron. Soc., 378, 3, 910.
Le Gouellec V. J. M. et al., 2019 Astrophys. J., 885, 2, 106.
Le Gouellec V. J. M. et al., 2020 Astron. Astrophys., 644, A11.
Lebreuilly U. et al., 2020 Astron. Astrophys., 641, A112.
Lebreuilly U. et al., 2021 Astrophys. J. Lett., 917, 1, L10.
Lee C.-F., 2020 Astron. Astrophys. Rev., 28, 1, 1.
Lee C.-F. et al., 2014 Astrophys. J., 786, 2, 114.
Lee C.-F. et al., 2017a Nature Astronomy, 1, 0152.
Lee C.-F. et al., 2017b Science Advances, 3, 4, e1602935.
Lee C.-F. et al., 2018a Astrophys. J., 856, 1, 14.
Lee C.-F. et al., 2018b Astrophys. J., 863, 1, 94.
Lee C.-F. et al., 2018c Nature Communications, 9, 4636.
Lee C.-F. et al., 2021a Astrophys. J. Lett., 907, 2, L41.
Lee C.-F. et al., 2021b Astrophys. J., 910, 1, 75.
Lee S. et al., 2020 Astrophys. J., 889, 1, 20.
Lee Y.-N. et al., 2021c Astron. Astrophys., 648, A101.
Lee Y.-N. et al., 2021d Astrophys. J., 922, 1, 36.
Lewis B. T. and Bate M. R., 2017 Mon. Not. R. Astron. Soc., 467, 3, 3324.
Li H. B. et al., 2014a Protostars and Planets VI (H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning), p. 101.
Li Z.-Y., 1999 Astrophys. J., 526, 2, 806.
Li Z.-Y. and McKee C. F., 1996 Astrophys. J., 464, 373.
Li Z.-Y. et al., 2011 Astrophys. J., 738, 180.
Li Z.-Y. et al., 2013 Astrophys. J., 774, 82.
Li Z. Y. et al., 2014b Protostars and Planets VI (H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning), p. 173.
Liu H. B. et al., 2018 Astron. Astrophys., 612, A54.
Liu J. et al., 2019 Astrophys. J., 877, 1, 43.
Liu J. et al., 2021 Astrophys. J., 919, 2, 79.
Liu J. et al., 2022 Astrophys. J., 925, 1, 30.
Livio M., 2011 Gamma Ray Bursts 2010, vol. 1358 of American Institute of Physics Conference Series (J. E. McEnery, J. L. Racusin, and N. Gehrels), pp. 329-333.
Longair M. S., 2011 High Energy Astrophysics, Cambridge University Press.
Looney L. W. et al., 2000 Astrophys. J., 529, 1, 477.
Louvet F. et al., 2018 Astron. Astrophys., 618, A120.
Machida M. N. and Basu S., 2019 Astrophys. J., 876, 2, 149.
Machida M. N. and Hosokawa T., 2013 Mon. Not. R. Astron. Soc., 431, 2, 1719.
Machida M. N. and Matsumoto T., 2011 Mon. Not. R. Astron. Soc., 413, 2767.
Machida M. N. et al., 2004 Mon. Not. R. Astron. Soc., 348, 1, L1.
Machida M. N. et al., 2006 Astrophys. J. Lett., 647, 2, L151.
Machida M. N. et al., 2007 Astrophys. J., 670, 1198.
Machida M. N. et al., 2008 Astrophys. J., 676, 2, 1088.
Machida M. N. et al., 2011 PASJ, 63, 555.
Machida M. N. et al., 2014 Mon. Not. R. Astron. Soc., 438, 2278.
Machida M. N. et al., 2016 Mon. Not. R. Astron. Soc., 463, 4246.
Manara C. F. et al., 2018 Astron. Astrophys., 618, L3.
Marchand P. et al., 2016 Astron. Astrophys., 592, A18.
Marchand P. et al., 2018 Astron. Astrophys., 619, A37.
Marchand P. et al., 2019 Astron. Astrophys., 631, A66.
Marchand P. et al., 2020 Astrophys. J., 900, 2, 180.
Maret S. et al., 2020 Astron. Astrophys., 635, A15.
Martin P. G. et al., 2012 Astrophys. J., 751, 1, 28.
Masson J. et al., 2016 Astron. Astrophys., 587, A32.
Masunaga H. and Inutsuka S., 2000 Astrophys. J., 531, 350.
Masunaga H. et al., 1998 Astrophys. J., 495, 346.
Matsumoto T. and Tomisaka K., 2004 Astrophys. J., 616, 1, 266.
Matsushita Y. et al., 2017 Mon. Not. R. Astron. Soc., 470, 1, 1026.
Matsushita Y. et al., 2021 Astrophys. J., 916, 1, 23.
Maureira M. J. et al., 2017 Astrophys. J., 849, 2, 89.
Maury A. J. et al., 2010 Astron. Astrophys., 512, A40.
Maury A. J. et al., 2011 Astron. Astrophys., 535, A77.
Maury A. J. et al., 2018 Mon. Not. R. Astron. Soc., 477, 2, 2760.
Maury A. J. et al., 2019 Astron. Astrophys., 621, A76.
McClure M. K. et al., 2016 Astrophys. J., 831, 167.
McKee C. F., 1989 Astrophys. J., 345, 782.
Mellon R. R. and Li Z.-Y., 2008 Astrophys. J., 681, 1356.
Mellon R. R. and Li Z.-Y., 2009 Astrophys. J., 698, 922.
Ménard F. and Duchêne G., 2004 Astron. Astrophys., 425, 973.
Mestel L., 1965 QJRAS, 6, 161.
Meyer D. M. et al., 2018 Mon. Not. R. Astron. Soc., 473, 3, 3615.
Mignon-Risse R. et al., 2020 Astron. Astrophys., 635, A42.
Mignon-Risse R. et al., 2021a Astron. Astrophys., 652, A69.
Mignon-Risse R. et al., 2021b Astron. Astrophys., 656, A85.
Miotello A. et al., 2014 Astron. Astrophys., 567, A32.
Miura H. et al., 2017 Astrophys. J., 839, 1, 47.
Miyake K. and Nakagawa Y., 1993 Icarus, 106, 20.
Mottram J. C. et al., 2013 Astron. Astrophys., 558, A126.
Mouschovias T. C., 1985 Astron. Astrophys., 142, 41.
Mouschovias T. C. et al., 1985 Astrophys. J., 291, 772.
Muñoz O. et al., 2021 Astrophys. J. Suppl., 256, 1, 17.
Myers A. T. et al., 2013 Astrophys. J., 766, 2.
Myers P. C. and Basu S., 2021 arXiv e-prints, arXiv:2104.02597.
Nakano T. and Nakamura T., 1978 PASJ, 30, 671.
Nakano T. and Umebayashi T., 1986 Mon. Not. R. Astron. Soc., 221, 319.
Nakano T. et al., 2002 Astrophys. J., 573, 199.
Nakatani R. et al., 2020 Astrophys. J. Lett., 895, 1, L2.
Norman C. and Heyvaerts J., 1985 Astron. Astrophys., 147, 2, 247.
Oberg K. I. and Bergin E. A., 2021 Phys. Rep., 893, 1.
Ohashi N. et al., 2014 Astrophys. J., 796, 2, 131.
Ohashi S. et al., 2021 Astrophys. J., 907, 2, 80.
Okuzumi S. et al., 2012 Astrophys. J., 752, 106.
Oppenheimer M. and Dalgarno A., 1974 Astrophys. J., 192, 29.
Ormel C. W. et al., 2009 Astron. Astrophys., 502, 3, 845.
Ossenkopf V., 1993 Astron. Astrophys., 280, 617.
Ossenkopf V. and Henning T., 1994 Astron. Astrophys., 291, 943.
Oya Y. et al., 2014 Astrophys. J., 795, 2, 152.
Oya Y. et al., 2016 Astrophys. J., 824, 2, 88.
Oya Y. et al., 2017 Astrophys. J., 837, 2, 174.
Oya Y. et al., 2019 Astrophys. J., 881, 2, 112.
Padovani M. et al., 2009 Astron. Astrophys., 501, 2, 619.
Padovani M. et al., 2014 Astron. Astrophys., 571, A33.
Padovani M. et al., 2015 Astron. Astrophys., 582, L13.
Padovani M. et al., 2018 Astron. Astrophys., 614, A111.
Padovani M. et al., 2021 Astron. Astrophys., 649, A149.
Paneque-Carreño T. et al., 2021 Astrophys. J., 914, 2, 88.
Pattle K. and Fissel L., 2019 Frontiers in Astronomy and Space Sciences, 6, 15.
Pérez L. M. et al., 2016 Science, 353, 6307, 1519.
Pineda J. E. et al., 2012 Astron. Astrophys., 544, L7.
Pineda J. E. et al., 2019 Astrophys. J., 882, 2, 103.
Pineda J. E. et al., 2020 Nature Astronomy, 4, 1158.
Plunkett A. L. et al., 2015 Nature, 527, 7576, 70.
Podio L. et al., 2014 Astron. Astrophys., 565, A64.
Podio L. et al., 2015 Astron. Astrophys., 581, A85.
Podio L. et al., 2016 Astron. Astrophys., 593, L4.
Podio L. et al., 2021 Astron. Astrophys., 648, A45.
Powell D. et al., 2019 Astrophys. J., 878, 2, 116.
Prentice A. J. R. and Ter Haar D., 1971 Mon. Not. R. Astron. Soc., 151, 177.
Price D. J. et al., 2012 Mon. Not. R. Astron. Soc., 423, 1, L45.
Pudritz R. E. and Norman C. A., 1986 Astrophys. J., 301, 571.
Pudritz R. E. and Ray T. P., 2019 Frontiers in Astronomy and Space Sciences, 6, 54.
Rao R. et al., 2009 Astrophys. J., 707, 2, 921.
Ray T. P. and Ferreira J., 2021 New A Rev., 93, 101615.
Rodríguez-Kamenetzky A. et al., 2016 Astrophys. J., 818, 1, 27.
Rosen A. et al., 2017 Journal of Computational Physics, 330, 924.
Rosen A. L. et al., 2016 Mon. Not. R. Astron. Soc., 463, 3, 2553.
Rosen A. L. et al., 2019 Astrophys. J., 887, 2, 108.
Saajasto M. et al., 2021 Astron. Astrophys., 647, A109.
Sadavoy S. I. et al., 2016 Astron. Astrophys., 588, A30.
Sadavoy S. I. et al., 2018 Astrophys. J., 859, 2, 165.
Sadavoy S. I. et al., 2019 Astrophys. J. Suppl., 245, 1, 2.
Sai J. et al., 2022 Astrophys. J., 925, 1, 12.
Sakai N. et al., 2014a Astrophys. J. Lett., 791, 2, L38.
Sakai N. et al., 2014b Nature, 507, 78.
Sakai N. et al., 2017 Mon. Not. R. Astron. Soc., 467, 1, L76.
Sakai N. et al., 2019 Nature, 565, 7738, 206.
Sanchis E. et al., 2021 Astron. Astrophys., 649, A19.
Santos-Lima R. et al., 2012 Astrophys. J., 747, 21.
Santos-Lima R. et al., 2021 Mon. Not. R. Astron. Soc., 503, 1, 1290.
Segura-Cox D. M. et al., 2015 Astrophys. J. Lett., 798, 1, L2.
Segura-Cox D. M. et al., 2018 Astrophys. J., 866, 2, 161.
Segura-Cox D. M. et al., 2020 Nature, 586, 7828, 228.
Seifried D. et al., 2011 Mon. Not. R. Astron. Soc., 417, 2, 1054.
Seifried D. et al., 2012a Mon. Not. R. Astron. Soc., 423, 1, L40.
Seifried D. et al., 2012b Mon. Not. R. Astron. Soc., 422, 1, 347.
Seifried D. et al., 2013 Mon. Not. R. Astron. Soc., 432, 3320.
Shakura N. I. and Sunyaev R. A., 1973 Astron. Astrophys., 24, 337.
Sheehan P. D. and Eisner J. A., 2017 Astrophys. J., 851, 1, 45.
Sheehan P. D. et al., 2020 Astrophys. J., 902, 2, 141.
Shu F. H., 1983 Astrophys. J., 273, 202.
Silsbee K. et al., 2018 Astrophys. J., 863, 2, 188.
Silsbee K. et al., 2020 Astron. Astrophys., 641, A39.
Soam A. et al., 2015 Astron. Astrophys., 573, A34.
Soam A. et al., 2019 Astrophys. J., 883, 1, 9.
Soler J. D. et al., 2016 Astron. Astrophys., 596, A93.
Spruit H. C. et al., 1997 Mon. Not. R. Astron. Soc., 288, 2, 333.
Stahler S. W. et al., 1994 Astrophys. J., 431, 341.
Steinacker J. et al., 2015 Astron. Astrophys., 582, A70.
Stephens I. W. et al., 2013 Astrophys. J. Lett., 769, 1, L15.
Surcis G. et al., 2013 Astron. Astrophys., 556, A73.
Tabone B. et al., 2017 Astron. Astrophys., 607, L6.
Takakuwa S. et al., 2014 Astrophys. J., 796, 1, 1.
Takakuwa S. et al., 2017 Astrophys. J., 837, 1, 86.
Takakuwa S. et al., 2018 Astrophys. J., 865, 1, 51.
Takeuchi T. et al., 2005 Astrophys. J., 627, 286.
Targon C. G. et al., 2011 Astrophys. J., 743, 1, 54.
Tassis K. et al., 2012 Astrophys. J., 754, 1, 6.
Tazzari M. et al., 2021 Mon. Not. R. Astron. Soc., 506, 4, 5117.
Teague R. et al., 2021 Astrophys. J., 922, 2, 139.
Terebey S. et al., 1984 Astrophys. J., 286, 529.
Testi L. et al., 2014 Protostars and Planets VI (H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning), p. 339.
Thi W. F. et al., 2001 Astrophys. J., 561, 2, 1074.
Tibbs C. T. et al., 2016 Mon. Not. R. Astron. Soc., 456, 3, 2290.
Tobin J. J. et al., 2016 Nature, 538, 7626, 483.
Tobin J. J. et al., 2020 Astrophys. J., 890, 2, 130.
Tomida K. et al., 2013 Astrophys. J., 763, 1, 6.
Tomida K. et al., 2015 Astrophys. J., 801, 117.
Tomida K. et al., 2017 Astrophys. J. Lett., 835, 1, L11.
Tomisaka K., 1998 Astrophys. J. Lett., 502, 2, L163.
Tomisaka K., 2000 Astrophys. J. Lett., 528, 1, L41.
Tomisaka K., 2002 Astrophys. J., 575, 1, 306.
Troland T. H. and Crutcher R. M., 2008 Astrophys. J., 680, 1, 457.
Tsukamoto Y. et al., 2015a Astrophys. J. Lett., 810, L26.
Tsukamoto Y. et al., 2015b Mon. Not. R. Astron. Soc., 452, 278.
Tsukamoto Y. et al., 2017a Astrophys. J., 838, 151.
Tsukamoto Y. et al., 2017b PASJ, 69, 95.
Tsukamoto Y. et al., 2018 Astrophys. J., 868, 1, 22.
Tsukamoto Y. et al., 2020 Astrophys. J., 896, 2, 158.
Tsukamoto Y. et al., 2021a Astrophys. J. Lett., 920, 2, L35.
Tsukamoto Y. et al., 2021b Astrophys. J., 913, 2, 148.
Tychoniec Ł. et al., 2020 Astron. Astrophys., 640, A19.
Tychoniec Ł. et al., 2021 arXiv e-prints, arXiv:2107.03696.
Uchida Y. and Shibata K., 1985 PASJ, 37, 515.
Umebayashi T. and Nakano T., 1990 Mon. Not. R. Astron. Soc., 243, 103.
Valdivia V. et al., 2019 Mon. Not. R. Astron. Soc., 488, 4, 4897.
van der Tak F. F. S. and van Dishoeck E. F., 2000 Astron. Astrophys., 358, L79.
van Gelder M. L. et al., 2021 arXiv e-prints, arXiv:2107.09750.
van't Hoff M. L. R. et al., 2020 Astrophys. J., 901, 2, 166.
Vaytet N. and Haugbølle T., 2017 Astron. Astrophys., 598, A116.
Vaytet N. et al., 2012 Astron. Astrophys., 543, A60.
Vaytet N. et al., 2018 Astron. Astrophys., 615, A5.
Verliat A. et al., 2020 Astron. Astrophys., 635, A130.
Vlemmings W. H. T. et al., 2019 Astron. Astrophys., 624, L7.
Vorobyov E. I. and Basu S., 2006 Astrophys. J., 650, 2, 956.
Vorobyov E. I. and Basu S., 2010 Astrophys. J., 719, 2, 1896.
Vorobyov E. I. and Basu S., 2015 Astrophys. J., 805, 2, 115.
Vrba F. J. et al., 1986 Astron. J., 92, 633.
Ward-Thompson D. et al., 2007 Protostars and Planets V (B. Reipurth, D. Jewitt, and K. Keil), p. 33.
Ward-Thompson D. et al., 2017 Astrophys. J., 842, 1, 66.
Wardle M., 2007 Astrophys. Space Sci., 311, 35.
Wardle M. and Ng C., 1999 Mon. Not. R. Astron. Soc., 303, 239.
Weiss B. P. et al., 2021 Science Advances, 7, 1, eaba5967.
Williams J. P. et al., 2019 Astrophys. J. Lett., 875, 2, L9.
Wong Y. H. V. et al., 2016 PASJ, 68, 4, 67.
Wurster J., 2016 PASA, 33, e041.
Wurster J. and Lewis B. T., 2020 Mon. Not. R. Astron. Soc., 495, 4, 3795.
Wurster J. et al., 2016 Mon. Not. R. Astron. Soc., 457, 1037.
Wurster J. et al., 2018a Mon. Not. R. Astron. Soc., 481, 2, 2450.
Wurster J. et al., 2018b Mon. Not. R. Astron. Soc., 475, 2, 1859.
Wurster J. et al., 2018c Mon. Not. R. Astron. Soc., 476, 2, 2063.
Xu W. and Kunz M. W., 2021 Mon. Not. R. Astron. Soc., 502, 4, 4911.
Yang H. et al., 2016 Mon. Not. R. Astron. Soc., 456, 3, 2794.
Yen H.-W. et al., 2014 Astrophys. J., 793, 1, 1.
Yen H.-W. et al., 2015a Astrophys. J., 812, 2, 129.
Yen H.-W. et al., 2015b Astrophys. J., 799, 2, 193.
Yen H.-W. et al., 2017 Astrophys. J., 834, 2, 178.
Yen H.-W. et al., 2018 Astron. Astrophys., 615, A58.
Yen H.-W. et al., 2019 Astrophys. J., 871, 2, 243.
Yen H.-W. et al., 2021a Astrophys. J., 916, 2, 97.
Yen H.-W. et al., 2021b Astrophys. J., 907, 1, 33.
Yoneda H. et al., 2016 Astrophys. J., 833, 105.
Yorke H. W. and Sonnhalter C., 2002 Astrophys. J., 569, 2, 846.
Ysard N. et al., 2019 Astron. Astrophys., 631, A88.
Zhang C.-P. et al., 2021 Astron. Astrophys., 646, A18.
Zhang Y. et al., 2018 Astrophys. J., 864, 1, 76.
Zhao B. et al., 2016 Mon. Not. R. Astron. Soc., 460, 2, 2050.
Zhao B. et al., 2018a Mon. Not. R. Astron. Soc., 473, 4, 4868.
Zhao B. et al., 2018b Mon. Not. R. Astron. Soc., 478, 2, 2723.
Zhao B. et al., 2020 Mon. Not. R. Astron. Soc., 492, 3, 3375.
Zhao B. et al., 2021 Mon. Not. R. Astron. Soc., 505, 4, 5142.
Zhu Z. et al., 2019 Astrophys. J. Lett., 877, 2, L18.
| [] |
[
"Diagnostic Quality Assessment for Low-Dimensional ECG Representations",
"Diagnostic Quality Assessment for Low-Dimensional ECG Representations"
] | [
"Péter Kovács \nDepartment of Numerical Analysis\nEötvös Loránd University\nPázmány Péter sétány 1/c1117BudapestHungary\n",
"Carl Böck \nInstitute of Signal Processing\nJKU LIT SAL eSPML Lab\nJohannes Kepler University Linz\nAltenberger Strasse 694040LinzAustria\n",
"Thomas Tschoellitsch \nClinic of Anesthesiology and Intensive Care Medicine\nJohannes Kepler University Linz\nKrankenhausstraße 94020LinzAustria\n",
"Mario Huemer \nInstitute of Signal Processing\nJohannes Kepler University Linz\nAltenberger Straße 694040LinzAustria\n",
"Jens Meier \nClinic of Anesthesiology and Intensive Care Medicine\nJohannes Kepler University Linz\nKrankenhausstraße 94020LinzAustria\n"
] | [
"Department of Numerical Analysis\nEötvös Loránd University\nPázmány Péter sétány 1/c1117BudapestHungary",
"Institute of Signal Processing\nJKU LIT SAL eSPML Lab\nJohannes Kepler University Linz\nAltenberger Strasse 694040LinzAustria",
"Clinic of Anesthesiology and Intensive Care Medicine\nJohannes Kepler University Linz\nKrankenhausstraße 94020LinzAustria",
"Institute of Signal Processing\nJohannes Kepler University Linz\nAltenberger Straße 694040LinzAustria",
"Clinic of Anesthesiology and Intensive Care Medicine\nJohannes Kepler University Linz\nKrankenhausstraße 94020LinzAustria"
] | [] | There have been several attempts to quantify the diagnostic distortion caused by algorithms that perform low-dimensional electrocardiogram (ECG) representation. However, there is no universally accepted quantitative measure that allows the diagnostic distortion arising from denoising, compression, and ECG beat representation algorithms to be determined. Hence, the main objective of this work was to develop a framework to enable biomedical engineers to efficiently and reliably assess diagnostic distortion resulting from ECG processing algorithms. We propose a semiautomatic framework for quantifying the diagnostic resemblance between original and denoised/reconstructed ECGs. Evaluation of the ECG must be done manually, but is kept simple and does not require medical training. In a case study, we quantified the agreement between raw and reconstructed (denoised) ECG recordings by means of kappa-based statistical tests. The proposed methodology takes into account that the observers may agree by chance alone. Consequently, for the case study, our statistical analysis reports the "true", beyond-chance agreement in contrast to other, less robust measures, such as simple percent agreement calculations. Our framework allows efficient assessment of clinically important diagnostic distortion, a potential side effect of ECG (pre-)processing algorithms. Accurate quantification of a possible diagnostic loss is critical to any subsequent ECG signal analysis, for instance, the detection of ischemic ST episodes in long-term ECG recordings. | 10.1016/j.compbiomed.2022.106086 | [
"https://export.arxiv.org/pdf/2210.00218v1.pdf"
] | 252,259,739 | 2210.00218 | ce2e9013729acca55ace5b75fa6b14f83b8699d2 |
Diagnostic Quality Assessment for Low-Dimensional ECG Representations
Péter Kovács
Department of Numerical Analysis
Eötvös Loránd University
Pázmány Péter sétány 1/c, 1117 Budapest, Hungary
Carl Böck
Institute of Signal Processing
JKU LIT SAL eSPML Lab
Johannes Kepler University Linz
Altenberger Strasse 69, 4040 Linz, Austria
Thomas Tschoellitsch
Clinic of Anesthesiology and Intensive Care Medicine
Johannes Kepler University Linz
Krankenhausstraße 9, 4020 Linz, Austria
Mario Huemer
Institute of Signal Processing
Johannes Kepler University Linz
Altenberger Straße 69, 4040 Linz, Austria
Jens Meier
Clinic of Anesthesiology and Intensive Care Medicine
Johannes Kepler University Linz
Krankenhausstraße 9, 4020 Linz, Austria
Diagnostic Quality Assessment for Low-Dimensional ECG Representations
A R T I C L E I N F O
Keywords: baseline removal, clinical evaluation, diagnostic distortion measures, ECG denoising, kappa statistics
A B S T R A C T
There have been several attempts to quantify the diagnostic distortion caused by algorithms that perform low-dimensional electrocardiogram (ECG) representation. However, there is no universally accepted quantitative measure that allows the diagnostic distortion arising from denoising, compression, and ECG beat representation algorithms to be determined. Hence, the main objective of this work was to develop a framework to enable biomedical engineers to efficiently and reliably assess diagnostic distortion resulting from ECG processing algorithms. We propose a semiautomatic framework for quantifying the diagnostic resemblance between original and denoised/reconstructed ECGs. Evaluation of the ECG must be done manually, but is kept simple and does not require medical training. In a case study, we quantified the agreement between raw and reconstructed (denoised) ECG recordings by means of kappa-based statistical tests. The proposed methodology takes into account that the observers may agree by chance alone. Consequently, for the case study, our statistical analysis reports the "true", beyond-chance agreement in contrast to other, less robust measures, such as simple percent agreement calculations. Our framework allows efficient assessment of clinically important diagnostic distortion, a potential side effect of ECG (pre-)processing algorithms. Accurate quantification of a possible diagnostic loss is critical to any subsequent ECG signal analysis, for instance, the detection of ischemic ST episodes in long-term ECG recordings.
Introduction
In medical applications, expert systems require both the extraction of reliable information from biomedical signals and efficient representation of this knowledge. Extracting relevant features is essential, since these are usually either fed to rule-based expert systems or machine learning methods or used directly by medical experts to derive a diagnosis. Clearly, the main objective of biomedical signal processing methods is therefore to efficiently represent important medical knowledge, eliminating noisy and redundant signal features while keeping the informative ones. However, applying denoising or feature reduction algorithms may lead to unintentional removal of diagnostic information. Common metrics in signal processing, such as signal-to-noise ratio (SNR), mean squared error (MSE), and standard error (STDERR), quantify the numerical error, but not the loss of diagnostic information. For instance, in the case of evaluating an electrocardiogram (ECG) that is superimposed with noise (e.g., power-line interference or baseline wander), the values of the previously mentioned objective measures may improve significantly if denoising algorithms are applied or the signal is represented in a low-dimensional space. However, these measures do not consider possible diagnostic distortions that may change the interpretation of the ECG curves and, as a consequence, the diagnosis. Signal-quality indices (SQI) form another class of metrics which lies in between quantitative and qualitative distortion measures. These application-oriented metrics are intended to quantify the suitability of ECG signals for deriving reliable estimates of particular medical features, such as the heart rate [1,2]. Note that the use cases of SQIs are limited to the application in question, and thus they are not applicable for performing a thorough agreement analysis between diagnostic features of the raw ECG and its low-dimensional representations. In fact, to date there is no universally accepted quantitative measure that captures the diagnostic distortion of such biomedical signal processing methodologies.
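To make this concrete, the sketch below computes the two standard metrics for a toy signal; the "ECG" here is a placeholder sine wave, not a real recording, and the smoother is deliberately crude. Both scores improve after denoising regardless of whether diagnostically relevant wave shapes survived.

import numpy as np

def mse(x, y):
    return np.mean((x - y) ** 2)

def snr_db(reference, estimate):
    """SNR of an estimate relative to a reference signal, in dB."""
    return 10.0 * np.log10(np.sum(reference**2) / np.sum((reference - estimate) ** 2))

# Illustrative only: 'ecg' and 'denoised' stand in for the raw and processed leads.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
ecg = np.sin(2 * np.pi * 5 * t)                               # placeholder signal
noisy = ecg + 0.2 * rng.standard_normal(t.size)               # additive noise
denoised = np.convolve(noisy, np.ones(15) / 15, mode="same")  # crude moving average

print(f"MSE: noisy = {mse(ecg, noisy):.4f}, denoised = {mse(ecg, denoised):.4f}")
print(f"SNR: noisy = {snr_db(ecg, noisy):5.1f} dB, denoised = {snr_db(ecg, denoised):5.1f} dB")
# Both metrics improve here, yet such a smoother may equally well flatten a
# diagnostically relevant ST deviation; the numbers alone cannot tell.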
In this paper, we propose a testing framework that is suitable for measuring the diagnostic resemblance between preprocessed ECG signals and original recordings. This enables medical validation of ECG processing algorithms, such as filtering and data compression, and extraction/monitoring of clinical features. The latter has recently grown in importance, mainly due to the increasing amounts of data recordable (e.g., long-term ECG recordings), which require automated information extraction in order to be manageable by medical experts. In addition to standard clinical features (e.g., QT interval, QRS duration), the shapes of individual waves and characteristic shape changes over time are of high diagnostic interest and have been shown to carry important information [3]. However, it is exactly this information that might get lost when applying algorithms for low-dimensional information representation. Eliminating noisy signal features, such as the baseline wander, may inadvertently lead to the removal of ischemic ST episodes [4], which has a direct influence on diagnosis. Further, the interpretation of wave-shape features might change: for instance, an originally positive wave may then be identified as a biphasic one. Consequently, a methodology is needed that enables quantitative representation of such potentially diagnostically relevant changes. However, reliable evaluation of whether diagnostic information has changed due to algorithmic processing will always require human expertise. Our test design minimizes the workload and does not require the evaluating person to be medically qualified, since the test involves deciding between distinct wave shape types rather than making a specific diagnosis. Biomedical engineers can therefore easily carry out this evaluation themselves and can improve their algorithms more rapidly and efficiently, as they do not have to wait for feedback from medical experts who are able to diagnose a possibly very specific pathology. We demonstrate this by comparing original ECG recordings (selected from [5]) and their corresponding low-dimensional representations, which were obtained by applying an approach we have previously developed [6]. Our framework is intended to help biomedical engineers in evaluating the diagnostic distortion of an ECG processing algorithm for real-world recordings in the early stages of development. In contrast to [6], where we evaluated diagnostic distortion based on synthetic data only, in this work we elaborate on real ECG recordings, as they appear in daily clinical practice. Although synthetic data are crucial in biomedical signal processing, since they provide the ground truth (clean signal), such signals do not capture the wide variety of possible ECG morphologies and noise. It should be emphasized, however, that alongside the proposed diagnostic distortion analysis, a complete performance report must assess the robustness of the tested algorithms with respect to the noise level [7], the perturbation of the input parameters [8], etc.
The methodology we propose for agreement analysis is inspired by former work on diagnostic distortion measures, which we review in Section 2. The design concepts of our testing framework are discussed in Section 3. Section 4 briefly describes our previous work on low-dimensional ECG representations [9], followed by an analysis of the agreement between the diagnostic features of original ECGs and their approximations. Finally, Sections 5 and 6 discuss the results and conclude the paper, respectively.
Related work
The approaches most closely related to ours seek to quantify diagnostic relevance in ECG signal compression. Compression is very similar to feature extraction in that ECG data samples are represented in a low-dimensional (feature) space from which the original ECG signal can be restored. In order to prove that the reconstruction preserves diagnostic information, the quality of the restored ECG signal must be evaluated.
Diagnostic distortion can be measured by objective methods that are based on mathematical models. The most commonly used objective evaluation method uses the percent root-mean-square difference (PRD), measuring the squared error between the original and the reconstructed ECG signal. The PRD is a numerical quantity that assumes equal error contribution over the whole ECG. This is not appropriate for evaluating diagnostic distortion, since numerical errors of the same degree in, for instance, the approximations of the QRS complex and the P wave do not imply the same level of diagnostic error. The weighted diagnostic distortion (WDD) measure was an early attempt to tackle this problem and to quantify the diagnostic error of compressed ECG signals [10]. It compares amplitude, duration, and shape features of original and reconstructed ECGs, and assigns weights to the corresponding error terms based on their diagnostic relevance. These features are, however, extracted automatically, and thus the procedure is prone to inaccuracies of the ECG delineation algorithms. Later, Al-Fahoum [11] introduced the wavelet-based weighted PRD (WWPRD), which quantifies the diagnostic error in wavelet space: in each wavelet subband, the PRD between the wavelet coefficients of the original and the reconstructed ECG is calculated, and the error contribution of each subband is then weighted based on its diagnostic significance. Although the WWPRD can be easily calculated and correlates very well with clinically evaluated results, it is heavily influenced by the presence of noise in the relevant subbands. Manikandan et al. [12] therefore proposed the wavelet-energy-based diagnostic distortion (WEDD) measure, a reweighted variant of the WWPRD. In WEDD, the weights are equal to the relative energy between the overall signal and the corresponding subbands. This way, low-energy high-frequency noise can be suppressed in the PRD calculation. However, low-frequency high-energy noise which overlaps with the diagnostic content of the ECG is counted in the WEDD. Fig. 1 illustrates this phenomenon for an ECG distorted by a sinusoidal baseline drift: even though the diagnostic content remained unchanged, the WEDD, WWPRD, and PRD measures suggest a significant distortion of the ECG curve. All of these metrics thus have limitations for medical validation, as the baseline-removal example shows.
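To make these quantities concrete, the following minimal Python sketch computes the PRD and a WEDD-style distortion. The wavelet family ("bior4.4") and the decomposition depth are illustrative assumptions, not necessarily the settings used in [11,12]; the weighting follows the description above (relative subband energy of the original signal).

```python
import numpy as np
import pywt  # PyWavelets

def prd(x, x_hat):
    """Percent root-mean-square difference between a signal and its reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def wedd(x, x_hat, wavelet="bior4.4", level=5):
    """WEDD-style distortion in the spirit of [12]: per-subband PRD weighted
    by the relative energy of each wavelet subband of the original signal."""
    cx = pywt.wavedec(x, wavelet, level=level)
    cy = pywt.wavedec(x_hat, wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in cx])
    weights = energy / energy.sum()
    sub_prd = np.array([prd(u, v) if energy[i] > 0 else 0.0
                        for i, (u, v) in enumerate(zip(cx, cy))])
    return float(np.dot(weights, sub_prd))
```

As the baseline-drift example in Fig. 1 illustrates, a large drift inflates all three numbers even when the diagnostic content is untouched, which is precisely the limitation these objective measures share.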
Clearly, application of such objective evaluation methods does not require feedback from cardiologists, and thus the test results are not influenced by intra- and interobserver variability. Despite their advantages, objective distortion measures are not fully accepted in the ECG signal processing community due to their lack of medical validation [13]. Subjective methods, such as the mean opinion score (MOS), are therefore considered to be the gold standard [13]. In this case, quality is assessed by cardiologists, who check whether original and reconstructed signals imply the same diagnosis. The results of these tests are then converted into a single measure called the MOS error value, which quantifies the diagnostic distortion of the compression/feature extraction technique under study. The MOS test introduced by Zigel et al. [10] and its variant [14] remain in use for evaluating ECG processing algorithms. In these studies, cardiologists rated the general quality of the ECG recordings and investigated whether the same interpretation would result from the medical features of the original and the processed ECGs. To this end, a set of ECG features was considered which included both simple shape features, such as positivity/negativity of the T wave, and more complex morphological characteristics, such as left/right bundle branch block (LBBB/RBBB) and premature ventricular contraction (PVC). However, some of these features are ambiguous (e.g., waveform symmetry, which is difficult to distinguish from slightly asymmetric cases), and thus increase intra- and interobserver variability. Other morphological characteristics are simply too complex to identify without the help of clinicians. Another drawback of previous MOS tests is that they consider only the overall percentage of agreement and ignore the possibility that the observers may agree by chance alone. For instance, the presence of a delta wave in the QRS complex is relatively rare [15], and therefore in most cases cardiologists will simply exclude this diagnosis.
We developed a carefully designed MOS test for evaluating the diagnostic relevance of ECG preprocessing algorithms that uses only simple shape features which can be recognized by biomedical experts and which reduce the intra- and interobserver variability. The results of our test support design decisions and speed up the development of ECG processing algorithms, since input from the medical experts is then only needed in the final testing phase. Note that current approaches consider the proportion of observed agreement alone as an index of concordance. We, in contrast, suggest the use of Cohen's kappa in assessing the performance of ECG preprocessing algorithms, as it takes agreement by chance into account [16,17,18].

Table 1: Recordings selected for evaluating the ECG beat representation algorithm [5].
Test design
The main objective of the test design was to allow fast and reliable evaluation of the diagnostic distortion caused by low-dimensional ECG signal representation. Together with a medical expert, we selected 26 recordings from the Massachusetts General Hospital/Marquette Foundation (MGH/MF) database [5,19]. This database has a detailed patient guide, which allows recordings to be chosen according to the occurrence of various pathologies and waveforms. An overview of the selected recordings is given in Table 1; for more detailed information we refer to the patient guide provided in [19]. Our main inclusion criterion for selecting the recordings was the occurrence of various (abnormal) wave morphologies, for instance, positive/negative, biphasic, and flattened waves. Further, different manifestations of the QRS complex (R, Rs, RS, etc.) and of the ST segment (e.g., elevated or depressed) were important criteria, since these are key diagnostic features which should be preserved by ECG compression/denoising algorithms. A medical expert chose the recordings based on the patient guide and visual judgement of the ECG strips.
Subsequently, as described in Sec. 4.1, these recordings were transformed into a low-dimensional representation using Hermite and sigmoidal functions combined with spline interpolation [6]. These functions have been extensively studied in several ECG-related medical applications, such as data compression [20], heartbeat clustering [21], and myocardial infarction detection [22]. In order to investigate the diagnostic distortion of Hermite-based ECG decomposition, a test set was built that included 32 original 3-lead ECG recordings and 32 reconstructed 3-lead ECG recordings, 12 of which (6 original and 6 reconstructed) occurred twice. This was done to allow assessment of self-consistency (within-observer agreement). In total, 64 3-lead ECG recordings (i.e., 192 ECG strips) were therefore evaluated by experts. The test set was split into 4 subsets, each of which was to be processed on a different day to avoid exhaustion and possible resulting inaccuracies that would bias the results. The recordings were arranged in a pseudo-randomized order with the restriction that original and reconstructed ECG recordings were not allowed to occur in the same subset; a sketch of this assignment is given at the end of this section. To simulate daily clinical practice, the ECG recordings were presented on a standard ECG grid (10 mm/mV and 25 mm/s). An example recording is shown in Fig. 2 (note that this recording is scaled for better visibility). The questionnaire for assessing diagnostic distortion was then designed based mainly on two factors:
First, in our preliminary experiments we realized that even highly experienced physicians were not able to identify with confidence more complex pathologies, such as a left bundle branch block (LBBB) or a right bundle branch block (RBBB), based on the ECG recordings alone. This is because additional laboratory tests or additional ECG leads would be needed for a sufficiently accurate diagnosis. Therefore, if the questionnaire offers options such as LBBB and RBBB, experts tend not to tick these boxes unless it is a very clear case, which could of course bias the evaluation significantly. The results may lead one to believe that the reconstructed (low-dimensional) ECG has retained the complete diagnostic information, but this might just be due to the ECG always being labeled as normal by the expert.
Second, development of a signal representation algorithm should require medical expertise only in the final testing phase. We therefore included only simple evaluation criteria for judging the diagnostic distortion of the P and T waves, the QRS complex, and the ST segment, as illustrated in Fig. 2.
Specifically, this means that in a first step the ECG wave segments of all available leads are to be evaluated according to their general shape features, for instance, insignificant, positive, negative, and biphasic in the case of the P wave. Clearly, one of these options must be selected. Additionally, depending on the segment investigated, the evaluating person may tick an optional box indicating a (general) pathology of the wave.
Subsequently, the quality of the single leads is to be rated, where the experts are asked to focus on the signal clarity of the waves. This allows assessment of whether the low-dimensional representation degrades, retains, or even increases the quality of the ECG recording. A quality improvement would imply that noisy signal features were successfully eliminated while important diagnostic features were retained. Finally, a main diagnosis is to be given as free text. It should be mentioned that this is considered optional and should only be answered by medical experts (in the final testing stage). This allows assessing whether the main diagnosis changed between the ECG recording and its low-dimensional signal representation and serves as an additional source for identifying a possible diagnostic distortion.
For our case study, a total of three physicians were briefed with the information above and with additional instructions in written (see supplementary material) and oral form.
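The subset assignment mentioned above (the original and reconstructed versions of a record must never fall into the same subset) can be sketched as follows. The function name, the number of subsets, and the seeding are illustrative assumptions; the exact randomization procedure is not specified in the text beyond this constraint.

```python
import random

def assign_subsets(record_ids, n_subsets=4, seed=0):
    """Distribute original/reconstructed versions of each record over the
    test subsets so that the two versions never share a subset."""
    rng = random.Random(seed)
    subsets = [[] for _ in range(n_subsets)]
    for rec in record_ids:
        i, j = rng.sample(range(n_subsets), 2)   # two distinct subset indices
        subsets[i].append((rec, "original"))
        subsets[j].append((rec, "reconstructed"))
    for s in subsets:                            # pseudo-randomized order inside a subset
        rng.shuffle(s)
    return subsets
```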
Case-study: low-dimensional ECG representation
In the best case, low-dimensional ECG representation preserves important diagnostic/morphological features, while redundant and noisy signal features are mostly eliminated. This is, however, a difficult task, since the frequency spectra of the ECG and of possible noise (e.g., baseline wander) overlap in most cases [23]. Therefore, the diagnostic distortion of these preprocessing techniques must be investigated before they are applied to real-world problems. In this study, we tested the reliability of our previously published approach (Sec. 4.1) by assessing its between-method agreement. More specifically, by means of statistical tests we checked whether the two measurements (i.e., the original and the reconstructed low-dimensional signal) produce the same diagnostic features as defined in our MOS test.
In order to estimate the consensus between original and reconstructed ECGs, we computed the proportion of observed agreement and the κ coefficients between each (original and reconstructed) feature pair. In the case of dichotomous features, these quantities can be calculated as follows:
$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_o = \frac{a + d}{N}, \qquad p_e = \frac{f_1 g_1 + f_2 g_2}{N^2}, \qquad (1)$$
where p_e denotes the chance agreement, f_i and g_i are the marginal totals, and a and d are the numbers of agreements on present and absent values of the corresponding feature (see Tab. 2). Note that, in the case of morphological features, we defined more than two mutually exclusive categories, for which the calculations in Eq. (1) can be generalized according to [24,25]. Cohen's kappa is widely used in reliability studies in clinical research [16,17,18]. The kappa values range from −1 to 1; κ = 0 suggests that the observed agreement is not better than would be expected by chance alone, while κ = 1 implies perfect agreement, and negative values indicate potential systematic disagreement between the observers. Other values of kappa can be interpreted based on Tab. 3, as proposed by Landis and Koch [26].
In our case, achieving perfect agreement (i.e., κ = 1) is unrealistic, because the evaluating cardiologists typically have different levels of experience and mental fatigue. Taking this into account, we also report κ_max, which expresses the maximum attainable kappa provided that the marginal totals f_i, g_i are fixed. The value of κ_max can easily be calculated by substituting p_o with p_{o,max} = [min(f_1, g_1) + min(f_2, g_2)]/N in Eq. (1). The difference κ_max − κ indicates the unachieved agreement beyond chance constrained by the marginal totals [16].
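A minimal, self-contained sketch of Eq. (1) and of κ_max is given below; it also covers the multi-category generalization in the sense of [24,25] by operating on a full confusion matrix. The standard-error formula used for the interval is a common first-order approximation and is not necessarily the exact procedure behind the confidence intervals reported in Tables 4-6.

```python
import numpy as np

def kappa_stats(table):
    """Cohen's kappa for a square table of category counts (Eq. (1), Tab. 2);
    works for the dichotomous 2x2 case and for larger confusion matrices."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    row, col = t.sum(axis=1), t.sum(axis=0)        # marginal totals f_i, g_i
    p_o = np.trace(t) / n                          # observed agreement
    p_e = np.dot(row, col) / n**2                  # chance agreement
    if p_e == 1.0:                                 # the n/a case (denominator vanishes)
        return float("nan"), float("nan"), (float("nan"), float("nan"))
    kappa = (p_o - p_e) / (1.0 - p_e)
    p_o_max = np.minimum(row, col).sum() / n       # fixed-marginals maximum
    kappa_max = (p_o_max - p_e) / (1.0 - p_e)
    se = np.sqrt(p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2))   # rough large-sample SE
    return kappa, kappa_max, (kappa - 1.96 * se, kappa + 1.96 * se)

# 2x2 example with cells a, b, c, d as in Tab. 2:
k, k_max, ci = kappa_stats([[40, 5], [3, 52]])
```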
In ECG compression, it is common practice to provide the proportion of observed agreement p_o as a measure of diagnostic concordance [14]. However, its results can be misleading, especially when most of the observations fall within a single category [27]. For instance, previous studies [10,14] considered the presence of delta waves in the QRS complex; however, compared to other QRS shape features these occur relatively rarely [15]. In most cases, cardiologists would therefore agree on the absence of delta waves in both the original and the filtered signals. To avoid such effects, alongside the percent-agreement figures we report the corresponding κ coefficients, their confidence intervals, and the maximum attainable kappa.
ECG beat representation algorithm
Among the classes of low-dimensional ECG representations, the Hermite-based decomposition has been studied widely, especially for extracting features prior to applying machine learning algorithms (see, e.g., Chapter 12 in Ref. [28]). These approaches utilize similarities between the shapes of Hermite functions and ECG waveforms. In a recent work, we extended the theoretical framework of Hermite-based ECG models by sigmoidal functions combined with piecewise polynomial interpolation [6]. One of the main goals of this work was to reduce noisy and redundant signal features while retaining diagnostically important waveform features. Hence, we sought not only to reduce dimensionality, but also to simultaneously denoise the signal and to segment the ECG into its fundamental waves (P-QRS-T). We used adaptive Hermite and sigmoidal functions to extract important ECG waveform information, while piecewise polynomial interpolation captured mainly the undesired baseline wander. Additionally, because of the (smooth) basis functions we selected, high-frequency noise was also reduced. Figure 3 shows an example ECG trace, which was segmented into its fundamental parts, that is, P wave, QRS complex, ST/T segment, T wave, and baseline estimation. As can be seen, the low-dimensional representation accurately describes the underlying ECG. This was achieved by developing a nonlinear least-squares model with an appropriate set of basis functions. This model was first tailored precisely to a single person by nonlinear global optimization, and then readjusted beat-by-beat by means of nonlinear local optimization. Optimization was carried out with respect to the translation and dilation of the basis functions used to represent the single waves (P-QRS-T). Thus, we created a person-specific model which allows tracking morphological changes in a low-dimensional space while retaining the characteristic shape information of the ECG trace.
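As a rough illustration of the expansion underlying [6], the sketch below fits an orthonormal Hermite basis to a single wave by linear least squares for a fixed dilation. The full method additionally optimizes translations and dilations nonlinearly and adds sigmoidal and spline terms, all of which are omitted here; function names and the default number of basis functions are assumptions.

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def hermite_function(n, t):
    """Orthonormal Hermite function h_n(t) = H_n(t) exp(-t^2/2) / sqrt(2^n n! sqrt(pi))."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(t, c) * np.exp(-t ** 2 / 2) / sqrt(2.0 ** n * factorial(n) * sqrt(pi))

def fit_wave(segment, dilation, n_funcs=6):
    """Least-squares Hermite expansion of one ECG wave for a fixed dilation."""
    m = len(segment)
    t = (np.arange(m) - 0.5 * m) / dilation            # centered, dilated time axis
    basis = np.stack([hermite_function(n, t) for n in range(n_funcs)], axis=1)
    coef, *_ = np.linalg.lstsq(basis, np.asarray(segment, float), rcond=None)
    return coef, basis @ coef                          # coefficients, reconstruction
```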
Experiments and results
Following the recommendations of Watson and Petrie [17], we conducted three experiments to evaluate the reliability of our previously published approach to low-dimensional ECG representation [6]. To this end, three cardiologists (C1, C2, C3), who were not allowed to discuss their answers with each other, filled out the blind test described in Section 3 and illustrated in Figure 2. We partitioned the test records into four work packages to decrease the daily workload for the cardiologists and thus minimize factors that can negatively influence the evaluation, such as mental fatigue and lack of motivation.
Between-method agreement
In this experiment, we studied the diagnostic concordance between the features of the original (raw) ECG and of the reconstructed signal [6]. Fig. 4 shows that the proportion of observed agreements is about 80% for all cardiologists and all ECG features except for the P wave, where cardiologist C1 achieved only 60% agreement. However, note that the self-consistency of C1 was also low for the P wave (see, e.g., Fig. 7). This is due to the P wave being a low-amplitude ECG component which has more ambiguous characteristics in the presence of noise than the QRS and the T waves. This also explains why the between-method concordances in Fig. 4 vary considerably between cardiologists in the case of the P wave. We also evaluated the diagnostic concordance between the pathological wave shapes of the original and the reconstructed ECG signals. According to Fig. 5, the observed agreement was greater than 80% in most cases, and close to 100% for the P wave and for the QRS complex. Consequently, the low-dimensional beat representation investigated did not significantly increase ambiguity in terms of pathological versus non-pathological waveshape class.
We used κ statistics to analyze the chance-corrected observed agreement between the features of the original and the filtered ECG (see Tab. 4). The highest κ, with the narrowest confidence intervals and the closest κ_max, was achieved for the QRS complex. Second best was the T wave, with substantial agreement between the original ECG and the reconstructed signal (cf. Table 3). The κ scores of the ST segment morphology are lower, which indicates fair agreement between the features. However, it seems that the low-dimensional representation preserved the ST depression and elevation, as indicated by high κ scores. Furthermore, the maximum attainable kappa was observed for C2 and C3 in the case of ST depression. This corroborates our previous claims for our joint Hermite-sigmoid model [6], namely, that the sigmoid functions perform well in modeling the on/offset shifts of the QRS complex and the ST elevation/depression. As with the diagnostic concordance in Fig. 4, the κ scores of the P wave morphology are not as consistent as the scores of the previously mentioned features. In this case, three different levels of agreement (i.e., fair, moderate, and substantial) can be observed between the original and the filtered ECG features. Note that none of the confidence intervals includes 0, and we would thus reject the null hypothesis κ = 0. This means that no evidence of agreement by chance alone was found.

Table 4: Using κ statistics to analyze between-method agreement between original and filtered ECG features [κ (κ_max) values and confidence intervals for cardiologists C1-C3; P wave, QRS complex, T wave, and ST segment].
Inter-rater agreement
We also evaluated the reproducibility of our tests by analyzing the inter-rater agreement. Fig. 6 shows the inter-rater diagnostic concordance. As with the between-method concordance, the percent agreement is around 80%, except for the QRS complex, for which the value is considerably lower than for the other features. The results show that the interpretation of these features can vary between cardiologists. For instance, a qRs-type QRS complex with very low-amplitude q and s waves can easily be misclassified as an R, qR, or Rs wave depending on the stringency of the examiner. This explains the relatively low inter-rater concordance, which is also supported by the κ statistics in Tab. 5. The κ values related to the QRS morphology are lower than those for the T wave, but the uncertainty of these estimates of κ is also higher. Overall, in most cases there was moderate agreement between the cardiologists on the morphological features, except for the T wave, where the level of agreement was substantial. The discrepancies were probably caused by the different levels of medical experience and by the fact that the clinical standards and decision rules applied can vary between cardiologists.
Within-observer agreement
To assess the repeatability of our test, we studied the within-observer concordance (Fig. 7) by using 12 repeated records including three leads. The agreement observed was much higher than in the between-method and inter-rater cases. This was to be expected, since an individual cardiologist's interpretation of ECG features should not vary much. The results indicate high self-consistency among the medical experts, which is also supported by the κ statistics. In fact, κ is very close to κ_max and suggests almost perfect agreement for the P wave and the QRS complex, and substantial agreement for the T wave and the ST segment.

Table 5: Using κ statistics to analyze inter-rater agreement for the ECG features [κ (κ_max) values and confidence intervals for the rater pairs C1-C2, C1-C3, and C2-C3; P wave, QRS complex, T wave, and ST segment].
The κ values are very low for C2 and C3 in the cases of general ST morphology and ST elevation, respectively. This is due to the first paradox of κ, which is caused by the high prevalence of the corresponding ST features. For instance, C3 considered ST elevation to be absent in almost all the repeated records, and agreed with himself in 35 of the overall 36 cases including the three leads. Although we would expect almost perfect agreement, the high prevalence reduces the value of kappa, since p_o ≈ p_e (see, e.g., [29,30]). For the same reason, κ is not applicable (n/a) in the case of elevated ST for C2, who achieved perfect agreement with himself on the absence of elevated ST in 36 out of 36 cases. This implies perfect agreement, but the denominator in Eq. (1) becomes zero due to p_e = 1. In summary, we observed a high level of self-consistency among the cardiologists, which demonstrates the robustness of this study.
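A quick numerical check of this paradox, using counts consistent with the text (one self-disagreement out of 36 leads; the exact placement of the disagreeing cell is an assumption):

```python
N = 36
a, b, c, d = 0, 1, 0, 35          # "elevated ST" almost always judged absent
p_o = (a + d) / N                                      # 0.9722: near-perfect raw agreement
p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / N**2   # 0.9722 as well, due to prevalence
kappa = (p_o - p_e) / (1 - p_e)                        # exactly 0.0: the first kappa paradox
```

This reproduces the 0.00 entry reported for C3 in Tab. 6 despite 97% raw self-agreement.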
Discussion
Although κ is the most commonly used agreement measure in the literature, it is often criticized as being somewhat difficult to interpret in particular situations [16]. For instance, the prevalence of the attributes affects the magnitude of κ, which was in fact the case with the ST morphologies in Tab. 6. This effect becomes apparent when the proportion of agreements on one attribute differs significantly from those for the others. High prevalence causes high chance agreement p_e, which reduces the value of κ accordingly. Regarding our experiments on the various types of concordance, we found that the attributes of positive P and T waves and the horizontal ST segment had the highest prevalence. This was to be expected, since these are the most common waveforms in the ECG. In these cases, low values of κ do not necessarily imply low rates of overall agreement. Therefore, alongside the value of κ, we also reported the diagnostic concordance for each experiment in Figs. 4-7. Note that there is greater potential for disagreement in the case of a large number of optional categories, as is the case for the QRS complex. Thus, the high values of κ indicate very strong agreement, especially for the QRS morphologies, where we used 10 different shape categories. Generally, in order to counteract high prevalence and the resulting low κ values, this approach could be extended with interesting/rare ECG recordings, which would result in a more heterogeneous set of wave shapes.
We also evaluated the general quality scores for the original signal and for the reconstructed ECG. Fig. 8 plots the differences between the two scores (original minus reconstructed) for each cardiologist and for the mean. Excluding the outliers, the differences have negative signs, which indicates an improved quality in the case of the reconstructed signal [6]. Although this visual enhancement is expected, filtering does not necessarily result in better diagnostic quality. For instance, FIR and IIR filters can remove the noise in the targeted frequency band, but they may also introduce ringing artifacts due to the well-known Gibbs phenomenon [28]. However, the between-method agreement study and the quality score differences show that our joint Hermite-sigmoid model [6] represents the ECG in a low-dimensional space without diagnostic distortion, and even enhances the visual quality of the ECG in most cases.
Hence, compared to the objective measures PRD, WWPRD [11], and WEDD [12], we obtain a more reliable assessment of the diagnostic distortion and signal quality of the reconstructed signal. While Fig. 9 illustrates that the wavelet-based objective measures in particular perform well in the case of (very) low- and very high-frequency noise (record index 8, mgh056), which does not overlap with the relevant ECG subbands, Fig. 10 reveals their weaknesses (record index 20, mgh184). In this case the ECG is superimposed with baseline wander noise that overlaps with relevant ECG subbands, and therefore the objective measures indicate a high diagnostic distortion. In fact, the high values (PRD = 67.5%, WWPRD = 29.0%, WEDD = 59.7%) all correspond to low quality groups according to Tab. 8 in [12]. However, the signal quality is significantly improved by the low-dimensional representation (Fig. 10); hence, the objective measures are misleading for this recording. Indeed, the visual inspection of the original and the filtered ECGs conducted by three cardiologists confirmed the quality improvement for this record (see Fig. 8) and did not indicate any important diagnostic loss. Clearly, the objective measures are helpful and reliable in many cases (e.g., Fig. 9); nevertheless, they are of limited use for noise overlapping with relevant ECG subbands, which demands frameworks, such as the one suggested in this work, that overcome this limitation.
Furthermore, as illustrated in Figure 2, the questionnaire also offers the option of proposing a main diagnosis for the three ECG leads. This is intended to be answered by medical experts only, who are briefed that they do not have to give a definitive answer, but their best guess based on the three leads. This provides an additional source for identifying diagnostic distortions which might not be covered by the standard questionnaire (in case the diagnosis differs significantly between raw and reconstructed signals). Clearly, this question is challenging and should therefore, if it is to bring value, only be answered in the final stage of algorithm evaluation by experienced, meticulous experts. In our case, the three experts gave a main diagnosis, showing good agreement in most of the cases, that is, 73%, 88%, and 92%, respectively. A more detailed analysis by a medical expert, who subsequently investigated why the raw and reconstructed ECG recordings led to different diagnoses, uncovered a loss of diagnostic information in the ST segment. On this basis, our previously published algorithm [6] can be further improved in the future. Alesanco et al. [14] argued that this third question should be presented in a semiblind way, that is, showing both the raw and the reconstructed ECGs to the medical expert at the same time and asking whether they would evaluate them differently. This could potentially reduce intra-subject variability caused, for instance, by varying levels of attention when judging the raw and the reconstructed recordings. However, the drawback is that, even unintentionally, one typically tends to find the same characteristics in two recordings if they are presented at the same time. Consequently, this may lead to missing an important diagnostic loss, and therefore we believe that the blind test is more appropriate in this case.
Conclusion
We have proposed a methodology for quantifying the diagnostic distortion of ECG signal processing algorithms, for instance, for filtering, segmentation, and data compression. These low-dimensional ECG representations may lead to distortion of the diagnostic information contained in the ECG, which we quantified using Cohen's kappa. Note that κ is affected by prevalence, and thus the corresponding quality scores are not meant to be used for direct comparisons between the ECG processing algorithms of different studies.
Instead, our goal was to design a testing framework that includes a questionnaire which minimizes the time needed to train non-medical staff to evaluate the diagnostic distortion of ECG decompositions. To this end, we chose scoring rubrics such that the interpretation of original and reconstructed ECGs is clear and some level of objectivity is imposed on the rating scale. The proposed test is therefore free of ambiguous medical features, such as symmetry and notches. In a case study, we considered Hermite-based characterization of ECG waveforms, which is a very popular topic in this field. In particular, we showed that low-dimensional heartbeat signal representation by means of Hermite and sigmoidal functions preserves the diagnostically relevant features of the ECG.
Figure 1: Analyzing the diagnostic distortion after perfect baseline removal, using objective measures: PRD = 20.4%, WWPRD = 31.2%, WEDD = 26.0%, which corresponds to the quality groups "not bad" and "bad" according to Tab. 8 in [12].

Figure 2: Example recording for the evaluation of ECG preprocessing/compression algorithms. The single leads are investigated according to the wave shapes and possible pathologies present, before judging the general quality of the ECG lead and optionally giving a main diagnosis.

Figure 3: Simultaneous baseline wander removal, wave segmentation, and low-dimensional beat representation [6].

Figure 4: Between-method diagnostic concordance for the ECG features extracted from the original and the filtered signals.

Figure 5: Between-method diagnostic concordance for the pathological wave shapes extracted from the original and the filtered signals.

Figure 6: Inter-rater diagnostic concordance for the ECG features.

Figure 7: Within-observer diagnostic concordance for ECG features.

Figure 8: Quality score differences between the original and the filtered ECG signals.

Figure 9: For (very) low- and high-frequency noise (not in the relevant ECG subbands), the wavelet-based objective measures accurately capture the low diagnostic distortion of the low-dimensional ECG representation (WWPRD = 9.46%, WEDD = 7.89%), while PRD = 14.38% already indicates a diagnostic distortion.

Figure 10: The high values of the objective measures (PRD = 67.5%, WWPRD = 29.0%, WEDD = 59.7%) indicate a high diagnostic distortion, although the low-dimensional representation perfectly removes the noise and keeps the most important diagnostic features, which is in line with the results of our framework.
The questionnaire (cf. Fig. 2) comprises three parts, each answered per lead (I, II, V):

1. Interpretation (check the according boxes):
   P wave: insignificant, positive, negative, biphasic; optional flag: pathologic.
   QRS complex: R, Rs, RS, rS, QS, Qr, QR, qR, qRs, rSr'; optional flag: pathologic.
   T wave: positive, negative, biphasic, ascending, descending, flat/flattened; optional flag: pathologic.
   ST segment: horizontal, ascending, descending, depressed, elevated; optional flag: pathologic.
2. General quality score for the single leads (highlight one number for each lead): 1 = cannot interpret safely, 2 = redo recording if at all possible, 3 = tolerable, 4 = good, 5 = excellent.
3. What main diagnosis would you give for this record? (free text)
Table 2: Contingency table in the case of dichotomous ECG features.

                                Feature of the filtered ECG
Feature of the original ECG     Present     Absent      Totals
Present                         a           b           f_1
Absent                          c           d           f_2
Totals                          g_1         g_2         N
Table 3: Strength-of-agreement interpretation of kappa values [26].

Kappa statistic     Strength of agreement
< 0.00              Poor
0.00 - 0.20         Slight
0.21 - 0.40         Fair
0.41 - 0.60         Moderate
0.61 - 0.80         Substantial
0.81 - 1.00         Almost perfect
Table 6: Using κ statistics to analyze within-observer agreement for the ECG features. Entries are κ (κ_max); confidence intervals in parentheses. The "Depressed" and "Elevated" rows apply to the ST segment only.

                  κ (κ_max)                                   Confidence intervals
                  C1           C2           C3               C1             C2             C3
P wave
  Morphology      0.49 (0.49)  0.82 (0.91)  0.82 (0.82)      (0.22, 0.76)   (0.58, 1.00)   (0.58, 1.00)
QRS complex
  Morphology      0.82 (0.89)  0.85 (0.89)  0.86 (0.86)      (0.67, 0.97)   (0.72, 0.99)   (0.74, 0.99)
T wave
  Morphology      0.79 (0.92)  0.67 (0.79)  0.78 (0.82)      (0.62, 0.96)   (0.46, 0.87)   (0.60, 0.96)
ST segment
  Morphology      0.73 (0.91)  0.08 (0.28)  0.78 (0.78)      (0.44, 1.00)   (-0.38, 0.54)  (0.54, 1.00)
  Depressed       0.66 (1.00)  0.88 (1.00)  0.61 (0.61)      (0.42, 0.91)   (0.72, 1.00)   (0.25, 0.97)
  Elevated        0.79 (0.79)  n/a (n/a)    0.00 (0.00)      (0.37, 1.00)   (n/a, n/a)     (-1.00, 1.00)
References

[1] C. Orphanidou, T. Bonnici, P. Charlton, D. Clifton, D. Vallance, L. Tarassenko, Signal-quality indices for the electrocardiogram and photoplethysmogram: Derivation and applications to wireless monitoring, IEEE Journal of Biomedical and Health Informatics 19 (2015) 832-838.
[2] C. Orphanidou, I. Drobnjak, Quality assessment of ambulatory ECG using wavelet entropy of the HRV signal, IEEE Journal of Biomedical and Health Informatics 21 (2017) 1216-1223.
[3] P. Laguna, J. P. Martínez Cortés, E. Pueyo, Techniques for ventricular repolarization instability assessment from the ECG, Proceedings of the IEEE 104 (2016) 392-415.
[4] G. Lenis, N. Pilia, A. Loewe, W. H. W. Schulze, O. Dössel, Comparison of baseline wander removal techniques considering the preservation of ST changes in the ischemic ECG: A simulation study, Computational and Mathematical Methods in Medicine 2017 (2017).
[5] J. P. Welch, P. J. Ford, R. S. Teplick, R. M. Rubsamen, The Massachusetts General Hospital-Marquette Foundation hemodynamic and electrocardiographic database - comprehensive collection of critical care waveforms, Journal of Clinical Monitoring 7 (1991) 96-97.
[6] C. Böck, P. Kovács, P. Laguna, J. Meier, M. Huemer, ECG beat representation and delineation by means of variable projection, IEEE Transactions on Biomedical Engineering 68 (2021) 2997-3008.
[7] G. B. Moody, W. Muldrow, R. G. Mark, A noise stress test for arrhythmia detectors, Computers in Cardiology 11 (1984) 381-384.
[8] F. Jager, G. B. Moody, R. G. Mark, Protocol to assess robustness of ST analysers: a case study, Physiological Measurement 25 (2004) 1-15.
[9] K. Samiee, P. Kovács, M. Gabbouj, Epileptic seizure classification of EEG time-series using rational discrete short-time Fourier transform, IEEE Transactions on Biomedical Engineering 62 (2014) 541-552.
[10] Y. Zigel, A. Cohen, A. Katz, The weighted diagnostic distortion (WDD) measure for ECG signal compression, IEEE Transactions on Biomedical Engineering 47 (2000) 1424-1430.
[11] A. S. Al-Fahoum, Quality assessment of ECG compression techniques using a wavelet-based diagnostic measure, IEEE Transactions on Information Technology in Biomedicine 10 (2006) 182-191.
[12] M. S. Manikandan, S. Dandapat, Wavelet energy based diagnostic distortion measure for ECG, Biomedical Signal Processing and Control 2 (2007) 80-96.
[13] A. Němcová, R. Smíšek, L. Maršánová, L. Smital, M. Vítek, A comparative analysis of methods for evaluation of ECG signal quality after compression, BioMed Research International 2018 (2018) 1868519:1-26.
[14] A. Alesanco, J. García, Automatic real-time ECG coding methodology guaranteeing signal interpretation quality, IEEE Transactions on Biomedical Engineering 55 (2008) 2519-2527.
[15] S. R., G. L., F. F., C. J. C., C. J. M., Prevalence and electrocardiographic forms of the Wolff-Parkinson-White syndrome, Arch. Mal. Coeur Vaiss. 75 (1982) 1389-1399.
[16] J. Sim, C. C. Wright, The kappa statistic in reliability studies: Use, interpretation, and sample size requirements, Physical Therapy 85 (2005) 257-268.
[17] P. F. Watson, A. Petrie, Method agreement analysis: A review of correct methodology, Theriogenology 73 (2010) 1167-1179.
[18] D. Machin, M. J. Campbell, Design of Studies for Medical Research, John Wiley & Sons Ltd, Chichester, UK, 2005.
[19] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, H. E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals, Circulation 101 (2000) 215-220.
[20] A. Sandryhaila, S. Saba, M. Püschel, J. Kovacevic, Efficient compression of QRS complexes using Hermite expansion, IEEE Transactions on Signal Processing 60 (2012) 947-955.
[21] M. Lagerholm, C. Peterson, G. Braccini, L. Edenbrandt, L. Sörnmo, Clustering ECG complexes using Hermite functions and self-organizing maps, IEEE Transactions on Biomedical Engineering 47 (2000) 838-848.
[22] H. Haraldsson, L. Edenbrandt, M. Ohlsson, Detecting acute myocardial infarction in the 12-lead ECG using Hermite expansions and neural networks, Artificial Intelligence in Medicine 32 (2004) 127-136.
[23] L. Sörnmo, P. Laguna, Chapter 7 - ECG signal processing, in: Bioelectrical Signal Processing in Cardiac and Neurological Applications, Academic Press, Burlington, 2005, pp. 453-566.
[24] A. J. Conger, Integration and generalization of kappas for multiple raters, Psychological Bulletin 88 (1980) 322-328.
[25] S. M. Haley, J. S. Osberg, Kappa coefficient calculation using multiple ratings per subject: a special communication, Physical Therapy 69 (1989) 970-974.
[26] J. R. Landis, G. G. Koch, The measurement of observer agreement for categorical data, Biometrics 33 (1977) 159-174.
[27] S. E. Stemler, A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability, Practical Assessment, Research, and Evaluation 9 (2004) 1-12.
[28] G. D. Clifford, F. Azuaje, P. McSharry, Advanced Methods and Tools for ECG Data Analysis, Artech House, Massachusetts, USA, 2006.
[29] A. R. Feinstein, D. V. Cicchetti, High agreement but low kappa: I. The problems of two paradoxes, Journal of Clinical Epidemiology 43 (1990) 543-549.
[30] D. V. Cicchetti, A. R. Feinstein, High agreement but low kappa: II. Resolving the paradoxes, Journal of Clinical Epidemiology 43 (1990) 551-558.
Stability of the d-wave pairing with respect to the intersite Coulomb repulsion in cuprate superconductors

V. V. Val'kov,¹ D. M. Dzebisashvili,¹ M. M. Korovushkin,¹ and A. F. Barabanov²

¹Kirensky Institute of Physics, Federal Research Center KSC SB RAS, 660036 Krasnoyarsk, Russia
²Vereshchagin Institute for High Pressure Physics, 108840 Troitsk, Russia

(3 Nov 2016; arXiv:1611.00158; doi:10.1016/j.jmmm.2016.12.090)
https://arxiv.org/pdf/1611.00158v2.pdf

Abstract. Within the spin-fermion model for cuprate superconductors, the influence of the intersite Coulomb interactions V_2 and V'_2 between holes located at the next-nearest-neighbor oxygen ions of the CuO2 plane on the implementation of the d_{x²−y²}-wave pairing is studied. It is shown that the d-wave pairing can be suppressed only for unphysically large values of V_2 and V'_2.
INTRODUCTION
It is known that the real structure of the CuO2 plane is characterized by the spatial separation of the subsystem of holes located at the oxygen ions and the subsystem of spins localized at the copper ions (Fig. 1). Besides, a number of features are caused by the presence of two oxygen ions in the unit cell of the copper-oxygen plane. The minimal realistic microscopic model for cuprates is the three-band p−d model (the Emery model) [1,2]. This model takes into account the d_{x²−y²} orbitals of the copper ions and the p_x and p_y orbitals of the oxygen ions. However, along with its realism, the multiband character of the Emery model makes the analysis of cuprate physics cumbersome. That is why a number of studies in this direction are carried out in the framework of the Hubbard model and its effective low-energy variants, such as the t−J and t−J* models on the simple square lattice. In these models, the same fermions form the charge and the spin subsystems.
Along with a number of important results, such an approach has a serious disadvantage: the Cooper pairing of fermions caused by the kinematic [3], exchange [4,5], and spin-fluctuation mechanisms considered in the Hubbard [6,7], t−J [4,5], or t−J* [8,9] models is suppressed by the intersite Coulomb repulsion V_1 of charge carriers located at neighboring sites. This effect is most pronounced in the d channel [10], and the Cooper instability disappears completely at V_1 ∼ 1−2 eV.
In our previous paper [11], it was shown that, because of the two-orbital character of the subsystem of holes located at the oxygen sites and the spatial separation of this subsystem from the subsystem of spins at the copper ions, the superconducting phase in high-T_c cuprates is stable with respect to the strong Coulomb repulsion of holes located at the nearest-neighbor oxygen sites if the order parameter has the d_{x²−y²} symmetry. This effect is due to the symmetry properties of the Coulomb potential.

Fig. 1: Here V_1 denotes the Coulomb interaction between holes located at the nearest-neighbor oxygen sites, and V_2 and V'_2 correspond to the Coulomb interactions of holes located at the next-nearest-neighbor oxygen sites.
Note that in Ref. [11] the stability of the d-wave pairing was proved only for the case of the intersite Coulomb repulsion V_1 of holes located at the nearest-neighbor oxygen ions, while the role of the Coulomb repulsion V_2 between holes located at more distant oxygen ions remained unclear (the possible influence of V_2 on the superconducting d-wave pairing has also been mentioned in Ref. [12]). In this paper, we study the role of the Coulomb interaction between holes located at the next-nearest-neighbor oxygen ions of the CuO2 plane in the implementation of the superconducting d_{x²−y²}-wave pairing.
MODEL
In the strongly correlated regime, when the Hubbard repulsion energy U_d is large, i.e., U_d > Δ_pd ≫ t_pd, the p−d model is reduced to the spin-fermion model [13,14] describing the subsystem of oxygen holes interacting with the spins localized at the copper ions. The Hamiltonian of the spin-fermion model is represented in the form
$$\hat H = \hat H_0 + \hat J + \hat I + \hat V, \qquad (1)$$

$$\hat H_0 = \sum_{\mathbf k\alpha}\Big[\xi_0(k_x)\,a^\dagger_{\mathbf k\alpha}a_{\mathbf k\alpha} + \xi_0(k_y)\,b^\dagger_{\mathbf k\alpha}b_{\mathbf k\alpha} + t_{\mathbf k}\big(a^\dagger_{\mathbf k\alpha}b_{\mathbf k\alpha} + b^\dagger_{\mathbf k\alpha}a_{\mathbf k\alpha}\big)\Big],$$

$$\hat J = \frac{J}{N}\sum_{f\mathbf k\mathbf q\alpha\beta} e^{if(\mathbf q-\mathbf k)}\, u^\dagger_{\mathbf k\alpha}\big(\mathbf S_f\,\boldsymbol\sigma_{\alpha\beta}\big)\,u_{\mathbf q\beta}, \qquad \hat I = \frac{I}{2}\sum_{\langle f,m\rangle} \mathbf S_f \mathbf S_m,$$

$$\hat V = V_2\sum_f \hat n_{f+\frac{x}{2}}\,\hat n_{f+\frac{x}{2}+y} + V_2\sum_f \hat n_{f+\frac{y}{2}}\,\hat n_{f+\frac{y}{2}+x} + V'_2\sum_f \hat n_{f+\frac{x}{2}}\,\hat n_{f+\frac{x}{2}+x} + V'_2\sum_f \hat n_{f+\frac{y}{2}}\,\hat n_{f+\frac{y}{2}+y}, \qquad (2)$$
where
$$\xi_0(k_{x(y)}) = \varepsilon_p - \mu + \tau\,(1-\cos k_{x(y)}), \qquad t_{\mathbf k} = (2\tau - 4t)\,\sin\frac{k_x}{2}\,\sin\frac{k_y}{2},$$

$$u_{\mathbf k\beta} = \sin\frac{k_x}{2}\,a_{\mathbf k\beta} + \sin\frac{k_y}{2}\,b_{\mathbf k\beta},$$

$$\tau = \frac{t_{pd}^2}{\Delta_{pd}}\left(1 - \frac{\Delta_{pd}}{U_d - \Delta_{pd} - 2V_{pd}}\right), \qquad J = \frac{4t_{pd}^2}{\Delta_{pd}}\left(1 + \frac{\Delta_{pd}}{U_d - \Delta_{pd} - 2V_{pd}}\right), \qquad (3)$$

$$I = \frac{4t_{pd}^4}{(\Delta_{pd}+V_{pd})^2}\left(\frac{1}{U_d} + \frac{2}{2\Delta_{pd}+U_p}\right).$$
When writing the Hamiltonian (1), we take into account that the hopping integrals in the first and the second terms can have different signs for different hopping directions owing to the different phases of the wave functions.
Below we use the commonly accepted set of parameters of the model: t_pd = 1.3 eV, Δ_pd = 3.6 eV, U_d = 10.5 eV, V_pd = 1.2 eV [15-17]. Note that for this set, the superexchange energy I = 0.136 eV (1570 K) agrees well with the experimental data on cuprate superconductors [17]. For the hopping integral of the holes we use the value t = 0.1 eV [18], and we suppose that the parameters of the intersite Coulomb interactions are V_2 = V'_2 = 0.5−1.5 eV.
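A quick numerical check confirms that the quoted parameter set reproduces the stated coupling constants via Eq. (3). The value U_p is not listed in this excerpt; the value U_p = 4.0 eV used below is an assumption chosen so that the quoted superexchange I is recovered.

```python
# Evaluate Eq. (3): the quoted parameters give tau ~ 0.1 eV, J = 3.4 eV, I = 0.136 eV.
t_pd, D_pd, U_d, V_pd, U_p = 1.3, 3.6, 10.5, 1.2, 4.0   # all in eV; U_p is an assumption

den = U_d - D_pd - 2 * V_pd                              # = 4.5 eV
tau = t_pd**2 / D_pd * (1 - D_pd / den)                  # ~0.094 eV
J   = 4 * t_pd**2 / D_pd * (1 + D_pd / den)              # ~3.38 eV
I   = 4 * t_pd**4 / (D_pd + V_pd)**2 * (1 / U_d + 2 / (2 * D_pd + U_p))   # ~0.136 eV
```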
It is important that the exchange energy between the localized and itinerant spins calculated using expression (3) is large, namely, J = 3.4 eV ≫ τ ≈ 0.1 eV. Therefore, to describe the oxygen hole dynamics it is necessary to take the exchange interaction into account rigorously. This problem is solved using the following basis set of operators [18,19]:
$$a_{\mathbf k\alpha}, \qquad b_{\mathbf k\alpha}, \qquad L_{\mathbf k\alpha} = \frac{1}{N}\sum_{f\mathbf q\beta} e^{if(\mathbf q-\mathbf k)}\,(\mathbf S_f\,\boldsymbol\sigma_{\alpha\beta})\,u_{\mathbf q\beta}, \qquad (4)$$
where the third operator describes the strong spin-charge coupling.
EQUATIONS FOR GREEN'S FUNCTIONS
For consideration of the conditions for the Cooper instability, we supplement the basis set (4) by the operators (ᾱ = −α)

$$a^\dagger_{-\mathbf k\bar\alpha}, \qquad b^\dagger_{-\mathbf k\bar\alpha}, \qquad L^\dagger_{-\mathbf k\bar\alpha}. \qquad (5)$$
The equations for the normal (G_{ij}) and anomalous (F_{ij}) Green's functions obtained by the method of [20,21] can be represented in the form (j = 1, 2, 3)
$$(\omega-\xi_x)G_{1j} = \delta_{1j} + t_{\mathbf k}G_{2j} + J_xG_{3j} + \Delta_{1\mathbf k}F_{1j},$$
$$(\omega-\xi_y)G_{2j} = \delta_{2j} + t_{\mathbf k}G_{1j} + J_yG_{3j} + \Delta_{2\mathbf k}F_{2j},$$
$$(\omega-\xi_L)G_{3j} = \delta_{3j}K_{\mathbf k} + (J_xG_{1j} + J_yG_{2j})K_{\mathbf k} + \Delta_{3\mathbf k}F_{3j},$$
$$(\omega+\xi_x)F_{1j} = \Delta^*_{1\mathbf k}G_{1j} - t_{\mathbf k}F_{2j} - J_xF_{3j},$$
$$(\omega+\xi_y)F_{2j} = \Delta^*_{2\mathbf k}G_{2j} - t_{\mathbf k}F_{1j} - J_yF_{3j},$$
$$(\omega+\xi_L)F_{3j} = \Delta^*_{3\mathbf k}G_{3j} - (J_xF_{1j} + J_yF_{2j})K_{\mathbf k}. \qquad (6)$$
Here, G_{11} = ⟨⟨a_{k↑}|a†_{k↑}⟩⟩, G_{21} = ⟨⟨b_{k↑}|a†_{k↑}⟩⟩, and G_{31} = ⟨⟨L_{k↑}|a†_{k↑}⟩⟩. The functions G_{i2} and G_{i3} are determined in a similar way, with the only difference that a†_{k↑} is replaced by b†_{k↑} and L†_{k↑}, respectively. The anomalous Green's functions are defined as F_{11} = ⟨⟨a†_{−k↓}|a†_{k↑}⟩⟩, F_{21} = ⟨⟨b†_{−k↓}|a†_{k↑}⟩⟩, and F_{31} = ⟨⟨L†_{−k↓}|a†_{k↑}⟩⟩.
For F_{i2} and F_{i3}, the same type of notation regarding the second index is used. The functions involved in (6) are determined by the expressions
$$\xi_{x(y)} = \xi_0(k_{x(y)}), \qquad J_{x(y)} = J\sin\frac{k_{x(y)}}{2}, \qquad K_{\mathbf k} = \frac34 - C_1\gamma_{1\mathbf k},$$
$$\xi_L(\mathbf k) = \varepsilon_p - \mu - 2t + \frac{5\tau}{2} - J + \Big[(\tau-2t)\big(-C_1\gamma_{1\mathbf k} + C_2\gamma_{2\mathbf k}\big) + \frac{\tau}{2}\big(-C_1\gamma_{1\mathbf k} + C_3\gamma_{3\mathbf k}\big) + \frac{JC_1}{4}\big(1+4\gamma_{1\mathbf k}\big) - IC_1\big(\gamma_{1\mathbf k}+4\big)\Big]K^{-1}_{\mathbf k}. \qquad (7)$$
Here, γ_{jk} are the square-lattice invariants: γ_{1k} = (cos k_x + cos k_y)/2, γ_{2k} = cos k_x cos k_y, and γ_{3k} = (cos 2k_x + cos 2k_y)/2. In the course of deriving (6), we assumed that the state of the localized moments corresponds to a quantum spin liquid. Therefore, the spin correlation functions C_j = ⟨S_0 S_{r_j}⟩ satisfy the relations
C j = 3 S x 0 S x rj = 3 S y 0 S y rj = 3 S z 0 S z rj ,(8)
where r j is the position of a copper ion within the coordination sphere j. Besides, S x f = S y f = S z f = 0. From (6), it follows that the spectrum of the Fermi excitations in the normal phase is determined by the solution of the dispersion equation
det k (ω) = (ω − ξ x )(ω − ξ y )(ω − ξ L ) − 2J x J y t k K k −(ω − ξ y )J 2 x K k − (ω − ξ x )J 2 y K k − (ω − ξ L )t 2 k = 0. (9)
The spectrum is characterized by three bands ǫ 1k , ǫ 2k and ǫ 3k [22]. The branch ǫ 1k with the minimum at a point close to (π/2, π/2) of the Brillouin zone arises owing to the strong spin-fermion coupling. At the low value of the number of holes per one oxygen ion n p , the dynamics of holes is determined by the characteristics of the lower band ǫ 1k . This band is separated by an appreciable gap from the upper bands ǫ 2k and ǫ 3k [22]. The introduced order parameters ∆ j,k are related to the anomalous averages as follows
∆ 1k = − 2 N q V 2 cos(k y − q y ) + V ′ 2 cos(k x − q x ) a q↑ a −q↓ , ∆ 2k = − 2 N q V 2 cos(k x − q x ) + V ′ 2 cos(k y − q y ) b q↑ b −q↓ , ∆ 3k = 1 N q I k−q L q↑ L −q↓ + C 1x a q↑ a −q↓ +C 1y b q↑ b −q↓ + C 1 ψ q a q↑ b −q↓ + b q↑ a −q↓ K −1 q − 1 N q V 2 (C 1 cos k y − C 2 γ 2k ) cos q y +V ′ 2 − 3 8 + C 1 cos k x − C 3 2 cos 2k x cos q x K −1 q a q↑ a −q↓ − 1 N q V 2 (C 1 cos k x − C 2 γ 2k ) cos q x(10)+V ′ 2 − 3 8 + C 1 cos k y − C 3 2 cos 2k y cos q y K −1 q b q↑ b −q↓ .
Here C 1x(1y) = C 1 sin 2 q x(y) 2 , ψ k = sin k x 2 sin k y 2 and I k = 4Iγ 1k .
EQUATIONS FOR THE SUPERCONDUCTING ORDER PARAMETERS
For the analysis of the conditions for the appearance of the Cooper instability, we express the anomalous Green's functions in terms of the ∆ * lk parameters in the linear approximation
F nm (k, ω) = 3 l=1 S (l) nm (k, ω)∆ * lk /Det k (ω),(11)
where
Det k (ω) = −det k (ω)det k (−ω), S(1)11 (k, ω) = Q 3y (k, ω)Q 3y (k, −ω), S(2)11 (k, ω) = S (1) 22 (k, ω) = Q 3 (k, ω)Q 3 (k, −ω), S(1)33 (k, ω) = K k S (3) 11 (k, ω) = K 2 k Q y (k, ω)Q y (k, −ω), S(2)22 (k, ω) = Q 3x (k, ω)Q 3x (k, −ω), S(2)33 (k, ω) = K k S (3) 22 (k, ω) = K 2 k Q x (k, ω)Q x (k, −ω), S(1)12 (k, ω) = S (1) 21 (k, −ω) = Q 3 (k, ω)Q 3y (k, −ω), S(2)12 (k, ω) = S (2) 21 (k, −ω) = Q 3 (k, ω)Q 3x (k, −ω), S (3) 12 (k, ω) = S (3) 21 (k, −ω) = K k Q x (k, ω)Q y (k, −ω), S(3)33 (k, ω) = K k Q xy (k, ω)Q xy (k, −ω).(12)
The functions used here are Using the spectral theorem [23], we find the expressions for the anomalous averages and finally arrive at the closed set of uniform integral equations for the superconducting order parameters (l = 1, 2, 3)
Q x(y) (k, ω) = (ω − ξ x(y) )J y(x) + t k J x(y) , Q 3 (k, ω) = (ω − ξ L )t k + J x J y K k , Q 3x(3y) (k, ω) = (ω − ξ L )(ω − ξ x(y) ) − J 2 x(y) K k , Q xy (k, ω) = (ω − ξ x )(ω − ξ y ) − t 2 k .(13)∆ * 1k = 2 N lq (V 2 cos(k y − q y ) + V ′ 2 cos(k x − q x ))M (l) 11 (q)∆ * lq , ∆ * 2k = 2 N lq (V 2 cos(k x − q x ) + V ′ 2 cos(k y − q y ))M (l) 22 (q)∆ * lq , ∆ * 3k = 1 N lq I k−q C 1x M+ V 2 (C 1 cos k y − C 2 γ 2k ) cos q y + V ′ 2 − 3 8 + C 1 cos k x − C 3 2 cos 2k x cos q x M (l) 11 (q) + V 2 (C 1 cos k x − C 2 γ 2k ) cos q x(14)+V ′ 2 − 3 8 + C 1 cos k y − C 3 2 cos 2k y cos q y M (l) 22 (q) ∆ * lq K q , where M (l) nm (q) = S (l) nm (q, E 1q ) + S (l) nm (q, −E 1q ) 4E 1q (E 2 1q − E 2 2q )(E 2 1q − E 2 3q ) tanh E 1q 2T .
Below, we use the system (14) to find the critical superconducting temperature.
In the Fig. 2, we illustrate the results obtained by solving Eq. 14 for the d x 2 −y 2 -wave pairing, where ∆ lk = ∆ l1 · (cos k x − cos k y ) + ∆ l2 · (cos 2k x − cos 2k y ).
One can see from Fig. 2 that an increase in V 2 and V ′ 2 leads to suppression of the d-wave pairing, however superconductivity is maintained up to unphysically large values V 2 = V ′ 2 = 1.5 eV of the Coulomb interaction between holes located at the next-nearest-neighbor oxygen ions (for comparison, the intensity of the Coulomb interaction between nearest-neighbor oxygen ions V 1 = 1 − 2 eV [16]).
CONCLUSION
To conclude, we have shown that the intersite Coulomb repulsion between holes located at the nextnearest-neighbor oxygen ions of CuO 2 plane suppresses the d x 2 −y 2 -wave pairing only at unphysically large values of the Coulomb interaction V 2 = V ′ 2 = 1.5 eV. Taking into account our previous result [11] on cancelation of the effect of the Coulomb interaction V 1 for the nearest-neighbor oxygen sites on the d-wave pairing, we conclude that an account for the real structure of CuO 2 plane leads to stability of the d x 2 −y 2 -wave pairing towards the strong intersite Coulomb repulsion. It is obvious that an account for the Coulomb interaction V 3 does not effect on the superconducting d-wave pairing because of the same "symmetry reason" as that for V 1 [11].
Fig. 1 .
1Structure of CuO2 plane.
Fig. 2 .
2Critical temperature for the transition to the superconducting d x 2 −y 2 phase versus doping at four values of the Coulomb repulsion parameter V2 and V ′ 2 .
The work was supported by the Russian Foundation for Basic Research (RFBR) and partly by the Government of Krasnoyarsk Region (project nos. 16-42-240435 and 16-42-243057). The work of A.F.B. was supported by RFBR (project no. 16-02-00304). The work of M.M.K. was supported by grant of the President of the Russian Federation (project MK-1398.2017.2).
. V J Emery, Phys. Rev. Lett. 582794V. J. Emery, Phys. Rev. Lett. 58 (1987), p. 2794.
. C M Varma, S Schmitt-Rink, E Abrahams, Solid State Commun. 62681C. M. Varma, S. Schmitt-Rink and E. Abrahams, Solid State Commun. 62 (1987), p. 681.
. R O Zaitsev, V A Ivanov, Sov. Phys. Solid State. 291475R. O. Zaitsev and V. A. Ivanov, Sov. Phys. Solid State 29 (1987), p. 1475.
. Yu A Izyumov, Phys. Usp. 40215Yu. A. Izyumov, Phys. Usp. 40 (1997), p. 445; 42 (1999), p. 215.
High-Temperature Cuprate Superconductors. N M Plakida, Springer-VerlagBerlin-HeidelbergN. M. Plakida, High-Temperature Cuprate Supercon- ductors, Springer-Verlag, Berlin-Heidelberg (2010).
. R O Zaitsev, JETP. 98780R. O. Zaitsev, JETP 98 (2004), p. 780.
. V V , M M Korovushkin, JETP. 112108V. V. Val'kov and M. M. Korovushkin, JETP 112 (2011), p. 108.
. V Yu, G M Yushankhai, R B Vujicic, Zakula, Phys. Lett. A. 151254V. Yu. Yushankhai, G. M. Vujicic and R. B. Zakula, Phys. Lett. A 151 (1990), p. 254.
. V V Val'kov, T A Val'kova, D M Dzebisashvili, S G Ovchinnikov, JETP Lett. 75378V. V. Val'kov, T. A. Val'kova, D. M. Dzebisashvili and S. G. Ovchinnikov, JETP Lett. 75 (2002), p. 378.
. N M Plakida, V S Oudovenko, Eur. Phys. J. B. 86631JETPN. M. Plakida and V. S. Oudovenko, Eur. Phys. J. B 86 (2013), p. 115; JETP 146 (2014), p. 631.
. V V Val'kov, D M Dzebisashvili, M M Korovushkin, A F Barabanov, JETP Lett. 103385V. V. Val'kov, D. M. Dzebisashvili, M. M. Korovushkin and A. F. Barabanov, JETP Lett. 103 (2016), p. 385.
. N M Plakida, arXiv:1607.02935N. M. Plakida, arXiv: 1607.02935.
. A F Barabanov, L A Maksimov, G V Uimin, JETP Lett. 47622A. F. Barabanov, L. A. Maksimov and G. V. Uimin, JETP Lett. 47 (1988), p. 622.
. J Zaanen, A M Oleś, Phys. Rev. B. 379423J. Zaanen and A. M. Oleś, Phys. Rev. B. 37 (1988), p. 9423.
. M S Hybertsen, M Schluter, N E Christensen, Phys. Rev. B. 399028M. S. Hybertsen, M. Schluter and N. E. Christensen, Phys. Rev. B 39 (1989), p. 9028.
. M H Fischer, E.-A Kim, Phys. Rev. B. 84144502M. H. Fischer and E.-A. Kim, Phys. Rev. B 84 (2011), p. 144502.
. M Ogata, H Fukuyama, Rep. Prog. Phys. 7136501M. Ogata and H. Fukuyama, Rep. Prog. Phys. 71 (2008), p. 036501.
. D M Dzebisashvili, V V , A F Barabanov, JETP Lett. 98528D. M. Dzebisashvili, V. V. Val'kov and A. F. Barabanov, JETP Lett. 98 (2013), p. 528.
. V V Val'kov, D M Dzebisashvili, A F Barabanov, JETP. 118959V. V. Val'kov, D. M. Dzebisashvili and A. F. Barabanov, JETP 118 (2014), p. 959.
. R Zwanzig, Phys. Rev. 124983R. Zwanzig, Phys. Rev. 124 (1961), p. 983.
. H Mori, Prog. Theor. Phys. 33423H. Mori, Prog. Theor. Phys. 33 (1965), p. 423.
. V V Val'kov, D M Dzebisashvili, A F Barabanov, Phys. Lett. A. 379421V. V. Val'kov, D. M. Dzebisashvili and A. F. Barabanov, Phys. Lett. A 379 (2015), p. 421.
. D N Zubarev, Sov. Phys. Usp. 3320D. N. Zubarev, Sov. Phys. Usp. 3 (1960), p. 320.
| [] |
[
"A first-principles method to calculate fourth-order elastic constants of solid materials",
"A first-principles method to calculate fourth-order elastic constants of solid materials",
"A first-principles method to calculate fourth-order elastic constants of solid materials",
"A first-principles method to calculate fourth-order elastic constants of solid materials"
] | [
"Abhiyan Pandit \nDepartment of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA\n",
"Angelo Bongiorno \nDepartment of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA\n\nThe Graduate Center\nCity University of New York\n10016New YorkNYUSA\n",
"Abhiyan Pandit \nDepartment of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA\n",
"Angelo Bongiorno \nDepartment of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA\n\nThe Graduate Center\nCity University of New York\n10016New YorkNYUSA\n"
] | [
"Department of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA",
"Department of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA",
"The Graduate Center\nCity University of New York\n10016New YorkNYUSA",
"Department of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA",
"Department of Chemistry\nCollege of Staten Island\n10314Staten IslandNYUSA",
"The Graduate Center\nCity University of New York\n10016New YorkNYUSA"
] | [] | A first-principles method is presented to calculate elastic constants up to the fourth order of crystals with the cubic and hexagonal symmetries. The method relies on the numerical differentiation of the second Piola-Kirchhoff stress tensor and a density functional theory approach to compute the Cauchy stress tensors for a minimal list of strained configurations of a reference state. The number of strained configurations required to calculate the independent elastic constants of the second, third, and fourth order is 24 and 37 for crystals with the cubic and hexagonal symmetries, respectively. Here, this method is applied to five crystalline materials with the cubic symmetry (diamond, silicon, aluminum, silver, and gold) and two metals with the hexagonal close packing structure (beryllium and magnesium). Our results are compared to available experimental data and previous computational studies. Calculated linear and nonlinear elastic constants are also used, within a nonlinear elasticity treatment of a material, to predict values of volume and bulk modulus at zero temperature over an interval of pressures. To further validate our method, these predictions are compared to results obtained from explicit density functional theory calculations. | 10.1016/j.cpc.2023.108751 | [
"https://export.arxiv.org/pdf/2302.01965v1.pdf"
] | 256,615,346 | 2302.01965 | 8c05333df6784674b39b188615828bfe4c38f528 |
A first-principles method to calculate fourth-order elastic constants of solid materials
3 Feb 2023
Abhiyan Pandit
Department of Chemistry
College of Staten Island
10314Staten IslandNYUSA
Angelo Bongiorno
Department of Chemistry
College of Staten Island
10314Staten IslandNYUSA
The Graduate Center
City University of New York
10016New YorkNYUSA
A first-principles method to calculate fourth-order elastic constants of solid materials
3 Feb 2023Density functional theorynonlinear elasticitysecond Piola-Kirchhoff stress tensorfinite differentiationthird-order elastic constantsfourth-order elastic constantsxPK2x program
A first-principles method is presented to calculate elastic constants up to the fourth order of crystals with the cubic and hexagonal symmetries. The method relies on the numerical differentiation of the second Piola-Kirchhoff stress tensor and a density functional theory approach to compute the Cauchy stress tensors for a minimal list of strained configurations of a reference state. The number of strained configurations required to calculate the independent elastic constants of the second, third, and fourth order is 24 and 37 for crystals with the cubic and hexagonal symmetries, respectively. Here, this method is applied to five crystalline materials with the cubic symmetry (diamond, silicon, aluminum, silver, and gold) and two metals with the hexagonal close packing structure (beryllium and magnesium). Our results are compared to available experimental data and previous computational studies. Calculated linear and nonlinear elastic constants are also used, within a nonlinear elasticity treatment of a material, to predict values of volume and bulk modulus at zero temperature over an interval of pressures. To further validate our method, these predictions are compared to results obtained from explicit density functional theory calculations.
Introduction
The elastic constants of a material define the relationship between stress and applied strain [1]. The linear coefficients in this relationship correspond to the second-order elastic constants (SOECs) [1]. These coefficients relate to the elastic moduli of a material and are important, for example, to quantify the linear response to a deformation, and to calculate the speed of sound waves. The techniques to measure and calculate SOECs are well established, and in fact these coefficients are known for a broad class of materials [2]. Nonlinear elastic constants characterize the anharmonic elastic behavior of a material, and they are of both fundamental and practical importance as they govern how thermoelastic properties change with temperature and pressure [1,3,4]. The experimental determination of these nonlinear elastic coefficients is challenging [5,6], and computational methods are needed to predict the values of these materials parameters [7][8][9][10][11][12]. In this work, we present a new method to calculate from first principles elastic constants of a material up to the fourth order.
The isothermal third-order elastic constants (TOECs) correspond to the first-order anharmonic terms in the series expansion of the free energy of the material with respect to the Green-Lagrangian strain [1]. These elastic coefficients characterize the nonlinear elastic behavior of * Angelo Bongiorno. [email protected] a material, and they are related to materials properties such as the long-wavelength phonon anharmonicities [13], sound attenuation [14], the thermodynamic Grüneisen parameter [3,15], thermal expansion and thermal conductivity [16][17][18], and the intrinsic mechanical strength [1,19]. TOECs are typically obtained from acoustoelastic experiments [14], wherein sound velocities are measured for a material under different stress conditions [14,[20][21][22]. These experiments are challenging and subjected to error margins [23], and for this reason, these coefficients are known for a restricted class of materials [10,[24][25][26][27].
The conventional approach to calculate TOECs relies on the use of a density functional theory (DFT) calculations to construct either energy or stress versus strain curves along a number of deformation modes (see Ref. [28] and references therein). In this approach, the whole set of linear and nonlinear coefficients are then deduced from a nonlinear least-square fitting of the energy-strain or stressstrain relationships [7,8,[28][29][30][31][32]. The application of this method to materials with the cubic symmetry is straightforward, as the number of independent SOECs and TOECs to be determined is only 3 and 6, respectively. However, for materials with a lower symmetry, this method becomes increasingly cumbersome and less attractive, as demonstrated by the very few number of applications appeared so far in literature (see Ref. [28] and references therein). An alternative approach to calculate TOECs was proposed very recently by one of the authors [33]. In this method, elastic constants are obtained by combining DFT calcula-tions and a finite deformation approach [2], where each TOEC is calculated independently by second-order numerical differentiation of the second Piola-Kirchhoff (PK2) stress tensor [33]. This method has general applicability, and so far it has been applied to both 2D and 3D materials, with the cubic, hexagonal, and orthorhombic symmetries [4,33]. Furthermore, recently this method was used in combination with the quasi-harmonic approximation to calculate TOECs at finite temperature [4].
Fourth-and higher-order elastic constants govern the anharmonic regime of material subjected to large deformations [8,11,12,34,35]. Knowledge of these higherorder elastic coefficients allow to describe and predict mechanical instability points of a material [36,37], as well as to characterize the nature of elastic phase transitions [8,12]. The experimental determination of fourth-order elastic constants (FOECs) is extremely challenging, as large uniaxial stresses need to be applied in acoustoelastic experiments to obtain reliable values of these high-order elastic coefficients [5,6]. For this reason, to the best of our knowledge, so far FOECs have been measured only for very few materials [5,6]. DFT calculations have been employed to calculate FOECs [7][8][9][10][11][12]38]. In these computational studies, FOECs were obtained by using the approach relying on fitting energy-strain or stress-strain curves. Although straightforward and in principle general, the computational workload and intricacy of this approach increase significantly for low-symmetry materials. Indeed, to the best of our knowledge, to date this approach has been used to calculate FOECs of materials with only the cubic symmetry [7][8][9][10][11][12].
In this work, we extend the new method developed to calculate TOECs [33] based on finite deformations and numerical differentiation of the PK2 stress tensor to the calculation of FOECs. The most important advantage of the present method over existing approaches is that each non-linear elastic constant is calculated independently, by considering up to 8 deformed configurations of the reference state. Thanks to this, our method can be easily applied to any material, regardless of its symmetry. Here we apply the method to calculate SOECs, TOECs, and FOECs of five crystalline materials with the cubic symmetry (diamond, silicon, aluminum, silver, and gold), and two materials with the hcp structure (magnesium and beryllium).
This manuscript is organized as follows. In Sec. 2, we introduce basic notions of nonlinear elasticity theory, we provide details about the finite difference formulas to calculate SOECs, TOECs, and FOECs, and we discuss technical aspects of the numerical implementation of our methods. In Sec. 3, we present results and discuss the application of our method to the aforementioned materials. Conclusions and outlook are provided in Sec. 4.
Methods
Notions of nonlinear elasticity theory
The Green-Lagrangian strain, µ ij , is defined as [1,33,39]:
µ ij = 1 2 (F ki F kj − δ ij ),(1)
where subscript indices refer to Cartesian coordinates, δ ij is the Kronecker delta function, and F ij are components of the deformation gradient. This tensor is defined as:
F ij = ∂x i ∂X j(2)
where x i and X i are the Cartesian coordinates of a material point in the deformed and reference states, respectively. The Helmholtz free energy density, A, can be written as a series expansion in terms of the Lagrangian strain as follows [1,8,12,33,39]:
A = 1 2 ∂ 2 A ∂µ ij ∂µ lm µ ij µ lm + 1 6 ∂ 3 A ∂µ ij ∂µ lm ∂µ pq µ ij µ lm µ pq + 1 24 ∂ 4 A ∂µ ij ∂µ lm ∂µ pq ∂µ rs µ ij µ lm µ pq µ rs + · · · = 1 2 C (2) ijlm µ ij µ lm + 1 6 C (3) ijlmpq µ ij µ lm µ pq + 1 24 C (4) ijlmpqrs µ ij µ lm µ pq µ rs + · · · ,(3)
where C
(2) ijlm , C(3)
ijlmpq , and C (4) ijlmpqrs are the isothermal SOECs, TOECs, and FOECs of the material in the reference state, respectively. Given a reference state, the PK2 stress tensor, P ij , can be defined in terms of the Helmholtz free energy density, A, as:
P ij = ∂A ∂µ ij ,(4)
whereas the relationship between PK2 and Cauchy stress, σ ij , is [1,4,33,39]:
σ ij = V V ′ F il P lm F jm ,(5)
where V ′ and V are the volumes of the (same) material points x and X in the deformed and reference states, respectively. Eqs. 3 and 4 allow to define the relationship between PK2 stress tensor and linear and nonlinear elastic constants. Adopting the Voigt notation, this relationship takes the following form:
P α = C(2)αβ µ β + 1 2 C (3) αβγ µ β µ γ + 1 6 C (4) αβγδ µ β µ γ µ δ ,(6)
where Greek indices run from 1 to 6, and are related to the Cartesian indices pairs as follows: 1 → xx, 2 → yy, 3 → zz, 4 → yz, 5 → zx, and 6 → xy. For sake of completeness, here below we also express the linear and nonlinear elastic constants in terms of the PK2 stress tensor:
C (2) αβ = ∂P α ∂µ β C (3) αβγ = ∂ 2 P α ∂µ β µ γ C (4) αβγδ = ∂ 3 P α ∂µ β µ γ µ δ .(7)
The present method relies on the definitions above to calculate SOECs, TOECs, and FOECs of a material using periodic DFT approach. In this work, temperature effects are disregarded and all calculations are carried out in static conditions.
Finite difference formulas to calculate elastic constants
To calculate SOECs, we use the following central finite difference formula (Eq. 7):
C (2) αβ = P (+β) α − P (−β) α 2ξ ,(8)
where ξ is a strain parameter, and P (±β) α is the αcomponent of the PK2 stress tensor of a deformed configuration obtained by applying to the reference state a six-dimensional strain vector, µ, with component β equal to ±ξ, and the rest of the components equal to zero. In case of TOECs, we have two different cases. TOECs with at least two out of three indices equal to each other can be calculated using the following second-order central finite difference formula:
C (3) αββ = P (+β) α + P (−β) α + 2P (0) α 2ξ 2 ,(9)
where P (0) α refers to the α-component of the PK2 stress tensor of the reference state, which is equal to the Cauchy stress tensor. In case of TOECs whose indices are all different, we use the following formula:
C (3) αβγ = P (+β,+γ) α − P (−β,+γ) α − P (+β,−γ) α + P (−β,−γ) α 4ξ 2 ,(10)
where P (±β,±γ) α is the α-component of the PK2 stress tensor of a deformed configuration obtained by applying to the reference state a six-dimensional strain vector, µ, with components β and γ equal to ±ξ, and the rest of the components equal to zero. In case of FOECs, we have derived the following finite difference formulas to calculate the different types of coefficients:
C (4) αβββ = P (+2β) α − 2P (+β) α + 2P (−β) α − P (−2β) α 2ξ 3 C (4) αβγγ = P (+β,+2γ) α − P (−β,+2γ) α + P (+β,−2γ) α − P (−β,−2γ) α − 2(P (+β) α − P (−β) α ) 8ξ 3 C (4) αβγδ = (P (+β,+γ,+δ) α − P (+β,+γ,−δ) α − P (+β,−γ,+δ) α + P (+β,−γ,−δ) α − P (−β,+γ,+δ) α + + P (−β,+γ,−δ) α + P (−β,−γ,+δ) α − P (−β,−γ,−δ) α )/8ξ 3 ,(11)
where P (±β,±γ,±δ) α is the α-component of the PK2 stress tensor of a deformed configuration obtained by applying to the reference state a six-dimensional strain vector, µ, with components β, γ, and δ equal to ±ξ, and the rest of the components equal to zero.
For sake of clarity, we consider the calculation of the two nonlinear elastic constants, C
123 and C (4) 1255 , of a material with an arbitrary symmetry. Thus, in case of C
123 , we consider the following 4 strain vectors:
(0, +ξ, +ξ, 0, 0, 0), (0, −ξ, +ξ, 0, 0, 0),
(0, +ξ, −ξ, 0, 0, 0), (0, −ξ, −ξ, 0, 0, 0).(12)
Each strain vector is used to generate a deformed configuration of the reference state, and the resulting P 1 components of the PK2 stress tensors are then used in Eq. 10 to calculate C
123 . In case of C
1255 , we use the second formula in Eq. 11, with the component P 1 of the PK2 stress tensor resulting from the following 6 deformations:
(0, +ξ, 0, 0, +2ξ, 0), (0, −ξ, 0, 0, +2ξ, 0), (0, +ξ, 0, 0, −2ξ, 0), (0, −ξ, 0, 0, −2ξ, 0), (0, +ξ, 0, 0, 0, 0), (0, −ξ, 0, 0, 0, 0).(13)
These two examples show that, in contrast to conventional approaches [8,11,12,28,38,40], our method allows to calculate each nonlinear elastic constant independently, regardless of the symmetry of the material.
SOECs, TOECs, and FOECs of crystals with the cubic or hexagonal symmetry
In this work, we apply our method to materials with the cubic and hexagonal symmetry. A material belonging to the cubic system (point groups: 432, 43m, and m3m) has 3, 6, and 11 independent SOECs, TOECs, and FOECs, respectively [1,8,34,35]. To calculate the 3 independent SOECs, we use the following 4 strain vectors:
(0, 0, 0, 0, 0, 0), (+ξ, 0, 0, 0, 0, 0), (−ξ, 0, 0, 0, 0, 0), (0, 0, 0, +ξ, 0, 0, 0). (14) We highlight that, due to the cubic symmetry, P (+4)
4 = −P (−4) 4
, and therefore only one deformation is needed to calculate C (2) 44 . To calculate the 6 independent TOECs, in addition to the deformations in Eq. 14, we use the following 4 strain vectors:
(+ξ, +ξ, 0, 0, 0, 0), (+ξ, −ξ, 0, 0, 0, 0),
(−ξ, −ξ, 0, 0, 0, 0), (0, 0, 0, +ξ, +ξ, 0).(15)
Also in this case, the list above excludes strain vectors that lead to redundant deformed states of a material with a cubic symmetry. The 11 independent FOECs are obtained by considering the following 16 additional strain vectors:
(+2ξ, 0, 0, 0, 0, 0), (−2ξ, 0, 0, 0, 0, 0), (+2ξ, +ξ, 0, 0, 0, 0), (−2ξ, +ξ, 0, 0, 0, 0),
(+2ξ, −ξ, 0, 0, 0, 0), (−2ξ, −ξ, 0, 0, 0, 0), (+ξ, 0, 0, +2ξ, 0, 0), (−ξ, 0, 0, +2ξ, 0, 0), (+ξ, 0, 0, 0, +2ξ, 0), (−ξ, 0, 0, 0, +2ξ, 0), (0, 0, 0, +ξ, +ξ, +ξ), (0, 0, 0, −ξ, +ξ, +ξ),
(0, 0, 0, +2ξ, 0, 0), (0, 0, 0, +ξ, +2ξ, 0), (0, +ξ, 0, 0, 0, 0), (0, −ξ, 0, 0, 0, 0).
In total, to calculate all the independent SOECs, TOECs, and FOECs of a material with the cubic symmetry (point groups: 432, 43m, and m3m), our method requires 24 strain vectors (including the null vector for the reference state). A material with the hexagonal symmetry (point groups: 622, 6mm,6m2, and 6/mmm) has 5, 10, and 19 independent SOECs, TOECs, and FOECs, respectively [1,41]. To calculate the 5 independent SOECs, we use the following 6 strain vectors:
(0, 0, 0, 0, 0, 0), (+ξ, 0, 0, 0, 0, 0),
(−ξ, 0, 0, 0, 0, 0), (0, 0, 0, +ξ, 0, 0), (0, 0, +ξ, 0, 0, 0), (0, 0, −ξ, 0, 0, 0).(17)
To calculate the 10 independent TOECs, in addition to the strain vectors above, we need to account for the following 6 strain vectors:
(0, +ξ, +ξ, 0, 0, 0), (0, −ξ, +ξ, 0, 0, 0), (0, +ξ, −ξ, 0, 0, 0), (0, −ξ, −ξ, 0, 0, 0), (0, +ξ, 0, 0, 0, 0), (0, −ξ, 0, 0, 0, 0).
To obtain the 19 independent FOECs, we use the following 25 additional strain vectors:
(+2ξ, 0, 0, 0, 0, 0), (−2ξ, 0, 0, 0, 0, 0), (+2ξ, +ξ, 0, 0, 0, 0), (+2ξ, −ξ, 0, 0, 0, 0), (−2ξ, +ξ, 0, 0, 0, 0), (−2ξ, −ξ, 0, 0, 0, 0), (+ξ, 0, +2ξ, 0, 0, 0), (−ξ, 0, +2ξ, 0, 0, 0), (+ξ, 0, −2ξ, 0, 0, 0), (−ξ, 0, −2ξ, 0, 0, 0), (+ξ, 0, 0, +2ξ, 0, 0), (−ξ, 0, 0, +2ξ, 0, 0), (+ξ, 0, 0, 0, +2ξ, 0), (−ξ, 0, 0, 0, +2ξ, 0), (+ξ, 0, 0, 0, 0, +2ξ), (−ξ, 0, 0, 0, 0, +2ξ), (0, +2ξ, +ξ, 0, 0, 0), (0, +2ξ, −ξ, 0, 0, 0),
(0, −2ξ, +ξ, 0, 0, 0), (0, −2ξ, −ξ, 0, 0, 0), (0, 0, +2ξ, 0, 0, 0), (0, 0, −2ξ, 0, 0, 0), (0, 0, +ξ, +2ξ, 0, 0), (0, 0, −ξ, +2ξ, 0, 0), (0, 0, 0, +2ξ, 0, 0).(19)
In total, our method requires 37 strain vectors to calculate all the independent SOECs, TOECs, and FOECs of a material belonging to the hexagonal crystal system (point groups: 622, 6mm,6m2, and 6/mmm).
Technical aspects of the method implementation
Our method to calculate linear and nonlinear elastic constants is implemented in codes that are part of the software package xPK2x, which is available under the GNU General Public License (Version 3) on GitHub [42]. This software package encompasses three Fortran modules, a Bash script, several example applications, and relevant documentation [42]. Our method relies on an (external) periodic DFT approach to optimize geometries and calculate the Cauchy stress tensor. To this end, the current version of xPK2x is designed to be compatible with the Quantum Espresso software package [43,44]. For sake of clarity, here below we discuss the numerical operations and tasks implemented and carried out by the modules provided in xPK2x. We refer to the documentation available on GitHub [42] for additional information regarding installation and use of the programs.
The calculation of a set of elastic constants of a material requires, as a first step, to select a a periodic unit cell to describe the material in a reference state. The unit cell has a volume V and geometry V :
V = a 1,x a 2,x a 3,x a 1,y a 2,y a 3,y a 1,z a 2,z a 3,z ,(20)
where a 1 , a 2 , a 3 are the unit cell vectors. We remark that although the choice of the reference state and corresponding supercell is arbitrary, in this work we reports results obtained by considering primitive unit cells, and reference states yielding a zero static pressure. Then, given the list of elastic constants to be calculated, then next operation consists in determining the finite difference formulas to be used, and therefore list of strain vectors required to generate the deformed configurations of the reference state. Geometry of the reference state and corresponding supercell, fractional coordinates of the atoms including in it, list of six-dimensional strain vectors to generate the deformed configurations, and the strain parameter multiplying the strain vectors, all these are input parameters for the module str2pk of the software package xPK2x [42]. In particular, the numerical tasks implemented in the module str2pk are:
• Importing the geometry of the reference state and (fractional) coordinates of the atoms in the supercell (not necessarily a primitive unit cell), and reading the list of strain vectors. For each strain vector, which we can express in both the Voigt and tensorial forms as
µ = (ξ 1 , ξ 2 , ξ 3 , ξ 4 , ξ 5 , ξ 6 ) µ = ξ 1 ξ 6 /2 ξ 5 /2 ξ 6 /2 ξ 2 ξ 4 /2 ξ 5 /2 ξ 4 /2 ξ 3 ,(21)
str2pk calculates the deformation gradient, F , as follows. First, the Cholesky decomposition of the following 3×3 matrix is carried out (see Eq. 1):
2µ + I = DD T .(22)
Then, a single value factorization of D is carried out, to obtain D = W SV T , where W and V are unitary matrices, and S is the diagonal matrix of singular values. Finally, the rotation-free deformation gradient (right stretch tensor) is defined as F = V SV T (R = W V T is the rotation tensor).
• Then, the deformation gradient, F , is used to generate the unit cell of the deformed configuration by using Eq. 2. In particular, since we consider only homogeneous deformations of a material described by the use of a periodic unit cell V , Eq. 2 assumes the form:
F = V ′ V −1 ,(23)
where V ′ is the 3×3 matrix defining the geometry of the material in the deformed state,
V ′ = a ′ 1,x a ′ 2,x a ′ 3,x a ′ 1,y a ′ 2,y a ′ 3,y a ′ 1,z a ′ 2,z a ′ 3,z .(24)
Thus, from Eq. 23, the deformed configuration is obtained as,
V ′ = F V .(25)
• Geometry and dimensions of the unit cells describing the deformed configurations, and (fractional) coordinates of the atoms in the unit cells, are printed out in text files.
The next step then consists in using a periodic DFT approach [43,44] to optimize the geometry of each deformed configuration of the reference state, and calculate the corresponding Cauchy stress tensors, σ. The list of Cauchy stress tensors are then supplied to a second module, pk2ecs [42], for the final calculation of the desired list of elastic constants. In detail, the numerical tasks implemented in this module are:
• For each deformed configuration V ′ , Eq. 5 is used to calculate the PK2 stress tensor from the deformation gradient, F , and the calculated Cauchy stress tensor, as follows:
P = V ′ V F −1 σF −T ,(26)
where V ′ is the volume of the deformed configuration.
• This operation is repeated for each strain vector, and the corresponding list of PK2 stress tensors is finally plugged into the finite difference formulas (Eqs. [8][9][10][11] to calculate the selected SOECs, TOECs, and FOECs.
We remark that the xPK2x package provides the lists of strain vectors required to calculate the independent SOECs, TOECs, and FOECs of a material with the cubic and hexagonal symmetry, and that the modules str2pk and pk2ecs are designed to be user-friendly for these classes of materials. However, we also remark that the module str2pk can be used to generate any list of strained configurations for a reference state of a material with an arbitrary symmetry, and that the xPK2x package includes an additional module pk2open that can be adapted and extended to the calculation of any elastic constant of the second, third, or fourth order. Instructions and examples about how to combine the modules str2pk and pk2open can be found on GitHub [42].
Results and discussion
Technical details of the DFT calculations
In this work, we use the "pw.x" code of the Quantum Espresso package [43,44] to carry out DFT calculations, and we use our method to calculate the full set of independent SOECs, TOECs, and FOECs of diamond, silicon, aluminum, silver, gold, beryllium, and magnesium. To describe these materials, we use primitive unit cells, and plane-wave energy cutoffs of 150 and 600 Ry to represent wavefunctions and electronic charge density, respectively. In case of Au, we use a local density approximation [45] for the exchange and correlation energy functional, whereas the Perdew-Burke-Ernzerhof parametrization [46] of the generalized gradient approximation is used for the other materials.
To describe the diamond structure of C and Si, we use ultrasoft psudopotentials (C.pbe-n-rrkjus psl.1.0.0.UPF and Si.pbe-nl-rrkjus psl.1.0.0.UPF) and uniform grids of 10×10×10 k-points to sample the Brillouin zone. To describe the fcc structure of Ag and Au, we use the ultrasoft pseudopotentials (Ag.pbe-spn-rrkjus psl.1.0.0.UPF and Au.pz-spn-rrkjus psl.1.0.0.UPF) from the Quantum Espresso library: https://github.com/dalcorso/pslibrary, whereas in case of fcc Al, we use a norm-conserving psudopotential [47] generated by using the fhi98PP software [48] that was tested and used in a previous study [33]. To sample the Brillouin zone of the primitive unit cell of these two metals, we use a uniform grid of 25×25×25 k-points. In case of hcp Be and Mg, we use an ultrasoft (Be.pbe-nrrkjus psl.1.0.0.UPF) and a norm-conserving psudopotential [47], respectively. The latter psudopotential was generated by using the fhi98PP software [48] and was tested and used in a previous study [33]. To sample the Brillouin zones of Be and Mg, we use a grid of 20×20×14 k-points. With these technical details, we obtain the equilibrium lattice parameters at zero temperature reported in Table 1. These results are in agreement with experimental data. For testing purposes, in case of Al, we calculate the nonlinear elastic constants for increasing values of the planewave energy cutoff, as well as for denser grids of k-points in the Brillouin zone. All DFT calculations are carried out by using stringent convergence criteria: 10 −14 Ry for selfconsistency and 10 −6 a.u. for forces.
Second-and third-order elastic constants
The independent SOECs and TOECs of crystals with the cubic and hcp structures calculated using our method are listed in Tables 2 and 3, respectively. These tables report also available experimental data and previous values calculated by using the conventional approach relying on fitting energy-strain or stress-strain curves [7][8][9][10][11][12]58]. We remark that our results are in overall good agreement with both experimental data and previous computational studies. It is to be noted that measurements of TOECs are typically carried out at finite temperature, and that sample microstructure and defects are known to affect to some extent the experimental data [8,10]. We attribute to these two factors the origin of the small differences between our results and the experimental data. As for the differences between our results and those of previous computational studies, we argue that these stem mainly from the following two reasons. One, the technical aspects of the DFT calculations, namely plane-wave energy cutoffs, pseudopotentials, convergence thresholds, and the exchange and correlation energy functional. Two, the details of the fitting procedure used to deduce the full set of independent linear and nonlinear elastic constants [40]. To corroborate this argument, and at the same time, to demonstrate the validity of our method and results, we adopt the conventional approach based on fitting an energy-strain curve to calculate the elastic constants C
1111 , and C (5) 11111 of Si (Fig. 1). To this end, we use a fifth-order polynomial function to fit the energy versus strain data points computed from DFT for a set of deformed configurations of Si obtained by applying a uniaxial strain along the x direction (Fig. 1). The fitting procedure yields the following values: C µ 1 ∆E/V 0 (GPa) Figure 1: Energy density of cubic Si relative to that one of the reference state versus uniaxial Lagrangian strain. The solid black line shows the fifth-order polynomial function fitting the data (red discs) calculated from DFT. V 0 is the volume of the reference state, whereas µ 1 is the first component of the strain tensor in Voigt notation; the remaining components are zero.
Fourth-order elastic constants
To assess the accuracy of our results, we carry out convergence tests for the selected FOECs of fcc Al as a function of the strain parameter (ξ), and also by considering DFT calculations of increasing precision (Fig. 2). The results of these calculations show that FOECs (and TOECs) converge rapidly for increasing values of both the k-points grid density and plane-wave energy cutoff. Also, these calculations show that FOECs are sensitive to the value of the strain parameter used to generate the deformed configurations of a reference state. In particular, Fig. 2 shows that while several FOECs fluctuate significantly for strain parameters smaller than 0.0075, all the independent FOECs converge and plateau for strain parameters larger than 0.01. Table 2: Independent SOECs and TOECs (in GPa) of cubic diamond, silicon, aluminum, silver, and gold calculated using the method presented in this work. For each material, the first row shows our results, the second row reports experimental data, and the remaining rows show previous results obtained by using the conventional approach based on fitting energy-strain or stress-strain data points.
Crystal C Table 3: Independent SOECs and TOECs (in GPa) of hcp beryllium and magnesium. For each crystal, the first row shows our results, the second row reports experimental data, and the remaining rows show previous results obtained by using the conventional approach. Table 4 reports calculated values of FOECs of diamond, Si, Al, Ag, and Au. To the best of our knowledge, experimental data for these coefficients are missing from literature. Values of FOECs obtained using the present method are in reasonable agreement with previous results obtained by employing the conventional approach. We remark that our method yields results in excellent agreement with FOECs obtained by fitting energy-strain curves. In fact, as discussed above, these two methods yield values of C (4) 1111 for Si equal to 2586 and 2555 GPa, respectively. Therefore, once again we are inclined to attribute the differences between our results and previous calculations [8,10,38,40] to both different technicalities of the DFT calculations and details of the fitting procedure.
C (2) 11 C (2) 12 C (2) 13 C (2) 33 C (2) 44 C (3) 111 C (3) 112 C (3) 113 C (3) 123 C (3) 133 C (3) 144 C (3) 155 C (3) 222 C (3) 333 C(3)
It is interesting to notice that Hiki et al. [27,59] suggested that "the contribution from the closed-shell repulsive interaction between nearest-neighbor ions becomes predominant for determining the higher order elastic constants for materials with markedly overlapped closed shells", and therefore that FOECs of metals such as Ag and Au should obey the following approximate relationships:
C (4) 1111 = 2C
Using our values for Ag in Table 4, we find C (4) only corroborates the argument put forward by Hiki et al. [27,59], but it further validates the correctness of our method.
Existing methods based on fitting energy-strain or stress-strain curves become cumbersome and difficult to apply in case of materials with a symmetry lower than the cubic. In contrast, our method is easily applicable to
1122 (green filled squares), C materials of any symmetry, and the computational workload increases only moderately as the symmetry of the material decreases. Here, to demonstrate the potential of the present method, we calculate the independent FOECs of hcp Be and Mg. The results of these calculations are shown in Table 5. To the best of our knowledge, FOECs of these two materials have so far neither been measured nor calculated.
Potential application of our method
Fourth-and higher-order elastic constants describe the elastic response of a material subjected to large deformations [8,11,12,34,35]. Knowledge of these higher-order elastic coefficients can be thus used to predict, within the context of a nonlinear elasticity theory treatment, both the strain response and SOECs of a material subjected to an external pressure (or stress). In this section, we show that indeed SOECs, TOECs, and most importantly, FOECs, can be used for this purpose, and that FOECs expand the predictive power of the numerical framework relying on nonlinear elasticity theory to larger intervals of strain and pressures. Here we show the results obtained for fcc Si and hcp Mg.
We use both DFT calculations and nonlinear elasticity theory to calculate the volume, V (p), and bulk modulus, B 0 (p), of Si and Mg at zero temperature over a finite interval of pressures. In detail, we use variable-cell optimization calculations [43,44] and the finite difference formulas in Eq. 8 to calculate from DFT, first the volume, and then the SOECs of Si and Mg at a pressure p. To calculate B 0 (p) of fcc Si and hcp Mg, we use the formulas [39,60,61] B 0 (p) = C
(p) + 2C
(2) 12 (p) + p 3 (28) and
B 0 = 2(C (2) 11 (p) + C(2)
12 (p)) + C
(p) + 4C
(2)
13 (p) + 3p 9 ,(29)
respectively. We also calculate the same quantities, V (p) and B 0 (p), within the context of nonlinear elasticity theory by employing elastic coefficients calculated with the present method. In particular, we use the values of SOECs, TOECs, and FOECs for Si and Mg reported in Tables 2-5. We underline that these coefficients are obtained by considering a reference state yielding a zero static pressure at zero temperature. Then, we use a self-consistent variational approach to solve Eqs. 5 and 6 and determine the strain required to deform the reference state and obtain a configuration for the material, V (p), yielding a pressure p [4]. After determining the geometry of the material at p, we proceed to calculate the SOECs and therefore the bulk modulus B 0 (p) using the same approach relying on the finite difference formulas in Eq. 8. However, in this case, the Cauchy and hence PK2 stress tensor resulting from a deformation of the state V (p) is not calculated explicitly from DFT, but instead it is again derived from Eqs. 5 and 6 as outlined in the following diagram:
V (p)μ − →F ,Ṽ V (p) − −− → µ, F µ − → P (µ) F − → . . . . . . F − → σ(µ) =σ(μ)F − →P (μ),(30)
whereμ andF are the Lagrangian strain and corresponding deformation gradient mapping V (p) to one of its deformed states,Ṽ , whereas µ and F are the strain and deformation gradient mapping V (p) toṼ . Thanks to this last correspondence, Eq. 6 can be used to extrapolate the value of the PK2 stress tensor inṼ resulting from the deformation of V (p), whereas Eq. 5 can be used to, first, calculate the Cauchy stress, σ(µ) =σ(μ), and then the PK2 stress tensor resulting from the deformation of V (p), which is needed to calculate its SOECs. The results of these two sets of calculations are compared in Figs. 3 and 4 for Si and Mg, respectively. These comparisons show, as expected, that the formalism relying on nonlinear elasticity theory yields results that agree with those obtained from DFT over larger intervals of pressure for increasing the order of the truncation in Eq. 6, i.e. considering the higher-order elastic constants. In particular, while in case of the equation of state V (p), a good agreement is already reached by considering only SOECs and TOECs, in case of B 0 (p), the inclusion of FOECs in Eq. 6 is necessary to achieve an excellent agreement over the full intervals of pressures.
Conclusion
We presented a method to calculate second-, third-, and fourth-order elastic constants of crystals with the cubic and hexagonal symmetry. This first-principles method relies on the numerical differentiation of the second Piola-Kirchhoff stress tensor and a minimal list of strained configurations of a reference state for a material. In particular, the number of configurations required to calculate the independent elastic constants up to the fourth order is 24 and 37 for a crystal with the cubic and hexagonal symmetry, respectively. Although here we have shown applications to materials with the cubic and hexagonal symmetry, our method has general applicability as, regard- To validate our method, here we calculated the elastic constants up to the fourth order of five and two materials with the fcc and hcp structures, respectively. Comparisons of our results with available experimental data and previous calculations show that our method is reliable and accurate. We have also used a formalism based on nonlinear elasticity theory to predict the equation of state and elastic properties of a material over finite intervals of pressure. This formalism requires as input parameters linear and nonlinear elastic constants of a material in a reference state, and its predictive power improves as higher-order elastic constants are accounted for. Our method has the potential to be extended to the calculation of elastic constants of the fifth or higher order of a material with an arbitrary symmetry. Therefore, the present method has the potential to enhance the capabilities of the aforementioned formalism based on nonlinear elasticity theory to predict, for example, thermoelastic behaviors [4], the occurrence of solid phase transitions [40], and values of ideal yield strengths [40].
Acknowledgements
GPa. These values are in excellent agreement with the elastic constants computed by using the present method reported inTables 2 and 4.
Figure 2 :
2Values
cyan open squares) of fcc Al calculated from DFT by considering (top panel) uniform grids of k-points of increasing density (and a fixed plane-wave energy cutoff of 150 Ry and a strain parameter of 0.015), (middle panel) increasing values of the plane-wave energy cutoff used to represent wavefunctions (and a fixed 25×25×25 grid of k-points and a strain parameter of 0.015), and (bottom panel) increasing values of the strain parameter. These last calculations are carried out using a plane-wave energy cutoff of 150 Ry and a 25×25×25 grid of k-points.
Figure 3 :
3Top panel, unit-cell volume relative to that one at zero pressure and, bottom panel, bulk modulus of cubic Si versus pressure. Black solid line shows results obtained from DFT calculatios, whereas discs and circles show results obtained from nonlinear elasticity theory: green circles, blue circles, and red discs show results obtained by considering only SOECs, SOECS and TOECs, and all the elastic constants up to FOECs, respectively.
Figure 4 :
4Same as Fig. 3 for hcp Mg less of symmetry, each elastic constant of any order can be calculated independently by carrying out several DFT calculations. This important aspect is what differentiates our method from conventional approaches based on fitting energy-strain or stress-strain curves.
Table 1 :
1Lattice parameters (inÅ) deduced from DFT calculations for the crystalline materials investigated in this study. Experimental values are also reported for comparison.Crystal Space group
a
c
Exp. (a/c)
C
F d3m
3.57
-
3.57 [7]
Si
F d3m
5.47
-
5.43 [49, 50]
Al
F m3m
4.07
-
4.03 [51]
Ag
F m3m
4.16
-
4.07 [51]
Au
F m3m
4.05
-
4.08 [52]
Be
P 6 3 /mmc
2.27 3.58 2.29/3.58 [53]
Mg
P 6 3 /mmc
3.24 5.28 3.18/5.15 [54]
344Be
This work 275
40
30
309
141 -3160 211
33
-170
52
-139 -344 -2414 -3826 -948
Exp. [55]
294
27
14
357
162
-
-
-
-
-
-
-
-
-
-
Ref. [56]
333
16
5
392
171 -5093 1187 707
-87
-838 -435 -475 -2845 -2048 -489
Mg
This work
54
23
17
58
15
-702
-31
-1
-43
-101
-21
-72
-546
-619 -155
Exp. [57]
59
26
-
62
16
-663
-178
30
-76
-86
-30
-58
-864
-726 -193
Ref. [33]
58
24
19
62
16
-602
-190
4
-55
-107
-60
-50
-762
-657 -163
Ref. [28]
68
28
20
70
18
-784
-241
97
-46
-116
-52
-29 -1081 -554 -154
Table 4 :
4Independent FOECs (in GPa) of cubic diamond, silicon, aluminum, silver, and gold obtained by using the present method. Our results are compared to values calculated by employing the conventional approach relying on fitting energy-strain curves.C
(4)
1111
C
(4)
1112
C
(4)
1122
C
(4)
1123
C
(4)
1144
C
(4)
1155
C
(4)
1255
C
(4)
1266
C
(4)
1456
C
(4)
4444
C
(4)
4455
C This work 36057 9864
6768
-519 -1747 12628
284
9662
1236 12926 1169
Ref. [10]
26687 9459
6074
-425 -1385 10741 -264
8192
487
11328
528
Si This work 2586
2112
1885
576
-671
833
-422
742
-46
1268
-2
Ref. [40]
613
2401
1275
1053
5071
4050 -2728 -514
66
-2553
-577
Al This work 10102 2210
2441
-609
-68
3016
159
2553
224
2812
180
Ref. [8]
9916
2656
3708 -1000 -578
3554
-91
4309
148
3329
127
Ag This work 8346
4429
4204
333
99
3735
21
3813
-39
3638
-86
Ref. [8]
13694 7115
6652
-387
-154
5295
3
6718
-196
5416
-75
Au This work 17113 8114
8814
874
860
7462
-634
7372
-257
8258
-61
Ref. [8]
17951 8729
9033
416
691
7774
-752
9402
-170
8352
15
Ref. [38]
10094 8280
8402
1507
235
5549 -1534 8252
2
3640 -5763
Table 5 :
5Independent FOECs (in GPa) of hcp beryllium and magnesium calculated by using the present method.C
(4)
1111
C
(4)
1112
C
(4)
1113
C
(4)
1122
C
(4)
1133
C
(4)
1123
C
(4)
1144
C
(4)
1155
C
(4)
1166
C
(4)
1223
Be 32466
-3
358
-3529 -3721
881
-1902 -1342 -2880 1770
Mg 8638
-79
-243
119
-57
-47
-69
-40
-188
266
C
(4)
1233
C
(4)
1244
C
(4)
1255
C
(4)
1333
C
(4)
1344
C
(4)
1355
C
(4)
3333
C
(4)
3344
C
(4)
4444
Be -2113 3838
18
9934
229
1629
9986
8380 -5202
Mg
347
353
-30
828
392
240
5684
1402 -1073
/C (4) 1112 =1.9, C (4) 1111 /C (4) 1122 =2.0, C (4) 1111 /C (4) 1155 =2.2, C (4) 1111 /C (4) 1266 =2.2, and C (4) 1111 /C (4) 4444 =2.3, i.e. all values close to 2.0, whereas the remaining FOECs are much smaller than C (4)
This work is supported by the National Science Foundation (NSF), Award No. DMR-2036176. We acknowledge the support of the CUNY High Performance Computing Center, the PSC-CUNY grants 62651-0050 and 63913-0051.
. J Clayton, Nonlinear Mechanics of Crystals. SpringerDordrechtJ. Clayton, Nonlinear Mechanics of Crystals, Springer, Dor- drecht, 2011.
Charting the complete elastic properties of inorganic crystalline compounds. M Jong, W Chen, T Angsten, A Jain, R Notestine, A Gamst, M Sluiter, C K Ande, S Van Der Zwaag, J J Plata, C Toher, S Curtarolo, G Ceder, K A Persson, M Asta, Sci. Data. 2150009M. de Jong, W. Chen, T. Angsten, A. Jain, R. Notestine, A. Gamst, M. Sluiter, C. K. Ande, S. van der Zwaag, J. J. Plata, C. Toher, S. Curtarolo, G. Ceder, K. A. Persson, M. Asta, Charting the complete elastic properties of inorganic crystalline compounds, Sci. Data 2 (2015) 150009.
Calculation of mode grüneisen parameters made simple. D Cuffari, A Bongiorno, Phys. Rev. Lett. 124215501D. Cuffari, A. Bongiorno, Calculation of mode grüneisen pa- rameters made simple, Phys. Rev. Lett. 124 (2020) 215501.
Enhancing efficiency and scope of first-principles quasiharmonic approximation methods through the calculation of third-order elastic constants. A Bakare, A Bongiorno, Phys. Rev. Materials. 643803A. Bakare, A. Bongiorno, Enhancing efficiency and scope of first-principles quasiharmonic approximation methods through the calculation of third-order elastic constants, Phys. Rev. Ma- terials 6 (2022) 043803.
Nonlinear pressure dependence of elastic constants and fourth-o rder elastic constants of cesium halides. Z P Chang, G R Barsch, Phys. Rev. Lett. 19Z. P. Chang, G. R. Barsch, Nonlinear pressure dependence of elastic constants and fourth-o rder elastic constants of cesium halides, Phys. Rev. Lett. 19 (1967) 1381-1382.
The second-order pressure derivatives of the elastic moduli of a machinable glass ceramic. D Gerlich, S Hart, Journal of Applied Physics. 55D. Gerlich, S. Hart, The second-order pressure derivatives of the elastic moduli of a machinable glass ceramic, Journal of Applied Physics 55 (1984) 877-879.
Stresses in semiconductors: Ab initio calculations on si, ge, and gaas. O H Nielsen, R M Martin, Phys. Rev. B. 32O. H. Nielsen, R. M. Martin, Stresses in semiconductors: Ab initio calculations on si, ge, and gaas, Phys. Rev. B 32 (1985) 3792-3805.
Ab initio calculations of second-, third-, and fourth-order elastic constants for single crystals. H Wang, M Li, Phys. Rev. B. 79224102H. Wang, M. Li, Ab initio calculations of second-, third-, and fourth-order elastic constants for single crystals, Phys. Rev. B 79 (2009) 224102.
[
"Optimal Status Updating with a Finite-Battery Energy Harvesting Source",
"Optimal Status Updating with a Finite-Battery Energy Harvesting Source"
] | [
"Baran Tan Bacinoglu [email protected] \nAuburn University\nALUSA\n",
"Yin Sun \nAuburn University\nALUSA\n",
"Elif Uysal \nAuburn University\nALUSA\n",
"Volkan Mutlu [email protected] \nAuburn University\nALUSA\n",
"Metu \nAuburn University\nALUSA\n",
"Ankara \nAuburn University\nALUSA\n",
"Turkey \nAuburn University\nALUSA\n"
] | [
"Auburn University\nALUSA",
"Auburn University\nALUSA",
"Auburn University\nALUSA",
"Auburn University\nALUSA",
"Auburn University\nALUSA",
"Auburn University\nALUSA",
"Auburn University\nALUSA"
] | [] | We consider an energy harvesting source equipped with a finite battery, which needs to send timely status updates to a remote destination. The timeliness of status updates is measured by a non-decreasing penalty function of the Age of Information (AoI). The problem is to find a policy for generating updates that achieves the lowest possible time-average expected age penalty among all online policies. We prove that one optimal solution of this problem is a monotone threshold policy, which satisfies (i) each new update is sent out only when the age is higher than a threshold and (ii) the threshold is a non-increasing function of the instantaneous battery level. Let τ_B denote the optimal threshold corresponding to the full battery level B, and p(·) denote the age-penalty function; then we can show that p(τ_B) is equal to the optimum objective value, i.e., the minimum achievable time-average expected age penalty. These structural properties are used to develop an algorithm to compute the optimal thresholds. Our numerical analysis indicates that the improvement in average age with added battery capacity is largest at small battery sizes; specifically, more than half the total possible reduction in age is attained when battery storage increases from one transmission's worth of energy to two. This encourages further study of status update policies for sensors with small battery storage. Index Terms: Age of information; age-energy tradeoff; nonlinear age penalty; threshold policy; optimal threshold; energy harvesting; battery capacity. | 10.1109/jcn.2019.000033 | [
"https://arxiv.org/pdf/1905.06679v2.pdf"
] | 155,100,089 | 1905.06679 | ddd533d221b30e67ba82828887be1718d0a3563d |
Optimal Status Updating with a Finite-Battery Energy Harvesting Source
17 May 2019
Baran Tan Bacinoglu ([email protected]), METU, Ankara, Turkey
Yin Sun, Auburn University, AL, USA
Elif Uysal, METU, Ankara, Turkey
Volkan Mutlu ([email protected]), METU, Ankara, Turkey
Optimal Status Updating with a Finite-Battery Energy Harvesting Source
17 May 2019
Index Terms: Age of information; age-energy tradeoff; nonlinear age penalty; threshold policy; optimal threshold; energy harvesting; battery capacity
We consider an energy harvesting source equipped with a finite battery, which needs to send timely status updates to a remote destination. The timeliness of status updates is measured by a non-decreasing penalty function of the Age of Information (AoI). The problem is to find a policy for generating updates that achieves the lowest possible time-average expected age penalty among all online policies. We prove that one optimal solution of this problem is a monotone threshold policy, which satisfies (i) each new update is sent out only when the age is higher than a threshold and (ii) the threshold is a non-increasing function of the instantaneous battery level. Let τ_B denote the optimal threshold corresponding to the full battery level B, and p(·) denote the age-penalty function; then we can show that p(τ_B) is equal to the optimum objective value, i.e., the minimum achievable time-average expected age penalty. These structural properties are used to develop an algorithm to compute the optimal thresholds. Our numerical analysis indicates that the improvement in average age with added battery capacity is largest at small battery sizes; specifically, more than half the total possible reduction in age is attained when battery storage increases from one transmission's worth of energy to two. This encourages further study of status update policies for sensors with small battery storage.
I. INTRODUCTION
The Age of Information (AoI), or simply the age, was proposed in [2], [3] as a performance metric that measures the freshness of information in status-update systems. For a flow of information updates sent from a source to a destination, the age is defined as the time elapsed since the newest update available was generated at the source. That is, if U (t) is the largest among the time-stamps of all packets received by time t, the age is defined as:
∆(t) = t − U(t).  (1)
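As a small worked illustration of (1) (ours, not part of the paper; the function name is illustrative), the age at time t can be computed from the generation timestamps of the updates received so far:

```python
def age(t, received_timestamps):
    """Age of information at time t, per Eq. (1): t minus the largest
    generation timestamp among updates received by time t."""
    delivered = [u for u in received_timestamps if u <= t]
    return t - max(delivered) if delivered else t  # the age grows from t = 0

# Updates generated (and instantly delivered) at t = 1.0 and t = 2.5:
print(age(2.0, [1.0, 2.5]))  # 1.0
print(age(3.0, [1.0, 2.5]))  # 0.5
```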
AoI is a particularly relevant performance metric for status-update applications that have growing importance in remote monitoring [4], [5], machine-type communication, industrial manufacturing, telerobotics, the Internet of Things and social networks. In many applications, the timeliness of status updates not only determines the quality of service, but also affects other design goals such as the controllability of a dynamical system that relies on the updates of sensing and control signals. (This paper was presented in part at IEEE ISIT 2018 [1]. This work was supported in part by NSF grant CCF-1813050, ONR grant N00014-17-1-2417 and TUBITAK grant no 117E215.) AoI quantifies the timeliness of status updates from the perspective of the receiver, rather than throughput or delay based measures, which are actually channel-centric. Moreover, AoI is also related to measures such as the time-average mean-square error (MSE) for remote estimation. An example of this is the result in [6], which showed that remote estimation of a Wiener process minimizing the MSE reduces to an AoI optimization problem when the sampling times at the transmitting side are independent of the process. While AoI optimization based on linear functions of the age ∆(t) is a relevant performance goal for most scenarios, the performance of some applications may be related to nonlinear functions of the age. For example, the change in the value of stale data can be less/more significant as its age grows. In such cases, the penalty of data staleness can be modelled as a non-linear function p(∆(t)) of the age ∆(t), i.e., the age-penalty. This function is chosen to be non-decreasing so that a decrease in the age-penalty is possible only when the age is smaller. Accordingly, the optimization of the age-penalty parallels average AoI optimization, while it might have distinct optimality conditions. Ideally, AoI is minimized when status updates are frequent and fresh. That is, good AoI performance requires packets with low delay received regularly. A limitation in the minimization of AoI is a constraint on the long-term average update rate, which may be due to an average power budget for the channel over which status updates are sent. A stricter constraint is to keep a detailed budget on the number of status updates by allowing update transmission only when a replenishable resource becomes available. This is the case in energy harvesting communication systems, where each update consumes a certain amount of the harvested energy, if available. In the related literature on AoI optimization for energy harvesting communication systems, the energy harvesting process is considered as an arrival process where each harvest carries energy that is worth one update [7]-[14]. The goal of AoI optimization in such formulations is to find an optimal timing of update instants in order to minimize the average AoI while transmission opportunities are subject to the availability of energy. Energy arrivals occur irregularly or randomly, which models an energy harvesting scenario. The main challenge in optimizing the time-average expected age under random energy arrivals is that in the case of an energy outage (empty battery), the transmitter must idle for an unknown duration of time. If such random durations are inevitable, they introduce a tension for the regulation of inter-update durations. Another challenge is due to the finiteness of battery sizes.
Theoretically, it is possible to achieve asymptotically optimal average AoI by employing simple schemes assuming infinite [8] or sufficiently large battery [9] sizes. However, when the battery size is comparable to the energy required per update, such simple schemes do not allow performance guarantees. Consequently, it is important to explore optimal policies under such regimes where performance depends heavily on the statistics of energy arrivals and the battery size.
This study is motivated by the aforementioned challenges of optimizing AoI in energy harvesting systems, capturing both the randomness of energy arrivals and the finite energy storage capability. In addition to capturing both challenges, we go further by optimizing not only the average age itself, but a more general age penalty function p(∆(t)) that is not necessarily linear (see [15]-[20]). Hence, the problem considered in this study is an age-penalty optimization problem where status updates consume discrete units of energy that are randomly generated, i.e., harvested, such that the number of energy units that can be stored at a time is limited by a finite value, which is called the battery capacity.
Under the assumption of Poisson energy arrivals, we show the structure of solutions for the age-penalty optimization problem. The structure of the optimal solution reflects a basic intuition about the optimal strategy: Updates should be sent when the update is valuable (when the age is high) and the energy is cheap (the battery level is high). We show that the optimal solution is given by a stopping rule according to which an update is sent when its immediate cost is surpassed by the expected future cost. For Poisson energy arrivals, this stopping rule can be found in the set of policies that we refer as monotone threshold policies. Monotone threshold policies have the property that each update is sent only when the age is higher than a certain threshold which is a non-increasing function of the instantaneous battery level. One of our key results is that the value of the age-penalty function at the optimal threshold corresponding to the full battery level is exactly equal to the optimal value of the average age-penalty.
A. Contributions
The contributions of this paper can be summarized as follows:
• We formulate the general average age-penalty optimization problem for sending status updates from an energy harvesting source. This generalizes the AoI optimization goal in the prior studies [1], [8], [9], [11], [12] to a nonlinear function of the age. In addition to the generalization of the objective, the optimization is carried out over a more general policy space defined only through the causality assumption. We prove that solutions to this general optimization problem can be found among threshold-type policies.
• We show that, for optimal threshold-type policies with thresholds that are non-increasing in the battery level, the value of the penalty function at the threshold corresponding to the highest battery level is equal to the minimum value of the average age-penalty. As this optimal threshold is also the minimum of the optimal thresholds at different battery levels, this implies that inter-update durations under such a policy are always above the minimum value of the average age-penalty.
• For the case when the age-penalty function is linear, i.e., the average AoI minimization problem, we provide the optimal thresholds for integer battery sizes up to 4. These results show that the most significant decrease in the minimum average AoI happens when incrementing the battery capacity from one unit (capable of holding one packet transmission's worth of energy) to two units. The minimum achievable average AoI with a battery size of 4 units is only about 10% larger than the ultimate minimum average AoI with infinite battery capacity. That is a promising result for small sensor systems.
• For the average AoI minimization problem, we provide an algorithm that can find near optimal policies achieving average AoI values arbitrarily close to the optimal values for any given battery capacity. This algorithm provides a methodical way to derive near optimal policies utilizing the analytical results.
B. Paper Organization
The rest of the paper is organized as follows. In Section II, the related work is discussed and summarized. In Section III, the system model and the formulation of the AoI optimization problem are described. In Section IV, the main results on the structural properties of the solution to the AoI optimization problem are shown, and an algorithm to derive solutions for arbitrary integer battery sizes is provided. In Section V, the numerical results validating the analytical results and showing optimal solutions for integer battery sizes up to 4 are presented. In Section VI, the paper is concluded by summarizing the results and insights obtained over the course of this study.
II. RELATED WORK
Several studies on AoI considered this performance metric under various queueing system models, comparing service disciplines and queue management policies (e.g., [21]-[29]). A common observation in these studies was that many queueing/service policies that are throughput and delay optimal are often suboptimal with respect to AoI, while AoI-optimal policies can be throughput and delay optimal at the same time. This showed that AoI optimization is quite different from optimization with respect to classical performance metrics, and required many queueing models to be re-addressed with respect to age-related objectives. Moreover, queueing system formulations typically assume no precise control on the transmission or generation times of status updates. However, such control is important for age optimization [16], [17].
A direct control on the generation times of status updates is possible through a control algorithm that runs at the source. This is the "generate-at-will" assumption formulated in [7], [10] and studied in [6], [16], [17]. In [7], the problem of AoI optimization for a source that is constrained by an arbitrary sequence of energy arrivals was studied. In [10], AoI optimization was considered for a source that harvests energy at a constant rate under stochastic delays experienced by the status update packets. The results in these studies showed the suboptimality of work-conserving transmission schemes. Often, introducing a waiting time before sending the next update is optimal. That is, for maximum freshness, one may sometimes send updates at a rate lower than one is allowed to, which may be counter-intuitive at first sight.
The problem in [7] was extended to a continuous-time formulation with Poisson energy arrivals, finite energy storage (battery) capacity, and random packet errors in the channel in [8]. An age-optimal threshold policy was proposed for the unit battery case, and the achievable AoI for arbitrary battery size was bounded for a channel with a constant packet erasure probability. The concurrent study in [9], limited to the special cases of unit battery capacity and infinite battery capacity, computed the same threshold-type policies under these assumptions. These special cases were also investigated for noisy channels with a constant packet erasure probability in [13], [14]. The case of a battery with 2-unit capacity was studied in [11], and the optimal policies for this case were characterized as threshold-type policies similar to the optimal policy for unit battery capacity introduced in [8] and [9]. Optimal policies for arbitrary battery sizes were characterized via a Lagrangian approach in [12] and using optimal stopping theory in [1].
III. SYSTEM MODEL
Consider an energy harvesting transmitter that sends update packets to a receiver, as illustrated in Fig. 1. Suppose that the transmitter has a finite battery which is capable of storing up to B units of energy. Similar to [8], we assume that the transmission of an update packet consumes one unit of energy. The energy that can be harvested arrives in units according to a Poisson process with rate µ_H. Let E(t) denote the amount of energy stored in the battery at time t, such that 0 ≤ E(t) ≤ B. The timing of status updates is controlled by a sampler which can monitor the battery level E(t) for all t. We assume that the initial age and the initial battery level are zero, i.e., ∆(0) = 0 and E(0) = 0. Let H(t) and A(t) denote the number of energy units that have arrived during [0, t] and the number of updates sent out during [0, t], respectively. Hence, {H(t), t ≥ 0} and {A(t), t ≥ 0} are two counting processes. If an energy unit arrives when the battery is full, it is lost because there is no capacity to store it.
The system starts to operate at time t = 0. Let Z_k denote the generation time of the k-th update packet, such that 0 = Z_0 ≤ Z_1 ≤ Z_2 ≤ .... An update policy is represented by a sequence of update instants π = (Z_0, Z_1, Z_2, ...). Let X_k represent the inter-update duration between updates k − 1 and k, i.e., X_k = Z_k − Z_{k−1}. In many status-update systems (e.g., a sensor reporting temperature [30]), update packets are small in size and are only sent out sporadically. Typically, the duration for transmitting a packet is much smaller than the difference between two subsequent update times, i.e., the X_k are typically large compared to the duration of a packet transmission. With such systems in mind, in our model we approximate the packet transmission durations as zero. In other words, once the k-th update is generated and sent out at time t = Z_k, it is immediately delivered to the receiver. Hence, the age of information ∆(t) at any time t ≥ 0 is
∆(t) = t − max{Z_k : Z_k ≤ t},  (2)
which satisfies ∆(t) = 0 at each update time t = Z_k. Because an update costs one unit of energy, the battery level reduces by one upon each update, i.e.,
E(Z_k) = E(Z_k⁻) − 1,  (3)
where Z_k⁻ is the time immediately before the k-th update. Further, because the battery size is B, the battery level evolves according to
E(t) = min{E(Z_k) + H(t) − H(Z_k), B},  (4)
when t ∈ [Z_k, Z_{k+1}) is between two subsequent updates.
In terms of the energy available to the scheduler, we can define update policies that do not violate causality as follows.
Definition 1. A policy π is said to be energy-causal if updates only occur when the battery is non-empty, that is, E(Z_k⁻) ≥ 1 for each packet k.
Another restriction on update instants is due to the information available to the scheduler, which we define as follows.
Definition 2. Information on the energy arrivals and updates by time t is represented by the filtration
F_t = σ({(H(t′), A(t′)), 0 ≤ t′ < t}),
which is the σ-field generated by the sequence of energy arrivals and updates, i.e., {(H(t′), A(t′)), 0 ≤ t′ < t}.
Similar to the definition of energy-causal policies, in the policy space that we will consider we merely assume the causality of available information besides energy causality. To formulate this assumption, we use the definition of F_t. In terms of the information available to the scheduler, any random time instant θ does not violate causality if and only if {θ ≤ t} ∈ F_t for all t ≥ 0. We will refer to such random instants as Markov times [31] and, in general, consider update times as Markov times based on the filtration F_t. Notice that such update times do not have to be finite; however, we will refer to Markov times that are also finite with probability 1 (w.p.1) as stopping times [31]. For a policy trying to regulate the age, it is legitimate to assume that update instants are always finite w.p.1, as otherwise the age may grow unbounded with a positive probability. With this in mind, we will consider only the update instants that are stopping times.
Accordingly, we can define the online update policies combining the causality assumptions on available energy and information as follows:
Definition 3. A policy is said to be online if (i) it is energy-causal, and (ii) no update instant is determined based on future information, i.e., all update times are stopping (finite Markov) times based on F_t; that is, Z_k is finite w.p.1 while {Z_k ≤ t} ∈ F_t for all t ≥ 0 and k ≥ 1.
Let Π_online denote the set of online update policies. To evaluate the performance of online policies, we consider an age-penalty function that relates the age at a particular time to a cost that increases with the age. This function is defined as below:
We consider an age-penalty function p(·) that maps the age ∆(t) at time t to a penalty p(∆(t)):
Definition 4. A function p : [0, ∞) → [0, ∞) of the age is said to be an age-penalty function if
• lim_{∆→∞} p(∆) = ∞;
• p(·) is a non-decreasing function;
• ∫_0^∞ p(t) e^{−αt} dt < ∞ for all α > 0.
Observe that the definition of age-penalty functions covers any non-decreasing function of the age that is of sub-exponential order and grows to infinity.
The time-average expected value of the age-penalty, or simply the average age-penalty, can be expressed as
p̄ = lim sup_{T→∞} (1/T) E[ ∫_0^T p(∆(t)) dt ].  (5)
Let p̄_π denote the average age-penalty achieved by a particular policy π. The goal of this paper is to find the optimal update policy for minimizing the average age-penalty, which is formulated as
min_{π∈Π_online} p̄_π.  (6)
IV. MAIN RESULTS
We begin with a result guaranteeing that the space of threshold-type policies (see Definition 5) contains optimal update policies; hence we can focus our attention on these policies for finding solutions to (6). Note that at time t = Z_k, the age ∆(t) is equal to 0. In the meanwhile, the battery level E(t) will grow as more energy is harvested. In threshold policies, the threshold τ_{E(t)} changes according to the battery level E(t), and a new sample is taken at the earliest time that the age ∆(t) exceeds the threshold τ_{E(t)}. We define such policies as follows:
Definition 5. When E(t) ∈ {ℓ = 1, ..., B} represents the battery level at time t, an online policy is said to be a threshold policy if there exist τ_ℓ for ℓ = 1, ..., B such that
Z_{k+1} = inf{t ≥ Z_k : ∆(t) ≥ τ_{E(t)}}.  (7)
Note that a policy is said to be stationary if its actions depend only on the current state while being independent of time. An immediate observation is that, given ∆(t) and E(t), threshold policies do not depend on time; hence:
Proposition 1. All threshold policies are stationary.
Proof. By definition, the update instants of a threshold policy depend only on the time elapsed since the last update, i.e., ∆(t), and the current battery level.
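To make the threshold rule of Definition 5 concrete, the following minimal event-driven simulation (a sketch of ours, not from the paper; all names are illustrative) realizes the dynamics (2)-(4) under a given threshold vector, with unit-energy arrivals at Poisson rate mu_H, and estimates the time-average age:

```python
import math
import random

def simulate_threshold_policy(tau, mu_H, horizon, seed=0):
    """Simulate the status-update system under the threshold policy (7).
    tau[l-1] is the age threshold used at battery level l; returns the
    time-average age over [0, horizon]."""
    rng = random.Random(seed)
    B = len(tau)
    t, z, e = 0.0, 0.0, 0          # time, last update time, battery level
    next_arrival = rng.expovariate(mu_H)
    area = 0.0                     # integral of the age process

    def add_area(t0, t1, z_):      # integral of (t - z_) over [t0, t1]
        return 0.5 * ((t1 - z_) ** 2 - (t0 - z_) ** 2)

    while t < horizon:
        fire = z + tau[e - 1] if e >= 1 else math.inf
        fire = max(fire, t)        # fire immediately if already past threshold
        t_next = min(next_arrival, horizon)
        if fire <= t_next:         # age crosses the threshold first: update
            area += add_area(t, fire, z)
            t, z, e = fire, fire, e - 1    # instantaneous update, age resets
        else:                      # an energy unit arrives (or horizon ends)
            area += add_area(t, t_next, z)
            t = t_next
            if t_next == next_arrival:
                e = min(e + 1, B)          # overflowing energy is lost
                next_arrival += rng.expovariate(mu_H)
    return area / horizon
```

For instance, with B = 1, mu_H = 1 and tau = [0.9012], the simulated average age over a long horizon should be close to 0.90, matching Theorem 4 below.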
We expect that such stationary policies can minimize p̄ among all online policies, as energy arrivals follow a Poisson process, which is memoryless. Due to the memorylessness of energy arrivals, the evolution of the system can be understood through a renewal-type behaviour, which suggests that an optimal policy should be stationary.
Indeed, we note the following as the first key result of this paper.
Theorem 1. There exists a threshold policy that is optimal for solving (6).
Proof. See Appendix A.
One significant challenge in the proof of Theorem 1 is that (6) is an infinite time-horizon, time-averaged MDP which has an uncountable state space. When the state space is countable, one can analyze an infinite time-horizon, time-averaged MDP by making a unichain assumption. However, this method cannot be directly applied when the state space is uncountable. To resolve this, we use a modified version of the "vanishing discount factor" approach [32] to prove Theorem 1 in two steps:
1. Show that for every α > 0, there exists a threshold policy that is optimal for solving min_{π∈Π_online} E[ ∫_0^∞ e^{−αt} p(∆(t)) dt ].
2. Prove that this property also holds when the discount factor α vanishes to zero.
In our search for an optimal policy, we can further reduce the space of policies:
Definition 6. A threshold policy is said to be a monotone threshold policy if τ_1 ≥ τ_2 ≥ ... ≥ τ_B.
Note that the definition of monotone threshold policies refers only to thresholds that are non-increasing in the battery level, as opposed to the non-decreasing case.
Let Π_MT be the set of monotone threshold policies. Then the following is true:
Theorem 2. There exists a monotone threshold policy π ∈ Π_MT that is optimal for solving (6).
Proof. See Appendix B.
Theorem 2 implies that in the optimal update policy, update packets are sent out more frequently when the battery level is high and less frequently when the battery level is low. This result is quite intuitive: if the battery is full, arriving energy cannot be harvested; if the battery is empty, update packets cannot be transmitted when needed and the age increases. Hence, both battery overflow and outage are harmful. Monotone threshold policies can address this issue. When the battery level ℓ is high, the threshold τ_ℓ is small to reduce the chance of battery overflow; when the battery level ℓ is low, the threshold τ_ℓ is high to avoid battery outage. For a policy in Π_MT, the state (∆(t), E(t)) does not spend a measurable amount of time in the region where ∆(t) ≥ τ_{E(t)}, in which an update is sent out instantly, reducing the battery level. Otherwise, the battery level is incremented upon energy harvests while the age increases linearly in time. The illustration in Fig. 2 shows the time evolution of the state (∆(t), E(t)) for policies in Π_MT. If the energy level is E(Z_k) = j upon the previous update, then the inter-update time satisfies X_{k+1} ∈ [τ_m, τ_{m−1}] if and only if m − j energy units arrive during the inter-update time. In other words, reaching the battery state m or higher is necessary and sufficient for the next inter-update duration being shorter than some x when x ∈ [τ_m, τ_{m−1}). Let Y_i denote the duration required for i ≥ 1 successive energy arrivals, which obeys the Erlang distribution at rate µ_H with parameter i,
[Fig. 2: Evolution of the state (∆(t), E(t)) under a monotone threshold policy, with thresholds τ_1 ≥ τ_2 ≥ ... ≥ τ_B for battery levels E = 1, 2, ..., B.]
P(Y_i ≤ x) = 1 − Σ_{v=0}^{i−1} (1/v!) e^{−µ_H x} (µ_H x)^v,  (8)
and let Y_i = 0 for i ≤ 0.
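A direct transcription of (8) in code (ours; the helper name is illustrative), including the convention Y_i = 0 for i ≤ 0:

```python
import math

def erlang_cdf(i, x, mu_H):
    """P(Y_i <= x) from Eq. (8) for x >= 0; equals 1 for i <= 0 since Y_i = 0."""
    if i <= 0:
        return 1.0
    return 1.0 - sum(math.exp(-mu_H * x) * (mu_H * x) ** v / math.factorial(v)
                     for v in range(i))

print(erlang_cdf(2, 1.0, 1.0))  # 1 - 2/e ≈ 0.2642
```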
Accordingly, for policies in Π_MT, the cumulative distribution function (CDF) of the inter-update durations can be expressed as
Pr(X_{k+1} ≤ x | E(Z_k) = j) =
  0,  if x < τ_B,
  Pr(Y_{m−j} ≤ x),  if τ_m ≤ x < τ_{m−1}, ∀m ∈ {2, ..., B},
  Pr(Y_{1−j} ≤ x),  if τ_1 ≤ x.  (9)
From (9), an expression for the transition probability Pr(E(Z_{k+1}) = i | E(Z_k) = j) for i = 0, 1, ..., B − 1 can be derived:
Pr(E(Z_{k+1}) = i | E(Z_k) = j) =
  Pr(Y_{B−j} ≤ τ_{B−1}),  if i = B − 1,
  Pr(Y_{1+i−j} ≤ τ_i) − Pr(Y_{2+i−j} ≤ τ_{i+1}),  if i < B − 1.  (10)
Hence, the energy states sampled at update instants can be described as a Discrete Time Markov Chain (DTMC) with the transition probabilities in (10); see Fig. 3. [Fig. 3: The DTMC over the battery states E = 0, 1, ..., B − 1 sampled at update instants.] When the thresholds are finite, this DTMC is ergodic, as any energy state is reachable from any other energy state in B − 1 steps with positive probability.
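Numerically, the transition matrix (10) and the stationary distribution of this DTMC can be obtained as in the sketch below (ours, assuming monotone thresholds; it reuses erlang_cdf from above, indexes tau[m-1] = τ_m, and uses the convention τ_0 = ∞):

```python
import numpy as np

def transition_matrix(tau, mu_H):
    """P[j, i] = Pr(E(Z_{k+1}) = i | E(Z_k) = j) from Eq. (10)."""
    B = len(tau)
    P = np.zeros((B, B))
    for j in range(B):
        for i in range(B - 1):
            # Pr(Y_{1+i-j} <= tau_i) with tau_0 = infinity, minus
            # Pr(Y_{2+i-j} <= tau_{i+1})
            upper = 1.0 if i == 0 else erlang_cdf(1 + i - j, tau[i - 1], mu_H)
            P[j, i] = upper - erlang_cdf(2 + i - j, tau[i], mu_H)
        # i = B - 1: Pr(Y_{B-j} <= tau_{B-1}); for B = 1 all mass lands here
        P[j, B - 1] = 1.0 if B == 1 else erlang_cdf(B - j, tau[B - 2], mu_H)
    return P

def steady_state(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()
```

The stationary distribution computed here is what weights the conditional moments in (15)-(16) further below.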
Any optimal policy in Π_MT has the following property:
Theorem 3. An optimal policy for solving (6) is a monotone threshold policy that satisfies
p(τ*_B) = p̄_{π*} = min_{π∈Π_online} p̄_π,  (11)
where π* is a monotone threshold policy solving (6) and τ*_B is its age threshold for the full-battery case.
Proof. See Appendix E.
The result in Theorem 3 exhibits a structural property of optimal policies which also appears in the sampling problem that was studied in [19]. The sampling problem in [19] considered sources without energy harvesting, where the packet transmission times were i.i.d. and non-zero. On the one hand, the optimal sampling policy in Theorem 1 of [19] is a threshold policy on an expected age penalty term, and the threshold is exactly equal to the optimal objective value. On the other hand, a sampling problem for an energy harvesting source with zero packet transmission time is considered in the current paper. The optimal sampling policy in Theorem 3 can be rewritten as
Z_{k+1} = inf{t ≥ Z_k : p(∆(t)) ≥ p(τ*_{E(t)})},
which is a multi-threshold policy on the age penalty function, with each threshold p(τ*_ℓ) corresponding to a battery level ℓ. Further, the threshold p(τ*_B) associated with a full battery E(t) = B is equal to the optimal objective value. The results in these two studies are similar to each other. Together, they provide a unified view on optimal sampler design for sources both with and without energy harvesting capability. The proof techniques in these two studies, however, differ fundamentally.
A. Average Age Case
If we take the age-penalty function as the identity function, i.e., p(∆) = ∆, then (6) becomes the problem of minimizing the time-average expected age. In this case, the result in Theorem 3 implies that in optimal monotone threshold policies, inter-update durations can be as small as the minimum average AoI only when the battery is full. From the results in [8] and [9], we know that the minimum average AoI for the infinite battery case is 1/(2µ_H), and this can be achieved asymptotically using the best-effort scheme in [9] or with a threshold policy [8] where all thresholds are nearly equal to 1/µ_H. On the other hand, according to Theorem 3, the optimal threshold for the full battery level tends to 1/(2µ_H) as the battery capacity increases. This shows that the optimal monotone threshold policies remain structurally dissimilar to asymptotically optimal policies as the battery capacity approaches infinity. The result is more useful when the battery capacity is finite, as it may lead to the optimal threshold values of the other battery levels. We will use this in an algorithm for finding near optimal policies for any given integer battery capacity. In addition, the special case of Theorem 3 for the average age [1] can be derived from a more general result, which we provide in Lemma 1. This result shows a relation between the partial derivatives of the moments of a non-negative random variable with respect to the thresholds determining the random variable, in a similar way to the inter-update duration case.
Lemma 1. Suppose X is a r.v. that satisfies the following:
Pr(X ≤ x) =
  0,  if x < τ_B,
  F_i(x),  if τ_i ≤ x < τ_{i−1}, ∀i ∈ {2, ..., B},
  F_1(x),  if τ_1 ≤ x,
where 0 < τ_B ≤ ... ≤ τ_2 ≤ τ_1 and, for each i ∈ {1, ..., B}, F_i(x) is the CDF of a non-negative random variable. Then
∂/∂τ_i E[X²] = 2τ_i ∂/∂τ_i E[X].
Proof. See Appendix C.
Corollary 1. The inter-update intervals X for any π ∈ Π_MT satisfy the following:
∂/∂τ_i E[X² | E = j] = 2τ_i ∂/∂τ_i E[X | E = j],  (12)
for all (i, j) ∈ {1, 2, ..., B}², where E[X | E = j] := E[X_k | E(Z_k) = j] and E[X² | E = j] := E[X_k² | E(Z_k) = j].
Note that the transition probabilities (10) do not depend on τ_B; hence the steady-state probabilities obtained from (10) also do not depend on τ_B. This leads to the property of τ_B in the average age case of Theorem 3, as shown in [1]. The unit-battery case, i.e., the B = 1 case, was solved in [8] and [9]. For completeness, this result is summarized in Theorem 4.
Theorem 4. When B = 1, the average age ∆̄ can be expressed as
∆̄ = [ (µ_H τ_1)²/2 + e^{−µ_H τ_1}(µ_H τ_1 + 1) ] / [ µ_H (µ_H τ_1 + e^{−µ_H τ_1}) ],  (13)
and τ*_1 = ∆̄_{π*} = (2/µ_H) W(1/√2), where W(·) is the Lambert W function.
Proof. See Appendix F.
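A quick numerical check of Theorem 4 using SciPy (our addition, not part of the paper): the closed-form threshold 2W(1/√2)/µ_H should both minimize (13) and coincide with the minimum average age, as Theorem 3 predicts.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

mu_H = 1.0
tau_star = 2.0 * float(np.real(lambertw(1.0 / np.sqrt(2.0)))) / mu_H

def avg_age_B1(tau):
    """Average age for B = 1, Eq. (13)."""
    a = mu_H * tau
    return (0.5 * a**2 + np.exp(-a) * (a + 1.0)) / (mu_H * (a + np.exp(-a)))

res = minimize_scalar(avg_age_B1, bounds=(1e-6, 10.0), method="bounded")
print(tau_star, res.x, avg_age_B1(tau_star))  # all three ≈ 0.9012 for mu_H = 1
```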
Theorem 5. When B = 2, the average age ∆̄ can be expressed as
∆̄ = [ α₂²/2 + e^{−α₂}[α₂ + 1 + ρ₁(α₂² + 2α₂ + 2)] − e^{−α₁}[α₁ + 1 + ρ₁(α₁² + α₁ + 1)] ] / [ µ_H (α₂ + e^{−α₂}[1 + ρ₁(α₂ + 1)] − e^{−α₁}[1 + ρ₁α₁]) ],  (14)
where
ρ₁ = e^{−α₁} / (1 − e^{−α₁}α₁),  α₁ = µ_H τ_1, and α₂ = µ_H τ_2.
Proof. See Appendix G.
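Independently of our transcription of (14), a near-optimal pair (τ_1, τ_2) for B = 2 can be located by Monte Carlo, reusing simulate_threshold_policy from Section IV (a coarse, noisy sketch of ours, for illustration only):

```python
import itertools
import numpy as np

mu_H = 1.0
grid = np.linspace(0.3, 2.0, 25)
candidates = [(t1, t2) for t1, t2 in itertools.product(grid, grid) if t2 <= t1]
best = min(candidates,
           key=lambda pair: simulate_threshold_policy(list(pair), mu_H,
                                                      horizon=1e5, seed=1))
# By Theorem 3, at the optimum the simulated average AoI should be close
# to the full-battery threshold tau_2.
print(best, simulate_threshold_policy(list(best), mu_H, horizon=1e6))
```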
B. An Algorithm for Finding Near Optimal Policies
We propose an algorithm to find a near optimal policy π ∈ Π_MT such that ∆̄_π − ∆̄_{π*} ≤ 1/(2^{q+1}µ_H) for any given B and q ∈ Z⁺. Let m₁(τ_1, τ_2, ..., τ_B) and m₂(τ_1, τ_2, ..., τ_B) denote the functions such that
m₁(τ_1, τ_2, ..., τ_B) = Σ_{j=0}^{B−1} E[X | E = j] Pr(E = j),  (15)
m₂(τ_1, τ_2, ..., τ_B) = Σ_{j=0}^{B−1} E[X² | E = j] Pr(E = j),  (16)
where Pr(E = j) is the steady-state probability of energy state j, E[X | E = j] := E[X_k | E(Z_k) = j] and E[X² | E = j] := E[X_k² | E(Z_k) = j].
Note that it is straightforward to derive m₁(τ_1, τ_2, ..., τ_B) and m₂(τ_1, τ_2, ..., τ_B) using (9) and (10); hence we assume these functions are available for any B.
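One way (ours, not the paper's derivation) to evaluate m₁ and m₂ numerically is to integrate the survival function of (9) for the conditional moments and weight them by the stationary distribution of (10), reusing erlang_cdf, transition_matrix and steady_state from above:

```python
from scipy.integrate import quad

def cdf_X(x, j, tau, mu_H):
    """Pr(X_{k+1} <= x | E(Z_k) = j) from Eq. (9); tau[m-1] = tau_m."""
    B = len(tau)
    if x < tau[B - 1]:
        return 0.0
    m = B                          # locate the band tau_m <= x < tau_{m-1}
    while m >= 2 and x >= tau[m - 2]:
        m -= 1
    return erlang_cdf(m - j, x, mu_H)

def m1_m2(tau, mu_H):
    """Numerical m1, m2 of Eqs. (15)-(16), using E[X] = int P(X > x) dx
    and E[X^2] = int 2x P(X > x) dx for the non-negative X."""
    B = len(tau)
    pi = steady_state(transition_matrix(tau, mu_H))
    m1 = m2 = 0.0
    x_max = 50.0 / mu_H            # truncation; the Erlang tails are negligible
    for j in range(B):
        ex, _ = quad(lambda x: 1.0 - cdf_X(x, j, tau, mu_H),
                     0.0, x_max, limit=200)
        ex2, _ = quad(lambda x: 2.0 * x * (1.0 - cdf_X(x, j, tau, mu_H)),
                      0.0, x_max, limit=200)
        m1, m2 = m1 + pi[j] * ex, m2 + pi[j] * ex2
    return m1, m2
```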
In the theorem below, we state the main result that we will use in an algorithm for finding near optimal policies.
Theorem 6. For B > 1, the equation
2τ_B m₁(τ_1, τ_2, ..., τ_B) − m₂(τ_1, τ_2, ..., τ_B) = 0  (17)
has a solution with monotone non-increasing thresholds, i.e., τ_B ≤ ... ≤ τ_2 ≤ τ_1, if and only if τ_B ≥ ∆̄_{π*}.
Algorithm 1 uses this result to find a near optimal policy π ∈ Π_MT such that ∆̄_π − ∆̄_{π*} ≤ 1/(2^{q+1}µ_H). Each iteration of Algorithm 1 halves the interval in which the minimum average AoI can be found, based on the existence of a solution to (17) with the current estimate τ̂_B of the smallest threshold. Accordingly, it is guaranteed that Algorithm 1 finds a solution within a gap of 1/(2^{q+1}µ_H) to the optimal value. Algorithm 1 assumes a numerical solver that can solve the transcendental equation in (17); however, the exact solution is required only once, at the final step, while only the existence of solutions to (17) needs to be checked during the iterations.

Algorithm 1: Find π ∈ Π_MT such that ∆̄_π − ∆̄_{π*} ≤ 1/(2^{q+1}µ_H)
Require: B ≥ 1 and q ≥ 1
Ensure: ∆̄_π − ∆̄_{π*} ≤ 1/(2^{q+1}µ_H)
  τ_B⁻ ← 1/(2µ_H), τ_B⁺ ← 1/µ_H
  for i = 1, 2, ..., q do
    τ̂_B ← (τ_B⁻ + τ_B⁺)/2
    if ∃ τ_{B−1} ≤ ... ≤ τ_2 ≤ τ_1 s.t. τ_{B−1} ≥ τ̂_B and 2τ̂_B m₁(τ_1, τ_2, ..., τ̂_B) − m₂(τ_1, τ_2, ..., τ̂_B) = 0 then
      τ_B⁺ ← τ̂_B
    else
      τ_B⁻ ← τ̂_B
    end if
  end for
  Solve 2τ_B⁺ m₁(τ_1, ..., τ_{B−1}, τ_B⁺) − m₂(τ_1, ..., τ_{B−1}, τ_B⁺) = 0 for τ_1 ≥ ... ≥ τ_{B−1} ≥ τ_B⁺ and return (τ_1, ..., τ_{B−1}, τ_B⁺)
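The bisection logic of Algorithm 1 can be sketched in code as follows (ours; it reuses m1_m2 from above and replaces the existence check in the if-statement by a crude sign-change test over the one-parameter family τ_1 = ... = τ_{B−1} = s, so it illustrates the loop rather than providing a faithful feasibility oracle):

```python
import numpy as np

def residual(s, tau_hat, B, mu_H):
    """Left-hand side of Eq. (17) with tau_1 = ... = tau_{B-1} = s."""
    tau = [s] * (B - 1) + [tau_hat]
    m1, m2 = m1_m2(tau, mu_H)
    return 2.0 * tau_hat * m1 - m2

def feasible(tau_hat, B, mu_H):
    """Sign-change test: does (17) admit a root with s >= tau_hat?"""
    s_grid = np.linspace(tau_hat, 10.0 / mu_H, 40)
    r = [residual(s, tau_hat, B, mu_H) for s in s_grid]
    return any(a * b <= 0.0 for a, b in zip(r, r[1:]))

def algorithm1(B, q, mu_H):
    """Bisection of Algorithm 1 (Theorem 6 requires B > 1)."""
    lo, hi = 0.5 / mu_H, 1.0 / mu_H          # initial bracket
    for _ in range(q):
        mid = 0.5 * (lo + hi)
        if feasible(mid, B, mu_H):
            hi = mid          # a solution exists, so the optimum is <= mid
        else:
            lo = mid
    return hi                 # within 1/(2^(q+1) mu_H) of the optimum
```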
V. NUMERICAL RESULTS

The optimal thresholds and the corresponding minimum average AoI values are given in Table I. These results were obtained through exhaustive search over possible threshold values, and Monte Carlo analysis for approximating the AoI values in the simulation of the considered system and policies, without relying on the analytical results. It can be seen that the optimal thresholds and the corresponding AoI values (in Table I) validate Theorem 3. Figs. 5 and 6 show the dependency of the AoI on the threshold values τ_1 and τ_2, which is consistent with the result in Theorem 5 for the special case of B = 2.
VI. CONCLUSION
We have studied the optimization of a non-linear age penalty in the generation and transmission of status updates by an energy harvesting source with a finite battery. An optimal status updating policy for minimizing the time-average expectation of a general non-decreasing age-penalty function p(·) has been obtained. The policy has a monotonic threshold structure: (i) each new update is sent out only when the age is higher than a threshold, and (ii) the threshold is a non-increasing function of the instantaneous battery level, such that updates are sent out more frequently when the battery level is high. Furthermore, we have identified an interesting relationship between the smallest optimal threshold τ*_B (i.e., the threshold corresponding to a full battery level) and the optimal objective value p̄_{π*} (i.e., the minimum achievable time-average expected age penalty), which is given by
p(τ*_B) = p̄_{π*}.  (18)
APPENDIX
A. The Proof of Theorem 1
In order to prove Theorem 1, we use a modified version of the "vanishing discount factor" approach [32] which consists of 2 steps:
Step 1. Show that for every α > 0, there exists a threshold policy that is optimal for solving min_{π∈Π_online} E[ ∫_0^∞ e^{−αt} p(∆(t)) dt ].
Step 2. Prove that this property still holds when the discount factor α vanishes to zero.
We first discuss Step 1. Recall that F_t represents the information about the energy arrivals and the update policy during [0, t]. Given F_a, we are interested in finding the optimal online policy during [a, ∞), which is formulated as
min_{π∈Π_online} E[ ∫_a^∞ e^{−α(t−a)} p(∆(t)) dt | F_a ].  (19)
Observe that, in (19), the term e^{−α(t−a)} ensures that the exponential decay always starts from unity, so that the problem is independent of a, given F_a. In addition, this problem has the following nice property:
Lemma 2. There exists an optimal solution to (19) that depends on F_a only through (∆(a), E(a)). That is, (∆(a), E(a)) is a sufficient statistic for solving (19).
Proof. In Problem (19), the age evolution {∆(t), t ≥ a} is determined by the initial age ∆(a) at time a and the update policy during [a, ∞). Further, the update policy during [a, ∞) is determined by the initial age ∆(a), the initial battery level E(a), and the energy arrivals during [a, ∞). Recall that ∆(0) and E(0) are fixed. Hence, for any online update policy, the online update decisions during [0, a] depend only on {H(t), t ≤ a}. Hence, F_a is determined by {H(t), t ≤ a}. Because {H(t), t ≥ 0} is a compound Poisson process, {H(t) − H(a), t ≥ a} is independent of {H(t), t ≤ a}. Hence, {∆(t), t ≥ a} depends on F_a only through ∆(a) and E(a). By this, (∆(a), E(a)) is a sufficient statistic for solving (19).
By using Lemma 2, we can simplify (19) and define a cost function J_α(∆(a), E(a)) as the optimal objective value of the simplified problem:
J_α(∆(a), E(a)) = min_{π∈Π_online} E[ ∫_a^∞ e^{−α(t−a)} p(∆(t)) dt | ∆(a), E(a) ].  (20)
Furthermore, one important question is: given that the previous update occurs at Z_k = a, how should the next update time Z_{k+1} be chosen? This can be formulated as
min_{(Z_1, ..., Z_k = a, Z_{k+1}, ...) ∈ Π_online} E[ ∫_a^∞ e^{−α(t−a)} p(∆(t)) dt | Z_k = a, ∆(a) = 0, E(a) ],  (21)
where we have used the fact that if Z_k = a, then ∆(a) = ∆(Z_k) = 0.
According to the definition of Π_online, Z_{k+1} is a finite Markov time, i.e., a stopping time; hence the problem of finding Z_{k+1} for a solution to (21) can be formulated as an infinite-horizon optimal stopping problem on the interval [a, ∞). We will consider a gain [31] process G = (G_t)_{t≥a} adapted to the filtration F_t, where a stopping time Z_{k+1} for a solution to (21) maximizes E[G_{Z_{k+1}} | F_a] when we choose Z_{k+1} from a family of stopping times based on F_t. Let M_a denote this family of Z_{k+1}'s, which can be expressed as
M_a = {Z_{k+1} ≥ a : Pr(Z_{k+1} < ∞) = 1, {Z_{k+1} ≤ t} ∈ F_t, ∀t ≥ a}.
Note that a stopping time in M_a may violate energy causality; however, our definition of the gain process will guarantee that such stopping times cannot be optimal.
We will define the gain process (G_t)_{t≥a} based on the value of the discounted cost when an update is sent at a particular time t. For E(t) > 0, the gain process corresponds to the additive inverse of this cost and can be written as follows:
G_t = − min_{π∈Π_online} E[ ∫_a^∞ e^{−α(w−a)} p(∆(w)) dw | Z_k = a, Z_{k+1} = t, E(t) ],  for E(t) > 0.  (22)
Note that the stopping time cannot be at a time t when E(t) = 0, as there is no energy to send another update in that case. To cover this case, we set G_t to −∞, so that a stopping time Z_{k+1} maximizing E[G_{Z_{k+1}} | F_a] satisfies energy causality and hence belongs to an online policy. In other words, the stopping time Z_{k+1} in a solution to (21) maximizes E[G_{Z_{k+1}} | F_a] among all the stopping times in M_a.
Alternatively, the gain process (G_t)_{t≥a} can be expressed in terms of the cost defined in (20) as follows:
G_t = − ∫_a^t e^{−α(w−a)} p(w − a) dw − E[ ∫_t^∞ e^{−α(w−a)} p(∆(w)) dw | Z_k = a, Z_{k+1} = t, E(t) ]
    = − ∫_a^t e^{−α(w−a)} p(w − a) dw − e^{−α(t−a)} J_α(0, E(t) − 1),  (23)
for t ≥ a and E(t) > 0.
Let us define J_α(0, −1) := ∞, so that (23) holds for the case E(t) = 0 as well. Notice that the process G_t is driven by the random process E(t), which is not conditioned on any particular value of E(a) while being adapted to the filtration F_t. However, for a policy solving (21), the stopping time Z_{k+1} depends on E(a), as it maximizes E[G_{Z_{k+1}} | F_a], which depends on E(a) through the filtration F_a.
Accordingly, we define the stopping problem of maximizing the expected gain over the interval [a, ∞) as follows:
max_{t∈M_a} E[G_t | F_a].  (24)
Based on this formulation, we will show that the optimal stopping time exists and is given by the following stopping rule for Z_{k+1}:
Z_{k+1} = inf{t ≥ Z_k = a : G_t = S_t},  (25)
where S is the Snell envelope [31] for G:
S_t = ess sup_{t′∈M_t} E[G_{t′} | F_t].  (26)
Showing that Z_{k+1} in (25) is finite w.p.1 is sufficient to prove the existence of the optimal stopping time and the optimality of the stopping rule in (25) (see [31, Theorem 2.2]). Consider the lemma below and its proof in order to see the finiteness of Z_{k+1} in (25):
Lemma 3. For the stopping rule in (25), Z_{k+1} is finite w.p.1, i.e., Pr(Z_{k+1} < ∞) = 1.
Proof. Consider the Markov time Q_{k+1}, which is defined as follows:
Q_{k+1} := inf{t ≥ Z_k = a : E(t) = B, G_t = S_t}.  (27)
Clearly, the stopping time Z_{k+1} chosen in (25) is earlier than Q_{k+1}, as Q_{k+1} has the additional stopping condition E(t) = B. This means that if Pr(Q_{k+1} < ∞) = 1, then Pr(Z_{k+1} < ∞) = 1. Accordingly, for the proof of this lemma, it is sufficient to show that Q_{k+1} is finite w.p.1. We will show this by showing the finiteness of (i) the first time t ≥ Z_k = a such that E(t) = B, and (ii) the duration between this time and the Markov time Q_{k+1}. Note that the condition E(t) = B is always satisfied after it is reached for the first time. Let R_{k+1} be the Markov time representing the first time when E(t) = B is satisfied:
R_{k+1} := inf{t ≥ Z_k = a : E(t) = B}.  (28)
(i) Observe that the Markov time R_{k+1} is finite w.p.1, as it is stochastically dominated by a + Y_B, where Y_B is an Erlang distributed random variable with parameter B which obeys (8), and Pr(Y_B < ∞) = 1.
(ii) In order to see that Q_{k+1} − R_{k+1} is also finite, consider the time period after R_{k+1}, i.e., [R_{k+1}, ∞). As E(t) = B for any t ≥ R_{k+1}, the evolution of G_t becomes deterministic for t ≥ R_{k+1}:
G_t = − ∫_a^t e^{−α(w−a)} p(w − a) dw − e^{−α(t−a)} J_α(0, B − 1),  (29)
for t ≥ R_{k+1}. On the other hand, for t ≥ R_{k+1}, the Snell envelope is S_t = ess sup_{t′∈M_t} G_{t′} = sup_{t′≥t} G_{t′}. We will show that G_t is non-increasing after some finite time, so that S_t = G_t holds after that time.
In order to see this, consider the change in G_t for t ≥ R_{k+1}. As
−(∂/∂t)[ ∫_a^t e^{−α(w−a)} p(w − a) dw + e^{−α(t−a)} J_α(0, B − 1) ] = e^{−α(t−a)} ( αJ_α(0, B − 1) − p(t − a) ),  (30)
and p(t − a) is non-decreasing, for t ≥ R_{k+1}, G_t is non-increasing if t ≥ t_c for some t_c such that
t_c := inf{t ≥ a : p(t − a) = αJ_α(0, B − 1)}.  (31)
This implies that, for t ≥ max{R_{k+1}, t_c}, we have G_t = sup_{t′≥t} G_{t′} and hence S_t = G_t. Accordingly, the stopping conditions of Q_{k+1} are satisfied for the first time at t = max{R_{k+1}, t_c}, which means Q_{k+1} = max{R_{k+1}, t_c}.
As αJ_α(0, B − 1) is finite, t_c is finite, which implies that Q_{k+1} is finite w.p.1 since R_{k+1} is finite w.p.1. This completes the proof.
We just showed that the Markov time in (25) is finite w.p.1, which means that it is the optimal stopping time by [31, Theorem 2.2]. Next, we show that the optimal stopping rule in (25) is a threshold policy by using the properties of the cost function in (20). To relate the optimal stopping time and the cost function in (20), we will express the Snell envelope in an alternative way.
Notice that the Snell envelope can be written by substituting (22) in (26) as follows:
S_t = ess sup_{t′∈M_t} ( − min_{π∈Π_online} E[ ∫_a^∞ e^{−α(w−a)} p(∆(w)) dw | Z_k = a, Z_{k+1} = t′, F_t ] ).  (32)
Hence,
S_t = − min_{π∈Π_online} E[ ∫_a^∞ e^{−α(w−a)} p(∆(w)) dw | Z_k = a, Z_{k+1} ≥ t, F_t ].  (33)
Accordingly, using the definition of J_α(∆(a), E(a)), we can write
S_t = −( ∫_a^t e^{−α(w−a)} p(w−a) dw + e^{−α(t−a)} J_α(∆(t), E(t)) ). (34)
Therefore, as the first terms in (29) and (34) are identical, the optimal stopping rule in (25) is equivalent to
Z_{k+1} = inf{t ≥ Z_k = a : J_α(0, E(t)−1) = J_α(∆(t), E(t))}. (35)
Next, we show that the stopping rule in (35) is a threshold rule in age. In order to show this, let us define the function ρ_α(·) : {0, 1, ..., B} → [0, ∞) such that:
ρ α (ℓ) := inf{w ≥ 0 : J α (0, ℓ − 1) = J α (w, ℓ)}.(36)
We can show that for any ∆ ≥ ρ α (ℓ), it is guaranteed that J α (0, ℓ − 1) = J α (∆, ℓ) due to the following reasons:
• For any ∆ and ℓ ∈ {0, 1, 2, ..., B}, J_α(∆, ℓ) is smaller than or equal to J_α(0, ℓ−1), as:
J_α(∆, ℓ) = min_{π∈Π_online} e^{αa} E[ ∫_a^∞ e^{−αw} p(∆(w)) dw | Z_k = t_a − ∆, Z_{k+1} ≥ t_a, E(t_a) = ℓ ]
≤ min_{π∈Π_online} e^{αa} E[ ∫_a^∞ e^{−αw} p(∆(w)) dw | Z_k = t_a − ∆, Z_{k+1} = t_a, E(t_a) = ℓ ] = J_α(0, ℓ−1),
where the inequality is true as the expectation is conditioned on policies with Z_{k+1} = t_a.
• For any ℓ ∈ {0, 1, 2, ..., B}, J_α(∆, ℓ) is non-decreasing in ∆, as:
J_α(∆, ℓ) = min_{π∈Π_online} { E[ ∫_a^{Z_{k+1}} e^{−α(w−a)} p(w + ∆ − t_a) dw | θ(∆) ] + E[ ∫_{Z_{k+1}}^∞ e^{−α(w−a)} p(∆(w)) dw | θ(∆) ] }
≤ min_{π∈Π_online} { E[ ∫_a^{Z_{k+1}} e^{−α(w−a)} p(w + ∆′ − t_a) dw | θ(∆) ] + E[ ∫_{Z_{k+1}}^∞ e^{−α(w−a)} p(∆(w)) dw | θ(∆) ] }
= min_{π∈Π_online} { E[ ∫_a^{Z_{k+1}} e^{−α(w−a)} p(∆(w)) dw | θ(∆′) ] + E[ ∫_{Z_{k+1}}^∞ e^{−α(w−a)} p(∆(w)) dw | θ(∆′) ] } = J_α(∆′, ℓ),
for any ∆′ ≥ ∆ and θ(∆) := (Z_k = t_a − ∆, Z_{k+1} ≥ t_a, E(t_a) = ℓ),
where the inequality follows from the fact that p(·) is non-decreasing, and the second equality is due to the fact that, given Z_{k+1}, the integrated values are conditionally independent of Z_k. Accordingly, J_α(∆, ℓ) = J_α(0, ℓ−1) for any ℓ ∈ {0, 1, 2, ..., B} and ∆ ≥ ρ_α(ℓ). Therefore, the stopping rule in (35) is equivalent to:
Z k+1 = inf{t ≥ Z k = a : ∆(t) ≥ ρ α (E(t))},(37)
for ℓ ∈ {0, 1, 2, .., B}.
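To make the structure of (37) concrete, the event-driven sketch below simulates the age process under fixed age thresholds per battery level, with Poisson(µ_H) energy arrivals, one energy unit consumed per update, and a battery capped at B. This is our own illustration, not code from the paper; the function name and all parameter choices are ours, and `tau[l]` denotes the threshold used when the battery holds l units.

```python
import math
import random

def simulate_threshold_policy(tau, mu_h=1.0, B=1, n_updates=200_000, seed=0):
    """Monte Carlo average age of a threshold policy: update as soon as
    the age reaches tau[l], where l is the current battery level
    (tau[0] is unused since updating requires at least one energy unit)."""
    rng = random.Random(seed)
    t = last_update = 0.0
    energy = B                          # start with a full battery
    next_arrival = rng.expovariate(mu_h)
    area = total = 0.0
    updates = 0
    while updates < n_updates:
        cand = max(t, last_update + tau[energy]) if energy >= 1 else math.inf
        if cand <= next_arrival:        # threshold crossed before next arrival
            x = cand - last_update      # inter-update duration X_k
            area += 0.5 * x * x         # integral of the age over the cycle
            total += x
            t = last_update = cand
            energy -= 1                 # one energy unit per update
            updates += 1
        else:                           # an energy unit arrives first
            t = next_arrival
            energy = min(energy + 1, B)
            next_arrival = t + rng.expovariate(mu_h)
    return area / total                 # time-average age

# Sanity check against Theorem 4: for B = 1, mu_H = 1 the optimal threshold
# is ~0.9012 and the optimal average age equals the threshold itself.
print(simulate_threshold_policy([math.inf, 0.9012]))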
We showed that the stopping rule in (37) gives the optimal stopping time Z k+1 for a policy solving (21). Now, we can start discussing Step 2 in order to show that the optimal stopping rule with the same structure also gives a solution to (6).
In this part (Step 2) of the proof, we will consider the optimal stopping rules in (37) while the discount factor α is vanishing to zero. Notice that the policy solving (21) is identified by ρ_α(ℓ) due to (37). Let π_α and ∆_{π_α}(t) be a policy obeying (37) and solving (21) for discount factor α, and the age at time t for that policy, respectively. We will show the following:
lim_{β↓0} lim_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_β}(t))] dt = inf_{π∈Π_online} limsup_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_π(t))] dt, (38)
which implies that for any {β n } n≥1 ↓ 0 sequence, π βn converges to the policy solving (6).
To prove the equivalence in (38), we will use Feller's Tauberian theorem [33] (also see the Tauberian theorem in [34]) which can be stated as follows:
Theorem 7 (Feller 1971). Let f(t) be a Lebesgue-measurable, bounded, real function. Then
liminf_{t_f→∞} (1/t_f) ∫_0^{t_f} f(t) dt ≤ liminf_{α↓0} α ∫_0^∞ e^{−αt} f(t) dt ≤ limsup_{α↓0} α ∫_0^∞ e^{−αt} f(t) dt ≤ limsup_{t_f→∞} (1/t_f) ∫_0^{t_f} f(t) dt.
Moreover, if the central inequality is an equality, then all inequalities are equalities.
This theorem can be applied to the function f(t) = E[p(∆_{π_β}(t))] where β > 0.⁴ Recall that {∆(t), t ≥ a} is determined by ∆(a), E(a), and the energy counting process {H(t) − H(a), t ≥ a}. To simplify the inequalities for this case, let us define the function J_{α;β}(∆(a), E(a)) for β > 0 as the discounted cost of the policy π_β:
J_{α;β}(∆(a), E(a)) := E[ ∫_a^∞ e^{−α(t−a)} p(∆_{π_β}(t)) dt | ∆(a), E(a) ]. (39)
Note that for a = 0:
α J_{α;β}(0, 0) = α ∫_0^∞ e^{−αt} E[p(∆_{π_β}(t))] dt. (40)
Accordingly, we can apply Feller's Tauberian theorem for f(t) = E[p(∆_{π_β}(t))] when a = 0, giving:
liminf_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_β}(t))] dt ≤ liminf_{α↓0} α J_{α;β}(0, 0) ≤ limsup_{α↓0} α J_{α;β}(0, 0) ≤ limsup_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_β}(t))] dt. (41)
We can show that the inequalities in (41) are satisfied with equality for any π_β with β > 0, as lim_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_β}(t))] dt exists for any π_β with β > 0. To see this, consider the following lemma:
Lemma 4. For α > 0 and {Z_{k+1}, k ≥ 0} with Z_{k+1} as in (37), the following limit exists w.p.1:
lim_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_α}(t))] dt. (42)
Proof. The proof of Lemma 3 showed that for Z_k = a and the optimal stopping time solving (24) it is true that Z_{k+1} ≤ max{R_{k+1}, t_c}, where R_{k+1} is stochastically dominated by a + Y_B, t_c is the deterministic time defined in (31), and Y_B is an Erlang distributed random variable with parameter B which obeys (8). On the other hand, lim_{n→+∞} (1/n) Σ_{k=0}^n X_k < ∞ w.p.1 and lim_{n→+∞} (1/n) Σ_{k=0}^n X_k > 1/µ_H w.p.1 due to the energy causality constraint. Therefore, we can apply the derivation steps in [35, Theorem 5.4.5] and obtain (42). This completes the proof.
Lemma 4 and (41) imply the following for a = 0 and β > 0:
lim_{α↓0} α J_{α;β}(0, 0) = lim_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_{π_β}(t))] dt. (43)
Now, consider an arbitrary online policy π for which E [p(∆ π (t))] is Lebesgue-measurable and bounded, then apply Feller's Tauberian theorem for f (t) = E [p(∆ π (t))] giving the following inequality when t a = 0:
limsup_{α↓0} α ∫_0^∞ e^{−α(t−a)} E[p(∆_π(t))] dt ≤ limsup_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_π(t))] dt. (44)
Note that for α > 0, J α;β (0, 0) is minimized for α = β, hence:
lim_{β↓0} lim_{α↓0} α J_{α;β}(0, 0) = inf_{β>0} lim_{α↓0} α J_{α;β}(0, 0) ≤ limsup_{α↓0} α ∫_0^∞ e^{−α(t−a)} E[p(∆_π(t))] dt. (45)
Combining (43), (44) and (45), we get (38). This completes the proof.
B. The Proof of Theorem 2
Theorem 2 follows from the proof of Theorem 1. To prove the theorem it is sufficient to show that for any α > 0, ρ α (ℓ) (see (36)) is non-increasing in ℓ as this guarantees that the monotonicity of optimal thresholds holds for any sequence of α values that vanishes to zero. To see this, consider the following lemma and the argument provided below its proof:
Lemma 5. For J_α(·, ·) the function defined in (20), J_α(0, ℓ) − J_α(0, ℓ+1) is non-increasing in ℓ ∈ {0, 1, ..., B−1} for any α ≥ 0.
Proof. First, consider the alternative formulation of J_α(r, ℓ+1) below:
J_α(r, ℓ+1) = min_{π∈Π_online} e^{αa} E[ E[ ∫_a^∞ e^{−αt} p(∆(t)) dt | Z_{k+1}, ∆(t_a) = r, E(t_a) = ℓ+1 ] ],
where the outer expectation is taken over Z k+1 .
Let K_{r,ℓ+1}(z, σ) := Pr(Z_{k+1} = z, H(z) − H(a) = σ | ∆(t_a) = r, E(t_a) = ℓ+1) be the joint distribution of Z_{k+1} ∈ M_a and the energy σ harvested during [a, z]. Then, we can write J_α(r, ℓ+1) as follows:
J_α(r, ℓ+1) = min_{Z_{k+1}∈M_a} Σ_{σ=0}^∞ ∫_{t_a}^∞ K_{r,ℓ+1}(z, σ) e^{αa} ( ∫_a^z e^{−αt} p(∆(t)) dt + e^{−αz} J_α(0, min{ℓ+σ, B−1}) ) dz. (46)
Similarly,
J_α(r, ℓ+2) = min_{Z_{k+1}∈M_a} Σ_{σ=0}^∞ ∫_{t_a}^∞ K_{r,ℓ+2}(z, σ) e^{αa} ( ∫_a^z e^{−αt} p(∆(t)) dt + e^{−αz} J_α(0, min{ℓ+1+σ, B−1}) ) dz. (47)
Now, let K * r,ℓ+2 (z, σ) be the distribution corresponding to the update time Z k+1 ∈ M a that is optimal in (47), which means:
J_α(r, ℓ+2) = Σ_{σ=0}^∞ ∫_{t_a}^∞ K*_{r,ℓ+2}(z, σ) e^{αa} ( ∫_a^z e^{−αt} p(∆(t)) dt + e^{−αz} J_α(0, min{ℓ+1+σ, B−1}) ) dz. (48)
Clearly, K*_{r,ℓ+2}(z, σ) is not necessarily the joint distribution corresponding to the update time Z_{k+1} ∈ M_a that is optimal for (46), hence:
J_α(r, ℓ+1) ≤ Σ_{σ=0}^∞ ∫_{t_a}^∞ K*_{r,ℓ+2}(z, σ) ( ∫_a^z e^{−α(t−a)} p(∆(t)) dt + e^{−α(z−a)} J_α(0, min{ℓ+σ, B−1}) ) dz. (49)
Combining (48) and (49) gives:
J_α(r, ℓ+1) − J_α(r, ℓ+2) ≤ Σ_{σ=0}^∞ ∫_a^∞ K*_{r,ℓ+2}(z, σ) e^{−α(z−a)} [ J_α(0, min{ℓ+σ, B−1}) − J_α(0, min{ℓ+1+σ, B−1}) ] dz. (50)
Then, we have:
J_α(0, ℓ+1) − J_α(0, ℓ+2)
≤ Σ_{σ=0}^∞ ∫_a^∞ K*_{r,ℓ+2}(z, σ) e^{−α(z−a)} [ J_α(0, min{ℓ+σ, B−1}) − J_α(0, min{ℓ+1+σ, B−1}) ] dz
≤ ∫_a^∞ K*_{r,ℓ+2}(z, 0) e^{−α(z−a)} [ J_α(0, ℓ) − J_α(0, ℓ+1) ] dz + Σ_{σ=1}^∞ ∫_a^∞ K*_{r,ℓ+2}(z, σ) e^{−α(z−a)} [ J_α(0, ℓ+1) − J_α(0, ℓ+2) ] dz
≤ J_α(0, ℓ) − J_α(0, ℓ+1). (54)
This means that the inequality (53) is also true for j = ℓ, and hence for any j = 0, 1, ..., B−2 by induction. Combining this and (51):
J α (r, ℓ + 1) − J α (r, ℓ + 2) ≤ J α (0, ℓ) − J α (0, ℓ + 1),(55)
for α ≥ 0 and r ≥ 0.
Lemma 5 shows that ρ_α(ℓ) is non-increasing in ℓ for α > 0: it is sufficient to consider (55) when r = ρ_α(ℓ), which gives
0 = J_α(0, ℓ−1) − J_α(ρ_α(ℓ), ℓ) ≤ J_α(0, ℓ−2) − J_α(ρ_α(ℓ), ℓ−1), (56)
which implies ρ_α(ℓ−1) ≥ ρ_α(ℓ) by combining (56) with J_α(0, ℓ−2) = J_α(ρ_α(ℓ−1), ℓ−1) and the fact that J_α(r, ℓ−1) is non-decreasing⁵ in r. Accordingly, the optimal policies solving (19) are monotone threshold policies, i.e., π_α ∈ Π_MT for any α > 0.
C. The Proof of Lemma 1
Let τ B+1 = 0. Then, consider:
∂/∂τ_i E[X²] = ∂/∂τ_i ∫_0^∞ 2x Pr(X ≥ x) dx
= ∂/∂τ_i Σ_{j=0}^B ∫_{τ_{j+1}}^{τ_j} 2x Pr(X ≥ x) dx
= 2 ∂/∂τ_i ( ∫_{τ_{i+1}}^{τ_i} x Pr(X ≥ x) dx + ∫_{τ_i}^{τ_{i−1}} x Pr(X ≥ x) dx )
= 2τ_i ∂/∂τ_i ∫_{τ_{i+1}}^{τ_{i−1}} Pr(X ≥ x) dx
= 2τ_i ∂/∂τ_i Σ_{j=0}^B ∫_{τ_{j+1}}^{τ_j} Pr(X ≥ x) dx
= 2τ_i ∂/∂τ_i E[X],
for i = 0, 1, ..., B.
D. Useful Results for Asymptotic Properties
Lemmas 6, 7 and 8 provide some useful results that combine ergodicity properties and the renewal-reward theorem for a DTMC with transition probabilities in (10).
Lemma 6. The DTMC with the transition probabilities in (10) is ergodic for a monotone threshold policy where τ_1 is finite.
Proof. Consider an energy state j in [0, B − 1]. We will show that any other energy state i is reachable from j in at most B − 1 steps with a positive probability. For i ≥ j, the higher energy state i is reachable from j in one step with a positive probability as for i = B − 1, Pr(Y B−j ≤ τ B−1 ) is strictly positive and for j ≤ i < B − 1:
Pr(Y 1+i−j ≤ τ i ) − Pr(Y 2+i−j ≤ τ i+1 ) ≥ Pr(Y 1+i−j ≤ τ i+1 ) − Pr(Y 2+i−j ≤ τ i+1 ) > 0, as τ i+1 ≤ τ i and i − j ≥ 0.
Similarly, the energy state i = j − 1 for j = 1, ...., B − 1 can be reached from j with a probability 1 − Pr(Y 1 ≤ τ j ) which is strictly positive as τ j is finite. This means that any state i < j can be reached from j in at most B − 1 steps with a positive probability.
Lemma 7. For monotone threshold policies with finite τ 1 , the following is true:
lim_{n→+∞} (1/n) Σ_{k=0}^n X_k = Σ_{j=0}^{B−1} E[X | E = j] Pr(E = j), w.p.1, (57)
lim_{n→+∞} (1/(2n)) Σ_{k=0}^n X_k² = (1/2) Σ_{j=0}^{B−1} E[X² | E = j] Pr(E = j), w.p.1, (58)
where Pr(E = j) is the steady-state probability for energy state j, E[X | E = j] := E[X_k | E(Z_k) = j] and E[X² | E = j] := E[X_k² | E(Z_k) = j].
Proof. Consider:
(1/n) Σ_{k=0}^n X_k = (1/n) Σ_{j=0}^{B−1} Σ_{k∈[0,n]: E(Z_k)=j} X_k = (1/n) Σ_{j=0}^{B−1} Σ_{ℓ=0}^{L_j} X_{ℓ;j},
where L j is the number of ks in [0, n] such that E(Z k ) = j and X ℓ;j is a r.v. with the CDF Pr(X ℓ;j ≤ x) = Pr(X k ≤ x | E(Z k ) = j) for some k.
Note that the sequence X_{0;j}, X_{1;j}, ..., X_{L_j;j} is i.i.d. for any j and its mean is bounded as all thresholds are finite, hence:
lim_{L_j→∞} (1/L_j) Σ_{ℓ=0}^{L_j} X_{ℓ;j} = E[X | E = j], w.p.1. (59)
Due to the ergodicity of the E(Z_k)s (Lemma 6): lim_{n→∞} L_j/n = Pr(E = j), w.p.1. Combining the two limits above gives (57); the same argument applied to the sequence X²_{ℓ;j} gives (58).
Lemma 8. For a threshold policy where τ_1 is finite, the average age ∆ is finite (w.p.1) and given by the following expression:
∆ = ( Σ_{j=0}^{B−1} E[X² | E = j] Pr(E = j) ) / ( 2 Σ_{j=0}^{B−1} E[X | E = j] Pr(E = j) ).
Proof. The proof is a generalization of Theorem 5.4.5 in [35] for the case where X k s are non-i.i.d. but the limits still exist (w.p.1). When X k s are i.i.d. with E[X k ] < ∞ and E[X 2 k ] < ∞, the convergence (w.p.1) of the limits is guaranteed.
E. The Proof of Theorem 3
Theorem 3 follows from the proof of Theorem 1. The proof of Lemma 3 shows that, given that Z_k = a is the last update time and E(t′) = B for some t′ > a, the condition S_t = G_t is satisfied for the first time when t ≥ max{t′, t_c} (see (31)). This means that p(ρ_α(B)) = α J_α(0, B−1) for ρ_α(E(t)) in (37). Accordingly,
p(τ*_B) = lim_{α↓0} p(ρ_α(B)) = lim_{α↓0} α J_α(0, B−1) = min_{π∈Π_online} limsup_{t_f→∞} (1/t_f) ∫_0^{t_f} E[p(∆_π(t)) | E(0) = B] dt = p̄_{π*},
which follows from the application of Feller's Tauberian theorem (applying Theorem 7 for f(t) = E[p(∆_π(t)) | E(0) = B]). This completes the proof.
F. The Proof of Theorem 4
By Lemma 8 and Lemma 7, ∆ for B = 1 can be computed as follows:
∆ = ( (1/2) E[X² | E = 0] Pr(E = 0) ) / ( E[X | E = 0] Pr(E = 0) ), (60)
where Pr(E = 0) = 1, E[X² | E = 0] = τ_1² + (2/µ_H² + (2/µ_H)τ_1) e^{−µ_H τ_1} and E[X | E = 0] = τ_1 + (1/µ_H) e^{−µ_H τ_1}. Accordingly, ∆ is given by (13). By Theorem 3, τ*_1 = ∆_{π*}, and combining this with (13) results in
µ_H τ*_1 = ( (1/2)(µ_H τ*_1)² + e^{−µ_H τ*_1}(µ_H τ*_1 + 1) ) / ( µ_H τ*_1 + e^{−µ_H τ*_1} ). (61)
Solving (61) gives (τ*_1)² = (2/µ_H²) e^{−µ_H τ*_1}, i.e., x = µ_H τ*_1 is the unique positive root of x² = 2e^{−x}, which means τ*_1 = (2/µ_H) W(1/√2) ≈ 0.9012/µ_H, where W(·) denotes the Lambert W function.
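A quick numerical check of this fixed point (our own sketch; the normalization x = µ_H τ*_1 makes the equation parameter-free):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # Standard bisection; f must change sign on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Normalized fixed point of (61): x = mu_H * tau_1* solves x^2 = 2 e^{-x}.
x_star = bisect(lambda x: x * x - 2.0 * math.exp(-x), 0.0, 2.0)
print(x_star)   # ~0.9012, so tau_1* = x_star / mu_H
```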
G. The Proof of Theorem 5
The probability of being in E = 1, i.e., Pr(E = 1), can be solved using:
Pr(E = 1) = Σ_{j=0}^{1} Pr(E(Z_{k+1}) = 1 | E(Z_k) = j) Pr(E = j). (63)
Combining (63) and (9),
Pr(E = 1) = e^{−µ_H τ_1} / ( 1 − e^{−µ_H τ_1} µ_H τ_1 ). (64)
Now, we can obtain E[X² | E = j] and E[X | E = j] using (9). Combining these with (64) and substituting into (62) gives (5).
H. The Proof of Theorem 6
First, we show that τ B ≥∆ π * is necessary to find a solution to (17) with monotonic non-increasing thresholds. Then, we show that this condition is also sufficient.
The necessity part of the proof follows from the fact that τ_B = ∆_π for any solution of (17), as ∆_π = m_2(τ_1, τ_2, ..., τ_B)/(2 m_1(τ_1, τ_2, ..., τ_B)) by Lemma 8 and Lemma 7. Therefore, by the optimality of ∆_{π*}, τ_B ≥ ∆_{π*} must hold for any solution of (17). Now, we consider the sufficiency part of the proof, where it is useful to define a function φ : [0, ∞)^B → R as follows:
φ(τ_B, τ_{B−1} − τ_B, ..., τ_1 − τ_2) := 2τ_B m_1(τ_1, τ_2, ..., τ_B) − m_2(τ_1, τ_2, ..., τ_B).
Using this definition, (17) can be written as φ(τ_B, τ_{B−1} − τ_B, ..., τ_1 − τ_2) = 0.
We need to show that given τ_B ≥ ∆_{π*}, one can find a set of non-negative real numbers d_1, ..., d_{B−1} such that φ(τ_B, d_{B−1}, ..., d_1) = 0. Accordingly, τ_B and d_1, ..., d_{B−1} constitute a solution to (17) with monotonic non-increasing thresholds where τ_i = τ_{i+1} + d_i, for i = 1, ..., B − 1.
In order to prove this, let us start with the optimal policy π* = (τ*_1, τ*_2, ..., τ*_B), where we know that τ*_B = ∆_{π*} by Theorem 3. Starting from the optimal policy π*, the policy will be modified following the procedure below:
• Phase 1: Modify the policy π(+) = (τ(+)_1, ..., τ(+)_B) from the previous phase to the policy π(−) by moving its smallest threshold toward τ_B, keeping τ(−)_i = τ(+)_i for i = 1, ..., B − 1. Then, go to Phase 2 with policy π(−).
• Phase 2: Modify the policy π(−) from the previous phase to the policy π(+) by increasing its second smallest threshold so that (65) is satisfied. If the smallest threshold has reached the target value, i.e., τ(+)_B = τ_B, the procedure stops and (65) gives the solution φ(τ_B, d_{B−1}, ..., d_1) = 0; otherwise, go to Phase 1 with policy π(+).
It can be shown that the procedure always stops with a solution such that φ(τ_B, d_{B−1}, ..., d_1) = 0. To see this, first observe that (65) always has a solution as long as (66) holds. This is due to the following facts about the function φ(τ_B, d_{B−1}, ..., x, ..., d_1): (i) it is a continuous function of x, and (ii) it goes to −∞ as x grows. Next, observe that (66) always holds: it is satisfied with equality due to Step 2 or by the initial (optimal) policy, and the change dφ induced when moving from π(−) to π(+) is positive. This can be seen by considering the derivative of φ with respect to τ_B, which follows from the fact that Pr(E = j) does not depend on τ_B (see (10)) and can be further simplified by Lemma 1, hence: ∂φ/∂τ_B = 2 m_1(τ_1, τ_2, ..., τ_B). Accordingly, (65) can always be satisfied in Phase 2. Also, as the second smallest threshold is strictly increased in Phase 2, the smallest threshold can be moved toward τ_B in Phase 1. Moreover, it can be shown that the procedure does not converge to any policy other than one with φ(τ_B, d_{B−1}, ..., d_1) = 0. This can be seen by considering (67), which implies that the procedure cannot converge to a policy with τ(+)_B < τ_B, as the RHS of (67) is positive⁶ and does not vanish for a finite set of thresholds. Therefore, as the smallest threshold of the policies modified by the procedure is increased up to τ_B, a solution with φ(τ_B, d_{B−1}, ..., d_1) = 0 is eventually reached. This completes the proof.
Fig. 1. System Model.
Fig. 2. An illustration of a monotone threshold policy.
Fig. 3. The DTMC for energy states sampled at update times.

Theorem 5. When B = 2, the average age ∆ can be expressed as:
∆ = ( E[X² | E = 0] Pr(E = 0) + E[X² | E = 1] Pr(E = 1) ) / ( 2 ( E[X | E = 0] Pr(E = 0) + E[X | E = 1] Pr(E = 1) ) ). (62)

Algorithm (fragment): solve 2τ_B m_1(τ_1, τ_2, ..., τ_B) − m_2(τ_1, τ_2, ..., τ_B) = 0 and return π = (τ_1, τ_2, ..., τ_B).

V. NUMERICAL RESULTS
For battery sizes B = 1, 2, 3, 4, the policies in Π_MT are numerically optimized, giving AoI versus energy arrival rate (Poisson) curves in Fig. 4. We give the corresponding threshold values in Table I.

Fig. 4. AoI versus energy arrival rate (Poisson) for different battery sizes B = 1, 2, 3, 4.
Fig. 5. AoI versus τ_1 against various τ_2 values for B = 2 and µ_H = 1.
Fig. 6. AoI versus τ_2 against various τ_1 values for B = 2 and µ_H = 1.
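The numerical optimization above can be approximated by brute force. The sketch below is ours, not the authors' code: it grid-searches monotone thresholds for B = 2 using the Monte Carlo evaluator `simulate_threshold_policy` from the earlier sketch; the grid resolution and update budget are arbitrary choices.

```python
import itertools
import math

def optimize_b2(mu_h=1.0, step=0.05, n=40):
    """Brute-force search over monotone thresholds tau_1 >= tau_2 for
    B = 2, scoring each pair by its simulated time-average age."""
    grid = [step * i for i in range(1, n)]
    best = (math.inf, None)
    for t2, t1 in itertools.product(grid, repeat=2):
        if t1 < t2:                     # keep the monotone structure
            continue
        age = simulate_threshold_policy([math.inf, t1, t2],
                                        mu_h=mu_h, B=2, n_updates=50_000)
        best = min(best, (age, (t1, t2)))
    return best

print(optimize_b2())   # rough (average age, (tau_1, tau_2)) for mu_H = 1
```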
TABLE I: Optimal thresholds for different battery sizes for µ_H = 1.
¹ Note that the filtration is right continuous as both H(t) and A(t) are right continuous.
² This is due to the third property in the definition, which is a technical requirement for the proofs.
³ Note that the event E(Z_{k+1}) = i happens if and only if X_{k+1} ∈ [τ_{i+1}, τ_i); accordingly, Pr(E(Z_{k+1}) = i | E(Z_k) = j) = Pr(X_{k+1} ≤ τ_i | E(Z_k) = j) − Pr(X_{k+1} ≤ τ_{i+1} | E(Z_k) = j).
⁴ Note that the function E[p(∆_{π_β}(t))] is Lebesgue-measurable (as p(·) is non-decreasing) and bounded (as the X_k's are bounded w.p.1 for a policy obeying (37)).
⁵ This fact is provided in the proof of Theorem 1.
⁶ This follows from the fact that any increase in thresholds causes an increase in the battery overflow probability, which means an increase in the average inter-update duration, i.e., m_1(τ_1, τ_2, ..., τ_B).
[1] B. T. Bacinoglu, Y. Sun, E. Uysal-Biyikoglu, and V. Mutlu, "Achieving the age-energy tradeoff with a finite-battery energy harvesting source," in 2018 IEEE International Symposium on Information Theory (ISIT), June 2018, pp. 876-880.
[2] S. Kaul, M. Gruteser, V. Rai, and J. Kenney, "Minimizing age of information in vehicular networks," in 2011 8th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), June 2011, pp. 350-358.
[3] S. Kaul, R. Yates, and M. Gruteser, "Real-time status: How often should one update?" in INFOCOM 2012, pp. 2731-2735.
[4] R. Zviedris, A. Elsts, G. Strazdins, A. Mednis, and L. Selavo, "Lynxnet: Wild animal monitoring using sensor networks," in REALWSN 2010, 2010, pp. 170-173.
[5] K. R. Chevli, P. Kim, A. Kagel, D. Moy, R. Pattay, R. Nichols, and A. D. Goldfinger, "Blue force tracking network modeling and simulation," in MILCOM 2006, Oct 2006, pp. 1-7.
[6] Y. Sun, Y. Polyanskiy, and E. Uysal-Biyikoglu, "Remote estimation of the Wiener process over a channel with random delay," in 2017 IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 321-325.
[7] T. Bacinoglu, E. T. Ceran, and E. Uysal-Biyikoglu, "Age of information under energy replenishment constraints," in Proc. Info. Theory and Appl. Workshop, Feb. 2015.
[8] T. Bacinoglu and E. Uysal-Biyikoglu, "Scheduling status updates to minimize age of information with an energy harvesting sensor," in 2017 IEEE International Symposium on Information Theory (ISIT), Jun. 2017.
[9] X. Wu, J. Yang, and J. Wu, "Optimal status update for age of information minimization with an energy harvesting source," IEEE Transactions on Green Communications and Networking, vol. 2, no. 1, pp. 193-204, March 2018.
[10] R. D. Yates, "Lazy is timely: Status updates by an energy harvesting source," in 2015 IEEE International Symposium on Information Theory (ISIT), Jun. 2015.
[11] A. Arafa, J. Yang, S. Ulukus, and H. V. Poor, "Age-minimal online policies for energy harvesting sensors with incremental battery recharges," ArXiv e-prints, Feb. 2018.
[12] ——, "Age-minimal online policies for energy harvesting sensors with incremental battery recharges," in 2018 Information Theory and Applications Workshop (ITA), Feb 2018, pp. 1-10.
[13] S. Feng and J. Yang, "Minimizing age of information for an energy harvesting source with updating failures," in 2018 IEEE International Symposium on Information Theory (ISIT), June 2018, pp. 2431-2435.
[14] A. Arafa, J. Yang, S. Ulukus, and H. V. Poor, "Online timely status updates with erasures for energy harvesting sensors," in 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Oct 2018, pp. 966-972.
[15] J. Cho and H. Garcia-Molina, "Effective page refresh policies for web crawlers," ACM Trans. Database Syst., vol. 28, no. 4, pp. 390-426, Dec. 2003. [Online]. Available: http://doi.acm.org/10.1145/958942.958945
[16] Y. Sun, E. Uysal-Biyikoglu, R. Yates, C. E. Koksal, and N. B. Shroff, "Update or wait: How to keep your data fresh," in IEEE INFOCOM 2016, April 2016, pp. 1-9.
[17] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, "Update or wait: How to keep your data fresh," IEEE Transactions on Information Theory, vol. 63, no. 11, pp. 7492-7508, Nov 2017.
[18] A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis, "Age and value of information: Non-linear age case," in 2017 IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 326-330.
[19] Y. Sun and B. Cyr, "Sampling for data freshness optimization: Non-linear age functions," Journal of Communications and Networks, in press, 2019.
[20] S. Razniewski, "Optimizing update frequencies for decaying information," in Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (CIKM '16). New York, NY, USA: ACM, 2016, pp. 1191-1200. [Online]. Available: http://doi.acm.org/10.1145/2983323.2983719
[21] C. Kam, S. Kompella, and A. Ephremides, "Age of information under random updates," in IEEE ISIT, July 2013, pp. 66-70.
[22] M. Costa, M. Codreanu, and A. Ephremides, "Age of information with packet management," in IEEE ISIT, June 2014, pp. 1583-1587.
[23] L. Huang and E. Modiano, "Optimizing age-of-information in a multi-class queueing system," in IEEE ISIT, June 2015, pp. 1681-1685.
[24] N. Pappas, J. Gunnarsson, L. Kratz, M. Kountouris, and V. Angelakis, "Age of information of multiple sources with queue management," in 2015 ICC, June 2015, pp. 5935-5940.
[25] C. Kam, S. Kompella, G. D. Nguyen, and A. Ephremides, "Effect of message transmission path diversity on status age," IEEE Transactions on Information Theory, vol. 62, no. 3, pp. 1360-1374, March 2016.
[26] E. Najm and R. Nasser, "Age of information: The gamma awakening," in IEEE ISIT, July 2016, pp. 2574-2578.
[27] R. D. Yates and S. K. Kaul, "The age of information: Real-time status updating by multiple sources," IEEE Transactions on Information Theory, vol. 65, no. 3, pp. 1807-1827, March 2019.
[28] E. Najm, R. Yates, and E. Soljanin, "Status updates through M/G/1/1 queues with HARQ," in 2017 IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 131-135.
[29] S. Farazi, A. G. Klein, and D. R. Brown, "Average age of information for status update systems with an energy harvesting server," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), April 2018, pp. 112-117.
[30] A. R. Al-Ali, I. Zualkernan, and F. Aloul, "A mobile GPRS-sensors array for air pollution monitoring," IEEE Sensors Journal, vol. 10, no. 10, pp. 1666-1671, Oct 2010.
[31] G. Peskir and A. Shiryaev, Optimal Stopping and Free-Boundary Problems, ser. Lectures in Mathematics, ETH Zürich. Birkhäuser Basel, 2006.
[32] X. Guo and O. Hernandez-Lerma, Continuous-Time Markov Decision Processes. Berlin: Springer, 2009.
[33] W. Feller, An Introduction to Probability Theory and Its Applications, Volume 2, 2nd Edition. New York: John Wiley and Sons, 1971.
[34] D. V. Widder, The Laplace Transform. Princeton: Princeton University Press, 1946.
[35] R. Gallager, Stochastic Processes: Theory for Applications. Cambridge University Press, 2013.
| [] |
[
"Cooperative Exploration for Multi-Agent Deep Reinforcement Learning",
"Cooperative Exploration for Multi-Agent Deep Reinforcement Learning"
] | [
"Iou-Jen Liu ",
"Unnat Jain ",
"Raymond A Yeh ",
"Alexander G Schwing "
] | [] | [] | Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and hardly coordinate exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach this goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the Starcraft multi-agent challenge (SMAC). | null | [
"https://arxiv.org/pdf/2107.11444v1.pdf"
] | 235,619,302 | 2107.11444 | 69e45ca87c07805ffa98a0e69754458a124513fc |
Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
Iou-Jen Liu
Unnat Jain
Raymond A Yeh
Alexander G Schwing
Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still use mostly noise-based techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and hardly coordinate exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach this goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the Starcraft multi-agent challenge (SMAC).
Introduction
Multi-agent reinforcement learning (MARL) is an increasingly important field. Indeed, many real-world problems are naturally modeled using MARL techniques. For instance, tasks from areas as diverse as robot fleet coordination (Swamy et al., 2020;Hüttenrauch et al., 2019) and autonomous traffic control (Bazzan, 2008;Sunehag et al., 2018) fit MARL formulations.
To address MARL problems, early work followed independent single-agent reinforcement learning work (Tampuu et al., 2015;Tan, 1993;Matignon et al., 2012). However, more recently, specifically tailored techniques such as monotonic value function factorization (QMIX) (Rashid et al., 2018), multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), and counterfactual multiagent policy gradients (COMA) have been developed. Those methods excel in a multi-agent setting because they address the non-stationary issue of MARL via a centralized critic. Despite those advances and the resulting reported performance improvements, a common issue remains: all of the aforementioned methods use exploration techniques from classical algorithms. Specifically, these methods employ noise-based exploration, i.e., the exploration policy is a noisy version of the actor policy (Rashid et al., 2020a;Lowe et al., 2017;Foerster et al., 2016;Rashid et al., 2018;Yang et al., 2018).
It was recently recognized that use of classical exploration techniques is sub-optimal in a multi-agent reinforcement learning setting. Specifically, Mahajan et al. (2019) show that QMIX with ε-greedy exploration results in slow exploration and sub-optimality. Mahajan et al. (2019) improve exploration by conditioning an agent's behavior on a shared latent variable controlled by a hierarchical policy. Even more recently, Wang et al. (2020) encourage coordinated exploration by considering the influence of one agent's behavior on other agents' behaviors.
While all of the aforementioned exploration techniques for multi-agent reinforcement learning significantly improve results, they suffer from two common challenges: (1) agents struggle to identify states that are worth exploring. Identifying under-explored states is particularly challenging when the number of agents increases, since the state and action space grows exponentially with the number of agents. (2) Agents don't coordinate their exploration efforts toward under-explored states. To give an example, consider a Push-Box task, where two agents need to jointly push a heavy box to a specific location before observing a reward. In this situation, instead of exploring the environment independently, agents need to coordinate pushing the box within the environment to find the specific location.
To address both challenges, we propose cooperative multi-agent exploration (CMAE). To identify states that are worth exploring, we observe that, while the state space grows exponentially, the reward function typically depends on a small subset of the state space. For instance, in the aforementioned Push-Box task, the state space contains the location of agents and the box while the reward function only depends on the location of the box. To solve the task, exploring the box's location is much more efficient than exploring the full state space. To encode this inductive bias into CMAE, we propose a bottom-up exploration scheme. Specifically, we project the high-dimensional state space to low-dimensional spaces, which we refer to as restricted spaces. Then, we gradually explore restricted spaces from low- to high-dimensional. To ensure the agents coordinate their exploration efforts, we select goals from restricted spaces and train the exploration policies to reach the goal. Specifically, inspired by Andrychowicz et al. (2017), we reshape the rewards in the replay buffer such that a positive reward is given when the goal is reached.
To show that CMAE improves results, we evaluate the proposed approach on two multi-agent environment suites: a discrete version of the multiple-particle environment (MPE) (Lowe et al., 2017; Wang et al., 2020) and the Starcraft multi-agent challenge (SMAC). In both environments, we consider both dense-reward and sparse-reward settings. Sparse-reward settings are particularly challenging because agents need to coordinate their behavior for extended timesteps before receiving any non-zero reward. CMAE consistently outperforms the state-of-the-art baselines in sparse-reward tasks. For more, please see our project page: https://ioujenliu.github.io/CMAE.
Preliminaries
We first define the multi-agent Markov decision process (MDP) in Sec. 2.1 and introduce the multi-agent reinforcement learning setting in Sec. 2.2.
Multi-Agent Markov Decision Process
A cooperative multi-agent system is modeled as a multiagent Markov decision process (MDP). An n-agent MDP is defined by a tuple (S, A, T , R, Z, O, n, γ, H). S is the state space of the environment. A is the action space of each agent. At each time step t, each agent's target policy π i , i ∈ {1, . . . , n}, selects an action a t i ∈ A. All selected actions form a joint action a t ∈ A n . The transition function T maps the current state s t and the joint action a t to a distribution over the next state s t+1 , i.e., T : S×A n → ∆(S). All agents receive a collective reward r t ∈ R according to the reward function R : S × A n → R. The objective of all agents' policies is to maximize the collective return H t=0 γ t r t , where γ ∈ [0, 1] is the discount factor, H is the horizon, and r t is the collective reward obtained at timestep t. Each agent i observes local observation o t i ∈ Z according to the observation function O : S → Z. Note, observations usually reveal partial information about the state. For instance, suppose the state contains the location of agents, while the local observation of an agent may only contain the location of other agents within a limited distance. All agents' local observations form a joint observation o t .
Multi-Agent Reinforcement Learning
In this paper, we follow the standard centralized training and decentralized execution (CTDE) paradigm (Lowe et al., 2017;Rashid et al., 2018;Foerster et al., 2018;Mahajan et al., 2019;Liu et al., 2019b): at training time, the learning algorithm has access to all agents' local observations, actions, and the state. At execution time, i.e., at test time, each individual agent only has access to its own local observation.
The proposed CMAE is applicable to off-policy MARL methods (e.g., Rashid et al., 2018;Lowe et al., 2017;Sunehag et al., 2018;Matignon et al., 2012;Liu et al., 2019b). In off-policy MARL, exploration policies µ = {µ i } n i=1 are responsible for collecting data from the environment. The data in the form of transition tuples
(s t , o t , a t , s t+1 , o t+1 , r t ) is stored in a replay memory D, i.e., D = {(s t , o t , a t , s t+1 , o t+1 , r t )} t . The target policies π = {π i } n i=1
are trained using transition tuples from the replay memory.
Coordinated Multi-Agent Exploration (CMAE)
In the following we first present an overview of CMAE before we discuss the method more formally.
Overview: The goal is to train the target policies π = {π_i}_{i=1}^n of n agents to maximize the environment episode return. Classical off-policy algorithms (Lowe et al., 2017; Rashid et al., 2018; Mnih et al., 2013; Lillicrap et al., 2016) typically use a noisy version of the target policies π as exploration policies. In contrast, in CMAE, we decouple exploration policies and target policies. Specifically, target policies are trained to maximize the usual external episode return. Exploration policies µ = {µ_i}_{i=1}^n are trained to reach shared goals, which are under-explored states, as the job of an exploration policy is to collect data from those under-explored states.
To train the exploration policies, shared goals are required. How to choose shared goals from a high-dimensional state space? As discussed in Sec. 1, while the state space grows exponentially with the number of agents, the reward function often only depends on a small subset of the state space. Concretely, consider an n-agent Push-Box game in an L × L grid. The size of its state space is L^{2(n+1)} (n agents plus box in L² space). However, the reward function depends only on the location of the box, whose state space size is L². Obviously, to solve the task, exploring the location of the box is much more efficient than uniformly exploring the full state space.

Algorithm 1: CMAE (recovered fragment)
4: Select a_t using a mixture of exploration and target policies αµ + (1 − α)π (α decreases linearly to 0)
5: r_t, s_{t+1}, o_{t+1} = environment.step(a_t)
6: Add transition tuple {s_t, o_t, a_t, s_{t+1}, o_{t+1}, r_t} to D
7: UpdateCounters(c, s_{t+1}, o_{t+1})
8: TrainTarget(π, D)
9: if episode mod N = 0 then
10:   g = SelectRestrictedSpaceGoal(c, T_space, D, episode)   # select shared goal (Alg. 3)
11:   TrainExp(µ, g, D)   # train exploration policies (Alg. 2)
Algorithm 2: Train Exploration Policies (TrainExp)
input: exploration policies µ = {µ_i}_{i=1}^n, shared goal g, replay buffer D
1: for {s_t, o_t, a_t, s_{t+1}, o_{t+1}, r_t} ∈ D do
2:   if s_t is the shared goal g then
3:     r_t = r_t + r̂
4:   Update µ using {s_t, o_t, a_t, s_{t+1}, o_{t+1}, r_t}
To achieve this, CMAE first explores a low-dimensional restricted space S k of the state space S, i.e., S k ⊆ S. Formally, given an M -dimensional state space S, the restricted space S k associated with a set k is defined as
S k = {proj k (s) : ∀s ∈ S},(1)
where proj k (s) = (s e ) e∈k 'restricts' the space to elements e in set k, i.e., e ∈ k. Here, s e is the e-th component of the full state s, and k is a set from the power set of {1, . . . , M }, i.e., k ∈ P ({1, . . . , M }), where P denotes the power set. CMAE gradually moves from low-dimensional restricted spaces (|k| small) to higherdimensional restricted spaces (|k| larger). This bottom-up space selection is formulated as a search on a space tree T space , where each node represents a restricted space S k .
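A minimal sketch of the projection proj_k in Eq. (1) (our own illustration; the index layout assumed for the Push-Box state is hypothetical):

```python
def proj(s, k):
    """Eq. (1) (sketch): keep only the components of the full state s
    whose indices lie in the index set k."""
    return tuple(s[e] for e in sorted(k))

# E.g., if the Push-Box state were (a1x, a1y, a2x, a2y, box_x, box_y),
# k = {4, 5} would restrict it to the box location (an assumed layout):
print(proj((1, 2, 3, 4, 7, 7), {4, 5}))   # -> (7, 7)
```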
Alg. 1 summarizes this approach. At each step, a mixture of the exploration policies µ = {µ i } n i=1 and target policies π = {π i } n i=1 is used to select actions (line 4). The resulting experience tuple is then stored in a replay buffer D. Counters c for each restricted space in the space tree T space track how often a particular restricted state was observed (line 7). The target policies π are trained directly using the data within the replay buffer D. Every N episodes, a new restricted space and goal g is chosen (line 10; see Sec. 3.2 for more). Exploration policies are continuously trained to reach the selected goal (Sec. 3.1).
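One simple way to realize the mixture αµ + (1 − α)π of Alg. 1, line 4, is per-agent sampling. This is a sketch with our own function names; other realizations, e.g. per-episode mixing, are possible.

```python
import random

def select_actions(exploration_pis, target_pis, obs, alpha):
    """Alg. 1, line 4 (sketch): each agent follows its exploration policy
    with probability alpha and its target policy otherwise; alpha is
    annealed linearly to 0 over the course of training."""
    return [mu(o) if random.random() < alpha else pi(o)
            for mu, pi, o in zip(exploration_pis, target_pis, obs)]
```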
Algorithm 3: Select Restricted Space and Shared Goal (SelectRestrictedSpaceGoal)
input: counters c, space tree T_space, replay buffer D, episode
output: selected goal g
1: Compute utility of restricted spaces in T_space
2: Sample a restricted space S_k* from T_space following Eq. (4)
3: Sample a batch B = {s_i}_{i=1}^{|B|} from D
4: g = arg min_{s∈B} c_{k*}(proj_{k*}(s))
5: if episode mod N = 0 then
6:   ExpandSpaceTree(c, T_space, k*)   (Sec. 3.2)
7: end
8: return g
Training of Exploration Policies
To encourage the exploration policies to scout environments in a coordinated manner, we train the exploration policies µ with an additional modified reward r̂. This modified reward emphasizes the goal g of the exploration. For example, in the two-agent Push-Box task, we use a particular joint location of both agents and the box as a goal. Note, the agents receive a bonus reward r̂ when the shared goal g, i.e., the specified state, is reached. The algorithm for training exploration policies is summarized in Alg. 2: standard policy training with a modified reward.
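As a sketch of the relabeling step in Alg. 2 (our own function and variable names; `bonus` stands in for the bonus reward r̂, whose value is a hyperparameter):

```python
def relabel_with_goal(transitions, goal, bonus=1.0):
    """Alg. 2 (sketch): add the bonus reward to every transition whose
    state equals the shared goal g, before updating mu on the result."""
    relabeled = []
    for (s, o, a, s_next, o_next, r) in transitions:
        if s == goal:            # the shared goal is reached here
            r = r + bonus
        relabeled.append((s, o, a, s_next, o_next, r))
    return relabeled
```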
The goal g is obtained via a bottom-up search method. We first explore low-dimensional restricted spaces that are 'under-explored.' We discuss this shared goal and restricted space selection method next.
Shared Goal and Restricted Space Selection
Since the size of the state space S grows exponentially with the number of agents, conventional exploration strategies (Mahajan et al., 2019;Ronen I. Brafman, 2002) which strive to uniformly visit all states are no longer tractable. To address this issue, we propose to first project the state space to restricted spaces, and then perform shared goal driven coordinated exploration in those restricted spaces. For simplicity, we first assume the state space is finite and discrete. We discuss how to extend CMAE to continuous state spaces in Sec. 3.3. In the following, we first show how to select the goal g given a selected restricted space. Then we discuss how to select restricted spaces and expand the space tree T space . Shared Goal Selection: Given a restricted space S k * and its associated counter c k * , we choose the goal state g by first uniformly sampling a batch of states B from the replay buffer D. From those states, we select the state with the smallest count as the goal state g, i.e.,
g = arg min s∈B c k * (proj k * (s)),(2)
where proj k * (s) is the projection from state space to the restricted space S k * . To make this concrete, consider again the 2-agent Push-Box game. A restricted space may consist of only the box's location. Then, from batch B, a state in which the box is in a rarely seen location will be selected as the goal. For each restricted space S k * , the associated counter c k * (s k * ) stores the number of times the state s k * occurred in the low-dimensional restricted space S k * .
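Eq. (2) amounts to a min-count lookup over a sampled batch. A minimal sketch, assuming the `proj` helper from above and a plain dict as the counter:

```python
import random

def select_goal(replay_states, counter, k, batch_size=256):
    """Eq. (2) (sketch): sample a batch of stored states and return the
    one whose projection onto S_k has the smallest visit count."""
    batch = random.sample(replay_states, min(batch_size, len(replay_states)))
    return min(batch, key=lambda s: counter.get(proj(s, k), 0))
```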
Given a goal state g, we train exploration policies µ using the method presented in Sec. 3.1 (Alg. 2).
Restricted Space Selection:
For an M-dimensional state space, the number of restricted spaces is equal to the size of the power set, i.e., 2^M. It is intractable to study them all.
To address this issue, we propose a bottom-up tree-search mechanism to select under-explored restricted spaces. We start from low-dimensional restricted spaces and then gradually grow the search tree to explore higher-dimensional restricted spaces. Specifically, we maintain a space tree T space where each node in the tree represents a restricted space. Each restricted space k in the tree is associated with a utility value u k , which guides the selection.
The utility permits to identify the under-explored restricted spaces. For this, we study a normalized-entropy-based mechanism to compute the utility of each restricted space in T space . Intuitively, under-explored restricted spaces have lower normalized entropy. To estimate the normalized entropy of a restricted space S k , we normalize the counter c k to obtain a probability distribution p k (·) = c k (·)/ s∈S k c k (s), which is then used to compute the normalized entropy
η_k = H_k / H_{max,k} = ( − Σ_{s∈S_k} p_k(s) log p_k(s) ) / log(|S_k|). (3)
Then the utility u k is given by u k = −η k . Finally, we sample a restricted space k * following a categorical distribution over all spaces in the space tree T space , i.e.,
k * ∼ Cat(softmax((u k ) S k ∈Tspace )).(4)
The restricted space and goal selection method is summarized in Alg. 3. Note that the actual value of |S k | is usually unavailable. We defer the details of estimating |S k | from observed data to Sec. F.1 in the appendix.
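The sketch below implements Eqs. (3)–(4) with dict-based counters. It is our own illustration: in particular, estimating |S_k| by the number of distinct restricted states seen so far is one simple choice (the paper defers its estimator to Appendix F.1), and the softmax temperature is an assumption.

```python
import math
import random

def sample_restricted_space(counters, temperature=1.0):
    """Eqs. (3)-(4) (sketch): utility u_k is minus the normalized entropy
    of the empirical visit distribution over S_k; k is sampled via a
    softmax over utilities. counters maps k -> {restricted_state: count}
    and each counter is assumed to hold at least one observation."""
    keys, utils = [], []
    for k, c in counters.items():
        total = sum(c.values())
        size = max(len(c), 2)          # crude |S_k| estimate (assumption)
        entropy = -sum((v / total) * math.log(v / total) for v in c.values())
        utils.append(-entropy / math.log(size))   # u_k = -eta_k
        keys.append(k)
    m = max(utils)
    weights = [math.exp((u - m) / temperature) for u in utils]
    return random.choices(keys, weights=weights)[0]
```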
Space Tree Expansion: The space tree T_space is initialized with one-dimensional restricted spaces. To consider restricted spaces of higher dimension, we grow the space tree from the current selected restricted space S_k* every N episodes. If S_k* is l-dimensional, we add restricted spaces that are (l+1)-dimensional and contain S_k* as child nodes of S_k*. Formally, we initialize the space tree T⁰_space via
T⁰_space = {S_k : |k| = 1, k ∈ P({1, ..., M})}, (5)
where M denotes the dimension of the full state space and P denotes the power set. Let T^(h)_space and S_k* denote the space tree after the h-th expansion and the current selected restricted space, respectively. Note, S_k* is sampled according to Eq. (4) and no domain knowledge is used for selecting S_k*. The space tree after the (h+1)-th expansion is
T^(h+1)_space = T^(h)_space ∪ {S_k : |k| = |k*| + 1, k* ⊂ k, k ∈ P({1, ..., M})}, (6)
i.e., all restricted spaces that are (|k * | + 1)-dimensional and contain S k * are added. The counters associated with the new restricted spaces are initialized from states in the replay buffer. Specifically, for each newly added restricted space S k * , we initialized the corresponding counter to be
c_{k*}(s_{k*}) = Σ_{s∈D} 1[proj_{k*}(s) = s_{k*}], (7)
where 1[·] is the indicator function (1 if argument is true; 0 otherwise). Once a restricted space was added, we successively increment the counter, i.e., the counter c k * isn't recomputed from scratch every episode. This ensures that updating of counters is efficient. Goal selection, space selection and tree expansion are summarized in Alg. 3.
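A sketch of the expansion step in Eqs. (6)–(7), again with our own data structures (a set of frozen index sets for the tree and dicts for the counters, reusing `proj` from above):

```python
def expand_space_tree(tree, counters, k_star, M, replay):
    """Eqs. (6)-(7) (sketch): add every (|k*|+1)-dimensional index set
    containing k* as a child node, initializing each new counter from
    the states already stored in the replay buffer."""
    for e in range(M):
        if e in k_star:
            continue
        k_new = frozenset(k_star) | {e}
        if k_new in tree:
            continue
        tree.add(k_new)
        c = {}
        for s in replay:                      # counter init, Eq. (7)
            key = proj(s, k_new)
            c[key] = c.get(key, 0) + 1
        counters[k_new] = c
```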
Counting in Continuous State Spaces
For high-dimensional continuous state spaces, the counters in CMAE could be implemented using counting methods such as neural density models (Ostrovski et al., 2017;Bellemare et al., 2016) or hash-based counting (Tang et al., 2017). Both approaches have been shown to be effective for counting in continuous state spaces.
In our implementation, we adopt hash-based counting (Tang et al., 2017). Hash-based counting discretizes the state space via a hash function φ(s) : S → Z, which maps a given state s to an integer that is used to index into a table. Specifically, in CMAE, each restricted space S k is associated with a hash function φ k (s) : S k → Z, which maps the continuous s k to an integer. The hash function φ k is used when CMAE updates or queries the counter associated with restricted space S k . Empirically, we found CMAE with hash-counting to perform well in environments with continuous state spaces.
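As one concrete instance of the hash function φ_k, the sketch below uses a SimHash-style random projection in the spirit of Tang et al. (2017); the projection dimension and the sign binarization are our choices, not necessarily the paper's exact implementation.

```python
import numpy as np

class HashCounter:
    """Hash-based count table for a continuous restricted space (sketch):
    a fixed random projection followed by sign binarization buckets
    nearby states together, so counts generalize across similar states."""
    def __init__(self, dim, n_bits=16, seed=0):
        self.A = np.random.RandomState(seed).randn(n_bits, dim)
        self.table = {}

    def _key(self, s):
        bits = (self.A @ np.asarray(s, dtype=float)) > 0
        return bits.tobytes()

    def update(self, s):
        k = self._key(s)
        self.table[k] = self.table.get(k, 0) + 1

    def count(self, s):
        return self.table.get(self._key(s), 0)
```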
Analysis
To provide more insights into how the proposed method improves data efficiency, we analyze the two major components of CMAE: (1) shared goal exploration and (2) restricted space exploration on a simple multi-player matrix game. We first define the multi-player matrix game:
Example 1. In a cooperative 2-player l-action matrix game (Myerson, 2013), a payoff matrix U ∈ R l×l , which is unobservable to the players, describes the payoff the agents obtain after an action configuration is executed. The agents' goal is to find the action configuration that maximizes the collective payoff.
To efficiently find the action configuration that results in maximal payoff, the agents need to uniformly try different action configurations. We show that exploration with shared goals enables agents to see all distinct action configurations more efficiently than exploration without a shared goal. Specifically, when exploring without a shared goal, the agents don't coordinate their behavior; this is equivalent to uniformly picking one action configuration from all configurations at each step. When performing exploration with a shared goal, the least visited action configuration will be chosen as the shared goal, and the two agents coordinate to choose the actions that achieve the goal at each step, making exploration more efficient. The following claim formalizes this:

Claim 1. Consider Example 1. Exploration with a shared goal sees all l² distinct action configurations within l² steps, whereas uniform exploration without a shared goal is a coupon-collector process over the l² configurations and needs Θ(l² log l) steps in expectation.¹

Proof. See supplementary material.

¹ Θ(g) means asymptotically bounded above and below by g.

Next, we show that whenever the payoff matrix depends only on one agent's action, the expected number of steps to see the maximal reward can be further reduced by first exploring restricted spaces.

Claim 2. Consider a special case of Example 1 where the payoff matrix depends only on one agent's action. Let T_sub denote the number of steps needed to discover the maximal reward when exploring the action space of agent one and agent two independently. Let T_full denote the number of steps needed to discover the maximal reward when the full action space is explored. Then, we have T_sub = O(l) and T_full = O(l²).
Proof. See supplementary material.
Suppose the payoff matrix depends on all agents' actions. In this case, CMAE will move to the full space after the restricted spaces are well-explored. For this, the expected total number of steps to see the maximal reward is O(l + l 2 ) = O(l 2 ).
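The gap between the two exploration modes in Example 1 is easy to see empirically. The sketch below is ours, not from the paper: it counts the steps until the maximal payoff entry of a random matrix is first tried, with and without a shared goal.

```python
import random

def steps_to_best(l=30, shared_goal=True, seed=0):
    """Example 1 (sketch): steps until the maximal entry of a random
    l x l payoff matrix is tried for the first time."""
    rng = random.Random(seed)
    payoff = [[rng.random() for _ in range(l)] for _ in range(l)]
    best = max(max(row) for row in payoff)
    novel = [(i, j) for i in range(l) for j in range(l)]
    rng.shuffle(novel)
    steps = 0
    while True:
        steps += 1
        if shared_goal:          # agents jointly target a never-tried config
            i, j = novel.pop()
        else:                    # uncoordinated: uniform over all configs
            i, j = rng.randrange(l), rng.randrange(l)
        if payoff[i][j] == best:
            return steps

# Shared goals need at most l*l steps; uncoordinated exploration repeats
# configurations and typically takes far longer to hit the best entry.
print(steps_to_best(shared_goal=True), steps_to_best(shared_goal=False))
```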
Experimental Results
We evaluate CMAE on two challenging environments:
(1) a discrete version of the multiple-particle environment (MPE) (Lowe et al., 2017; Wang et al., 2020); and (2) the Starcraft multi-agent challenge (SMAC). In both environments, we consider both dense-reward and sparse-reward settings. In a sparse-reward setting, agents don't receive any intermediate rewards, i.e., agents only receive a reward when a task is completed.
Tasks: We first consider the following tasks of the sparsereward MPE environment:
Pass-sparse: Two agents operate within two rooms of a 30 × 30 grid. There is one switch in each room. The rooms are separated by a door and agents start in the same room.
The door will open only when one of the switches is occupied. The agents see collective positive reward and the episode terminates only when both agents changed to the other room. The state vector contains x, y locations of all agents and binary variables to indicate if doors are open.
Secret-Room-sparse: Secret-Room extends Pass. There are two agents and four rooms: one large room on the left and three small rooms on the right. There is one door between each small room and the large room. The switch in the large room controls all three doors; the switch in each small room only controls that room's door. The agents need to navigate to one of the three small rooms, i.e., the target room, to receive positive reward. The grid size is 25 × 25.

Push-Box-sparse: There are two agents and one box in a 15 × 15 grid. Agents need to push the box to the wall to receive positive reward. The box is heavy, so both agents need to push the box in the same direction at the same time to move it. The task is considered solved if the box is pushed to the wall. The state vector contains the x, y locations of all agents and the box.
For further details on the sparse-reward MPE tasks, please see Wang et al. (2020). For completeness, in addition to the aforementioned sparse-reward setting, we also consider a dense-reward version of the three tasks. Please see Appendix D for more details on the environment settings.

Baselines: For MPE tasks, we combine CMAE with Q-learning (Sutton & Barto, 2018; Mnih et al., 2013). We compare CMAE with exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI) (Wang et al., 2020). EITI and EDTI results are obtained using the publicly available code released by the authors.
For a more complete comparison, we also show the results of Q-learning with ε-greedy and Q-learning with count-based exploration (Tang et al., 2017), where an exploration bonus is given when a novel state is visited.
For SMAC tasks, we combine CMAE with QMIX (Rashid et al., 2018). We compare with QMIX (Rashid et al., 2018), QMIX with count-based exploration, weighted QMIX (Rashid et al., 2020a), and weighted QMIX with count-based exploration (Tang et al., 2017). For QMIX and weighted QMIX, we use the publicly available code released by the authors. In all experiments we use restricted spaces of less than four dimensions.
Note, to increase efficiency of the baselines with countbased exploration, in both MPE and SMAC experiments, the counts are shared across all agents. We use '+Bonus' to refer to a baseline with count-based exploration.
Evaluation Protocol: To ensure a rigorous and fair evaluation, we follow the evaluation protocol suggested by Henderson et al. (2017); Colas et al. (2018). We evaluate the target policies in an independent evaluation environment and report final metric. The final metric is an average episode reward or success rate over the last 100 evaluation episodes, i.e., 10 episodes for each of the last ten policies during training. We repeat all experiments using five runs with different random seeds.
Note that EITI and EDTI (Wang et al., 2020) report the episode rewards vs. the number of model updates as an evaluation metric. This isn't common when evaluating RL algorithms as this plot doesn't reflect an RL approach's data efficiency. In contrast, the episode reward vs. number of environment steps is a more common metric for data efficiency and is adopted by many RL works (Lillicrap et al., 2016;Wu et al., 2017;Mnih et al., 2013;Andrychowicz et al., 2017;Shen et al., 2020;Liu et al., 2019a;Li et al., 2019), particularly works on RL exploration (Mahajan et al., 2019;Taiga et al., 2020;Pathak et al., 2017;Tang et al., 2017;Rashid et al., 2020b). Therefore, following most prior works, we report the episode rewards vs. the number of environment steps.
Results:
We first compare CMAE with baselines on Pass, Secret-Room, and Push-Box. The final metric and standard deviation are reported in Tab. 1. In the sparse-reward setting, only CMAE is able to solve the tasks, while none of the baselines learn a meaningful policy. Corresponding training curves with standard deviation are included in Fig. 1. CMAE achieves a 100% success rate on Pass, Secret-Room, and Push-Box within 3M environment steps. In contrast, the baselines cannot solve the tasks within the given budget of 3M steps.
Recently, Taiga et al. (2020) pointed out that many existing exploration strategies excel in challenging sparse-reward tasks but fail in simple tasks that can be solved by using classical exploration methods such as ε-greedy. To ensure CMAE doesn't fail in simpler tasks, we run experiments on the dense-reward version of the three tasks. As shown in Tab. 1, CMAE achieves similar or better performance than the baselines in dense-reward settings.
We also compare the exploration behavior of CMAE to Q-learning with ε-greedy exploration using the Secret-Room environment. The visit count (in log scale) of each location is visualized in Fig. 2. In early stages of training, both CMAE (top) and ε-greedy (bottom) explore only locations in the left room. However, after 1.5M steps, CMAE agents frequently visit the three rooms on the right while ε-greedy agents mostly remain within the left room.
On SMAC, we first compare CMAE with baselines in the sparse-reward setting.
Since the number of nodes in the space tree grows combinatorially, discovering useful high-dimensional restricted spaces for tasks with a high-dimensional state space, such as SMAC, may be infeasible. However, we found empirically that exploring low-dimensional restricted spaces is already beneficial in a subset of SMAC tasks. The results on SMAC tasks are summarized in Tab. 2, where the final metric and standard deviation of the evaluation success rate are reported. As shown in Tab. 2 (top), in 3m-sparse and 2m vs 1z-sparse, QMIX and weighted QMIX, which rely on ε-greedy exploration, rarely solve the task. When combined with count-based exploration, both QMIX and weighted QMIX are able to achieve 18% to 20% success rates. CMAE achieves much higher success rates of 47.7% and 44.3% on 3m-sparse and 2m vs 1z-sparse, respectively. Corresponding training curves with standard deviation are included in Fig. 3. We also run experiments on dense-reward SMAC tasks, where handcrafted intermediate rewards are available. As shown in Tab. 2 (bottom), CMAE achieves similar performance to state-of-the-art baselines in dense-reward SMAC tasks.
Limitations: To show limitations of the proposed method, we run experiments on the sparse-reward version of 3s vs 5z, which is classified as 'hard' even in the dense-reward setting (Samvelyan et al., 2019). As shown in Tab. 2 and Fig. 3, CMAE as well as all baselines fail to solve the task. In 3s vs 5z, the only winning strategy is to force the enemies to scatter around the map and attend to them one by one. Without hand-crafted intermediate rewards, we found it extremely challenging for any approach to pick up this strategy. This demonstrates that efficient exploration for MARL in sparse-reward settings is still a very challenging and open problem, which requires more attention from the community.
Ablation Study: To better understand the approach, we perform an ablation study to examine the effectiveness of the proposed (1) target and exploration policy decoupling and (2) restricted space exploration. We conduct the experiments on 3m-sparse. As Fig. 4 shows, without decoupling the exploration and target policies, the success rate drops from 47.7% to 25.4%. In addition, without restricted space exploration, i.e., by directly exploring the full state space, the success rate drops to 9.4%. This demonstrates that the restricted space exploration and policy decoupling are essential to CMAE's success.
Related Work
We discuss recently developed methods for exploration in reinforcement learning, multi-agent reinforcement learning, and concurrent reinforcement learning subsequently.
Exploration for Deep Reinforcement Learning: A wide variety of exploration techniques for deep reinforcement learning have been studied, deviating from classical noise-based methods. Generalizations of count-based approaches, which give near-optimal results in tabular reinforcement learning, to environments with continuous state spaces have been proposed. For instance, Bellemare et al. (2016) propose a density model to measure the agent's uncertainty. Pseudo-counts derived from the density model give rise to an exploration bonus that encourages visits to rarely seen states. Inspired by Bellemare et al. (2016), Ostrovski et al. (2017) use a neural density model to estimate the pseudo-count, and Tang et al. (2017) use a hash function to estimate the count.
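As a rough illustration of hash-based counting in the spirit of Tang et al. (2017), the sketch below discretizes a state by rounding, counts visits, and returns a bonus proportional to 1/sqrt(count); the coefficient and rounding granularity are illustrative choices of ours:

import numpy as np
from collections import defaultdict

counts = defaultdict(int)

def count_bonus(state, beta=0.1, decimals=1):
    # Hash the (possibly continuous) state by rounding its coordinates,
    # increment the visit count, and return an exploration bonus that
    # shrinks as the state is revisited.
    key = tuple(np.round(np.asarray(state, dtype=float), decimals))
    counts[key] += 1
    return beta / np.sqrt(counts[key])

r = 0.0                                   # sparse extrinsic reward
r_total = r + count_bonus([0.12, -0.31])  # shaped reward used for learning
print(r_total)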
Besides count-based approaches, meta-policy gradient (Xu et al., 2018) uses the target policy's improvement as the reward to train an exploration policy. The resulting exploration policy differs from the actor policy and enables more global exploration. Stadie et al. (2016) propose an exploration strategy based on assigning an exploration bonus from a concurrently learned environment model. Lee et al. (2020) cast exploration as a state marginal matching (SMM) problem and aim to learn a policy for which the state marginal distribution matches a uniform distribution. Other related works on exploration include curiosity-driven exploration (Pathak et al., 2017), diversity-driven exploration (Hong et al., 2018), GEP-PG (Colas et al., 2018), EX² (Fu et al., 2017), bootstrapped DQN (Osband et al., 2016), and random network distillation (Burda et al., 2019). In contrast to our approach, all the techniques mentioned above target single-agent deep reinforcement learning.
Multi-agent Deep Reinforcement Learning (MARL):
MARL (Lowe et al., 2017; Foerster et al., 2017; Liu et al., 2019b; Rashid et al., 2020a; Jain et al., 2020; Zhou et al., 2020; Christianos et al., 2020; Liu et al., 2020; Jain et al., 2021; Hu et al., 2021) has drawn much attention recently. MADDPG (Lowe et al., 2017) uses a central critic that considers other agents' action policies to handle the non-stationarity issues of the multi-agent setting. DIAL (Foerster et al., 2016) uses an end-to-end differentiable architecture that allows agents to learn to communicate. Jiang & Lu (2018) propose an attentional communication model that learns when communication is helpful in a cooperative setting. Foerster et al. (2017) add a 'fingerprint' to each transition tuple in the replay memory to track the age of the tuple and stabilize training. In 'Self-Other-Modeling' (SOM) (Raileanu et al., 2018), an agent uses its own policy to predict other agents' behavior and states.
While inter-agent communication (Inala et al., 2020; Rangwala & Williams, 2020; Zhang et al., 2020; Ding et al., 2020; Jiang & Lu, 2018; Foerster et al., 2016; Rashid et al., 2018; Omidshafiei et al., 2017; Jain et al., 2019) has been considered, for exploration these multi-agent approaches rely on classical noise-based methods. As discussed in Sec. 1, a noise-based approach prevents the agents from sharing their understanding of the environment. A team of cooperative agents with a noise-based exploration policy can only explore local regions that are close to their individual actor policies, which contrasts with CMAE's approach.
Recently, approaches that consider coordinated exploration have been proposed. Multi-agent variational exploration (MAVEN) (Mahajan et al., 2019) introduces a latent space for hierarchical control; agents condition their behavior on the latent variable to perform committed exploration. Influence-based exploration (Wang et al., 2020) captures the influence of one agent's behavior on others; agents are encouraged to visit 'interaction points' that will change other agents' behavior.
Concurrent Deep Reinforcement Learning:
Dimakopoulou & Roy (2018) study coordinated exploration in concurrent reinforcement learning, maintaining an environment model and extending posterior sampling such that agents explore in a coordinated fashion. Parisotto et al. (2019) propose concurrent meta reinforcement learning (CMRL), which permits a set of parallel agents to communicate with each other and find efficient exploration strategies. The concurrent setting differs from the multi-agent setting of our approach: in a concurrent setting, agents operate in different instances of an environment, i.e., one agent's action has no effect on the observations and rewards received by other agents. In contrast, in the multi-agent setting, agents share the same instance of an environment, so an agent's action changes the observations and rewards observed by other agents.
Conclusion
We propose cooperative multi-agent exploration (CMAE), which defines shared goals and learns coordinated exploration policies. To find goals for efficient exploration, we study restricted-space selection, which helps particularly in sparse-reward environments. Empirically, we demonstrate that CMAE increases exploration efficiency. We hope this is a first step toward efficient coordinated exploration.
A. Appendix
Appendix: Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
In this appendix we first provide the proofs for Claim 1 and Claim 2 in Sec. B and Sec. C. We then provide information regarding the MPE and SMAC environments (Sec. D, Sec. E), implementation details (Sec. F), and the absolute metric (Sec. G). Next, we provide additional results on MPE tasks (Sec. H), additional results of baselines (Sec. I), and training curves (Sec. J).
B. Proof of Claim 1
Claim 1. Consider the 2-player l-action matrix game in Example 1. Let m = l² denote the total number of action configurations. Let T_m^share and T_m^non-share denote the number of steps needed to see all m action configurations at least once for exploration with a shared goal and for exploration without a shared goal, respectively. Then we have E[T_m^share] = m and E[T_m^non-share] = Θ(m log m).
Proof. When exploring without a shared goal, the agents do not coordinate their behavior, which is equivalent to uniformly picking one action configuration from the m configurations at each step. We first compute E[T_m^non-share], the expected number of time steps after which the agents have tried all m distinct action configurations. Let T_i be the number of steps to observe the i-th distinct action configuration after seeing i − 1 distinct configurations. Then
$$\mathbb{E}[T_m^{\text{non-share}}] = \mathbb{E}[T_1] + \cdots + \mathbb{E}[T_m]. \tag{8}$$
In addition, let P(i) denote the probability of observing the i-th distinct action configuration after observing i − 1 distinct configurations. We have
$$P(i) = 1 - \frac{i-1}{m} = \frac{m-i+1}{m}. \tag{9}$$
Note that T_i follows a geometric distribution with success probability P(i) = (m − i + 1)/m. Then the expected number of time steps to see the i-th distinct configuration after seeing i − 1 distinct configurations is
$$\mathbb{E}[T_i] = \frac{m}{m-i+1}. \tag{10}$$
Hence, we obtain
$$\mathbb{E}[T_m^{\text{non-share}}] = \mathbb{E}[T_1] + \cdots + \mathbb{E}[T_m] = \sum_{i=1}^{m} \frac{m}{m-i+1} = m \sum_{i=1}^{m} \frac{1}{i} = \Theta(m \log m). \tag{11}$$
When performing exploration with a shared goal, the least visited action configuration is chosen as the shared goal, and the two agents coordinate to choose the actions that achieve the goal at each step. Hence, at each time step the agents visit a new action configuration. Therefore, exploration with a shared goal needs m time steps to visit all m action configurations, i.e., T_m^share = m, which completes the proof.
² Θ(g) means asymptotically bounded above and below by g.
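As a sanity check, Eq. (11) and the shared-goal result can be verified numerically with a short simulation; the game size, seed, and number of trials below are arbitrary choices of ours:

import random

def steps_without_shared_goal(m, rng):
    # Coupon collector: sample configurations uniformly until all m are seen.
    seen, steps = set(), 0
    while len(seen) < m:
        seen.add(rng.randrange(m))
        steps += 1
    return steps

rng = random.Random(0)
l = 8
m = l * l
trials = [steps_without_shared_goal(m, rng) for _ in range(2000)]
harmonic = sum(1.0 / i for i in range(1, m + 1))
print("shared goal:", m)                              # exactly m steps
print("empirical E[T_non-share]:", sum(trials) / len(trials))
print("m * H_m (Eq. (11)):", m * harmonic)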
C. Proof of Claim 2
Claim 2. Consider a special case of Example 1 where the payoff matrix depends only on one agent's action. Let T_sub denote the number of steps needed to discover the maximal reward when exploring the action spaces of agent one and agent two independently. Let T_full denote the number of steps needed to discover the maximal reward when the full action space is explored. Then, we have T_sub = O(l) and T_full = O(l²).
Proof. When we explore the action spaces of agent one and agent two independently, there are 2l distinct action configurations (l action configurations for each agent) to explore. Since the reward function depends only on one agent's action, one of these 2l action configurations must lead to the maximal reward. Therefore, by checking a distinct action configuration at each time step, we need at most 2l steps to receive the maximal reward, i.e., E[T_sub] = O(l).
In contrast, when we explore the joint action space of agent one and agent two, there are l² distinct action configurations. Because the reward function depends only on one agent's action, l of these l² action configurations must lead to the maximal reward. In the worst case, we choose the l² − l action configurations that do not result in maximal reward in the first l² − l steps and receive the maximal reward at step l² − l + 1. Therefore, we have E[T_full] = O(l² − l + 1) = O(l²), which concludes the proof.
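The gap between the two bounds can be illustrated with a small worst-case enumeration; the payoff function and the value of l below are illustrative choices of ours, not from the paper:

l = 10
payoff = lambda a, b: 1.0 if a == l - 1 else 0.0   # depends on agent one only

# Restricted exploration: scan agent one's l actions, then agent two's.
restricted = [(a, 0) for a in range(l)] + [(0, b) for b in range(l)]
t_sub = next(t for t, ab in enumerate(restricted, 1) if payoff(*ab) == 1.0)

# Full joint space in the worst-case order: all l^2 - l zero-payoff
# configurations are tried before any maximal one.
full = sorted(((a, b) for a in range(l) for b in range(l)),
              key=lambda ab: payoff(*ab))
t_full = next(t for t, ab in enumerate(full, 1) if payoff(*ab) == 1.0)

print(t_sub, "<= 2l =", 2 * l)                     # O(l)
print(t_full, "= l^2 - l + 1 =", l * l - l + 1)    # O(l^2)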
D. Details Regarding MPE Environments
In this section we provide details regarding the sparse-reward and dense-reward versions of the MPE tasks. We first present the sparse-reward versions:
• Pass-sparse: Two agents operate within two rooms of a 30 × 30 grid. There is one switch in each room, the rooms are separated by a door, and the agents start in the same room. The door opens only when one of the switches is occupied. The agents receive a collective positive reward, and the episode terminates, only when both agents have moved to the other room. The task is considered solved if both agents are in the right room.
• Secret-Room-sparse: Secret-Room-sparse extends Pass-sparse. There are two agents and four rooms: one large room on the left and three small rooms on the right. There is one door between each small room and the large room. The switch in the large room controls all three doors; the switch in each small room only controls that room's door. All agents need to navigate to one of the three small rooms, i.e., the target room, to receive a positive reward. The grid size is 25 × 25. The task is considered solved if both agents are in the target room. The state vector contains the x, y locations of all agents and binary variables that indicate whether the doors are open.
• Push-Box-sparse: There are two agents and one box in a 15 × 15 grid. Agents need to push the box to the wall to receive a positive reward. The box is heavy, so both agents need to push it in the same direction at the same time to move it. The task is considered solved if the box is pushed to the wall.
• Island-sparse: Two agents and a wolf operate in a 10 × 10 grid. The agents get a collective reward of 300 when crushing the wolf. The wolf and the agents have maximum energies of eight and five, respectively, and energy decreases by one when attacked. Therefore, one agent alone cannot crush the wolf; the agents need to collaborate to complete the task. The task is considered solved if the wolf's health reaches zero.
To study the performance of CMAE and the baselines in a dense-reward setting, we add 'checkpoints' to guide the learning of the agents. Specifically, to add checkpoints, we draw concentric circles around a landmark, e.g., a switch, a door, or a box. Each circle is a checkpoint region. The first time an agent steps into each of the checkpoint regions, the agent receives an additional checkpoint reward of +0.1.
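A minimal sketch of this checkpoint scheme is given below; the landmark position and ring radii are illustrative assumptions of ours, while the +0.1 bonus and the first-visit-only rule follow the description above:

import numpy as np

def make_checkpoint_reward(landmark, radii, bonus=0.1):
    # Concentric checkpoint regions around a landmark (e.g., a switch, a
    # door, or a box). The first time an agent enters each region it
    # receives `bonus`; revisits yield nothing.
    claimed = set()
    def reward(agent_id, pos):
        d = np.linalg.norm(np.asarray(pos) - np.asarray(landmark))
        r = 0.0
        for i, radius in enumerate(radii):
            if d <= radius and (agent_id, i) not in claimed:
                claimed.add((agent_id, i))
                r += bonus
        return r
    return reward

ckpt = make_checkpoint_reward(landmark=(5, 5), radii=range(1, 11))
print(ckpt(0, (9, 9)))   # crosses the outer rings for the first time
print(ckpt(0, (9, 9)))   # 0.0 on revisits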
• Pass-dense: Similar to Pass-sparse, but the agents see dense checkpoint rewards when they move toward the switches and the door. Specifically, when the door is open, agents receive up to ten checkpoint rewards when they move toward the door and the switch in the right room.
• Secret-Room-dense: Similar to Secret-Room-sparse, but the checkpoint rewards based on the agents' distance to the door and the target room's switch are added. Specifically, when the door is open, agents receive up to ten checkpoint rewards when they move toward the door and the switch in the target room.
• Push-Box-dense: Similar to Push-Box-sparse, but checkpoint rewards based on the box's distance to the wall are added. Specifically, agents receive up to six checkpoint rewards when they push the box toward the wall.
• Island-dense: Similar to Island-sparse, but the agents receive a +1 reward when the wolf's energy decreases.
E. Details of SMAC Environments
In this section, we present details for the sparse-reward and dense-reward versions of the SMAC tasks. We first discuss the sparse-reward version of the SMAC tasks.
• 3m-sparse: There are three marines in each team. Agents need to collaboratively take care of the three marines on the other team. Agents only see a reward of +1 when all enemies are taken care of.
• 2m vs 1z-sparse: There are two marines on our team and one Zealot on the opposing team. In 2m vs 1z, Zealots are stronger than marines. To take care of the Zealot, the marines need to learn to fire alternately so as to confuse the Zealot. Agents only see a reward of +1 when all enemies are taken care of.
• 3s vs 5z-sparse: There are three Stalkers on our team and five Zealots on the opposing team. Because Zealots counter Stalkers, the Stalkers have to learn to force the enemies to scatter around the map and attend to them one by one. Agents only see a reward of +1 when all enemies are attended to.
The details of the dense-reward version of the SMAC tasks are as follows.
• 3m-dense: This task is similar to 3m-sparse, but the reward is dense. An agent sees a reward of +1 when it causes damage to an enemy's health. A reward of −1 is received when its own health decreases. All the rewards are collective. A reward of +200 is obtained when all enemies are taken care of.
• 2m vs 1z-dense: Similar to 2m vs 1z-sparse, but the reward is dense. The reward function is similar to 3m-dense.
• 3s vs 5z-dense: Similar to 3s vs 5z-sparse, but the reward is dense. The reward function follows the one in the 3m-dense task.
F. Implementation Details
F.1. Normalized Entropy Estimation
As discussed in Sec. 3, we use Eq.
(3) to compute the normalized entropy for a restricted space S k , i.e.,
η k = H k /H max,k = − s∈S k p k (s) log p k (s) / log(|S k |).
Note that |S k | is typically unavailable even in discrete state spaces. Therefore, we use the number of current observed distinct outcomes |Ŝ k | to estimate |S k |. For instance, suppose S k is a one-dimensional restricted state space and we observe S k takes values −1, 0, 1. Then |Ŝ k | = 3 is used to estimate |S k | in Eq. (3). |Ŝ k | typically gradually increases during exploration. In addition, for |Ŝ k | = 1, i.e., for a constant restricted space, the normalized entropy will be set to infinity.
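A minimal implementation of this estimator could look as follows (the function name and example values are ours):

import math
from collections import Counter

def normalized_entropy(observed_values):
    # Estimate eta_k = H_k / H_max,k from the values of a restricted space
    # S_k observed so far, using the number of distinct outcomes |S^_k| in
    # place of the unknown |S_k| (Eq. (3)). Returns infinity for a constant
    # restricted space, matching the convention above.
    counts = Counter(observed_values)
    n_distinct, total = len(counts), sum(counts.values())
    if n_distinct <= 1:
        return math.inf
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(n_distinct)

print(normalized_entropy([-1, 0, 1, 0, 0]))  # |S^_k| = 3
print(normalized_entropy([0, 0, 0]))         # inf: constant space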
F.2. Architecture and Hyper-Parameters
We present the details of architectures and hyper-parameters of CMAE and baselines next.
MPE environments: We combine CMAE with Q-learning. For Pass, Secret-Room, and Push-Box, the Q-value function is represented via a table, which is initialized to zero. The update step sizes for the exploration policies and target policies are 0.1 and 0.05, respectively. For Island we use a DQN (Mnih et al., 2013; 2015). The Q-function is parameterized by a three-layer perceptron (MLP) with 64 hidden units per layer and a ReLU activation function. The learning rate is 0.0001 and the replay buffer size is 1M. In all MPE tasks, the bonus r̂ for reaching a goal is 1, and the discount factor γ is 0.95.
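One possible reading of this DQN architecture is sketched below in PyTorch; the observation/action dimensions and the optimizer choice are placeholders of ours (the text above specifies only the layer widths, activation, learning rate, and buffer size):

import torch
import torch.nn as nn

obs_dim, n_actions = 10, 5   # placeholder sizes
q_net = nn.Sequential(       # three linear layers, 64 hidden units, ReLU
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
# The optimizer type is an assumption of ours; only the learning rate
# (0.0001) is stated above.
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
print(q_net(torch.zeros(1, obs_dim)).shape)  # torch.Size([1, 5])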
For the baselines EITI and EDTI (Wang et al., 2020), we use their default architecture and hyper-parameters. According to our observations, the main reason that EITI and EDTI need many environment steps to converge is that a long rollout (512 steps × 32 processes) is used between model updates. In an attempt to improve the data efficiency of the baselines, we also study shorter rollout lengths, i.e., {128, 256}, for both EITI and EDTI. However, we did not observe an improvement over the default setting. Specifically, after more than 500M environment steps of training on Secret-Room, EITI with rollout lengths 128 and 256 achieves 0.0% and 54.8% success rates, and EDTI with rollout lengths 128 and 256 achieves 0.0% and 59.6% success rates, which is much lower than the 80% success rate achieved with the default setting.
SMAC environment:
We combine CMAE with QMIX (Rashid et al., 2018). Following their default setting, for both exploration and target policies, each agent is a DRQN (Hausknecht & Stone, 2015) with a GRU (Chung et al., 2014) recurrent layer with a 64-dimensional hidden state. Before and after the GRU layer is a fully-connected layer of 64 units. The mixing network has 32 units. The discount factor γ is 0.99. The replay memory stores the latest 5000 episodes, and the batch size is 32. RMSProp is used with a learning rate of 5·10⁻⁴. The target network is updated every 100 episodes. For the goal bonus r̂ (Alg. 2), we studied {0.01, 0.1, 1} and found 0.1 to work well in most tasks. Therefore, we use r̂ = 0.1 for all SMAC tasks. The hyper-parameters of CMAE with QMIX and baselines are summarized in Tab. 3.
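A plausible per-agent network matching the layer sizes above is sketched below; the input/output dimensions are placeholders of ours and the exact wiring may differ from the authors' implementation:

import torch
import torch.nn as nn

class DRQNAgent(nn.Module):
    # Fully-connected layer -> GRU with a 64-dimensional hidden state ->
    # fully-connected output layer, following the description above.
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.fc_in = nn.Linear(obs_dim, hidden)
        self.gru = nn.GRUCell(hidden, hidden)
        self.fc_out = nn.Linear(hidden, n_actions)

    def forward(self, obs, h):
        x = torch.relu(self.fc_in(obs))
        h = self.gru(x, h)
        return self.fc_out(h), h

agent = DRQNAgent(obs_dim=30, n_actions=9)
h = torch.zeros(1, 64)
q, h = agent(torch.zeros(1, 30), h)
print(q.shape)  # torch.Size([1, 9])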
G. Absolute Metric and Final Metric
In addition to the final metric reported in Tab. 1 and Tab. 2, following Henderson et al. (2017) and Colas et al. (2018), we also report the absolute metric. The absolute metric is the best policies' average episode reward over 100 evaluation episodes. The final metric and absolute metric of CMAE and baselines on MPE and SMAC tasks are summarized in Tab. 4 and Tab. 5.
H. Additional Results on MPE Task: Island
In addition to the MPE tasks considered in Sec. 4, we consider one more challenging MPE task: Island. The details of both the sparse-reward and dense-reward versions of Island, i.e., Island-sparse and Island-dense, are presented in Sec. D. We compare CMAE to Q-learning, Q-learning with count-based exploration, EITI, and EDTI on both Island-sparse and Island-dense. The results are summarized in Tab. 4. As Tab. 4 shows, in the sparse-reward setting, CMAE is able to achieve a higher than 50% success rate, whereas the baselines struggle to solve the task. In the dense-reward setting, CMAE performs similarly to the baselines. The training curves are shown in Fig. 5.
Table 6. Environment steps required to achieve the indicated target success rate on Pass-sparse, Secret-Room-sparse, Push-Box-sparse, and Island-sparse environments.
Task (target success rate) | CMAE (Ours) | EITI | EDTI
Pass-sparse (80%) | 2.43M±0.10M | 384M±1.2M | 381M±2.8M
Secret-Room-sparse (80%) | 2.35M±0.05M | 448M±10.0M | 382M±9.4M
Push-Box-sparse (10%) | 0.47M±0.04M | 307M±2.3M | 160M±12.1M
Push-Box-sparse (80%) | 2.26M±0.02M | 307M±3.9M | 160M±8.2M
Island-sparse (20%) | 7.50M±0.12M | 480M±5.2M | 322M±1.4M
Island-sparse (50%) | 13.9M±0.21M | >500M | >500M
I. Additional Results of Baselines
Following the setting of EITI and EDTI (Wang et al., 2020), we train both baselines for 500M environment steps. On Pass-sparse, Secret-Room-sparse, and Push-Box-sparse, we observe that EITI and EDTI (Wang et al., 2020) need more than 300M steps to achieve an 80% success rate. In contrast, CMAE achieves a 100% success rate within 3M environment steps. On Island-sparse, EITI and EDTI need more than 300M environment steps to achieve a 20% success rate, while CMAE needs less than 8M environment steps to achieve the same success rate. The results are summarized in Tab. 6.
J. Additional Training Curves
The training curves of CMAE and baselines on both sparse-reward and dense-reward MPE tasks are shown in Fig. 5 and Fig. 6. The training curves of CMAE and baselines on both sparse-reward and dense-reward SMAC tasks are shown in Fig. 7 and Fig. 8. As shown in Fig. 5, Fig. 6, Fig. 7, and Fig. 8, in challenging sparse-reward tasks, CMAE consistently achieves a higher success rate than the baselines. In dense-reward tasks, CMAE performs similarly to the baselines.
Figure 8. Training curves on dense-reward SMAC tasks.
Figure 1. Training curves on sparse-reward and dense-reward MPE tasks.
Figure 2. Visitation map (log scale) of CMAE (top) and ε-greedy (bottom) on the Secret-Room task.
Figure 3. Training curves on sparse-reward SMAC tasks.
Figure 4. Ablation: CMAE, CMAE without decoupling target and exploration policies, and CMAE without restricted space exploration on 3m-sparse.
Figure 5. Training curves on sparse-reward MPE tasks.
Figure 6. Training curves on dense-reward MPE tasks.
Figure 7. Training curves on sparse-reward SMAC tasks.
Algorithm 1: Training with Coordinated Multi-Agent Exploration (CMAE)
Init: space tree T_space, counters c
Init: exploration policies μ = {μ_i}_{i=1}^n, target policies π = {π_i}_{i=1}^n, replay buffer D
1: for episode = 1 ... E do
2:     Reset the environment. Observe state s_1 and observations o_1 = (o_1^1, ..., o_1^n)
3:     for t = 1 ... H do
4:         ...
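For orientation, a hedged Python skeleton mirroring only the recoverable lines of Algorithm 1 is given below; the stub environment and all helper names are ours, and the loop bodies are left empty where the source box is truncated:

class ToyEnv:
    # Stub standing in for the real MPE/SMAC environments.
    def reset(self):
        return (0, 0), [(0, 0), (0, 0)]   # state s_1, observations o_1

def train_cmae(env, num_episodes, horizon):
    space_tree, counters = {}, {}          # Init: space tree T_space, counters c
    replay_buffer = []                     # Init: replay buffer D
    for episode in range(num_episodes):    # line 1: for episode = 1 ... E
        s, o = env.reset()                 # line 2: reset; observe s_1, o_1
        for t in range(horizon):           # line 3: for t = 1 ... H
            pass                           # line 4 onward: truncated above

train_cmae(ToyEnv(), num_episodes=2, horizon=5)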
Table 1. Final metric of episode rewards of CMAE and baselines on sparse-reward (top) and dense-reward (bottom) MPE tasks.
Task | CMAE (Ours) | Q-learning | Q-learning + Bonus | EITI | EDTI
Pass-sparse | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Secret-Room-sparse | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Push-Box-sparse | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Pass-dense | 5.00±0.00 | 1.25±0.02 | 1.42±0.14 | 0.00±0.00 | 0.18±0.01
Secret-Room-dense | 4.00±0.57 | 1.62±0.16 | 1.53±0.04 | 0.00±0.00 | 0.00±0.00
Push-Box-dense | 1.38±0.21 | 1.58±0.14 | 1.55±0.04 | 0.10±0.01 | 0.05±0.03
To evaluate CMAE on environments with continuous state spaces, we consider three standard tasks in SMAC⁵ (Samvelyan et al., 2019): 3m, 2m vs 1z, and 3s vs 5z. While the tasks are considered challenging, the commonly used reward is dense, i.e., carefully hand-crafted intermediate rewards are used to guide the agents' learning. However, in many real-world applications, designing effective intermediate rewards may be very difficult or infeasible. Therefore, in addition to the dense-reward setting, we also consider the sparse-reward setting specified by the SMAC environment (Samvelyan et al., 2019) for the three tasks. In SMAC, the state vector contains, for all units on the map: x, y locations, health, shield, and unit type. Note that SMAC tasks are partially observable, i.e., agents only observe information of units within a range; please see Appendix E for more details on the SMAC environment.
Experimental Setup: For MPE tasks, we combine CMAE with Q-learning. For SMAC tasks, we combine CMAE with QMIX (Rashid et al., 2018).
⁵ https://github.com/oxwhirl/smac
Table 4. Final metric and absolute metric of CMAE and baselines on sparse-reward and dense-reward MPE tasks.
Task | Metric | CMAE (Ours) | Q-learning | Q-learning + Bonus | EITI | EDTI
Pass-sparse | Final | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Pass-sparse | Absolute | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Secret-Room-sparse | Final | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Secret-Room-sparse | Absolute | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Push-Box-sparse | Final | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Push-Box-sparse | Absolute | 1.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Island-sparse | Final | 0.55±0.30 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Island-sparse | Absolute | 0.61±0.23 | 0.00±0.00 | 0.01±0.01 | 0.00±0.00 | 0.00±0.00
Pass-dense | Final | 5.00±0.00 | 1.25±0.02 | 1.42±0.14 | 0.00±0.00 | 0.18±0.01
Pass-dense | Absolute | 5.00±0.00 | 1.30±0.03 | 1.46±0.08 | 0.00±0.00 | 0.20±0.01
Secret-Room-dense | Final | 4.00±0.57 | 1.62±0.16 | 1.53±0.04 | 0.00±0.00 | 0.00±0.00
Secret-Room-dense | Absolute | 4.00±0.57 | 1.63±0.03 | 1.57±0.06 | 0.00±0.00 | 0.00±0.00
Push-Box-dense | Final | 1.38±0.21 | 1.58±0.14 | 1.55±0.04 | 0.10±0.01 | 0.05±0.03
Push-Box-dense | Absolute | 1.38±0.21 | 1.59±0.04 | 1.55±0.04 | 0.00±0.00 | 0.18±0.01
Island-dense | Final | 138.00±74.70 | 87.03±65.80 | 110.36±71.99 | 11.18±0.62 | 10.45±0.61
Island-dense | Absolute | 163.25±68.50 | 141.60±92.53 | 170.14±62.10 | 16.84±0.65 | 16.42±0.86

Table 5. Final metric and absolute metric of success rate (%) of CMAE and baselines on sparse-reward and dense-reward SMAC tasks.
Task | Metric | CMAE (Ours) | Weighted QMIX | Weighted QMIX + Bonus | QMIX | QMIX + Bonus
3m-sparse | Final | 47.7±35.1 | 2.7±5.1 | 11.5±8.6 | 0.0±0.0 | 11.7±16.9
3m-sparse | Absolute | 62.0±41.0 | 8.1±4.5 | 15.6±7.3 | 0.0±0.0 | 22.8±18.4
2m vs 1z-sparse | Final | 44.3±20.8 | 0.0±0.0 | 19.4±18.1 | 0.0±0.0 | 19.8±14.1
2m vs 1z-sparse | Absolute | 47.7±35.1 | 0.0±0.0 | 23.9±16.7 | 0.0±0.0 | 30.3±26.7
3s vs 5z-sparse | Final | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0
3s vs 5z-sparse | Absolute | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0 | 0.0±0.0
3m-dense | Final | 98.7±1.7 | 98.3±2.5 | 98.9±1.7 | 97.9±3.6 | 97.3±3.0
3m-dense | Absolute | 99.3±1.8 | 98.8±0.3 | 99.0±0.3 | 99.4±2.1 | 98.5±1.2
2m vs 1z-dense | Final | 98.2±0.1 | 98.5±0.1 | 96.0±1.8 | 97.1±2.4 | 95.8±1.7
2m vs 1z-dense | Absolute | 98.7±0.4 | 98.6±1.6 | 99.1±0.9 | 99.1±0.6 | 96.0±1.6
3s vs 5z-dense | Final | 81.3±16.1 | 92.2±6.6 | 95.3±2.2 | 75.0±17.6 | 78.1±24.4
3s vs 5z-dense | Absolute | 85.4±22.6 | 95.4±4.4 | 95.4±3.2 | 76.5±24.3 | 79.1±14.2
University of Illinois at Urbana-Champaign, IL, U.S.A. Correspondence to: Iou-Jen Liu <[email protected]>.
³ O(g) means asymptotically bounded above by g. ⁴ Ω(g) means asymptotically bounded below by g.
Acknowledgement: This work is supported in part by NSF under Grant #1718221, 2008387, 2045586, and MRI #1725729, UIUC, Samsung, Amazon, 3M, and Cisco Systems Inc. RY is supported by a Google Fellowship.
Hindsight experience replay. M Andrychowicz, F Wolski, A Ray, J Schneider, R Fong, P Welinder, B Mcgrew, J Tobin, P Abbeel, W Zaremba, Proc. NeurIPS. NeurIPSAndrychowicz, M., Wolski, F., Ray, A., Schneider, J., Fong, R., Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. Hindsight experience replay. In Proc. NeurIPS, 2017.
Opportunities for multiagent systems and multiagent reinforcement learning in traffic control. A L Bazzan, Proc. AAMAS. AAMASBazzan, A. L. C. Opportunities for multiagent systems and multiagent reinforcement learning in traffic control. In Proc. AAMAS, 2008.
Unifying count-based exploration and intrinsic motivation. M G Bellemare, S Srinivasan, G Ostrovski, T Schaul, D Saxton, R Munos, Proc. NeurIPS. NeurIPSBellemare, M. G., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. Unifying count-based exploration and intrinsic motivation. In Proc. NeurIPS, 2016.
Exploration by random network distillation. Y Burda, H Edwards, A Storkey, O Klimov, Proc. ICLR. ICLRBurda, Y., Edwards, H., Storkey, A., and Klimov, O. Ex- ploration by random network distillation. In Proc. ICLR, 2019.
Shared experience actor-critic for multi-agent reinforcement learning. F Christianos, L Schafer, Albrecht , S V , Proc. NeurIPS. NeurIPSChristianos, F., Schafer, L., and Albrecht, S. V. Shared ex- perience actor-critic for multi-agent reinforcement learn- ing. In Proc. NeurIPS, 2020.
Empirical evaluation of gated recurrent neural networks on sequence modeling. J Chung, C Gulcehre, K Cho, Y Bengio, arXiv.Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Empiri- cal evaluation of gated recurrent neural networks on se- quence modeling. In arXiv., 2014.
Decoupling exploration and exploitation in deep reinforcement learning algorithms. C Colas, O Sigaud, P.-Y Oudeyer, Gep-Pg, Proc. ICML. ICMLColas, C., Sigaud, O., and Oudeyer, P.-Y. GEP-PG: Decou- pling exploration and exploitation in deep reinforcement learning algorithms. In Proc. ICML, 2018.
Coordinated exploration in concurrent reinforcement learning. M Dimakopoulou, B V Roy, Proc. ICML. ICMLDimakopoulou, M. and Roy, B. V. Coordinated exploration in concurrent reinforcement learning. In Proc. ICML, 2018.
Learning individually inferred communication for multi-agent cooperation. Z Ding, T Huang, Z Lu, Proc. NeurIPS. NeurIPSDing, Z., Huang, T., and Lu, Z. Learning individually in- ferred communication for multi-agent cooperation. In Proc. NeurIPS, 2020.
Stabilising experience replay for deep multi-agent reinforcement learning. J Foerster, N Nardelli, G Farquhar, T Afouras, P H S Torr, P Kohli, S Whiteson, Proc. ICML. ICMLFoerster, J., Nardelli, N., Farquhar, G., Afouras, T., Torr, P. H. S., Kohli, P., and Whiteson, S. Stabilising experience replay for deep multi-agent reinforcement learning. In Proc. ICML, 2017.
Counterfactual multi-agent policy gradients. J Foerster, G Farquhar, T Afouras, N Nardelli, S Whiteson, Proc. AAAI. AAAIFoerster, J., Farquhar, G., Afouras, T., Nardelli, N., and Whiteson, S. Counterfactual multi-agent policy gradi- ents. In Proc. AAAI, 2018.
Learning to communicate with deep multi-agent reinforcement learning. J N Foerster, Y M Assael, N De Freitas, S Whiteson, Proc. ICML. ICMLFoerster, J. N., Assael, Y. M., de Freitas, N., and White- son, S. Learning to communicate with deep multi-agent reinforcement learning. In Proc. ICML, 2016.
Exploration with exemplar models for deep reinforcement learning. J Fu, J D Co-Reyes, S Levine, Ex2, Proc. NeurIPS. NeurIPSFu, J., Co-Reyes, J. D., and Levine, S. Ex2: Exploration with exemplar models for deep reinforcement learning. In Proc. NeurIPS, 2017.
Deep recurrent q-learning for partially observable mdps. M Hausknecht, P Stone, arXiv.Hausknecht, M. and Stone, P. Deep recurrent q-learning for partially observable mdps. In arXiv., 2015.
Deep reinforcement learning that matters. P Henderson, R Islam, P Bachman, J Pineau, D Precup, D Meger, Proc. AAAI. AAAIHenderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. Deep reinforcement learning that mat- ters. In Proc. AAAI, 2017.
Diversity-driven exploration strategy for deep reinforcement learning. Z.-W Hong, T.-Y Shann, S.-Y Su, Y.-H Chang, C.-Y Lee, Proc. NeurIPS. NeurIPSHong, Z.-W., Shann, T.-Y., Su, S.-Y., Chang, Y.-H., and Lee, C.-Y. Diversity-driven exploration strategy for deep reinforcement learning. In Proc. NeurIPS, 2018.
Updet: Universal multi-agent rl via policy decoupling with transformers. S Hu, F Zhu, X Chang, X Liang, Proc. ICLR. ICLRHu, S., Zhu, F., Chang, X., and Liang, X. Updet: Universal multi-agent rl via policy decoupling with transformers. In Proc. ICLR, 2021.
Deep reinforcement learning for swarm systems. M Hüttenrauch, A Šošić, G Neumann, JMLR. Hüttenrauch, M.,Šošić, A., and Neumann, G. Deep rein- forcement learning for swarm systems. JMLR, 2019.
Neurosymbolic transformers for multi-agent communication. J P Inala, Y Yang, J Paulos, Y Pu, O Bastani, V Kumar, M Rinard, A Solar-Lezama, Proc. NeurIPS. NeurIPSInala, J. P., Yang, Y., Paulos, J., Pu, Y., Bastani, O., Kumar, V., Rinard, M., and Solar-Lezama, A. Neurosymbolic transformers for multi-agent communication. In Proc. NeurIPS, 2020.
Two body problem: Collaborative visual task completion. U Jain, L Weihs, E Kolve, M Rastegari, S Lazebnik, A Farhadi, A Schwing, A Kembhavi, Proc. CVPR. CVPRJain, U., Weihs, L., Kolve, E., Rastegari, M., Lazebnik, S., Farhadi, A., Schwing, A., and Kembhavi, A. Two body problem: Collaborative visual task completion. In Proc. CVPR, 2019.
A cordial sync: Going beyond marginal policies for multi-agent embodied tasks. U Jain, L Weihs, E Kolve, A Farhadi, S Lazebnik, A Kembhavi, A G Schwing, Proc. ECCV. ECCVJain, U., Weihs, L., Kolve, E., Farhadi, A., Lazebnik, S., Kembhavi, A., and Schwing, A. G. A cordial sync: Go- ing beyond marginal policies for multi-agent embodied tasks. In Proc. ECCV, 2020.
Training embodied agents with minimal supervision. U Jain, I.-J Liu, S Lazebnik, A Kembhavi, L Weihs, A Schwing, Gridtopix, In arXiv.Jain, U., Liu, I.-J., Lazebnik, S., Kembhavi, A., Weihs, L., and Schwing, A. Gridtopix: Training embodied agents with minimal supervision. In arXiv., 2021.
Learning attentional communication for multi-agent cooperation. J Jiang, Z Lu, Proc. NeurIPS. NeurIPSJiang, J. and Lu, Z. Learning attentional communication for multi-agent cooperation. In Proc. NeurIPS, 2018.
Efficient exploration via state marginal matching. L Lee, B Eysenbach, E Parisotto, E Xing, S Levine, R Salakhutdinov, arXiv.Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., and Salakhutdinov, R. Efficient exploration via state marginal matching. In arXiv., 2020.
Accelerating distributed reinforcement learning with in-switch computing. Y Li, I.-J Liu, Y Yuan, D Chen, A Schwing, J Huang, Proc. ISCA. ISCALi, Y., Liu, I.-J., Yuan, Y., Chen, D., Schwing, A., and Huang, J. Accelerating distributed reinforcement learn- ing with in-switch computing. In Proc. ISCA, 2019.
Continuous control with deep reinforcement learning. T P Lillicrap, J J Hunt, A Pritzel, N Heess, T Erez, Y Tassa, D Silver, D Wierstra, Proc. ICLR. ICLRLillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. In Proc. ICLR, 2016.
Knowledge flow: Improve upon your teachers. I.-J Liu, J Peng, A G Schwing, Proc. ICLR. ICLRLiu, I.-J., Peng, J., and Schwing, A. G. Knowledge flow: Improve upon your teachers. In Proc. ICLR, 2019a.
PIC: permutation invariant critic for multi-agent deep reinforcement learning. I.-J Liu, R A Yeh, A G Schwing, Proc. CoRL. CoRLLiu, I.-J., Yeh, R. A., and Schwing, A. G. PIC: permuta- tion invariant critic for multi-agent deep reinforcement learning. In Proc. CoRL, 2019b.
High-throughput synchronous deep rl. I.-J Liu, R A Yeh, A G Schwing, Proc. NeurIPS. NeurIPSLiu, I.-J., Yeh, R. A., and Schwing, A. G. High-throughput synchronous deep rl. In Proc. NeurIPS, 2020.
Multi-agent actor-critic for mixed cooperativecompetitive environments. R Lowe, Y Wu, A Tamar, J Harb, P Abbeel, I Mordatch, Proc. NeurIPS. NeurIPSLowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mor- datch, I. Multi-agent actor-critic for mixed cooperative- competitive environments. In Proc. NeurIPS, 2017.
MAVEN: multi-agent variational exploration. A Mahajan, T Rashid, M Samvelyan, S Whiteson, Proc. NeurIPS. NeurIPSMahajan, A., Rashid, T., Samvelyan, M., and Whiteson, S. MAVEN: multi-agent variational exploration. In Proc. NeurIPS, 2019.
Matignon, L., Laurent, G. J., and Le Fort-Piat, N. Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems. The Knowledge Engineering Review, 2012.
Playing atari with deep reinforcement learning. V Mnih, K Kavukcuoglu, D Silver, A Graves, I Antonoglou, D Wierstra, M Riedmiller, NeurIPS Deep Learning Workshop. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing atari with deep reinforcement learning. In NeurIPS Deep Learning Workshop, 2013.
Human-level control through deep reinforcement learning. V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, D Hassabis, Nature. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Ve- ness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wier- stra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 2015.
Myerson, R. B. Game Theory. Harvard University Press, 2013.
Deep decentralized multi-task multi-agent reinforcement learning under partial observability. S Omidshafiei, J Pazis, C Amato, J P How, J Vian, Proc. ICML. ICMLOmidshafiei, S., Pazis, J., Amato, C., How, J. P., and Vian, J. Deep decentralized multi-task multi-agent reinforce- ment learning under partial observability. In Proc. ICML, 2017.
Deep exploration via bootstrapped dqn. I Osband, C Blundell, A Pritzel, B V Roy, Proc. NeurIPS. NeurIPSOsband, I., Blundell, C., Pritzel, A., and Roy, B. V. Deep exploration via bootstrapped dqn. In Proc. NeurIPS, 2016.
Count-based exploration with neural density models. G Ostrovski, M G Bellemare, A Van Den Oord, R Munos, Proc. ICML. ICMLOstrovski, G., Bellemare, M. G., van den Oord, A., and Munos, R. Count-based exploration with neural density models. In Proc. ICML, 2017.
Concurrent meta reinforcement learning. E Parisotto, S Ghosh, S B Yalamanchi, V Chinnaobireddy, Y Wu, R Salakhutdinov, arXiv.Parisotto, E., Ghosh, S., Yalamanchi, S. B., Chinnao- bireddy, V., Wu, Y., and Salakhutdinov, R. Concurrent meta reinforcement learning. In arXiv., 2019.
Curiosity-driven exploration by self-supervised prediction. D Pathak, P Agrawal, A A Efros, Darrell , T , Proc. ICML. ICMLPathak, D., Agrawal, P., Efros, A. A., and Darrell, T. Curiosity-driven exploration by self-supervised predic- tion. In Proc. ICML, 2017.
Modeling others using oneself in multi-agent reinforcement learning. R Raileanu, E Denton, A Szlam, Fergus , R , Proc. ICML. ICMLRaileanu, R., Denton, E., Szlam, A., and Fergus, R. Mod- eling others using oneself in multi-agent reinforcement learning. In Proc. ICML, 2018.
Learning multi-agent communication through structured attentive reasoning. M Rangwala, R Williams, Proc. NeurIPS. NeurIPSRangwala, M. and Williams, R. Learning multi-agent com- munication through structured attentive reasoning. In Proc. NeurIPS, 2020.
QMIX: monotonic value function factorisation for deep multi-agent reinforcement learning. T Rashid, M Samvelyan, C S De Witt, G Farquhar, J Foerster, S Whiteson, Proc. ICML. ICMLRashid, T., Samvelyan, M., de Witt, C. S., Farquhar, G., Foerster, J., and Whiteson, S. QMIX: monotonic value function factorisation for deep multi-agent reinforce- ment learning. In Proc. ICML, 2018.
Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. T Rashid, G Farquhar, B Peng, S Whiteson, Proc. NeurIPS. NeurIPSRashid, T., Farquhar, G., Peng, B., and Whiteson, S. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learn- ing. In Proc. NeurIPS, 2020a.
Optimistic exploration even with a pessimistic initialisation. T Rashid, B Peng, W Böhmer, S Whiteson, Proc. ICLR. ICLRRashid, T., Peng, B., Böhmer, W., and Whiteson, S. Opti- mistic exploration even with a pessimistic initialisation. In Proc. ICLR, 2020b.
Brafman, R. I. and Tennenholtz, M. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. In JMLR, 2002.
The starcraft multi-agent challenge. M Samvelyan, T Rashid, C S De Witt, G Farquhar, N Nardelli, T G J Rudner, C.-M Hung, P H S Torr, J Foerster, S Whiteson, arxiv.Samvelyan, M., Rashid, T., de Witt, C. S., Farquhar, G., Nardelli, N., Rudner, T. G. J., Hung, C.-M., Torr, P. H. S., Foerster, J., and Whiteson, S. The starcraft multi-agent challenge. In arxiv., 2019.
Model-based policy optimization with unsupervised model adaptation. J Shen, H Zhao, W Zhang, Yu , Y , Proc. NeurIPS. NeurIPSShen, J., Zhao, H., Zhang, W., and Yu, Y. Model-based policy optimization with unsupervised model adaptation. In Proc. NeurIPS, 2020.
Incentivizing exploration in reinforcement learning with deep predictive models. B C Stadie, S Levine, Abbeel , P , Proc. ICLR. ICLRStadie, B. C., Levine, S., and Abbeel, P. Incentivizing ex- ploration in reinforcement learning with deep predictive models. In Proc. ICLR, 2016.
Valuedecomposition networks for cooperative multi-agent learning based on team reward. P Sunehag, G Lever, A Gruslys, W M Czarnecki, V Zambaldi, M Jaderberg, M Lanctot, N Sonnerat, J Z Leibo, K Tuyls, T Graepel, Proc. AAMAS. AAMASSunehag, P., Lever, G., Gruslys, A., Czarnecki, W. M., Zambaldi, V., Jaderberg, M., Lanctot, M., Sonnerat, N., Leibo, J. Z., Tuyls, K., and Graepel, T. Value- decomposition networks for cooperative multi-agent learning based on team reward. In Proc. AAMAS, 2018.
Reinforcement Learning: An Introduction. R S Sutton, A G Barto, The MIT PressSutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, 2018.
Scaled autonomy: Enabling human operators to control robot fleets. G Swamy, S Reddy, S Levine, A D Dragan, Proc. ICRA. ICRASwamy, G., Reddy, S., Levine, S., and Dragan, A. D. Scaled autonomy: Enabling human operators to control robot fleets. In Proc. ICRA, 2020.
On bonus based exploration methods in the arcade learning environment. A A Taiga, W Fedus, M C Machado, A Courville, M G Bellemare, Proc. ICLR. ICLRTaiga, A. A., Fedus, W., Machado, M. C., Courville, A., and Bellemare, M. G. On bonus based exploration meth- ods in the arcade learning environment. In Proc. ICLR, 2020.
Multiagent cooperation and competition with deep reinforcement learning. A Tampuu, T Matiisen, D Kodelja, I Kuzovkin, K Korjus, J Aru, J Aru, Vicente , R , In arxiv.Tampuu, A., Matiisen, T., Kodelja, D., Kuzovkin, I., Kor- jus, K., Aru, J., Aru, J., and Vicente, R. Multiagent coop- eration and competition with deep reinforcement learn- ing. In arxiv., 2015.
Multiagent reinforcement learning independent vs cooperative agents. M Tan, Proc. ICML. ICMLTan, M. Multiagent reinforcement learning independent vs cooperative agents. In Proc. ICML, 1993.
Exploration: A study of count-based exploration for deep reinforcement learning. H Tang, R Houthooft, D Foote, A Stooke, X Chen, Y Duan, J Schulman, F D Turck, Abbeel , P , Proc. NeurIPS. NeurIPSTang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., Turck, F. D., and Abbeel, P. Ex- ploration: A study of count-based exploration for deep reinforcement learning. In Proc. NeurIPS, 2017.
Influence-based multi-agent exploration. T Wang, J Wang, Y Wu, C Zhang, Proc. ICLR. ICLRWang, T., Wang, J., Wu, Y., and Zhang, C. Influence-based multi-agent exploration. In Proc. ICLR, 2020.
Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. Y Wu, E Mansimov, S Liao, R Grosse, J Ba, Proc. NeurIPS. NeurIPSWu, Y., Mansimov, E., Liao, S., Grosse, R., and Ba, J. Scal- able trust-region method for deep reinforcement learn- ing using Kronecker-factored approximation. In Proc. NeurIPS, 2017.
Learning to explore via meta-policy gradient. T Xu, Q Liu, L Zhao, J Peng, Proc. ICML. ICMLXu, T., Liu, Q., Zhao, L., and Peng, J. Learning to explore via meta-policy gradient. In Proc. ICML, 2018.
Mean field multi-agent reinforcement learning. Y Yang, R Luo, M Li, M Zhou, W Zhang, Wang , J , Proc. ICML. ICMLYang, Y., Luo, R., Li, M., Zhou, M., Zhang, W., and Wang, J. Mean field multi-agent reinforcement learning. In Proc. ICML, 2018.
Succinct and robust multi-agent communication with temporal message control. S Q Zhang, Q Zhang, Lin , J , Proc. NeurIPS. NeurIPSZhang, S. Q., Zhang, Q., and Lin, J. Succinct and robust multi-agent communication with temporal message con- trol. In Proc. NeurIPS, 2020.
Learning implicit credit assignment for cooperative multi-agent reinforcement learning. M Zhou, Z Liu, P Sui, Y Li, Chung , Y Y , Proc. NeurIPS. NeurIPSZhou, M., Liu, Z., Sui, P., Li, Y., and Chung, Y. Y. Learning implicit credit assignment for cooperative multi-agent reinforcement learning. In Proc. NeurIPS, 2020.
| [
"https://github.com/oxwhirl/smac"
] |
[
"A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians",
"A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians"
] | [
"Albert S Berahas ",
"Frank E Curtis ",
"Michael J O'neill ",
"Daniel P Robinson ",
"Albert S Berahas \nDepartment of Industrial and Operations Engineering\nUniversity of Michigan\n\n",
"Frank E Curtis \nDepartment of Industrial and Systems Engineering\nLehigh University\n\n",
"Michael J O'neill \nDepartment of Industrial and Systems Engineering\nLehigh University\n\n",
"Daniel P Robinson \nDepartment of Industrial and Systems Engineering\nLehigh University\n\n",
"\nDepartment of Industrial and Operations Engineering\nDepartment of Industrial and Systems Engineering\nUniversity of Michigan\nLehigh University\n\n"
] | [
"Department of Industrial and Operations Engineering\nUniversity of Michigan\n",
"Department of Industrial and Systems Engineering\nLehigh University\n",
"Department of Industrial and Systems Engineering\nLehigh University\n",
"Department of Industrial and Systems Engineering\nLehigh University\n",
"Department of Industrial and Operations Engineering\nDepartment of Industrial and Systems Engineering\nUniversity of Michigan\nLehigh University\n"
] | [] | A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method is unique from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even in the setting in which the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives. * | null | [
"https://export.arxiv.org/pdf/2106.13015v2.pdf"
] | 235,624,145 | 2106.13015 | ae1b92291bc040828371378a5f7610000771be0f |
A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians
16 Mar 2023
Albert S. Berahas (Department of Industrial and Operations Engineering, University of Michigan)
Frank E. Curtis (Department of Industrial and Systems Engineering, Lehigh University)
Michael J. O'Neill (Department of Industrial and Systems Engineering, Lehigh University)
Daniel P. Robinson (Department of Industrial and Systems Engineering, Lehigh University)
Original Publication: June 24, 2021. Last Revised: March 17, 2023. Industrial and Systems Engineering
A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method is unique from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even in the setting in which the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.
Introduction
We propose an algorithm for solving equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. Formulations of this type arise throughout science and engineering in important applications such as data-fitting problems, where one aims to determine a model that minimizes the discrepancy between values yielded by the model and corresponding known outputs.
Our algorithm is designed for solving such problems when the decision variables are restricted to the solution set of a (potentially nonlinear) set of equations. We are particularly interested in such problems when the constraint Jacobian-i.e., the matrix of first-order derivatives of the constraint function-may be rank deficient in some or even all iterations during the run of an algorithm, since this can be an unavoidable occurrence in practice that would ruin the convergence properties of any algorithm that is not specifically designed for this setting. The structure of our algorithm follows a step decomposition strategy that is common in the constrained optimization literature; in particular, our algorithm has roots in the Byrd-Omojokun approach [18]. However, our algorithm is unique from previously proposed algorithms in that it offers convergence guarantees while allowing for the use of stochastic objective gradient information in each iteration. We prove that our algorithm converges to stationarity (in expectation), both in nice cases when the constraints are feasible and convergence to the feasible region can be guaranteed (in expectation), and in more challenging cases, such as when the constraints are infeasible and one can only guarantee convergence to an infeasible stationary point. To the best of our knowledge, there exist no other algorithms in the literature that have been designed specifically for this setting, namely, stochastic optimization with equality constraints that may exhibit rank deficiency.
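To make the normal/tangential geometry concrete, the following numpy sketch (our illustration of the decomposition, not the algorithm's actual subproblems) splits a step at an iterate whose constraint Jacobian is rank deficient; pseudoinverses keep both pieces well defined in that case, and all data below are made up:

import numpy as np

J = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1: the rows are dependent
c = np.array([1.0, 2.0])                 # constraint value at the iterate
g = np.array([0.5, -1.0, 0.25])          # a (stochastic) objective gradient

v = -np.linalg.pinv(J) @ c               # normal step, lies in Range(J^T)
P = np.eye(3) - np.linalg.pinv(J) @ J    # orthogonal projector onto Null(J)
u = -P @ g                               # (unscaled) tangential step
d = v + u                                # full search direction

print(np.linalg.norm(J @ u))             # ~0: tangential step is in Null(J)
print(np.linalg.norm(c + J @ v), "<=", np.linalg.norm(c))  # feasibility gain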
The step decomposition strategy employed by our algorithm makes it similar to the method proposed in [5], although that method is designed for deterministic optimization only and employs a line search, whereas our approach is designed for stochastic optimization and requires no line searches. Our algorithm builds upon the method for solving equality constrained stochastic optimization problems proposed in [1]. The method proposed in that article assumes that the singular values of the constraint Jacobians are bounded below by a positive constant throughout the optimization process, which implies that the linear independence constraint qualification (LICQ) holds at all iterates. By contrast, the algorithm proposed in this paper makes no such assumption. Handling the potential lack of full-rank Jacobians necessitates a different algorithmic structure and a distinct approach to proving convergence guarantees; e.g., one needs to account for the fact that primal-dual stationarity conditions may not be necessary and/or the constraints may be infeasible.
Similar to the context in [1], our algorithm is intended for the highly stochastic regime in which the stochastic gradient estimates might only be unbiased estimators of the gradients of the objective at the algorithm iterates that satisfy a loose variance condition. Indeed, we show that in nice cases-in particular, when the adaptive merit parameter employed in our algorithm eventually settles at a value that is sufficiently small-our algorithm has convergence properties in expectation that match those of the algorithm in [1]. These results parallel those for the stochastic gradient method in the context of unconstrained optimization [2,22,23]. However, for cases not considered in [1] when the merit parameter sequence may vanish, we require the stronger assumption that the difference between each stochastic gradient estimate and the corresponding true gradient of the objective eventually is bounded deterministically in each iteration. This is appropriate in many ways since in such a scenario the algorithm aims to transition from solving a stochastic optimization problem to the deterministic one of minimizing constraint violation. Finally, we show under reasonable assumptions the total probability is zero that the merit parameter settles at too large of a value.
Our algorithm has some similarities, but many differences with another recently proposed algorithm, namely, that in [15]. That algorithm is also designed for equality constrained stochastic optimization, but: (i) like for the algorithm in [1], for the algorithm in [15] the LICQ is assumed to hold at all algorithm iterates, and (ii) the algorithm in [15] employs an adaptive line search that may require the algorithm to compute relatively accurate stochastic gradient estimates throughout the optimization process. Our algorithm, on the other hand, does not require the LICQ to hold and is meant for a more stochastic regime, meaning that it does not require a procedure for refining the stochastic gradient estimate within an iteration. Consequently, the convergence guarantees that can be proved for our method, and the expectations that one should have about the practical performance of our method, are quite distinct from those for the algorithm in [15].
Besides the methods in [1,15], there have been few proposed algorithms that might be used to solve problems of the form (1). Some methods have been proposed that employ stochastic (proximal) gradient strategies applied to minimizing penalty functions derived from constrained problems [4,12,16], but these do not offer convergence guarantees to stationarity with respect to the original constrained problem. On the other hand, stochastic Frank-Wolfe methods have been proposed [11,13,14,20,21,26], but these can only be applied in the context of convex feasible regions. Our algorithm, by contrast, is designed for nonlinear equality constrained stochastic optimization.
Notation
The set of real numbers is denoted as R, the set of real numbers greater than (respectively, greater than or equal to) r ∈ R is denoted as R_{>r} (respectively, R_{≥r}), the set of n-dimensional real vectors is denoted as R^n, the set of m-by-n-dimensional real matrices is denoted as R^{m×n}, and the set of n-by-n-dimensional real symmetric matrices is denoted as S^n. Given J ∈ R^{m×n}, the range space of J^T is denoted as Range(J^T) and the null space of J is denoted as Null(J). (By the Fundamental Theorem of Linear Algebra, for any J ∈ R^{m×n}, the spaces Range(J^T) and Null(J) are orthogonal and Range(J^T) + Null(J) = R^n, where in this instance '+' denotes the Minkowski sum operator.) The set of nonnegative integers is denoted as N := {0, 1, 2, . . . }. For any m ∈ N, let [m] denote the set of integers {0, 1, . . . , m}. Correspondingly, to represent a set of vectors {v_0, . . . , v_k}, we define v_{[k]} := {v_0, . . . , v_k}.
The algorithm that we propose is iterative in the sense that, given a starting point x_0 ∈ R^n, it generates a sequence of iterates {x_k} with x_k ∈ R^n for all k ∈ N. For simplicity of notation, the iteration number is appended as a subscript to other quantities corresponding to each iteration; e.g., with a function c : R^n → R, its value at x_k is denoted as c_k := c(x_k) for all k ∈ N. Given J_k ∈ R^{m×n}, we use Z_k to denote a matrix whose columns form an orthonormal basis for Null(J_k).
Organization
Our problem of interest and basic assumptions about the problem and the behavior of our algorithm are presented in Section 2. Our algorithm is motivated and presented in Section 3. Convergence guarantees for our algorithm are presented in Section 4. The results of numerical experiments are provided in Section 5 and concluding remarks are provided in Section 6.
Problem Statement
Our algorithm is designed for solving (potentially nonlinear and/or nonconvex) equality constrained optimization problems of the form
$$\min_{x \in \mathbb{R}^n} \ f(x) \ \ \text{s.t.} \ \ c(x) = 0, \quad \text{with} \quad f(x) = \mathbb{E}[F(x, \iota)], \tag{1}$$
where the functions f : R n → R and c : R n → R m are smooth, ι is a random variable with associated probability space (Ω, F, P ), F : R n × Ω → R, and E[·] denotes expectation taken with respect to P . We assume that values and first-order derivatives of the constraint functions can be computed, but that the objective and its associated first-order derivatives are intractable to compute, and one must instead employ stochastic estimates. (We formalize our assumptions about such stochastic estimates starting with Assumption 2 on page 6.) Formally, we make the following assumption with respect to (1) and our proposed algorithm, which generates a sequence of iterates {x k }.
Assumption 1. Let $\mathcal{X} \subseteq \mathbb{R}^n$ be an open convex set containing the sequence $\{x_k\}$ generated by any run of the algorithm. The objective function $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and bounded over $\mathcal{X}$ and its gradient function $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$ is Lipschitz continuous with constant $L \in \mathbb{R}_{>0}$ (with respect to $\|\cdot\|_2$) and bounded over $\mathcal{X}$. The constraint function $c : \mathbb{R}^n \to \mathbb{R}^m$ (with $m \le n$) is continuously differentiable and bounded over $\mathcal{X}$ and its Jacobian function $J := \nabla c^T : \mathbb{R}^n \to \mathbb{R}^{m \times n}$ is Lipschitz continuous with constant $\Gamma \in \mathbb{R}_{>0}$ (with respect to $\|\cdot\|_2$) and bounded over $\mathcal{X}$.
The aspects of Assumption 1 that pertain to the objective function $f$ and constraint function $c$ are typical for the equality constrained optimization literature. Notice that we do not assume that the iterate sequence itself is bounded. Under Assumption 1, it follows that there exist positive real numbers $(f_{\inf}, f_{\sup}, \kappa_{\nabla f}, \kappa_c, \kappa_J) \in \mathbb{R}_{>0} \times \mathbb{R}_{>0} \times \mathbb{R}_{>0} \times \mathbb{R}_{>0} \times \mathbb{R}_{>0}$ such that
$$f_{\inf} \le f_k \le f_{\sup}, \quad \|\nabla f(x_k)\|_2 \le \kappa_{\nabla f}, \quad \|c_k\|_2 \le \kappa_c, \quad \text{and} \quad \|J_k\|_2 \le \kappa_J \quad \text{for all } k \in \mathbb{N}. \tag{2}$$
Given that our proposed algorithm is stochastic, it is admittedly not ideal to have to assume that the objective value, objective gradient, constraint value, and constraint Jacobian are bounded over the set X containing the iterates. This is a common assumption in the deterministic optimization literature, where it may be justified in the context of an algorithm that is guaranteed to make progress in each iteration, say with respect to a merit function. However, for a stochastic algorithm such as ours, such a claim may be seen as less than ideal since a stochastic algorithm may only be guaranteed to make progress in expectation in each iteration, meaning that it is possible for the iterates to drift far from desirable regions of the search space during the optimization process.
Our justification for Assumption 1 is two-fold. First, any reader who is familiar with analyses of stochastic algorithms for unconstrained optimization-in particular, those analyses that do not require that the objective gradient is bounded over a set containing the iterates-should appreciate that additional challenges present themselves in the context of constrained optimization. For example, whereas in unconstrained optimization one naturally considers the objective f as a measure of progress, in (nonconvex) constrained optimization one needs to employ a merit function for measuring progress, and for practical purposes such a function typically needs to involve a parameter (or parameters) that must be adjusted dynamically by the algorithm. One finds that it is the adaptivity of our merit parameter (see (10) later on) that necessitates the aforementioned boundedness assumptions that we use in our analysis. (Certain exact merit functions, such as that employed in [15], might not lead to the same issues as the merit function that we employ. However, we remark that the merit function employed in [15] is not a viable option unless the LICQ holds at all algorithm iterates.) Our second justification is that we know of no other algorithm that offers convergence guarantees that are as comprehensive as ours (in terms of handling feasible, degenerate, and infeasible settings) under an assumption that is at least as loose as Assumption 1.
Let the Lagrangian $\ell : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ corresponding to (1) be given by $\ell(x, y) = f(x) + c(x)^T y$, where $y \in \mathbb{R}^m$ represents a vector of Lagrange multipliers. Under a constraint qualification (such as the LICQ), necessary conditions for first-order stationarity with respect to (1) are given by
$$0 = \begin{bmatrix} \nabla_x \ell(x,y) \\ \nabla_y \ell(x,y) \end{bmatrix} = \begin{bmatrix} \nabla f(x) + J(x)^T y \\ c(x) \end{bmatrix}; \tag{3}$$
see, e.g., [17]. However, under only Assumption 1, it is possible for (1) to be degenerate (in which case (3) might not be necessary at a solution of (1)) or (1) may be infeasible. In the latter case, one aims to design an algorithm that transitions automatically from seeking stationarity with respect to (1) to seeking stationarity with respect to a measure of infeasibility of the constraints. For our purposes, we employ the infeasibility measure $\varphi : \mathbb{R}^n \to \mathbb{R}$ defined by $\varphi(x) = \|c(x)\|_2$. A point $x \in \mathbb{R}^n$ is stationary with respect to $\varphi$ if and only if either $c(x) = 0$ or both $c(x) \neq 0$ and
$$0 = \nabla \varphi(x) = \frac{J(x)^T c(x)}{\|c(x)\|_2}. \tag{4}$$
Algorithm Description
Our algorithm can be characterized as a sequential quadratic optimization (commonly known as SQP) method that employs a step decomposition strategy and chooses step sizes that attempt to ensure sufficient decrease in a merit function in each iteration. We present our complete algorithm in this section, which builds upon this basic characterization to involve various unique aspects that are designed for handling the combination of (i) stochastic gradient estimates and (ii) potential rank deficiency of the constraint Jacobians.
In each iteration k ∈ N, the algorithm first computes the normal component of the search direction toward reducing linearized constraint violation. Conditioned on the event that x k is reached as the kth iterate, the problem defining this computation, namely,
$$\min_{v \in \mathbb{R}^n} \ \tfrac{1}{2}\|c_k + J_k v\|_2^2 \ \ \text{s.t.} \ \ \|v\|_2 \le \omega \|J_k^T c_k\|_2, \tag{5}$$
where ω ∈ R >0 is a user-defined parameter, is deterministic since the constraint function value c k and constraint Jacobian J k are available. If J k has full row rank, ω is sufficiently large, and (5) is solved to optimality, then one obtains v k such that c k + J k v k = 0. However, an exact solution of (5) may be expensive to obtain, and-as has been shown for various step decomposition strategies, such as the Byrd-Omojokun approach [18]-the consideration of (5) is viable when J k might not have full row rank. Fortunately, our algorithm merely requires that the normal component v k ∈ R n is feasible for problem (5), lies in Range(J T k ), and satisfies the Cauchy decrease condition
$$\|c_k\|_2 - \|c_k + J_k v_k\|_2 \ge \epsilon_v \left( \|c_k\|_2 - \|c_k + \alpha_k^C J_k v_k^C\|_2 \right) \tag{6}$$
for some user-defined parameter $\epsilon_v \in (0,1]$. Here, $v_k^C := -J_k^T c_k$ is the steepest descent direction for the objective of problem (5) at $v = 0$, and the step size $\alpha_k^C \in \mathbb{R}$ is the unique solution to the problem to minimize $\tfrac{1}{2}\|c_k + \alpha^C J_k v_k^C\|_2^2$ over $\alpha^C \in \mathbb{R}_{\ge 0}$ subject to $\alpha^C \le \omega$ (see, e.g., [17, Equations (4.11)-(4.12)]). Since this allows one to choose $v_k \leftarrow \alpha_k^C v_k^C$, the normal component can be computed at low computational cost. For a more accurate solution to (5), one can employ a so-called matrix-free iterative algorithm such as the linear conjugate gradient (CG) method with Steihaug stopping conditions [24] or GLTR [7], each of which is guaranteed to yield a solution satisfying the aforementioned conditions no matter how many iterations (greater than or equal to one) are performed.
After the computation of the normal component, our algorithm computes the tangential component of the search direction by minimizing a model of the objective function subject to remaining in the null space of the constraint Jacobian. This ensures that the progress toward linearized feasibility offered by the normal component is not undone by the tangential component when the components are added together. The problem defining the computation of the tangential component is
$$\min_{u \in \mathbb{R}^n} \ (g_k + H_k v_k)^T u + \tfrac{1}{2} u^T H_k u \ \ \text{s.t.} \ \ J_k u = 0, \tag{7}$$
where g k ∈ R n is a stochastic gradient estimate at least satisfying Assumption 2 below and the real symmetric matrix H k ∈ S n satisfies Assumption 3 below. (Specific additional requirements for {g k } are stated separately for each case in our convergence analysis.)
Assumption 2. For all $k \in \mathbb{N}$, the stochastic gradient estimate $g_k \in \mathbb{R}^n$ is an unbiased estimator of $\nabla f(x_k)$, i.e., $\mathbb{E}_k[g_k] = \nabla f(x_k)$, where $\mathbb{E}_k[\cdot]$ denotes expectation conditioned on the event that the algorithm has reached $x_k$ as the $k$th iterate. In addition, there exists a positive real number $M \in \mathbb{R}_{>0}$ such that, for all $k \in \mathbb{N}$, one has $\mathbb{E}_k[\|g_k - \nabla f(x_k)\|_2^2] \le M$.
Assumption 3. The matrix $H_k \in \mathbb{S}^n$ is chosen independently from $g_k$ for all $k \in \mathbb{N}$, the sequence $\{H_k\}$ is bounded in norm by $\kappa_H \in \mathbb{R}_{>0}$, and there exists $\zeta \in \mathbb{R}_{>0}$ such that, for all $k \in \mathbb{N}$, one has $u^T H_k u \ge \zeta \|u\|_2^2$ for all $u \in \text{Null}(J_k)$.
In our context, one can generate $g_k$ in iteration $k \in \mathbb{N}$ by independently drawing $b_k$ realizations of the random variable $\iota$, denoting the mini-batch as $\mathcal{B}_k := \{\iota_{k,1}, \dots, \iota_{k,b_k}\}$, and setting
$$g_k \leftarrow \frac{1}{b_k} \sum_{\iota \in \mathcal{B}_k} \nabla f(x_k, \iota). \tag{8}$$
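As a small illustration, here is a sketch of the estimator (8); the per-sample gradient oracle `grad_f` and the sampler `sample_iota` are assumed names introduced for illustration only.

```python
import numpy as np

def minibatch_gradient(x, grad_f, sample_iota, b_k, rng=None):
    """Average b_k independently drawn per-sample gradients, as in (8)."""
    rng = rng or np.random.default_rng()
    batch = [sample_iota(rng) for _ in range(b_k)]       # mini-batch B_k
    return sum(grad_f(x, iota) for iota in batch) / b_k
```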
It is a modest assumption about the function f and the sample sizes {b k } to say that {g k } generated in this manner satisfies Assumption 2. As for Assumption 3, the assumptions that the elements of {H k } are bounded in norm and that H k is sufficiently positive definite in Null(J k ) for all k ∈ N are typical for the constrained optimization literature. In practice, one may choose H k to be (an approximation of) the Hessian of the Lagrangian at (x k , y k ) for some y k , if such a matrix can be computed with reasonable effort in a manner that guarantees that Assumption 3 holds. A simpler alternative is that H k can be set to some positive definite diagonal matrix (independent of g k ).
Under Assumption 3, the tangential component u k solving (7) can be obtained by solving
$$\begin{bmatrix} H_k & J_k^T \\ J_k & 0 \end{bmatrix} \begin{bmatrix} u_k \\ y_k \end{bmatrix} = - \begin{bmatrix} g_k + H_k v_k \\ 0 \end{bmatrix}. \tag{9}$$
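Since (9) is symmetric and, as explained next, consistent even when $J_k$ is rank deficient, a Krylov subspace method is a natural solver choice. The following is a minimal sketch; the use of SciPy's MINRES here is our assumption for illustration, not a prescription of the text.

```python
import numpy as np
from scipy.sparse.linalg import minres

def tangential_step(H, J, g, v):
    """Solve the KKT system (9) for (u_k, y_k) given H_k, J_k, g_k, and v_k."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])   # symmetric (possibly singular) KKT matrix
    rhs = -np.concatenate([g + H @ v, np.zeros(m)])
    z, info = minres(K, rhs)                          # singularity is benign: system is consistent
    u, y = z[:n], z[n:]                               # u_k is unique; y_k may not be
    return u, y
```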
Even if the constraint Jacobian $J_k$ does not have full row rank, the linear system (9) is consistent since it represents sufficient optimality conditions (under Assumption 3) of the linearly constrained quadratic optimization problem in (7). (Factorization methods that are popular in the context of solving symmetric indefinite linear systems of equations, such as the Bunch-Kaufman factorization, can fail when the matrix in (9) is singular. However, Krylov subspace methods provide a viable alternative, since for such methods singularity is benign as long as the system is known to be consistent, as is the case for (9).) Under Assumption 3, the solution component $u_k$ is unique, although the component $y_k$ might not be unique (if $J_k$ does not have full row rank).
Upon computation of the search direction, our algorithm proceeds to determining a positive step size. For this purpose, we employ the merit function $\phi : \mathbb{R}^n \times \mathbb{R}_{\ge 0} \to \mathbb{R}$ defined by
$$\phi(x, \tau) = \tau f(x) + \|c(x)\|_2, \tag{10}$$
where $\tau$ is a merit parameter whose value is set dynamically. The function $\phi$ is a type of exact penalty function that is common in the literature [9,10,19]. For setting the merit parameter value in each iteration, we employ a local model of $\phi$ denoted as $l : \mathbb{R}^n \times \mathbb{R}_{\ge 0} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ and defined by
$$l(x, \tau, g, d) = \tau (f(x) + g^T d) + \|c(x) + J(x) d\|_2.$$
Given the search direction vectors $v_k$, $u_k$, and $d_k \leftarrow v_k + u_k$, the algorithm sets
$$\tau_k^{trial} \leftarrow \begin{cases} \infty & \text{if } g_k^T d_k + u_k^T H_k u_k \le 0 \\ \dfrac{(1-\sigma)(\|c_k\|_2 - \|c_k + J_k d_k\|_2)}{g_k^T d_k + u_k^T H_k u_k} & \text{otherwise}, \end{cases} \tag{11}$$
where $\sigma \in (0,1)$ is user-defined. The merit parameter value is then set as
$$\tau_k \leftarrow \begin{cases} \tau_{k-1} & \text{if } \tau_{k-1} \le \tau_k^{trial} \\ \min\{(1-\epsilon_\tau)\tau_{k-1}, \tau_k^{trial}\} & \text{otherwise}, \end{cases} \tag{12}$$
where $\epsilon_\tau \in (0,1)$ is user-defined. This rule ensures that $\{\tau_k\}$ is monotonically nonincreasing, $\tau_k \le \tau_k^{trial}$ for all $k \in \mathbb{N}$, and, with the reduction function $\Delta l : \mathbb{R}^n \times \mathbb{R}_{\ge 0} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ defined by
$$\Delta l(x, \tau, g, d) = l(x, \tau, g, 0) - l(x, \tau, g, d) = -\tau g^T d + \|c(x)\|_2 - \|c(x) + J(x) d\|_2 \tag{13}$$
and Assumption 3, it ensures the following fact that is critical for our analysis:
$$\Delta l(x_k, \tau_k, g_k, d_k) \ge \tau_k u_k^T H_k u_k + \sigma (\|c_k\|_2 - \|c_k + J_k v_k\|_2). \tag{14}$$
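For concreteness, a minimal sketch of the update (11)-(12) and the model reduction (13) follows; `eps_tau` stands in for the user-defined constant $\epsilon_\tau$, `Jd` denotes the product $J_k d_k$, and all other names mirror the quantities above.

```python
import numpy as np

def update_merit_parameter(tau_prev, g, d, u, H, c, Jd, sigma, eps_tau):
    """Set tau_k from tau_{k-1} via the trial value (11) and the rule (12)."""
    curvature = g @ d + u @ (H @ u)
    if curvature <= 0.0:
        tau_trial = np.inf                                  # first case of (11)
    else:
        reduction = np.linalg.norm(c) - np.linalg.norm(c + Jd)
        tau_trial = (1.0 - sigma) * reduction / curvature   # second case of (11)
    if tau_prev <= tau_trial:
        return tau_prev                                     # (12): keep the merit parameter
    return min((1.0 - eps_tau) * tau_prev, tau_trial)       # (12): decrease it

def model_reduction(tau, g, d, c, Jd):
    """Delta l(x, tau, g, d) as defined in (13)."""
    return -tau * (g @ d) + np.linalg.norm(c) - np.linalg.norm(c + Jd)
```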
Similar to the algorithm in [1], our algorithm also adaptively sets other parameters that are used for determining an allowable range for the step size in each iteration. (There exist constants that, if known in advance, could be used by the algorithm for determining the allowable range for each step size; see Lemma 2 in our analysis later on. However, to avoid the need to know these problem-dependent constants in advance, our algorithm generates these parameter sequences adaptively, which our analysis shows is sufficient to ensure convergence guarantees.) For distinguishing between search directions that are dominated by the tangential component and others that are dominated by the normal component, the algorithm adaptively defines sequences $\{\chi_k\}$ and $\{\zeta_k\}$. (These sequences were not present in the algorithm in [1]; they are newly introduced for the needs of our proposed algorithm.) In particular, in iteration $k \in \mathbb{N}$, the algorithm employs the conditions
$$\|u_k\|_2^2 \ge \chi_{k-1} \|v_k\|_2^2 \quad \text{and} \quad \tfrac{1}{2} d_k^T H_k d_k < \tfrac{1}{4} \zeta_{k-1} \|u_k\|_2^2 \tag{15}$$
in order to set
$$(\chi_k, \zeta_k) \leftarrow \begin{cases} ((1+\epsilon_\chi)\chi_{k-1}, (1-\epsilon_\zeta)\zeta_{k-1}) & \text{if (15) holds} \\ (\chi_{k-1}, \zeta_{k-1}) & \text{otherwise}, \end{cases} \tag{16}$$
where $\epsilon_\chi \in \mathbb{R}_{>0}$ and $\epsilon_\zeta \in (0,1)$ are user-defined. It follows from (16) that $\{\chi_k\}$ is monotonically nondecreasing and $\{\zeta_k\}$ is monotonically nonincreasing. It will be shown in our analysis that $\{\chi_k\}$ is bounded above by a positive real number and $\{\zeta_k\}$ is bounded below by a positive real number, where these bounds are uniform over all runs of the algorithm; i.e., these sequences are bounded deterministically. This means that despite the stochasticity of the algorithm iterates, these sequences have $(\chi_k, \zeta_k) = (\chi_{k-1}, \zeta_{k-1})$ for all sufficiently large $k \in \mathbb{N}$ in any run of the algorithm. Whether $\|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2$ (i.e., the search direction is tangentially dominated) or $\|u_k\|_2^2 < \chi_k \|v_k\|_2^2$ (i.e., the search direction is normally dominated) influences two aspects of iteration $k \in \mathbb{N}$. First, it influences a value that the algorithm employs to determine the range of allowable step sizes, a value that represents a lower bound for the ratio between the reduction in the model $l$ of the merit function and a quantity involving the squared norm of the search direction. (A similar, but slightly different sequence was employed for the algorithm in [1].) In iteration $k \in \mathbb{N}$ of our algorithm, the estimated lower bound is set adaptively by first setting
$$\xi_k^{trial} \leftarrow \begin{cases} \dfrac{\Delta l(x_k, \tau_k, g_k, d_k)}{\tau_k \|d_k\|_2^2} & \text{if } \|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2 \\[1ex] \dfrac{\Delta l(x_k, \tau_k, g_k, d_k)}{\|d_k\|_2^2} & \text{otherwise}, \end{cases} \tag{17}$$
then setting
$$\xi_k \leftarrow \begin{cases} \xi_{k-1} & \text{if } \xi_{k-1} \le \xi_k^{trial} \\ \min\{(1-\epsilon_\xi)\xi_{k-1}, \xi_k^{trial}\} & \text{otherwise}, \end{cases} \tag{18}$$
for some user-defined $\epsilon_\xi \in (0,1)$. The procedure in (18) ensures that $\{\xi_k\}$ is monotonically nonincreasing and $\xi_k \le \xi_k^{trial}$ for all $k \in \mathbb{N}$. It will be shown in our analysis that $\{\xi_k\}$ is bounded away from zero deterministically, even though in each iteration it depends on stochastic quantities. (As for $\{\chi_k\}$ and $\{\zeta_k\}$, there exists a constant that, if known in advance, could be used in place of $\xi_k$ for all $k \in \mathbb{N}$ (see Lemma 3), but for ease of employment our algorithm generates $\{\xi_k\}$ instead.) To achieve this property, it is critical that the denominator in (17) is different depending on whether the search direction is tangentially or normally dominated; see Lemma 3 later on for details. The second aspect of the algorithm that is affected by whether a search direction is tangentially or normally dominated is a rule for setting the step size; this will be seen in (22) later on. A sketch of the adaptive updates (15)-(18) appears below.
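Here is a minimal sketch of those updates; `eps_chi`, `eps_zeta`, and `eps_xi` stand in for $\epsilon_\chi$, $\epsilon_\zeta$, and $\epsilon_\xi$, and `delta_l` is the model reduction (13).

```python
def update_chi_zeta(chi, zeta, u, v, d, H, eps_chi, eps_zeta):
    """Apply (15)-(16): inflate chi and deflate zeta when both tests hold."""
    tangentially_dominated = (u @ u) >= chi * (v @ v)
    low_curvature = 0.5 * d @ (H @ d) < 0.25 * zeta * (u @ u)
    if tangentially_dominated and low_curvature:               # condition (15)
        return (1.0 + eps_chi) * chi, (1.0 - eps_zeta) * zeta  # first case of (16)
    return chi, zeta                                           # second case of (16)

def update_xi(xi, delta_l, tau, u, v, d, chi, eps_xi):
    """Apply (17)-(18) with the denominator chosen by search-direction type."""
    if (u @ u) >= chi * (v @ v):
        xi_trial = delta_l / (tau * (d @ d))       # tangentially dominated case of (17)
    else:
        xi_trial = delta_l / (d @ d)               # normally dominated case of (17)
    if xi <= xi_trial:
        return xi                                  # (18): keep xi
    return min((1.0 - eps_xi) * xi, xi_trial)      # (18): decrease xi
```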
We are now prepared to present the mechanism by which a positive step size is selected in each iteration k ∈ N of our algorithm. We present a strategy that allows for our convergence analysis in Section 4 to be as straightforward as possible. In Section 5, we remark on extensions of this strategy that are included in our software implementation for which our convergence guarantees also hold (as long as some additional cases are considered in one key lemma).
We motivate our strategy by considering an upper bound for the change in the merit function corresponding to the computed search direction, namely, d k ← v k + u k . In particular, under Assumption 1, in iteration k ∈ N, one has for any nonnegative step size α ∈ R ≥0 that
$$\begin{aligned}
\phi(x_k + \alpha d_k, \tau_k) - \phi(x_k, \tau_k) &= \tau_k f(x_k + \alpha d_k) - \tau_k f(x_k) + \|c(x_k + \alpha d_k)\|_2 - \|c_k\|_2 \\
&\le \alpha \tau_k \nabla f(x_k)^T d_k + \|c_k + \alpha J_k d_k\|_2 - \|c_k\|_2 + \tfrac{1}{2}(\tau_k L + \Gamma)\alpha^2 \|d_k\|_2^2 \\
&\le \alpha \tau_k \nabla f(x_k)^T d_k + |1-\alpha| \|c_k\|_2 - \|c_k\|_2 + \alpha \|c_k + J_k d_k\|_2 + \tfrac{1}{2}(\tau_k L + \Gamma)\alpha^2 \|d_k\|_2^2.
\end{aligned} \tag{19}$$
This upper bound is a convex, piecewise quadratic function in α. In a deterministic algorithm in which the gradient ∇f (x k ) is available, it is common to require that the step size α yields
$$\phi(x_k + \alpha d_k, \tau_k) - \phi(x_k, \tau_k) \le -\eta \alpha \Delta l(x_k, \tau_k, \nabla f(x_k), d_k), \tag{20}$$
where $\eta \in (0,1)$ is user-defined. However, in our setting, (20) cannot be enforced since our algorithm avoids the evaluation of $\nabla f(x_k)$ and in lieu of it only computes a stochastic gradient $g_k$. The first main idea of our step size strategy is to determine a step size such that the upper bound in (19) is less than or equal to the right-hand side of (20) when the true gradient $\nabla f(x_k)$ is replaced by its estimate $g_k$. Since (14), the orthogonality of $v_k \in \text{Range}(J_k^T)$ and $u_k \in \text{Null}(J_k)$, and the properties of the normal step (which, as shown in Lemma 1 later on, include that the left-hand side of (6) is positive whenever $v_k \neq 0$) ensure that $\Delta l(x_k, \tau_k, g_k, d_k) > 0$ whenever $d_k \neq 0$, it follows that a step size satisfying this aforementioned property is given, for any $\beta_k \in (0,1]$, by
$$\alpha_k^{suff} \leftarrow \min\left\{ \frac{2(1-\eta)\beta_k \Delta l(x_k, \tau_k, g_k, d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}, \, 1 \right\} \in \mathbb{R}_{>0}. \tag{21}$$
The sequence $\{\beta_k\}$ referenced in (21) is chosen with different properties (namely, constant or diminishing) depending on the desired type of convergence guarantee. We discuss details of the possible choices for $\{\beta_k\}$ and the consequences of these choices along with our convergence analysis. Given that the step size $\alpha_k^{suff}$ in (21) has been set based on a stochastic gradient estimate, a safeguard is needed for our convergence guarantees. For this purpose, the second main idea of our step size selection strategy is to project the trial step size onto an interval that is appropriate depending on whether the search direction is tangentially dominated or normally dominated. In particular, the step size is chosen as
$$\alpha_k \leftarrow \text{Proj}_k(\alpha_k^{suff}), \quad \text{where} \quad \text{Proj}_k(\cdot) := \begin{cases} \text{Proj}\left( \cdot \,\middle|\, \left[ \dfrac{2(1-\eta)\beta_k \xi_k \tau_k}{\tau_k L + \Gamma}, \ \dfrac{2(1-\eta)\beta_k \xi_k \tau_k}{\tau_k L + \Gamma} + \theta \beta_k^2 \right] \right) & \text{if } \|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2 \\[2ex] \text{Proj}\left( \cdot \,\middle|\, \left[ \dfrac{2(1-\eta)\beta_k \xi_k}{\tau_k L + \Gamma}, \ \dfrac{2(1-\eta)\beta_k \xi_k}{\tau_k L + \Gamma} + \theta \beta_k^2 \right] \right) & \text{otherwise}. \end{cases} \tag{22}$$
Here, $\text{Proj}(\cdot \,|\, \mathcal{I})$ denotes the projection onto the interval $\mathcal{I} \subset \mathbb{R}$. In our analysis, the rules for $\{\beta_k\}$ (see Lemma 9) ensure that this projection only ever decreases the step size; hence, the overall motivation for the projection is to ensure that the step size is not too large compared to a conservative choice, namely, the lower end of the projection interval. Motivation for the difference in the interval depending on whether the search direction is tangentially or normally dominated can be seen in Lemma 15 later on, where it is critical that the step size for a normally dominated search direction does not necessarily vanish if/when the merit parameter vanishes, i.e., $\{\tau_k\} \searrow 0$. Overall, our step size selection mechanism can be understood as follows. First, the algorithm adaptively sets the sequences $\{\chi_k\}$, $\{\zeta_k\}$, and $\{\xi_k\}$ in order to estimate bounds that are needed for the step size selection and are known to exist theoretically, but cannot be computed directly. By the manner in which these sequences are set, our analysis shows that they remain constant for sufficiently large $k \in \mathbb{N}$ in any run of the algorithm. With these values, our step size selection strategy aims to achieve a reduction in the merit function in expectation, with safeguards since the computed values are based on stochastic quantities. One finds by the definition of the projection interval in (22) that the step size for a tangentially dominated search direction may decrease to zero if $\{\tau_k\} \searrow 0$; this is needed in cases when the problem is degenerate or infeasible, and the algorithm wants to avoid long steps in the tangential component that may ruin progress toward minimizing constraint violation. Otherwise, for a normally dominated search direction, the step size would remain bounded away from zero if $\beta_k = \beta \in (0,1]$ for all $k \in \mathbb{N}$; i.e., it can only decrease to zero if $\{\beta_k\}$ is diminishing. If our algorithm did not make this distinction between the projection intervals for tangentially versus normally dominated search directions, then the algorithm would fail to have desirable convergence guarantees even in the deterministic setting. (In particular, our proof in Appendix A of Theorem 1, which is upcoming in Section 4, would break down.)
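A minimal sketch of the rule (21)-(22), assuming the Lipschitz constants `L_` and `Gamma` of Assumption 1 are available (as Algorithm 1 requires):

```python
def step_size(beta, delta_l, tau, xi, d, u, v, chi, L_, Gamma, eta, theta):
    """Compute alpha_k via (21), then project onto the interval in (22)."""
    alpha_suff = min(2.0 * (1.0 - eta) * beta * delta_l
                     / ((tau * L_ + Gamma) * (d @ d)), 1.0)       # (21)
    if (u @ u) >= chi * (v @ v):                                  # tangentially dominated
        lo = 2.0 * (1.0 - eta) * beta * xi * tau / (tau * L_ + Gamma)
    else:                                                         # normally dominated
        lo = 2.0 * (1.0 - eta) * beta * xi / (tau * L_ + Gamma)
    hi = lo + theta * beta ** 2
    return min(max(alpha_suff, lo), hi)                           # projection in (22)
```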
Our complete algorithm is stated as Algorithm 1 on page 10.
Convergence Analysis
In this section, we prove convergence guarantees for Algorithm 1. To understand the results that can be expected given our setting and the type of algorithm that we employ, let us first present a set of guarantees that can be proved if Algorithm 1 were to be run with $g_k = \nabla f(x_k)$ and $\beta_k = \beta$ for all $k \in \mathbb{N}$, where $\beta \in \mathbb{R}_{>0}$ is sufficiently small.
Algorithm 1 Stochastic SQP Algorithm

Require: $L \in \mathbb{R}_{>0}$, a Lipschitz constant for $\nabla f$; $\Gamma \in \mathbb{R}_{>0}$, a Lipschitz constant for $c$; $\{\beta_k\} \subset (0,1]$; $x_0 \in \mathbb{R}^n$; $\tau_{-1} \in \mathbb{R}_{>0}$; $\chi_{-1} \in \mathbb{R}_{>0}$; $\zeta_{-1} \in \mathbb{R}_{>0}$; $\xi_{-1} \in \mathbb{R}_{>0}$; $\omega \in \mathbb{R}_{>0}$; $\epsilon_v \in (0,1]$; $\sigma \in (0,1)$; $\epsilon_\tau \in (0,1)$; $\epsilon_\chi \in \mathbb{R}_{>0}$; $\epsilon_\zeta \in (0,1)$; $\epsilon_\xi \in (0,1)$; $\eta \in (0,1)$; $\theta \in \mathbb{R}_{\ge 0}$
1: for $k \in \mathbb{N}$ do
2:   if $\|J_k^T c_k\|_2 = 0$ and $\|c_k\|_2 > 0$ then
3:     terminate and return $x_k$ (infeasible stationary point)
4:   end if
5:   Compute a stochastic gradient $g_k$ at least satisfying Assumption 2
6:   Compute $v_k \in \text{Range}(J_k^T)$ that is feasible for problem (5) and satisfies (6)
7:   Compute $(u_k, y_k)$ as a solution of (9), and then set $d_k \leftarrow v_k + u_k$
8:   if $d_k = 0$ then
9:     Set $\tau_k^{trial} \leftarrow \infty$ and $\tau_k \leftarrow \tau_{k-1}$
10:    Set $(\chi_k, \zeta_k) \leftarrow (\chi_{k-1}, \zeta_{k-1})$
11:    Set $\xi_k^{trial} \leftarrow \infty$ and $\xi_k \leftarrow \xi_{k-1}$
12:    Set $\alpha_k^{suff} \leftarrow 1$ and $\alpha_k \leftarrow 1$
13:  else
14:    Set $\tau_k^{trial}$ by (11) and $\tau_k$ by (12)
15:    Set $(\chi_k, \zeta_k)$ by (15)-(16)
16:    Set $\xi_k^{trial}$ by (17) and $\xi_k$ by (18)
17:    Set $\alpha_k^{suff}$ by (21) and $\alpha_k \leftarrow \text{Proj}_k(\alpha_k^{suff})$ using (22)
18:  end if
19:  Set $x_{k+1} \leftarrow x_k + \alpha_k d_k$
20: end for
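To connect the pieces, the following is a minimal end-to-end sketch of Algorithm 1 assembled from the helper sketches above. It is an illustration under our stated assumptions (the oracle names `f_grad_est`, `c_fun`, `J_fun`, and `H_fun` are ours), not the authors' implementation.

```python
import numpy as np

def stochastic_sqp(x0, f_grad_est, c_fun, J_fun, H_fun, betas, L_, Gamma,
                   tau=1.0, chi=1.0, zeta=1.0, xi=1.0, omega=100.0,
                   sigma=0.1, eps_tau=0.1, eps_chi=1.0, eps_zeta=0.5, eps_xi=0.5,
                   eta=0.5, theta=1.0, max_iter=1000):
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        c, J = c_fun(x), J_fun(x)
        if np.linalg.norm(J.T @ c) == 0.0 and np.linalg.norm(c) > 0.0:
            return x                                   # infeasible stationary point (steps 2-4)
        g, H = f_grad_est(x), H_fun(x)
        v = normal_cauchy_step(J, c, omega)            # step 6 (satisfies (6))
        u, _ = tangential_step(H, J, g, v)             # step 7, via the KKT system (9)
        d = v + u
        if np.linalg.norm(d) == 0.0:
            continue                                   # steps 9-12: keep all parameters, x unchanged
        tau = update_merit_parameter(tau, g, d, u, H, c, J @ d, sigma, eps_tau)
        chi, zeta = update_chi_zeta(chi, zeta, u, v, d, H, eps_chi, eps_zeta)
        dl = model_reduction(tau, g, d, c, J @ d)
        xi = update_xi(xi, dl, tau, u, v, d, chi, eps_xi)
        alpha = step_size(betas[k], dl, tau, xi, d, u, v, chi, L_, Gamma, eta, theta)
        x = x + alpha * d                              # step 19
    return x
```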
For such an algorithm, we prove the following theorem in Appendix A. The theorem is consistent with what can be proved for other deterministic algorithms in our context; e.g., see Theorem 3.3 in [5].

Theorem 1. Suppose Algorithm 1 is employed to solve problem (1) such that Assumption 1 holds, $g_k = \nabla f(x_k)$ for all $k \in \mathbb{N}$, $\{H_k\}$ satisfies Assumption 3, and $\beta_k = \beta$ for all $k \in \mathbb{N}$, where $\beta \in (0,1]$ and
$$\frac{2(1-\eta)\beta \xi_{-1} \max\{\tau_{-1}, 1\}}{\Gamma} \in (0,1]. \tag{23}$$
If there exist $k_J \in \mathbb{N}$ and $\sigma_J \in \mathbb{R}_{>0}$ such that the singular values of $J_k$ are bounded below by $\sigma_J$ for all $k \ge k_J$, then the merit parameter sequence $\{\tau_k\}$ is bounded below by a positive real number and
$$0 = \lim_{k \to \infty} \left\| \begin{bmatrix} \nabla f(x_k) + J_k^T y_k \\ c_k \end{bmatrix} \right\|_2 = \lim_{k \to \infty} \left\| \begin{bmatrix} Z_k^T \nabla f(x_k) \\ c_k \end{bmatrix} \right\|_2. \tag{24}$$
Otherwise, if such $k_J$ and $\sigma_J$ do not exist, then it still follows that
$$0 = \lim_{k \to \infty} \|J_k^T c_k\|_2, \tag{25}$$
and if $\{\tau_k\}$ is bounded below by a positive real number, then
$$0 = \lim_{k \to \infty} \|\nabla f(x_k) + J_k^T y_k\|_2 = \lim_{k \to \infty} \|Z_k^T \nabla f(x_k)\|_2. \tag{26}$$
Based on Theorem 1, the following aims, which are all achieved in certain forms in our analyses in Sections 4.1 and 4.2, can be set for Algorithm 1 in the stochastic setting. First, if Algorithm 1 is run and the singular values of the constraint Jacobians happen to remain bounded away from zero beyond some iteration, then (following (24)) one should aim to prove that a primal-dual stationarity measure (recall (3)) vanishes in expectation. This is shown under certain conditions in Corollary 1 (and the subsequent discussion) on page 18. Otherwise, a (sub)sequence of $\{J_k\}$ tends to singularity, in which case (following (25)) one should at least aim to prove that $\{\|J_k^T c_k\|_2\}$ vanishes in expectation, which would mean that a (sub)sequence of iterates converges in expectation to feasibility or at least stationarity with respect to the constraint infeasibility measure $\varphi$ (recall (4)). Such a conclusion is offered under certain conditions by combining Corollary 1 (see page 18) and Theorem 3 (see page 21). The remaining aim (paralleling (26)) is to prove that even if a (sub)sequence of $\{J_k\}$ tends to singularity, if the merit parameter sequence $\{\tau_k\}$ happens to remain bounded below by a positive real number, then $\{\|Z_k^T \nabla f(x_k)\|_2\}$ vanishes in expectation. This can also be seen to occur under certain conditions in Corollary 1 on page 18. In addition, due to its stochastic nature, there are events that one should consider in which the algorithm may exhibit behavior that cannot be exhibited by the deterministic algorithm. One such event is when the merit parameter eventually remains fixed at a value that is not sufficiently small. We show in Section 4.3, with formal results stated and proved in Appendix B, that, under reasonable assumptions, the total probability of this event (over all possible runs of the algorithm) is zero. We complete the picture of the possible behaviors of our algorithm by discussing remaining possible (practically irrelevant) events in Section 4.4.
Let us now commence our analysis of Algorithm 1. If a run terminates finitely at iteration k ∈ N, then an infeasible stationary point has been found. Hence, without loss of generality throughout the remainder of our analysis and discussions, we assume that the algorithm does not terminate finitely, i.e., an infinite number of iterates are generated. As previously mentioned, for much of our analysis, we merely assume that the stochastic gradient estimates satisfy Assumption 2. This is done to show that many of our results hold under this general setting. However, we will ultimately impose stronger conditions on {g k }, as needed; see Sections 4.2 and 4.3 (and Appendix B).
We build to our main results through a series of lemmas. Our first lemma has appeared for various deterministic algorithms in the literature. It extends easily to our setting since the normal component computation is deterministic conditioned on the event that the algorithm reaches x k .
where
$$\begin{aligned}
\mathbb{E}_k\big[\|Z_k^T(g_k - \nabla f(x_k))\|^2_{(Z_k^T H_k Z_k)^{-1}}\big] &= \mathbb{E}_k\big[\|Z_k^T g_k\|^2_{(Z_k^T H_k Z_k)^{-1}}\big] - 2\,\mathbb{E}_k\big[g_k^T Z_k (Z_k^T H_k Z_k)^{-1} Z_k^T \nabla f(x_k)\big] + \|Z_k^T \nabla f(x_k)\|^2_{(Z_k^T H_k Z_k)^{-1}} \\
&= \mathbb{E}_k\big[\|Z_k^T g_k\|^2_{(Z_k^T H_k Z_k)^{-1}}\big] - \|Z_k^T \nabla f(x_k)\|^2_{(Z_k^T H_k Z_k)^{-1}}.
\end{aligned}$$
Combining the facts above and again using Assumption 2, it follows that
$$\begin{aligned}
\nabla f(x_k)^T d_k^{true} - \mathbb{E}_k[g_k^T d_k] &= \nabla f(x_k)^T v_k + \nabla f(x_k)^T u_k^{true} - \mathbb{E}_k[g_k^T v_k + g_k^T u_k] \\
&= \nabla f(x_k)^T u_k^{true} - \mathbb{E}_k[g_k^T u_k] \\
&= -\nabla f(x_k)^T Z_k (Z_k^T H_k Z_k)^{-1} Z_k^T (\nabla f(x_k) + H_k v_k) + \mathbb{E}_k\big[g_k^T Z_k (Z_k^T H_k Z_k)^{-1} Z_k^T (g_k + H_k v_k)\big] \\
&= -\|Z_k^T \nabla f(x_k)\|^2_{(Z_k^T H_k Z_k)^{-1}} + \mathbb{E}_k\big[\|Z_k^T g_k\|^2_{(Z_k^T H_k Z_k)^{-1}}\big] \in [0, \zeta^{-1} M],
\end{aligned}$$
which gives the desired conclusion.
In the subsequent subsections, our analysis turns to offering guarantees conditioned on each of a few possible events that can occur in a run of the algorithm, a few of which involve the merit parameter sequence eventually remaining constant. Before considering these events, let us first prove that, under certain circumstances, such behavior of the merit parameter sequence indeed occurs. As seen in Theorem 1, it is worthwhile to consider such an occurrence regardless of the properties of the sequence of constraint Jacobians. That said, one might only be able to prove that it occurs when the constraint Jacobians are eventually bounded away from singularity.
Our first lemma here proves that if the constraint Jacobians are eventually bounded away from singularity, then the normal components of the search directions satisfy a useful upper bound. The proof is essentially the same as that of [5, Lemma 3.15], but we provide it for completeness.

Lemma 6. If, in a run of the algorithm, there exist $k_J \in \mathbb{N}$ and $\sigma_J \in \mathbb{R}_{>0}$ such that the singular values of $J_k$ are bounded below by $\sigma_J$ for all $k \ge k_J$, then there exists $\kappa_\omega \in \mathbb{R}_{>0}$ such that
$$\|v_k\|_2 \le \kappa_\omega (\|c_k\|_2 - \|c_k + J_k v_k\|_2) \quad \text{for all } k \ge k_J.$$
Proof. Under the conditions of the lemma, $\|J_k^T c_k\|_2 \ge \sigma_J \|c_k\|_2$ for all $k \ge k_J$. Hence, along with Lemma 1, it follows that
$$\|c_k\|_2 (\|c_k\|_2 - \|c_k + J_k v_k\|_2) \ge \kappa_v \|J_k^T c_k\|_2^2 \ge \kappa_v \sigma_J^2 \|c_k\|_2^2 \quad \text{for all } k \ge k_J.$$
Combining this again with Lemma 1, it follows with the Cauchy-Schwarz inequality and (2) that
$$\|v_k\|_2 \le \omega \|J_k^T\|_2 \|c_k\|_2 \le \frac{\omega \kappa_J}{\kappa_v \sigma_J^2} (\|c_k\|_2 - \|c_k + J_k v_k\|_2) \quad \text{for all } k \ge k_J,$$
from which the desired conclusion follows.
We now prove that if the differences between the stochastic gradient estimates and the true gradients are bounded deterministically, then the sequence of tangential components is bounded.
Lemma 7. If, in a run of the algorithm, the sequence $\{\|g_k - \nabla f(x_k)\|_2\}$ is bounded by a positive real number $\kappa_g \in \mathbb{R}_{>0}$, then the sequence $\{\|u_k\|_2\}$ is bounded by a positive real number $\kappa_u \in \mathbb{R}_{>0}$.
Proof. Under Assumption 1, the sequence $\{\|\nabla f(x_k)\|_2\}$ is bounded; recall (2). Hence, under the conditions of the lemma, $\{\|g_k\|_2\}$ is bounded. The first block of (9) yields $u_k^T H_k u_k = -u_k^T (g_k + H_k v_k)$, which under Assumption 3 yields
$$\zeta \|u_k\|_2^2 \le -u_k^T g_k - u_k^T H_k v_k \le (\|g_k\|_2 + \|H_k\|_2 \|v_k\|_2) \|u_k\|_2.$$
Hence, the conclusion follows from these facts, Assumption 1, Assumption 3, and Lemma 1.
By combining the preceding two lemmas, the following lemma indicates certain circumstances under which the sequence of merit parameters will eventually remain constant. We remark that it is possible in a run of the algorithm for the merit parameter sequence to remain constant eventually even if the conditions of the lemma do not hold, which is why our analyses in the subsequent subsections do not presume that these conditions hold. That said, to prove that there exist settings in which the merit parameter is guaranteed to remain constant eventually, we offer the following.
Lemma 8. If, in a run, there exist $k_J \in \mathbb{N}$ and $\sigma_J \in \mathbb{R}_{>0}$ such that the singular values of $J_k$ are bounded below by $\sigma_J$ for all $k \ge k_J$ and $\{\|g_k - \nabla f(x_k)\|_2\}$ is bounded by a positive real number $\kappa_g \in \mathbb{R}_{>0}$, then there exist $k_\tau \in \mathbb{N}$ and $\tau_{\min} \in \mathbb{R}_{>0}$ such that $\tau_k = \tau_{\min}$ for all $k \in \mathbb{N}$ with $k \ge k_\tau$.
Proof. Observe that the algorithm terminates if $\|J_k^T c_k\|_2 = 0$ while $\|c_k\|_2 > 0$. Let us now show that if $\|c_k\|_2 = 0$, then the algorithm sets $\tau_k \leftarrow \tau_{k-1}$. Indeed, $\|c_k\|_2 = 0$ implies $v_k = 0$ by Lemma 1. If $u_k = 0$ as well, then $d_k = 0$ and the algorithm explicitly sets $\tau_k \leftarrow \tau_{k-1}$. Otherwise, if $v_k = 0$ and $u_k \neq 0$, then (9) yields $0 = g_k^T u_k + u_k^T H_k u_k = g_k^T d_k + u_k^T H_k u_k$, in which case (11)-(12) again yield $\tau_k \leftarrow \tau_{k-1}$. Overall, it follows that $\tau_k < \tau_{k-1}$ if and only if one finds $\|J_k^T c_k\|_2 > 0$, $g_k^T d_k + u_k^T H_k u_k > 0$, and $\tau_{k-1}(g_k^T d_k + u_k^T H_k u_k) > (1-\sigma)(\|c_k\|_2 - \|c_k + J_k v_k\|_2)$. On the other hand, from the first equation in (9), the Cauchy-Schwarz inequality, (2), and Lemmas 6 and 7, it holds that
$$g_k^T d_k + u_k^T H_k u_k = (g_k - H_k u_k)^T v_k = (g_k - \nabla f(x_k) + \nabla f(x_k) - H_k u_k)^T v_k \le (\kappa_g + \kappa_{\nabla f} + \kappa_H \kappa_u) \|v_k\|_2 \le (\kappa_g + \kappa_{\nabla f} + \kappa_H \kappa_u) \kappa_\omega (\|c_k\|_2 - \|c_k + J_k v_k\|_2).$$
Combining these facts, the desired conclusion follows.
Constant, Sufficiently Small Merit Parameter
Our goal in this subsection is to prove a convergence guarantee for our algorithm in the event $E_{\tau,low}$, which is defined formally in the assumption below. In the assumption, similar to our notation of $u_k^{true}$ and $d_k^{true}$, we use $\tau_k^{trial,true}$ to denote the value of $\tau_k^{trial}$ that, conditioned on $x_k$ as the $k$th iterate, would be computed in iteration $k \in \mathbb{N}$ if the search direction were computed using the true gradient $\nabla f(x_k)$ in place of $g_k$ in (9).

Assumption 4. In a run of the algorithm, event $E_{\tau,low}$ occurs, i.e., there exists an iteration number $k_\tau \in \mathbb{N}$ and a merit parameter value $\tau_{\min} \in \mathbb{R}_{>0}$ such that
$$\tau_k = \tau_{\min} \le \tau_k^{trial,true}, \quad \chi_k = \chi_{k-1}, \quad \zeta_k = \zeta_{k-1}, \quad \text{and} \quad \xi_k = \xi_{k-1} \quad \text{for all } k \ge k_\tau.$$
In addition, along the lines of Assumption 2, $\{g_k\}_{k \ge k_\tau}$ satisfies $\mathbb{E}_{k,\tau,low}[g_k] = \nabla f(x_k)$ and $\mathbb{E}_{k,\tau,low}[\|g_k - \nabla f(x_k)\|_2^2] \le M$, where $\mathbb{E}_{k,\tau,low}$ denotes expectation with respect to the distribution of $\iota$ conditioned on the event that $E_{\tau,low}$ occurs and the algorithm has reached $x_k$ in iteration $k \in \mathbb{N}$.
Recall from Lemmas 2 and 3 that the sequences $\{\chi_k\}$, $\{\zeta_k\}$, and $\{\xi_k\}$ are guaranteed to be bounded deterministically, and in particular will remain constant for sufficiently large $k \in \mathbb{N}$. Hence, one circumstance in which Assumption 4 may hold is under the conditions of Lemma 8. A critical distinction in Assumption 4 is that the value at which the merit parameter eventually settles is sufficiently small such that $\tau_k \le \tau_k^{trial,true}$ for all sufficiently large $k \in \mathbb{N}$. This is the key distinction between the event $E_{\tau,low}$ and some of the events we consider in Sections 4.3 and 4.4.
For the sake of brevity in the rest of this subsection, let us temporarily redefine $\mathbb{E}_k := \mathbb{E}_{k,\tau,low}$. Our next lemma provides a key result that drives our analysis for this subsection. It shows that as long as $\beta_k$ is sufficiently small for all $k \in \mathbb{N}$ (in a manner similar to (23)), the reduction in the merit function in each iteration is at least the sum of two terms: (1) the reduction in the model of the merit function corresponding to the true gradient and its associated search direction, and (2) a pair of quantities that can be attributed to the error in the stochastic gradient estimate.
Lemma 9. Suppose that $\{\beta_k\}$ is chosen such that $\beta_k \in (0,1]$ and
$$\frac{2(1-\eta)\beta_k \xi_k \max\{\tau_k, 1\}}{\tau_k L + \Gamma} \in (0,1] \quad \text{for all } k \in \mathbb{N}. \tag{28}$$
Then, for all $k \in \mathbb{N}$ in any such run of the algorithm, it follows that
$$\phi(x_k, \tau_k) - \phi(x_k + \alpha_k d_k, \tau_k) \ge \alpha_k \Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{true}) - (1-\eta)\alpha_k \beta_k \Delta l(x_k, \tau_k, g_k, d_k) - \alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true}).$$
Proof. Consider arbitrary $k \in \mathbb{N}$ in any run. From (21)-(22) and the supposition about $\{\beta_k\}$, one finds $\alpha_k \in (0,1]$. Hence, with (19) and the fact that $J_k d_k = J_k d_k^{true}$ (since $J_k u_k = J_k u_k^{true} = 0$ by (9)), one has
$$\begin{aligned}
\phi(x_k, \tau_k) - \phi(x_k + \alpha_k d_k, \tau_k) &\ge -\alpha_k (\tau_k \nabla f(x_k)^T d_k - \|c_k\|_2 + \|c_k + J_k d_k\|_2) - \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 \\
&= -\alpha_k (\tau_k \nabla f(x_k)^T d_k^{true} - \|c_k\|_2 + \|c_k + J_k d_k^{true}\|_2) - \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 - \alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true}) \\
&= \alpha_k \Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{true}) - \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 - \alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true}).
\end{aligned} \tag{29}$$
By (21), it follows that $\alpha_k^{suff} \le \frac{2(1-\eta)\beta_k \Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}$. If $\|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2$, then it follows from (17)-(18) that $\xi_k \le \xi_k^{trial} = \frac{\Delta l(x_k,\tau_k,g_k,d_k)}{\tau_k \|d_k\|_2^2}$ and $\frac{2(1-\eta)\beta_k \Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2} \ge \frac{2(1-\eta)\beta_k \xi_k \tau_k}{\tau_k L + \Gamma}$. On the other hand, if $\|u_k\|_2^2 < \chi_k \|v_k\|_2^2$, then it follows from (17)-(18) that $\xi_k \le \xi_k^{trial} = \frac{\Delta l(x_k,\tau_k,g_k,d_k)}{\|d_k\|_2^2}$ and $\frac{2(1-\eta)\beta_k \Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2} \ge \frac{2(1-\eta)\beta_k \xi_k}{\tau_k L + \Gamma}$. It follows from these facts and the supposition about $\{\beta_k\}$ that the projection in (22) never sets $\alpha_k > \alpha_k^{suff}$. Thus, $\alpha_k \le \alpha_k^{suff} \le \frac{2(1-\eta)\beta_k \Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}$. Hence, by (29),
$$\begin{aligned}
\phi(x_k, \tau_k) - \phi(x_k + \alpha_k d_k, \tau_k) &\ge \alpha_k \Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{true}) - \tfrac{1}{2}\alpha_k (\tau_k L + \Gamma) \frac{2(1-\eta)\beta_k \Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2} \|d_k\|_2^2 - \alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true}) \\
&= \alpha_k \Delta l(x_k, \tau_k, \nabla f(x_k), d_k^{true}) - (1-\eta)\alpha_k \beta_k \Delta l(x_k, \tau_k, g_k, d_k) - \alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true}),
\end{aligned}$$
which completes the proof.
Our second result in this case offers a critical upper bound on the final term in the conclusion of Lemma 9. The result follows in a similar manner as [1, Lemma 3.11].
Lemma 10. For any run under Assumption 4, it follows for any $k \ge k_\tau$ that
$$\mathbb{E}_k[\alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true})] \le \beta_k^2 \theta \tau_{\min} \kappa_{\nabla f} \zeta^{-1} \sqrt{M}.$$
Proof. Consider $k \ge k_\tau$, where $k_\tau$ is defined in Assumption 4. We prove the desired conclusion under the assumption that the search direction in iteration $k$ is tangentially dominated, then argue that it also holds by a similar argument when this search direction is normally dominated. Let $I_k$ be the event that $\nabla f(x_k)^T (d_k - d_k^{true}) \ge 0$ and let $I_k^c$ be the complementary event. In addition, let $\mathbb{P}_k$ denote probability conditioned on the event that $E_{\tau,low}$ occurs and $x_k$ is the $k$th iterate. By the law of total expectation, Assumption 4, and (22), one finds that
$$\begin{aligned}
\mathbb{E}_k[\alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true})] &= \mathbb{E}_k[\alpha_k \tau_{\min} \nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k]\, \mathbb{P}_k[I_k] + \mathbb{E}_k[\alpha_k \tau_{\min} \nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k^c]\, \mathbb{P}_k[I_k^c] \\
&\le \alpha_{k,\max} \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k]\, \mathbb{P}_k[I_k] + \alpha_{k,\min} \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k^c]\, \mathbb{P}_k[I_k^c],
\end{aligned}$$
where $\alpha_{k,\min} := \frac{2(1-\eta)\beta_k \bar\xi_{\min} \tau_{\min}}{\tau_{\min} L + \Gamma}$ and $\alpha_{k,\max} := \frac{2(1-\eta)\beta_k \bar\xi_{\min} \tau_{\min}}{\tau_{\min} L + \Gamma} + \theta \beta_k^2$ are, respectively, the lower and upper bounds for the step size for the tangentially dominated search direction from (22), with $\bar\xi_{\min} \in [\xi_{\min}, \infty)$ being the positive real number such that $\xi_k = \bar\xi_{\min}$ for all $k \ge k_\tau$ (see Lemma 3). Hence, since $\mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true})] = 0$ under Assumption 4 (as can be deduced from (9) and the unbiasedness of $g_k$), it follows that
$$\begin{aligned}
\mathbb{E}_k[\alpha_k \tau_k \nabla f(x_k)^T (d_k - d_k^{true})] &\le \alpha_{k,\min} \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k]\, \mathbb{P}_k[I_k] + \alpha_{k,\min} \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k^c]\, \mathbb{P}_k[I_k^c] \\
&\quad + (\alpha_{k,\max} - \alpha_{k,\min}) \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k]\, \mathbb{P}_k[I_k] \\
&= (\alpha_{k,\max} - \alpha_{k,\min}) \tau_{\min} \mathbb{E}_k[\nabla f(x_k)^T (d_k - d_k^{true}) \,|\, I_k]\, \mathbb{P}_k[I_k].
\end{aligned}$$
Moreover, by the Cauchy-Schwarz inequality and law of total expectation, one finds
E k [∇f (x k ) T (d k − d true k )|I k ]P k [I k ] ≤ E k [ ∇f (x k ) 2 d k − d true k 2 |I k ]P k [I k ] = E k [ ∇f (x k ) 2 d k − d true k 2 ] − E k [ ∇f (x k ) 2 d k − d true k 2 |I c k ]P k [I c k ] ≤ ∇f (x k ) 2 E k [ d k − d true k 2 ].
Combining the above results, (2), Lemma 4, and the fact that $\alpha_{k,\max} - \alpha_{k,\min} = \theta \beta_k^2$, the desired conclusion follows for tangentially dominated search directions. Finally, using the same arguments, except with $\alpha_{k,\min} := \frac{2(1-\eta)\beta_k \bar\xi_{\min}}{\tau_{\min} L + \Gamma}$ and $\alpha_{k,\max} := \frac{2(1-\eta)\beta_k \bar\xi_{\min}}{\tau_{\min} L + \Gamma} + \theta \beta_k^2$, where again $\alpha_{k,\max} - \alpha_{k,\min} = \theta \beta_k^2$, the desired conclusion follows for normally dominated search directions as well.
Our next result in this case bounds the middle term in the conclusion of Lemma 9.
Lemma 11. For any run under Assumption 4, it follows for any $k \ge k_\tau$ that
$$\mathbb{E}_k[\Delta l(x_k, \tau_{\min}, g_k, d_k)] \le \Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true}) + \tau_{\min} \zeta^{-1} M.$$
Proof. Consider arbitrary $k \ge k_\tau$ in any run under Assumption 4. By Assumption 4, it follows from the model reduction definition (13), Lemma 5, and (9) that
$$\begin{aligned}
\mathbb{E}_k[\Delta l(x_k, \tau_k, g_k, d_k)] &= \mathbb{E}_k[-\tau_{\min} g_k^T d_k + \|c_k\|_2 - \|c_k + J_k d_k\|_2] \\
&\le -\tau_{\min} \nabla f(x_k)^T d_k^{true} + \tau_{\min} \zeta^{-1} M + \|c_k\|_2 - \|c_k + J_k d_k^{true}\|_2 \\
&= \Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true}) + \tau_{\min} \zeta^{-1} M,
\end{aligned}$$
as desired.
We now prove our main theorem of this subsection, where $\mathbb{E}_{\tau,low}[\,\cdot\,] := \mathbb{E}[\,\cdot \,|\, \text{Assumption 4 holds}]$.
Theorem 2. Suppose that Assumption 4 holds and the sequence $\{\beta_k\}$ is chosen such that (28) holds for all $k \in \mathbb{N}$. For a given run of the algorithm, define $\bar\xi_{\min} \in \mathbb{R}_{>0}$ as the value in Assumption 4 such that $\xi_k = \bar\xi_{\min}$ for all $k \ge k_\tau$ and define
$$\underline{A} := \min\left\{ \frac{2(1-\eta)\bar\xi_{\min}\tau_{\min}}{\tau_{\min}L+\Gamma}, \frac{2(1-\eta)\bar\xi_{\min}}{\tau_{\min}L+\Gamma} \right\}, \quad \overline{A} := \max\left\{ \frac{2(1-\eta)\bar\xi_{\min}\tau_{\min}}{\tau_{\min}L+\Gamma}, \frac{2(1-\eta)\bar\xi_{\min}}{\tau_{\min}L+\Gamma} \right\},$$
and $\overline{M} := \tau_{\min}\zeta^{-1}((1-\eta)(\overline{A}+\theta)M + \theta\kappa_{\nabla f}\sqrt{M})$. If $\beta_k = \beta \in (0, \underline{A}/((1-\eta)(\overline{A}+\theta)))$ for all $k \ge k_\tau$, then for all $k \ge k_\tau$ one finds
$$\mathbb{E}_{\tau,low}\left[ \frac{1}{k-k_\tau+1} \sum_{j=k_\tau}^{k} \Delta l(x_j, \tau_{\min}, \nabla f(x_j), d_j^{true}) \right] \le \frac{\beta\overline{M}}{\underline{A}-(1-\eta)(\overline{A}+\theta)\beta} + \frac{\mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \phi_{\min}}{(k+1)\beta(\underline{A}-(1-\eta)(\overline{A}+\theta)\beta)} \xrightarrow{k\to\infty} \frac{\beta\overline{M}}{\underline{A}-(1-\eta)(\overline{A}+\theta)\beta}, \tag{30}$$
where, in the context of Assumption 1, $\phi_{\min} \in \mathbb{R}_{>0}$ is a lower bound for $\phi(\cdot, \tau_{\min})$ over $\mathcal{X}$. On the other hand, if $\sum_{j=k_\tau}^\infty \beta_j = \infty$ and $\sum_{j=k_\tau}^\infty \beta_j^2 < \infty$, then
$$\lim_{k \ge k_\tau, k\to\infty} \mathbb{E}_{\tau,low}\left[ \frac{1}{\sum_{j=k_\tau}^{k}\beta_j} \sum_{j=k_\tau}^{k} \beta_j\, \Delta l(x_j, \tau_{\min}, \nabla f(x_j), d_j^{true}) \right] = 0. \tag{31}$$
Proof. Consider arbitrary $k \ge k_\tau$ in any run under Assumption 4. From the definitions of $\underline{A}$ and $\overline{A}$ in the statement of the theorem, the manner in which the step sizes are set by (22), and the fact that $\beta_k \in (0,1]$, it follows that $\underline{A}\beta_k \le \alpha_k \le (\overline{A}+\theta)\beta_k$. Hence, it follows from Lemmas 9-11 and the conditions of the theorem that
$$\begin{aligned}
\phi(x_k, \tau_{\min}) - \mathbb{E}_k[\phi(x_k + \alpha_k d_k, \tau_{\min})] &\ge \mathbb{E}_k[\alpha_k \Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true}) - (1-\eta)\alpha_k\beta_k \Delta l(x_k, \tau_{\min}, g_k, d_k) - \alpha_k\tau_{\min}\nabla f(x_k)^T(d_k - d_k^{true})] \\
&\ge \beta_k(\underline{A} - (1-\eta)(\overline{A}+\theta)\beta_k)\, \Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true}) - \beta_k^2 \overline{M}.
\end{aligned}$$
If $\beta_k = \beta \in (0, \underline{A}/((1-\eta)(\overline{A}+\theta)))$ for all $k \ge k_\tau$, then taking total expectation under Assumption 4 yields
$$\mathbb{E}_{\tau,low}[\phi(x_k, \tau_{\min})] - \mathbb{E}_{\tau,low}[\phi(x_k + \alpha_k d_k, \tau_{\min})] \ge \beta(\underline{A} - (1-\eta)(\overline{A}+\theta)\beta)\, \mathbb{E}_{\tau,low}[\Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true})] - \beta^2 \overline{M} \quad \text{for all } k \ge k_\tau.$$
Summing this inequality for $j \in \{k_\tau, \dots, k\}$, it follows under Assumption 1 that
$$\mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \phi_{\min} \ge \mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \mathbb{E}_{\tau,low}[\phi(x_{k+1}, \tau_{\min})] \ge \beta(\underline{A} - (1-\eta)(\overline{A}+\theta)\beta)\, \mathbb{E}_{\tau,low}\left[\sum_{j=k_\tau}^{k} \Delta l(x_j, \tau_{\min}, \nabla f(x_j), d_j^{true})\right] - (k-k_\tau+1)\beta^2\overline{M},$$
from which (30) follows. On the other hand, if $\{\beta_k\}$ satisfies $\sum_{j=k_\tau}^\infty \beta_j = \infty$ and $\sum_{j=k_\tau}^\infty \beta_j^2 < \infty$, then it follows for sufficiently large $k \ge k_\tau$ that $\beta_k \le \eta\underline{A}/((1-\eta)(\overline{A}+\theta))$; hence, without loss of generality, let us assume that this inequality holds for all $k \ge k_\tau$, which implies that $\underline{A} - (1-\eta)(\overline{A}+\theta)\beta_k \ge (1-\eta)\underline{A}$ for all $k \ge k_\tau$. As above, it follows that
$$\mathbb{E}_{\tau,low}[\phi(x_k, \tau_{\min})] - \mathbb{E}_{\tau,low}[\phi(x_k + \alpha_k d_k, \tau_{\min})] \ge (1-\eta)\underline{A}\beta_k\, \mathbb{E}_{\tau,low}[\Delta l(x_k, \tau_{\min}, \nabla f(x_k), d_k^{true})] - \beta_k^2\overline{M} \quad \text{for all } k \ge k_\tau.$$
Summing this inequality for $j \in \{k_\tau, \dots, k\}$, it follows under Assumption 1 that
$$\mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \phi_{\min} \ge \mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \mathbb{E}_{\tau,low}[\phi(x_{k+1}, \tau_{\min})] \ge (1-\eta)\underline{A}\, \mathbb{E}_{\tau,low}\left[\sum_{j=k_\tau}^{k}\beta_j \Delta l(x_j, \tau_{\min}, \nabla f(x_j), d_j^{true})\right] - \overline{M}\sum_{j=k_\tau}^{k}\beta_j^2.$$
Rearranging this inequality yields
$$\mathbb{E}_{\tau,low}\left[\sum_{j=k_\tau}^{k}\beta_j \Delta l(x_j, \tau_{\min}, \nabla f(x_j), d_j^{true})\right] \le \frac{\mathbb{E}_{\tau,low}[\phi(x_{k_\tau}, \tau_{\min})] - \phi_{\min}}{(1-\eta)\underline{A}} + \frac{\overline{M}}{(1-\eta)\underline{A}}\sum_{j=k_\tau}^{k}\beta_j^2,$$
from which (31) follows.
We end this subsection with a corollary in which we connect the result of Theorem 2 to first-order stationarity measures (recall (3)). For this corollary, we require the following lemma.
Lemma 12. For all $k \in \mathbb{N}$, it holds that $\|u_k^{true}\|_2 \ge \kappa_H^{-1} \|Z_k^T(\nabla f(x_k) + H_k v_k)\|_2$.
Proof. Consider arbitrary $k \in \mathbb{N}$ in any run. As in the proof of Lemma 4, $Z_k^T H_k Z_k w_k^{true} = -Z_k^T(\nabla f(x_k) + H_k v_k)$, meaning with Assumption 3 that $\|u_k^{true}\|_2 \ge \kappa_H^{-1} \|Z_k^T(\nabla f(x_k) + H_k v_k)\|_2$.
Corollary 1. Under the conditions of Theorem 2, the following hold true.
(a) If $\beta_k = \beta \in (0, \underline{A}/((1-\eta)(\overline{A}+\theta)))$ for all $k \ge k_\tau$, then for all $k \ge k_\tau$ one finds
$$\mathbb{E}_{\tau,low}\left[ \frac{1}{k-k_\tau+1} \sum_{j=k_\tau}^{k} \left( \frac{\tau_{\min}\zeta \|Z_j^T(\nabla f(x_j) + H_j v_j)\|_2^2}{\kappa_H^2} + \frac{\kappa_v \sigma \|J_j^T c_j\|_2^2}{\kappa_c} \right) \right] \le \frac{\beta\overline{M}}{\underline{A}-(1-\eta)(\overline{A}+\theta)\beta} + \frac{\mathbb{E}_{\tau,low}[\phi(x_{k_\tau},\tau_{\min})]-\phi_{\min}}{(k+1)\beta(\underline{A}-(1-\eta)(\overline{A}+\theta)\beta)} \xrightarrow{k\to\infty} \frac{\beta\overline{M}}{\underline{A}-(1-\eta)(\overline{A}+\theta)\beta}.$$
(b) If $\sum_{j=k_\tau}^\infty \beta_j = \infty$ and $\sum_{j=k_\tau}^\infty \beta_j^2 < \infty$, then
$$\lim_{k \ge k_\tau, k\to\infty} \mathbb{E}_{\tau,low}\left[ \frac{1}{\sum_{j=k_\tau}^{k}\beta_j} \sum_{j=k_\tau}^{k} \beta_j \left( \frac{\tau_{\min}\zeta \|Z_j^T(\nabla f(x_j) + H_j v_j)\|_2^2}{\kappa_H^2} + \frac{\kappa_v \sigma \|J_j^T c_j\|_2^2}{\kappa_c} \right) \right] = 0,$$
from which it follows that
$$\liminf_{k \ge k_\tau, k\to\infty} \mathbb{E}_{\tau,low}\left[ \frac{\tau_{\min}\zeta \|Z_k^T(\nabla f(x_k) + H_k v_k)\|_2^2}{\kappa_H^2} + \frac{\kappa_v \sigma \|J_k^T c_k\|_2^2}{\kappa_c} \right] = 0.$$
Proof. For all $k \in \mathbb{N}$, it follows under Assumption 4 that (14) holds with $\nabla f(x_k)$ in place of $g_k$ and $u_k^{true}$ in place of $u_k$. The result follows from this fact, Theorem 2, and Lemmas 1 and 12.
Observe that if the singular values of $J_k$ are bounded below by $\sigma_J \in \mathbb{R}_{>0}$ for all $k \ge k_J$ for some $k_J \in \mathbb{N}$, then (as in the proof of Lemma 6) it follows that $\|J_k^T c_k\|_2 \ge \sigma_J \|c_k\|_2$ for all $k \ge k_J$. In this case, the results of Corollary 1 hold with $\sigma_J \|c_k\|_2$ in place of $\|J_k^T c_k\|_2$. Overall, Corollary 1 offers results for the stochastic setting that parallel the limits (24) and (26) for the deterministic setting. The only difference is the presence of $Z_k^T H_k v_k$ in the term involving the reduced gradient $Z_k^T \nabla f(x_k)$ for all $k \in \mathbb{N}$. However, this does not significantly weaken the conclusion. After all, it follows from (5) (see also Lemma 1) that $\|v_k\|_2 \le \omega \|J_k^T c_k\|_2$ for all $k \in \mathbb{N}$. Hence, since Corollary 1 shows that at least a subsequence of $\{\|J_k^T c_k\|_2\}$ tends to vanish in expectation, it follows that $\{\|v_k\|_2\}$ vanishes in expectation along the same subsequence of iterations. This, along with Assumption 3 and the orthonormality of $Z_k$, shows that $\{\|Z_k^T H_k v_k\|_2\}$ exhibits this same behavior, which means that from the corollary one finds that a subsequence of $\{\|Z_k^T \nabla f(x_k)\|_2\}$ vanishes in expectation.
Let us conclude this subsection with a few remarks on how one should interpret its main conclusions. First, one learns from the requirements on $\{\beta_k\}$ in Lemma 9, Theorem 2, and Corollary 1 that, rather than employ a prescribed sequence $\{\beta_k\}$, one should instead prescribe $\{\beta_j\}_{j=0}^\infty \subset (0,1]$ and for each $k \in \mathbb{N}$ set $\beta_k$ based on whether or not an adaptive parameter changes its value. In particular, any time $k \in \mathbb{N}$ sees either $\tau_k < \tau_{k-1}$, $\chi_k > \chi_{k-1}$, $\zeta_k < \zeta_{k-1}$, or $\xi_k < \xi_{k-1}$, the algorithm should set $\beta_{k+j} \leftarrow \lambda \beta_j$ for $j = 0, 1, 2, \dots$ (continuing indefinitely or until $\bar k \in \mathbb{N}$ with $\bar k > k$ sees $\tau_{\bar k} < \tau_{\bar k - 1}$, $\chi_{\bar k} > \chi_{\bar k - 1}$, $\zeta_{\bar k} < \zeta_{\bar k - 1}$, or $\xi_{\bar k} < \xi_{\bar k - 1}$), where $\lambda \in \mathbb{R}_{>0}$ is chosen sufficiently small such that (28) holds; a sketch of this scheme appears below. Since such a "reset" of $j \leftarrow 0$ will occur only a finite number of times under event $E_{\tau,low}$, one of the desirable results in Theorem 2/Corollary 1 can be attained if $\{\beta_j\}$ is chosen as an appropriate constant or diminishing sequence. Second, let us note that due to the generality of Assumption 4, it is possible that for different runs of the algorithm the corresponding terminal merit parameter value, namely, $\tau_{\min}$, in Assumption 4 could become arbitrarily close to zero. (This is in contrast to the conditions of Lemma 8, which guarantee a uniform lower bound for the merit parameter over all runs satisfying these conditions.) Hence, while our main conclusions of this subsection hold under looser conditions than those in Lemma 8, one should be wary in practice if/when the merit parameter sequence reaches small numerical values.
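A minimal sketch of that reset scheme, assuming a base sequence `base_beta(j)` and a scaling factor `lam` small enough that (28) holds (the compounding of the scale across resets is our interpretation for illustration):

```python
def make_beta_schedule(base_beta, lam):
    """Return step(changed) -> beta_k; restart the (rescaled) base sequence
    whenever an adaptive parameter changed at the previous iteration."""
    state = {"j": 0, "scale": 1.0}
    def step(changed: bool) -> float:
        if changed:                 # tau decreased, chi increased, zeta or xi decreased
            state["j"], state["scale"] = 0, state["scale"] * lam
        beta = state["scale"] * base_beta(state["j"])
        state["j"] += 1
        return beta
    return step

# Example usage with a diminishing base sequence beta_j = 1 / sqrt(j + 1):
# step = make_beta_schedule(lambda j: (j + 1) ** -0.5, lam=0.01)
```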
Vanishing Merit Parameter
Let us now consider the behavior of the algorithm in settings in which the merit parameter vanishes; in particular, we make Assumption 5 below.
Assumption 5. In a run of the algorithm, event $E_{\tau,zero}$ occurs, i.e., $\{\tau_k\} \searrow 0$. In addition, along the lines of Assumption 2, the stochastic gradient sequence $\{g_k\}$ satisfies $\mathbb{E}_{k,\tau,zero}[g_k] = \nabla f(x_k)$ and $\|g_k - \nabla f(x_k)\|_2^2 \le M$, where $\mathbb{E}_{k,\tau,zero}$ denotes expectation with respect to the distribution of $\iota$ conditioned on the event that $E_{\tau,zero}$ occurs and the algorithm has reached $x_k$ in iteration $k \in \mathbb{N}$.
Recalling Theorem 1 and Lemma 8, one may conclude in general that the merit parameter sequence may vanish for one of two reasons: a (sub)sequence of constraint Jacobians tends toward rank deficiency or a (sub)sequence of stochastic gradient estimates diverges. Assumption 5 presumes that the latter event does not occur. (In our remarks in Section 4.4, we discuss the obstacles that arise in proving convergence guarantees when the merit parameter vanishes and the stochastic gradient estimates diverge.) Given our setting of constrained optimization, it is reasonable and consistent with Theorem 1 to have convergence toward stationarity with respect to the constraint violation measure as the primary goal in these circumstances.
For the sake of brevity in the rest of this subsection, let us temporarily redefine $\mathbb{E}_k := \mathbb{E}_{k,\tau,zero}$. Our first result in this subsection is an alternative to Lemma 9.
Lemma 13. Under Assumption 5 and assuming that $\{\beta_k\}$ is chosen such that (28) holds for all $k \in \mathbb{N}$, it follows for all $k \in \mathbb{N}$ that
$$\|c_k\|_2 - \|c(x_k + \alpha_k d_k)\|_2 \ge \alpha_k(1 - (1-\eta)\beta_k)\Delta l(x_k, \tau_k, g_k, d_k) - \tau_k(f_k - f(x_k + \alpha_k d_k)) - \alpha_k \tau_k (\nabla f(x_k) - g_k)^T d_k.$$
Proof. Consider arbitrary $k \in \mathbb{N}$ in any run under Assumption 5. As in the proof of Lemma 9, from (21)-(22) and the supposition about $\{\beta_k\}$, one finds $\alpha_k \in (0,1]$. Hence, with (19), one has
$$\begin{aligned}
\phi(x_k, \tau_k) - \phi(x_k + \alpha_k d_k, \tau_k) &\ge -\alpha_k(\tau_k \nabla f(x_k)^T d_k - \|c_k\|_2 + \|c_k + J_k d_k\|_2) - \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 \\
&= \alpha_k \Delta l(x_k, \tau_k, g_k, d_k) - \tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 - \alpha_k \tau_k (\nabla f(x_k) - g_k)^T d_k.
\end{aligned}$$
Now following the same arguments as in the proof of Lemma 9, it follows that $-\tfrac{1}{2}(\tau_k L + \Gamma)\alpha_k^2 \|d_k\|_2^2 \ge -(1-\eta)\alpha_k\beta_k \Delta l(x_k, \tau_k, g_k, d_k)$, which combined with the above yields the desired conclusion.
Our next result yields a bound on the final term in the conclusion of Lemma 13.
Lemma 14. For any run under Assumption 5, there exists $\kappa_\beta \in \mathbb{R}_{>0}$ such that
$$\alpha_k \tau_k (\nabla f(x_k) - g_k)^T d_k \le \begin{cases} \beta_k \tau_k \kappa_\beta & \text{for all } k \in \mathbb{N} \text{ such that } \|u_k\|_2^2 < \chi_k \|v_k\|_2^2 \\ \beta_k \tau_k \max\{\beta_k, \tau_k\} \kappa_\beta & \text{for all } k \in \mathbb{N} \text{ such that } \|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2. \end{cases}$$
Proof. The existence of $\kappa_d \in \mathbb{R}_{>0}$ such that, in any run under Assumption 5, one finds $\|d_k\|_2 \le \kappa_d$ for all $k \in \mathbb{N}$ follows from Assumption 5, the fact that $\|d_k\|_2^2 = \|v_k\|_2^2 + \|u_k\|_2^2$ for all $k \in \mathbb{N}$, Lemma 7, Lemma 1, and Assumption 1. Now consider arbitrary $k \in \mathbb{N}$ in any run under Assumption 5. If $(\nabla f(x_k) - g_k)^T d_k < 0$, then the desired conclusion follows trivially (for any $\kappa_\beta \in \mathbb{R}_{>0}$). Hence, let us proceed under the assumption that $(\nabla f(x_k) - g_k)^T d_k \ge 0$. If $\|u_k\|_2^2 < \chi_k \|v_k\|_2^2$, then it follows from (22), the facts that $0 \le \tau_k$, $\xi_k \le \xi_{-1}$, and $\beta_k \le 1$ for all $k \in \mathbb{N}$, the Cauchy-Schwarz inequality, and Assumption 5 that
$$\alpha_k \tau_k (\nabla f(x_k) - g_k)^T d_k \le \left( \frac{2(1-\eta)\beta_k \xi_k}{\tau_k L + \Gamma} + \theta\beta_k^2 \right) \tau_k \|\nabla f(x_k) - g_k\|_2 \|d_k\|_2 \le \left( \frac{2(1-\eta)\xi_{-1}}{\Gamma} + \theta \right) \beta_k \tau_k \sqrt{M} \kappa_d.$$
On the other hand, if $\|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2$, then it follows under the same reasoning that
$$\alpha_k \tau_k (\nabla f(x_k) - g_k)^T d_k \le \left( \frac{2(1-\eta)\beta_k \xi_k \tau_k}{\tau_k L + \Gamma} + \theta\beta_k^2 \right) \tau_k \|\nabla f(x_k) - g_k\|_2 \|d_k\|_2 \le \left( \frac{2(1-\eta)\xi_{-1}}{\Gamma} + \theta \right) \beta_k \tau_k \max\{\beta_k, \tau_k\} \sqrt{M} \kappa_d.$$
Overall, the desired conclusion follows with $\kappa_\beta := \left( \frac{2(1-\eta)\xi_{-1}}{\Gamma} + \theta \right) \sqrt{M} \kappa_d$.
Our third result in this subsection offers a formula for a positive lower bound on the step size that is applicable at points that are not stationary for the constraint infeasibility measure. For this lemma and its subsequent consequences, we define for arbitrary $\gamma \in \mathbb{R}_{>0}$ the subset
$$\mathcal{X}_\gamma := \{x \in \mathbb{R}^n : \|J(x)^T c(x)\|_2 \ge \gamma\}. \tag{32}$$
Lemma 15. There exists $\underline{\alpha} \in \mathbb{R}_{>0}$ such that $\alpha_k \ge \underline{\alpha}\beta_k$ for each $k \in \mathbb{N}$ such that $\|u_k\|_2^2 < \chi_k \|v_k\|_2^2$. On the other hand, for each $\gamma \in \mathbb{R}_{>0}$, there exists $\underline{\alpha}_\gamma \in \mathbb{R}_{>0}$ (proportional to $\gamma^2$) such that $x_k \in \mathcal{X}_\gamma$ implies $\alpha_k \ge \min\{\underline{\alpha}_\gamma \beta_k, \underline{\alpha}_\gamma \beta_k \tau_k + \theta\beta_k^2\}$ whenever $\|u_k\|_2^2 \ge \chi_k \|v_k\|_2^2$.
Proof. Define $\mathcal{K}_\gamma := \{k \in \mathbb{N} : x_k \in \mathcal{X}_\gamma\}$. By Lemma 1, it follows that $\|v_k\|_2 \ge \underline{\omega}\|J_k^T c_k\|_2^2 \ge \underline{\omega}\gamma^2$ for all $k \in \mathcal{K}_\gamma$. Consequently, by Lemma 7, it follows that
$$\|u_k\|_2 \le \frac{\kappa_u}{\underline{\omega}\gamma^2}\|v_k\|_2 \quad \text{for all } k \in \mathcal{K}_\gamma. \tag{33}$$
It follows from (22) that $\alpha_k \ge 2(1-\eta)\beta_k\xi_k/(\tau_k L + \Gamma)$ whenever $\|u_k\|_2^2 < \chi_k\|v_k\|_2^2$. Otherwise, whenever $\|u_k\|_2^2 \ge \chi_k\|v_k\|_2^2$, it follows using the arguments in Lemma 9 and (22) that
$$\alpha_k = \min\left\{ \frac{2(1-\eta)\beta_k \Delta l(x_k, \tau_k, g_k, d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2}, \frac{2(1-\eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} + \theta\beta_k^2, 1 \right\},$$
which along with (14), Lemma 1, (2), and (33) implies that
$$\begin{aligned}
\alpha_k &\ge \min\left\{ \frac{2(1-\eta)\beta_k\sigma(\|c_k\|_2 - \|c_k + J_kv_k\|_2)}{(\tau_k L + \Gamma)(\|u_k\|_2^2 + \|v_k\|_2^2)}, \frac{2(1-\eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} + \theta\beta_k^2, 1 \right\} \\
&\ge \min\left\{ \frac{2(1-\eta)\beta_k\sigma\kappa_v\|J_k^Tc_k\|_2^2}{(\tau_k L + \Gamma)\left(\frac{\kappa_u^2}{\underline{\omega}^2\gamma^4}+1\right)\omega^2\|c_k\|_2\|J_k^Tc_k\|_2^2}, \frac{2(1-\eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} + \theta\beta_k^2, 1 \right\} \\
&\ge \min\left\{ \frac{2(1-\eta)\beta_k\sigma\kappa_v\underline{\omega}^2\gamma^4}{(\tau_k L + \Gamma)\kappa_c\omega^2(\kappa_u^2 + \underline{\omega}^2\gamma^4)}, \frac{2(1-\eta)\beta_k\xi_k\tau_k}{\tau_k L + \Gamma} + \theta\beta_k^2, 1 \right\}.
\end{aligned}$$
Combining the cases above with Lemma 3 yields the desired conclusion.
We now prove our main theorem of this subsection.
Theorem 3. Suppose that Assumption 5 holds, the sequence $\{\beta_k\}$ is chosen such that (28) holds for all $k \in \mathbb{N}$, and either
(a) $\beta_k = \beta \in (0,1)$ for all $k \in \mathbb{N}$, or
(b) $\sum_{k=0}^\infty \beta_k = \infty$, $\sum_{k=0}^\infty \beta_k^2 < \infty$, and either $|\{k \in \mathbb{N} : \|u_k\|_2^2 < \chi_k\|v_k\|_2^2\}| = \infty$ or $\sum_{k=0}^\infty \beta_k\tau_k = \infty$.
Then, $\liminf_{k\to\infty} \|J_k^T c_k\|_2 = 0$.
Proof. To derive a contradiction, suppose that there exist $k_\gamma \in \mathbb{N}$ and $\gamma \in \mathbb{R}_{>0}$ such that $x_k \in \mathcal{X}_\gamma$ for all $k \ge k_\gamma$. Our aim is to show that, under condition (a) or (b), a contradiction is reached. First, suppose that condition (a) holds. By Lemmas 13-15, (2), (14), the fact that $\beta \in (0,1)$, Lemma 1, and Assumption 1, it follows that there exists $\underline{\alpha}_\gamma \in \mathbb{R}_{>0}$ such that
$$\begin{aligned}
\|c_k\|_2 - \|c_{k+1}\|_2 &\ge \alpha_k(1-(1-\eta)\beta)\Delta l(x_k, \tau_k, g_k, d_k) - \tau_k(f_k - f_{k+1}) - \alpha_k\tau_k(\nabla f(x_k) - g_k)^Td_k \\
&\ge \underline{\alpha}_\gamma\beta\eta\sigma(\|c_k\|_2 - \|c_k + J_kv_k\|_2) - \tau_k(f_{\sup} - f_{\inf}) - \beta\tau_k\max\{1,\tau_k\}\kappa_\beta \\
&\ge \underline{\alpha}_\gamma\beta\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2 - \tau_k(f_{\sup} - f_{\inf} + \beta\max\{1,\tau_k\}\kappa_\beta) \quad \text{for all } k \ge k_\gamma.
\end{aligned} \tag{34}$$
Since $\|J_k^Tc_k\|_2 \ge \gamma$ for all $k \ge k_\gamma$ and $\{\tau_k\} \searrow 0$ under Assumption 5, it follows that there exists $k_\tau \ge k_\gamma$ such that $\tau_k(f_{\sup} - f_{\inf} + \beta\max\{1,\tau_k\}\kappa_\beta) \le \tfrac{1}{2}\underline{\alpha}_\gamma\beta\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2$ for all $k \ge k_\tau$. Hence, summing (34) for $j \in \{k_\tau, \dots, k\}$, it follows with (2) that
$$\kappa_c \ge \|c_{k_\tau}\|_2 - \|c_{k+1}\|_2 \ge \tfrac{1}{2}\underline{\alpha}_\gamma\beta\eta\sigma\kappa_v\kappa_c^{-1}\sum_{j=k_\tau}^{k}\|J_j^Tc_j\|_2^2.$$
It follows from this fact that $\{J_k^Tc_k\}_{k \ge k_\tau} \to 0$, yielding the desired contradiction. Second, suppose that condition (b) holds. Since $\sum_{k=0}^\infty \beta_k^2 < \infty$, it follows that there exists $k_\beta \in \mathbb{N}$ with $k_\beta \ge k_\gamma$ such that $1 - (1-\eta)\beta_k \ge \eta$ for all $k \ge k_\beta$. Hence, for all $k \ge k_\beta$ with $\|u_k\|_2^2 < \chi_k\|v_k\|_2^2$, it follows that
$$\begin{aligned}
\|c_k\|_2 - \|c_{k+1}\|_2 &\ge \alpha_k(1-(1-\eta)\beta_k)\Delta l(x_k, \tau_k, g_k, d_k) - \tau_k(f_k - f_{k+1}) - \alpha_k\tau_k(\nabla f(x_k) - g_k)^Td_k \\
&\ge \beta_k\underline{\alpha}\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2 - \tau_k(f_k - f_{\inf}) + \tau_k(f_{k+1} - f_{\inf}) - \beta_k\tau_k\kappa_\beta \\
&\ge \beta_k\underline{\alpha}\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2 - \tau_{k-1}(f_k - f_{\inf}) + \tau_k(f_{k+1} - f_{\inf}) - \beta_k\tau_k\kappa_\beta.
\end{aligned}$$
Similarly, for all sufficiently large $k \ge k_\beta$ (specifically, $k \ge \bar k_\beta$, where $\bar k_\beta \in \mathbb{N}$ is sufficiently large such that $\bar k_\beta \ge k_\beta$ and $\underline{\alpha}_\gamma \ge \underline{\alpha}_\gamma\tau_k + \theta\beta_k$) with $\|u_k\|_2^2 \ge \chi_k\|v_k\|_2^2$, similar reasoning yields
$$\begin{aligned}
\|c_k\|_2 - \|c_{k+1}\|_2 &\ge \alpha_k(1-(1-\eta)\beta_k)\Delta l(x_k, \tau_k, g_k, d_k) - \tau_k(f_k - f_{k+1}) - \alpha_k\tau_k(\nabla f(x_k) - g_k)^Td_k \\
&\ge \beta_k\max\{\beta_k,\tau_k\}\min\{\underline{\alpha}_\gamma,\theta\}\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2 - \tau_{k-1}(f_k - f_{\inf}) + \tau_k(f_{k+1} - f_{\inf}) - \beta_k\max\{\beta_k,\tau_k\}\tau_k\kappa_\beta.
\end{aligned}$$
Since $\|J_k^Tc_k\|_2 \ge \gamma$ for all $k \ge \bar k_\beta \ge k_\beta \ge k_\gamma$ and $\{\tau_k\} \searrow 0$ under Assumption 5, it follows that there exists $k_\tau \ge \bar k_\beta$ such that $\tau_k\kappa_\beta \le \tfrac{1}{2}\underline{\alpha}\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2$ and $\tau_k\kappa_\beta \le \tfrac{1}{2}\min\{\underline{\alpha}_\gamma,\theta\}\eta\sigma\kappa_v\kappa_c^{-1}\|J_k^Tc_k\|_2^2$ for all $k \ge k_\tau$. Hence, letting $\mathcal{K}_u := \{k \in \mathbb{N} : \|u_k\|_2^2 \ge \chi_k\|v_k\|_2^2\}$ and $\mathcal{K}_v := \{k \in \mathbb{N} : \|u_k\|_2^2 < \chi_k\|v_k\|_2^2\}$, summing over $j \in \{k_\tau, \dots, k\}$ yields
$$\begin{aligned}
\kappa_c \ge \|c_{k_\tau}\|_2 - \|c_{k+1}\|_2 &\ge -\tau_{k_\tau-1}(f_{k_\tau} - f_{\inf}) + \tau_k(f_{k+1} - f_{\inf}) + \sum_{\substack{j=k_\tau \\ j \in \mathcal{K}_v}}^{k}\beta_j\left(\underline{\alpha}\eta\sigma\kappa_v\kappa_c^{-1}\|J_j^Tc_j\|_2^2 - \tau_j\kappa_\beta\right) \\
&\quad + \sum_{\substack{j=k_\tau \\ j \in \mathcal{K}_u}}^{k}\beta_j\max\{\beta_j,\tau_j\}\left(\min\{\underline{\alpha}_\gamma,\theta\}\eta\sigma\kappa_v\kappa_c^{-1}\|J_j^Tc_j\|_2^2 - \tau_j\kappa_\beta\right) \\
&\ge -\tau_{k_\tau-1}(f_{k_\tau} - f_{\inf}) + \sum_{\substack{j=k_\tau \\ j \in \mathcal{K}_v}}^{k}\tfrac{1}{2}\beta_j\underline{\alpha}\eta\sigma\kappa_v\kappa_c^{-1}\|J_j^Tc_j\|_2^2 + \sum_{\substack{j=k_\tau \\ j \in \mathcal{K}_u}}^{k}\tfrac{1}{2}\beta_j\max\{\beta_j,\tau_j\}\min\{\underline{\alpha}_\gamma,\theta\}\eta\sigma\kappa_v\kappa_c^{-1}\|J_j^Tc_j\|_2^2.
\end{aligned} \tag{35}$$
It follows from this fact and the fact that either $|\mathcal{K}_v| = \infty$ or at least $\sum_{j=k_\tau, j \in \mathcal{K}_u}\beta_j\tau_j = \infty$ that $\{J_k^Tc_k\}_{k \ge k_\tau} \to 0$, yielding the desired contradiction.
There is one unfortunate case not covered by Theorem 3, namely, the case when $\{\beta_k\}$ diminishes (as in condition (b)), the search direction is tangentially dominated for all sufficiently large $k \in \mathbb{N}$, and $\sum_{k=0}^\infty \beta_k\tau_k < \infty$. One can see in the proof of the theorem why the desired conclusion, namely, that the limit inferior of $\{\|J_k^Tc_k\|_2\}$ is zero, does not necessarily follow in this setting: If, after some iteration, all search directions are tangentially dominated and $\sum_{k=0}^\infty \beta_k\tau_k < \infty$, then the coefficients on $\|J_k^Tc_k\|_2^2$ in (35) are summable, which means that there might not be a subsequence of $\{\|J_k^Tc_k\|_2^2\}$ that vanishes. Fortunately, however, this situation is detectable in practice, in the sense that one can detect it using computed quantities. In particular, if $\beta_k$ is below a small threshold, $\|J_k^Tc_k\|_2$ has remained above a threshold in all recent iterations, $\tau_k = O(\beta_k)$ in recent iterations, and the algorithm has computed tangentially dominated search directions in all recent iterations, then the algorithm may benefit by triggering a switch to a setting in which $\{\beta_k\}$ is kept constant in future iterations, in which case the desired conclusion follows under condition (a). Such a trigger arguably does not conflict much with Section 4.1, since the analysis in that section presumes that $\{\tau_k\}$ remains bounded away from zero, whereas here one has confirmed that $\tau_k \approx 0$.
Constant, Insufficiently Small Merit Parameter
Our goal now is to consider the event that the algorithm generates a merit parameter sequence that eventually remains constant, but at a value that is too large in the sense that the conditions of Assumption 4 do not hold. Such an event for the algorithm in [1] is addressed in Proposition 3.16 in that article, where under a reasonable assumption (paralleling (38a), which we discuss later on) it is shown that, in a given run of the algorithm, the probability is zero of the merit parameter settling on too large of a value. The same can be said of our algorithm, as discussed in this subsection. That said, this does not address what might be the total probability, over all runs of the algorithm, of the event that the merit parameter remains too large. We discuss in this section that, under reasonable assumptions, this total probability is zero, where a formal theorem and proof are provided in Appendix B.
For our purposes in this section, we make some mild simplifications. First, as shown in Lemmas 2 and 3, each of the sequences $\{\chi_k\}$, $\{\zeta_k\}$, and $\{\xi_k\}$ has a uniform bound that holds over any run of the algorithm. Hence, for simplicity, we shall assume that the initial values of these sequences are chosen such that they are constant over $k\in\mathbb{N}$. (Our discussions in this subsection can be generalized to situations when this is not the case; the conversation merely becomes more cumbersome, which we have chosen to avoid.) Second, it follows from properties of the deterministic instance of our algorithm (recall Theorem 1) that if a subsequence of $\{\tau_k^{\rm trial,true}\}$ converges to zero, then a subsequence of the sequence of minimum singular values of the constraint Jacobians $\{J_k\}$ vanishes as well. Hence, we shall consider in this subsection events in which there exists $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$ such that $\tau_k^{\rm trial,true}\geq\tau^{\rm trial,true}_{\min}$ for all $k\in\mathbb{N}$ in any run of the algorithm. (We will remark on the consequences of this assumption further in Section 4.4.) It follows from this and (12) that if the cardinality of the set of iteration indices $\{k\in\mathbb{N} : \tau_k < \tau_{k-1}\}$ ever exceeds
$$s(\tau^{\rm trial,true}_{\min}) := \left\lceil\frac{\log(\tau^{\rm trial,true}_{\min}/\tau_{-1})}{\log(1-\epsilon_\tau)}\right\rceil\in\mathbb{N},\tag{36}$$
then for all subsequent $k\in\mathbb{N}$ one has $\tau_{k-1}\leq\tau^{\rm trial,true}_{\min}\leq\tau_k^{\rm trial,true}$. This property of $s(\tau^{\rm trial,true}_{\min})$ is relevant in our event of interest for this subsection, which we now define.

Definition 1. The event $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ for some $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$ occurs in a run if and only if $\tau_k^{\rm trial,true}\geq\tau^{\rm trial,true}_{\min}$ for all $k\in\mathbb{N}$ and there exists an infinite index set $\mathcal{K}\subseteq\mathbb{N}$ such that $\tau_k^{\rm trial,true} < \tau_{k-1}$ for all $k\in\mathcal{K}$.
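As a quick numeric illustration of (36), consider the following sketch with hypothetical values of $\tau_{-1}$, $\epsilon_\tau$, and $\tau^{\rm trial,true}_{\min}$ (these are not values from our experiments):

```python
import math

# Illustrative evaluation of (36): the maximum number of merit parameter
# decreases before tau_{k-1} falls below tau_min^{trial,true}.
tau_init = 1.0   # tau_{-1}
eps_tau = 1e-2   # decrease factor, so tau_k <= (1 - eps_tau) tau_{k-1}
tau_min = 1e-6   # presumed lower bound tau_min^{trial,true}

s = math.ceil(math.log(tau_min / tau_init) / math.log(1.0 - eps_tau))
print(s)  # 1375: after this many decreases, tau_{k-1} <= tau_min
```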
Considering a given run of our algorithm in which it is presumed that $\tau_k^{\rm trial,true}\geq\tau^{\rm trial,true}_{\min}$ for some $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$ for all $k\in\mathbb{N}$, one has under a reasonable assumption (specifically, that (38a) in the lemma below holds for all $k\in\mathbb{N}$) that the probability is zero that $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ occurs. We prove this now using the same argument as in the proof of [1, Proposition 3.16]. For this, we require the following lemma, proved here for our setting, which is slightly different than for the algorithm in [1] (due to the slightly different formula for setting the merit parameter).
Lemma 16. For any $k\in\mathbb{N}$ in any run of the algorithm, it follows for any $p\in(0,1]$ that
$$\mathbb{P}_k\big[g_k^T d_k + u_k^T H_k u_k \geq \nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true}\big] \geq p \tag{38a}$$
implies
$$\mathbb{P}_k\big[\tau_k < \tau_{k-1} \,\big|\, \tau_k^{\rm trial,true} < \tau_{k-1}\big] \geq p. \tag{38b}$$

Proof. Consider any $k\in\mathbb{N}$ in any run of the algorithm such that $\tau_k^{\rm trial,true} < \tau_{k-1}\in\mathbb{R}_{>0}$. Then, it follows from (11) that $\tau_k^{\rm trial,true} < \infty$, $\nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true} > 0$, and
$$\tau_k^{\rm trial,true} = \frac{(1-\sigma)(\|c_k\|_2 - \|c_k + J_k d_k^{\rm true}\|_2)}{\nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true}} < \tau_{k-1},$$
from which it follows that
$$(1-\sigma)(\|c_k\|_2 - \|c_k + J_k d_k^{\rm true}\|_2) < \big(\nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true}\big)\tau_{k-1}.\tag{39}$$
If, in addition, a realization of $g_k$ yields
$$g_k^T d_k + u_k^T H_k u_k \geq \nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true},\tag{40}$$
then it follows from (39) and the fact that $J_k d_k^{\rm true} = J_k d_k$ that
$$(1-\sigma)(\|c_k\|_2 - \|c_k + J_k d_k\|_2) < (g_k^T d_k + u_k^T H_k u_k)\tau_{k-1}.$$
It follows from this inequality and Lemma 1 that $g_k^T d_k + u_k^T H_k u_k > 0$, and with (12) it holds that
$$\tau_k \leq \tau_k^{\rm trial} = \frac{(1-\sigma)(\|c_k\|_2 - \|c_k + J_k d_k\|_2)}{g_k^T d_k + u_k^T H_k u_k} < \tau_{k-1}.$$
Hence, conditioned on the event that $\tau_k^{\rm trial,true} < \tau_{k-1}$, one finds that (40) implies that $\tau_k < \tau_{k-1}$. Therefore, under the conditions of the lemma and the fact that, conditioned on the events leading up to iteration number $k$, one has that both $\tau_k^{\rm trial,true}$ and $\tau_{k-1}$ are deterministic, it follows that
$$\begin{aligned}\mathbb{P}_k[\tau_k < \tau_{k-1} \mid \tau_k^{\rm trial,true} < \tau_{k-1}] &\geq \mathbb{P}_k[g_k^T d_k + u_k^T H_k u_k \geq \nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true} \mid \tau_k^{\rm trial,true} < \tau_{k-1}]\\ &= \mathbb{P}_k[g_k^T d_k + u_k^T H_k u_k \geq \nabla f(x_k)^T d_k^{\rm true} + (u_k^{\rm true})^T H_k u_k^{\rm true}] \geq p,\end{aligned}$$
as desired.
We can now prove the following result for our algorithm. (We remark that [1] also discusses an illustrative example in which (38a) holds for all $k\in\mathbb{N}$; see Example 3.17 in that article.)

Proposition 1. If, in a given run of our algorithm, there exist $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$ and $p\in(0,1]$ such that $\tau_k^{\rm trial,true}\geq\tau^{\rm trial,true}_{\min}$ and (38a) hold for all $k\in\mathbb{N}$, then the probability is zero that the event $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ occurs in the run.

Proof. Under the conditions of the proposition, the conclusion follows from Lemma 16 using the same argument as in the proof of [1, Proposition 3.16].

The analysis above shows that if $\{\tau_k^{\rm trial,true}\}$ is bounded below uniformly by a positive real number, then the probability is zero that $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ occurs in a given run. From this property, it follows under this condition that the probability is zero that $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ occurs in a countable number of runs. However, this analysis does not address what may be the total probability, over all possible runs of the algorithm, that $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ may occur. (To understand this, recognize that a given run of the algorithm may be akin to a single realization from a continuous probability distribution. Since the probability of any given realization is zero, one cannot simply take the fact that the probability of $E_{\tau,\rm big}(\tau^{\rm trial,true}_{\min})$ occurring in a given run is zero to imply that the probability of such an event is zero over all possible runs, since there may be an uncountable number of them. Hence, an alternative approach needs to be taken.) Proving that, under certain assumptions, the total probability is zero that this event occurs requires careful consideration of the stochastic process generated by the algorithm, and in particular consideration of the filtration defined by the initial conditions and the sequence of stochastic gradient estimates that are generated by the algorithm. We prove in Appendix B a formal version of the following informally written theorem.
Theorem 4 (Informal version of Theorem 5 in Appendix B). If the true trial merit parameter sequence is bounded below by a positive real number and there exists p ∈ (0, 1] such that a condition akin to (38a) always holds, then the total probability of the event that the merit parameter sequence eventually remains constant at too large of a value (as in Definition 1) is zero.
The key to our proof of Theorem 5 is the construction of a tree to characterize the stochastic process generated by the algorithm in a manner that one can employ the multiplicative form of Chernoff's bound to capture the probability of having repeated missed opportunities to decrease the merit parameter when it would have been reduced if the true gradients were computed.
Complementary Events
Our analyses in Sections 4.1, 4.2, and 4.3 do not cover all possible events. Ignoring events in which the stochastic gradients are biased and/or have unbounded variance, the events that complement $E_{\tau,\rm low}$, $E_{\tau,\rm zero}$, and $E_{\tau,\rm big}$ are the following:

• $E_{\tau,\rm zero,bad}$: $\{\tau_k\}\searrow 0$ and for all $M\in\mathbb{R}_{>0}$ there exists $k\in\mathbb{N}$ such that $\|g_k - \nabla f(x_k)\|_2^2 > M$;

• $E_{\tau,\rm big,bad}$: $\{\tau_k^{\rm trial,true}\}\searrow 0$ and there exists $\tau_{\rm big}\in\mathbb{R}_{>0}$ such that $\tau_k = \tau_{\rm big}$ for all $k\in\mathbb{N}$.

The event $E_{\tau,\rm zero,bad}$ represents cases in which the merit parameter vanishes while the stochastic gradient estimates do not remain in a bounded set. The difficulty of proving a guarantee for this setting can be seen as follows. If the merit parameter vanishes, then this is an indication that less emphasis should be placed on the objective over the course of the optimization process, which may indicate that the constraints are infeasible or degenerate. However, if a subsequence of stochastic gradient estimates diverges at the same time, then each large (in norm) stochastic gradient estimate may suggest that a significant amount of progress can be made in reducing the objective function, despite the merit parameter having reached a small value (since it is vanishing). This disrupts the balance that the merit parameter attempts to negotiate between the objective and the constraint violation terms in the merit function. Our analysis of the event $E_{\tau,\rm zero}$ in Section 4.2 shows that if the stochastic gradient estimates remain bounded, then the algorithm can effectively transition to solving the deterministic problem of minimizing constraint violation. However, it remains an open question whether it is possible to obtain a similar guarantee if/when a subsequence of stochastic gradient estimates diverges. Ultimately, one can argue that scenarios of unbounded noise, such as described here, might only be of theoretical interest rather than real, practical interest. For instance, if $f$ is defined by a (large) finite sum of component functions whose gradients (evaluated at points in a set containing the iterates) are always contained in a ball of uniform radius about the gradient of $f$ -- a common scenario in practice -- then $E_{\tau,\rm zero,bad}$ cannot occur.

Now consider the event $E_{\tau,\rm big,bad}$. We have shown in Section 4.3 that under certain conditions, including if $\{\tau_k^{\rm trial,true}\}$ is bounded below by $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$, then $E_{\tau,\rm big}$ occurs with probability zero. However, this does not account for situations in which $\{\tau_k^{\rm trial,true}\}$ vanishes while $\{\tau_k\}$ does not. Nonetheless, we contend that $E_{\tau,\rm big,bad}$ can be ignored for practical purposes since the adverse effect that it may have on the algorithm is observable. In particular, if the merit parameter remains fixed at a value that is too large, then the worst that may occur is that $\{\|J_k^T c_k\|_2\}$ does not vanish. A practical implementation of the algorithm would monitor this quantity in any case (since, by Corollary 1, even in $E_{\tau,\rm low}$ one only knows that the limit inferior of the expectation of $\{\|J_k^T c_k\|_2\}$ vanishes) and reduce the merit parameter if progress toward reducing constraint violation is inadequate; one possible form of such a safeguard is sketched below. Hence, $E_{\tau,\rm big,bad}$ (and $E_{\tau,\rm big}$ for that matter) is an event that at most suggests practical measures of the algorithm that should be employed for $E_{\tau,\rm low}$ in any case.
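A minimal sketch of such a safeguard, with hypothetical window and threshold values of our own choosing (this is not part of Algorithm 1), is the following:

```python
def safeguard_merit_parameter(tau, jc_norm_hist, window=100,
                              jc_tol=1e-4, eps_tau=1e-2):
    """A sketch of the practical measure described above: if ||J_k^T c_k||_2
    has not decreased adequately over the last `window` iterations, force a
    merit parameter reduction. All names and thresholds are illustrative."""
    if len(jc_norm_hist) >= window and min(jc_norm_hist[-window:]) > jc_tol:
        return (1.0 - eps_tau) * tau  # inadequate progress: reduce tau
    return tau
```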
Numerical Experiments
The goal of our numerical experiments is to compare the empirical performance of our proposed stochastic SQP method (Algorithm 1) against some alternative approaches on problems from a couple of test set collections. We implemented our algorithm in Matlab. Our code is publicly available. 1 We first consider equality constrained problems from the CUTEst collection [8], then consider two types of constrained logistic regression problems with datasets from the LIBSVM collection [3]. We compare the performance of our method versus a stochastic subgradient algorithm [6] employed to minimize the exact penalty function (10) and, in one set of our logistic regression experiments where it is applicable, versus a stochastic projected gradient method. These algorithms were chosen since, like our method, they operate in the highly stochastic regime. We do not compare against the aforementioned method from [15] since, as previously mentioned, that approach may refine stochastic gradient estimates during each iteration as needed by a line search. Hence, that method offers different types of convergence guarantees and is not applicable in our regime of interest.
In all of our experiments, results are given in terms of feasibility and stationarity errors at the best iterate, which is determined as follows. If, for a given problem instance, an algorithm produced an iterate that was sufficiently feasible in the sense that $\|c_k\|_\infty \leq 10^{-6}\max\{1,\|c_0\|_\infty\}$ for some $k\in\mathbb{N}$, then, with the largest $k\in\mathbb{N}$ satisfying this condition, the feasibility error was reported as $\|c_k\|_\infty$ and the stationarity error was reported as $\|\nabla f(x_k) + J_k^T y_k\|_\infty$, where $y_k$ was computed as a least-squares multiplier using the true gradient $\nabla f(x_k)$ and $J_k$. (The multiplier $y_k$ and corresponding stationarity error are not needed by our algorithm; they are computed merely so that we could record the error for our experimental results.) If, for a given problem instance, an algorithm did not produce a sufficiently feasible iterate, then the feasibility and stationarity errors were computed in the same manner at the least infeasible iterate (with respect to the measure of infeasibility $\|\cdot\|_\infty$).
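A minimal sketch of this reporting rule is given below; the function handles and names are ours, and the least-squares multiplier is computed with a generic solver rather than whatever routine our Matlab implementation uses:

```python
import numpy as np

def best_iterate_errors(iterates, f_grad, c_fun, c_jac, tol=1e-6):
    """Pick the last sufficiently feasible iterate (or the least infeasible
    one), then report ||c_k||_inf and ||grad f(x_k) + J_k^T y_k||_inf with
    y_k a least-squares multiplier computed from the true gradient."""
    c0 = np.linalg.norm(c_fun(iterates[0]), np.inf)
    feas = [np.linalg.norm(c_fun(x), np.inf) for x in iterates]
    ok = [k for k, v in enumerate(feas) if v <= tol * max(1.0, c0)]
    k = ok[-1] if ok else int(np.argmin(feas))  # last feasible, else least infeasible
    g, J = f_grad(iterates[k]), c_jac(iterates[k])
    y, *_ = np.linalg.lstsq(J.T, -g, rcond=None)  # least-squares multiplier
    return feas[k], np.linalg.norm(g + J.T @ y, np.inf)
```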
Implementation Details
For all methods, Lipschitz constant estimates for the objective gradient and constraint Jacobian-playing the roles of L and Γ, respectively-were computed using differences of gradients near the initial point. Once these values were computed, they were kept constant for all subsequent iterations. This procedure was performed in such a way that, for each problem instance, all algorithms used the same values for these estimates.
As mentioned in Section 3, there are various extensions of our step size selection scheme with which one can prove, with appropriate modifications to our analysis, comparable convergence guarantees as are offered by our algorithm. We included one such extension in our software implementation for our experiments. In particular, in addition to $\alpha_k^{\rm suff}$ in (21), one can directly consider the upper bound in (19) with the gradient $\nabla f(x_k)$ replaced by its estimate $g_k$, i.e.,
$$\begin{aligned}&\alpha\tau_k g_k^T d_k + |1-\alpha|\,\|c_k\|_2 - \|c_k\|_2 + \alpha\|c_k + J_k d_k\|_2 + \tfrac12(\tau_k L + \Gamma)\alpha^2\|d_k\|_2^2\\ &\quad= -\alpha\Delta l(x_k,\tau_k,g_k,d_k) + |1-\alpha|\,\|c_k\|_2 - (1-\alpha)\|c_k\|_2 + \tfrac12(\tau_k L + \Gamma)\alpha^2\|d_k\|_2^2,\end{aligned}$$
and consider the step size that minimizes this as a function of $\alpha$ (with scale factor $\beta_k$), namely,
$$\alpha_k^{\min} := \max\left\{\min\left\{\frac{\beta_k\Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L + \Gamma)\|d_k\|_2^2},\,1\right\},\ \frac{\beta_k\Delta l(x_k,\tau_k,g_k,d_k) - 2\|c_k\|_2}{(\tau_k L + \Gamma)\|d_k\|_2^2}\right\}.\tag{41}$$
(Such a value is used in [1].) The algorithm can then set a trial step size as any value satisfying
$$\alpha_k^{\rm trial} \in \big[\min\{\alpha_k^{\rm suff},\alpha_k^{\min}\},\ \max\{\alpha_k^{\rm suff},\alpha_k^{\min}\}\big]\tag{42}$$
and set $\alpha_k$ as the projection of this value, rather than $\alpha_k^{\rm suff}$, for all $k\in\mathbb{N}$. (The projection interval in (22) should be modified, specifically with each instance of $2(1-\eta)$ replaced by $\min\{2(1-\eta),1\}$, to account for the fact that the lower value in (42) may be smaller than $\alpha_k^{\rm suff}$. A similar modification is needed in the analysis, specifically in the requirements for $\{\beta_k\}$ in Lemma 9.)
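The following sketch illustrates the computation of $\alpha_k^{\min}$ from (41) and a choice from the interval in (42); the names are ours and the selection of the midpoint is arbitrary, purely for illustration:

```python
import numpy as np

def trial_step_size(beta, dl, tau, L, Gamma, d, c, alpha_suff):
    """Compute alpha_k^min as in (41) and return a trial step size from the
    interval (42). Here `dl` is Delta l(x_k, tau_k, g_k, d_k) and `c` is
    c(x_k); this is a sketch, not the implementation used in our experiments."""
    denom = (tau * L + Gamma) * np.dot(d, d)
    alpha_min = max(min(beta * dl / denom, 1.0),
                    (beta * dl - 2.0 * np.linalg.norm(c)) / denom)
    lo, hi = min(alpha_suff, alpha_min), max(alpha_suff, alpha_min)
    return 0.5 * (lo + hi)  # any value in [lo, hi] is admissible per (42)
```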
One can also consider rules that allow even larger step sizes to be taken. For example, rather than consider the upper bound offered by the last expression in (19), one can consider any step size that ensures that the penultimate expression in (19) is less than or equal to the right-hand side of (20) with $\nabla f(x_k)$ replaced by $g_k$. Such a value can be found with a one-dimensional search over $\alpha$ with negligible computational cost. Our analysis can be extended to account for this option as well. However, for our experimental purposes here, we do not consider such an approach.
For our stochastic SQP method, we set $H_k \leftarrow I$ and $\alpha_k^{\rm trial} \leftarrow \max\{\alpha_k^{\rm suff},\alpha_k^{\min}\}$ for all $k\in\mathbb{N}$. Other parameters were set as $\tau_{-1}\leftarrow 1$, $\chi_{-1}\leftarrow 10^{-3}$, $\zeta_{-1}\leftarrow 10^{3}$, $\xi_{-1}\leftarrow 1$, $\omega\leftarrow 10^{2}$, $v\leftarrow 1$, $\sigma\leftarrow 1/2$, $\epsilon_\tau\leftarrow 10^{-2}$, $\epsilon_\chi\leftarrow 10^{-2}$, $\epsilon_\zeta\leftarrow 10^{-2}$, $\epsilon_\xi\leftarrow 10^{-2}$, $\eta\leftarrow 1/2$, and $\theta\leftarrow 10^{4}$. For the stochastic subgradient method, the merit parameter value and step size were tuned for each problem instance, and for the stochastic projected gradient method, the step size was tuned for each problem instance; details are given in the following subsections. In all experiments, both the stochastic subgradient and stochastic projected gradient methods were given many more iterations to find each of their best iterates for a problem instance; this is reasonable since the search direction computation for our method is more expensive than for the other methods. Again, further details are given below.
CUTEst problems
In our first set of experiments, we consider equality constrained problems from the CUTEst collection. Specifically, of the 136 such problems in the collection, we selected those for which (i) $f$ is not a constant function, and (ii) $n + m + 1 \leq 1000$. This selection resulted in a set of 67 problems. In order to consider the context in which the LICQ does not hold, for each problem we duplicated the last constraint. (This does not affect the feasible region nor the set of stationary points, but ensures that the problem instances are degenerate.) Each problem comes with an initial point, which we used in our experiments. To make each problem stochastic, we added noise to each gradient computation. Specifically, for each run of an algorithm, we fixed a noise level as $\epsilon_N\in\{10^{-8},10^{-4},10^{-2},10^{-1}\}$, and in each iteration set the stochastic gradient estimate as $g_k \sim \mathcal{N}(\nabla f(x_k), \epsilon_N I)$. For each problem and noise level, we ran 10 instances with different random seeds. This led to a total of 670 runs of each algorithm for each noise level.
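As an illustration of this setup, a stochastic gradient draw could be generated as follows (a sketch; `grad_f` and the seed are placeholders of our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(grad_f, x, eps_N):
    """Stochastic gradient used in the CUTEst experiments: a draw from
    N(grad f(x_k), eps_N * I) for a fixed noise level eps_N."""
    g = grad_f(x)
    return g + np.sqrt(eps_N) * rng.standard_normal(g.shape)
```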
We set a budget of 1000 iterations for our stochastic SQP algorithm and a more generous budget of 10000 iterations for the stochastic subgradient method. We followed the same strategy as in [1] to tune the merit parameter $\tau$ for the stochastic subgradient method, but also tuned the step sizes through the sequence $\{\beta_k\}$. Specifically, for each problem instance, we ran the stochastic subgradient method for 11 different values of $\tau$ and 4 different values of $\beta$, namely, $\tau\in\{10^{-10},10^{-9},\dots,10^{0}\}$ and $\beta\in\{10^{-3},10^{-2},10^{-1},10^{0}\}$, set the step size as $\frac{\beta\tau}{\tau L+\Gamma}$, and selected the combination of $\tau$ and $\beta$ for that problem instance that led to the best iterate overall. (We found through this process that the selected $(\tau,\beta)$ pairs were relatively evenly distributed over their ranges, meaning that this extensive tuning effort was useful to obtain better results for the stochastic subgradient method.) For our stochastic SQP method, we set $\beta_k \leftarrow 1$ for all $k\in\mathbb{N}$. Overall, between the additional iterations allowed in each run of the stochastic subgradient method, the different merit parameter values tested, and the different step sizes tested, the stochastic subgradient method was given 440 times the number of iterations that were given to our stochastic SQP method for each problem. The results of this experiment are reported in the form of box plots in Figure 1.

Figure 1: Box plots for feasibility errors (left) and stationarity errors (right) when our stochastic SQP method and a stochastic subgradient method are employed to solve equality constrained problems from the CUTEst collection.

One finds that the best iterates from our stochastic SQP algorithm generally correspond to much lower feasibility and stationarity errors for all noise levels. The stationarity errors for our method degrade as the noise level increases, but this is not surprising since these experiments are run with $\{\beta_k\}$ being a constant sequence. It is interesting, however, that our algorithm typically finds iterates that are sufficiently feasible, even for relatively high noise levels. This shows that our approach handles the deterministic constraints well despite the stochasticity of the objective gradient estimates. Finally, we remark that for these experiments our algorithm found $\tau_{k-1}\leq\tau_k^{\rm trial,true}$ to hold in roughly 98% of all iterations for all runs (across all noise levels), and found this inequality to hold in the last 50 iterations in 100% of all runs. This provides evidence for our claim that the merit parameter not reaching a sufficiently small value is not an issue of practical concern.
Constrained Logistic Regression
In our next sets of experiments, we consider equality constrained logistic regression problems of the form
$$\min_{x\in\mathbb{R}^n}\ f(x) = \frac{1}{N}\sum_{i=1}^{N}\log\big(1 + e^{-y_i(X_i^T x)}\big)\quad{\rm s.t.}\quad Ax = b,\ \|x\|_2^2 = 1,\tag{43}$$
where $X\in\mathbb{R}^{n\times N}$ contains feature data for $N$ data points (with $X_i$ representing the $i$th column of $X$), $y\in\{-1,1\}^N$ contains corresponding label data, $A\in\mathbb{R}^{(m+1)\times n}$, and $b\in\mathbb{R}^{m+1}$. For instances of $(X,y)$, we consider 11 binary classification datasets from the LIBSVM collection [3]; specifically, we consider all of the datasets for which $12\leq n\leq 1000$ and $256\leq N\leq 100000$. (For datasets with multiple versions, e.g., the {a1a, ..., a9a} datasets, we consider only the largest version.) The names of the datasets that we used and their sizes are given in Table 1. For the linear constraints, we generated random $A$ and $b$ for each problem. Specifically, the first $m = 10$ rows of $A$ and first $m$ entries in $b$ were set as random values with each entry being drawn from a standard normal distribution. Then, to ensure that the LICQ was not satisfied (at any algorithm iterate), we duplicated the last constraint, making $m+1$ linear constraints overall. For all problems and algorithms, the initial iterate was set to the vector of all ones of appropriate dimension.

For one set of experiments, we consider problems of the form (43) except without the norm constraint. For this set of experiments, the performance of all three algorithms -- stochastic SQP, subgradient, and projected gradient -- is compared. For each dataset, we considered two noise levels, where the level is dictated by the mini-batch size of each stochastic gradient estimate (recall (8)). For the mini-batch sizes, we employed $b_k\in\{16,128\}$ for all problems. For each dataset and mini-batch size, we ran 5 instances with different random seeds.
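For concreteness, the following sketch (our own illustrative code, not taken from the Matlab implementation) shows how a mini-batch gradient estimate for the objective in (43) can be formed; the variable names and the sampling without replacement are assumptions on our part:

```python
import numpy as np

def minibatch_logistic_grad(X, y, x, batch, rng):
    """Mini-batch estimate of grad f for the objective in (43), where the
    columns of X are the data points X_i and y holds labels in {-1, +1}."""
    idx = rng.choice(X.shape[1], size=batch, replace=False)
    Xb, yb = X[:, idx], y[idx]   # sampled columns and labels
    z = -yb * (Xb.T @ x)         # margins, one per sampled point
    w = yb / (1.0 + np.exp(-z))  # per-point factors y_i * sigma(-y_i X_i^T x)
    return -(Xb @ w) / batch     # average of grad log(1 + e^{-y_i X_i^T x})
```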
A budget of 5 epochs (i.e., number of effective passes over the dataset) was used for all methods. For our stochastic SQP method, we used $\beta_k = 10^{-1}$ for all $k\in\mathbb{N}$. For the stochastic subgradient method, the merit parameter and step size were tuned like in Section 5.2 over the sets $\beta\in\{10^{-3},10^{-2},10^{-1},10^{0}\}$ and $\tau\in\{10^{-3},10^{-2},10^{-1},10^{0}\}$. For the stochastic projected gradient method, the step size was tuned using the formula $\frac{\beta}{L}$ over $\beta\in\{10^{-8},10^{-7},\dots,10^{1},10^{2}\}$. Overall, this meant that the stochastic subgradient and stochastic projected gradient methods were effectively run for 16 and 11 times the number of epochs, respectively, that were allowed for our method.
The results for this experiment are reported in Table 2. For every dataset and mini-batch size, we report the average feasibility and stationarity errors for the best iterates of each run along with a 95% confidence interval. The results show that our method consistently outperforms the two alternative approaches despite the fact that each of the other methods was tuned with various choices of the merit and/or step size parameter.

Table 2: Average feasibility and stationarity errors, along with 95% confidence intervals, when our stochastic SQP method, a stochastic subgradient method, and a stochastic projected gradient method are employed to solve logistic regression problems with linear constraints (only). The results for the best-performing algorithm are shown in bold.

For a second set of experiments, we consider problems of the form (43) with the norm constraint. The settings for the experiment were the same as above, except that the stochastic projected gradient method is not considered. The results are stated in Table 3. Again, our method regularly outperforms the stochastic subgradient method in terms of the best iterates found. For the experiments without the norm constraint, our algorithm found $\tau_{k-1}\leq\tau_k^{\rm trial,true}$ to hold in roughly 98% of all iterations for all runs, and found this inequality to hold in all iterations in the last epoch in 100% of all runs. With the norm constraint, our algorithm found $\tau_{k-1}\leq\tau_k^{\rm trial,true}$ to hold in roughly 97% of all iterations for all runs, and found this inequality to hold in all iterations in the last epoch in 99% of all runs.
Conclusion
We have proposed, analyzed, and tested a stochastic SQP method for solving equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. Our algorithm is specifically designed for cases when the LICQ does not necessarily hold in every iteration. The convergence guarantees that we have proved for our method consider situations when the merit parameter sequence eventually remains fixed at a value that is sufficiently small, in which case the algorithm drives stationarity measures for the constrained optimization problem to zero, and situations when the merit parameter vanishes, which may indicate that the problem is degenerate and/or infeasible. Numerical experiments demonstrate that our algorithm consistently outperforms alternative approaches in the highly stochastic regime.
A Deterministic Analysis
In this appendix, we prove that Theorem 1 holds, where in particular we consider the context when $g_k = \nabla f(x_k)$ and $\beta_k = \beta$ satisfy (23) for all $k\in\mathbb{N}$. For this purpose, we introduce a second termination condition in Algorithm 1. In particular, after line 7, we terminate the algorithm if both $\|g_k + J_k^T y_k\|_2 = 0$ and $\|c_k\|_2 = 0$. In this manner, if the algorithm terminates finitely, then it returns an infeasible stationary point (recall (4)) or primal-dual stationary point for problem (1) and there is nothing left to prove. Hence, without loss of generality, we proceed under the assumption that the algorithm runs for all $k\in\mathbb{N}$.

Throughout our analysis in this appendix, we simply refer to the tangential direction as $u_k$, the full search direction as $d_k = v_k + u_k$, etc., even though it is assumed throughout this appendix that these are the true quantities computed using the true gradient $\nabla f(x_k)$ for all $k\in\mathbb{N}$.

It follows in this context that both Lemma 1 and Lemma 2 hold. In addition, Lemma 3 holds, where, in the proof, the case that $d_k = 0$ can be ignored due to the following lemma.

Lemma 17. For all $k\in\mathbb{N}$, one finds that $d_k = v_k + u_k \neq 0$.

Proof. For all $k\in\mathbb{N}$, the facts that $v_k\in\mathrm{Range}(J_k^T)$ and $u_k\in\mathrm{Null}(J_k)$ imply $d_k = v_k + u_k = 0$ if and only if $v_k = 0$ and $u_k = 0$. Since we suppose in our analysis that the algorithm does not terminate finitely with an infeasible stationary point, it follows for all $k\in\mathbb{N}$ that $\|J_k^T c_k\|_2 > 0$ or $\|c_k\|_2 = 0$. If $\|J_k^T c_k\|_2 > 0$, then Lemma 1 implies that $v_k \neq 0$, and the desired conclusion follows. Hence, we may proceed under the assumption that $\|c_k\|_2 = 0$. In this case, it follows under Assumption 3 that $g_k + J_k^T y_k = 0$ if and only if $u_k = 0$, which under our supposition that the algorithm does not terminate finitely means that $u_k \neq 0$.
We now prove a lower bound on the reduction in the merit function that occurs in each iteration. This is a special case of Lemmas 9 and 13 for the deterministic setting.
Lemma 18. For all $k\in\mathbb{N}$, it holds that $\phi(x_k,\tau_k) - \phi(x_k+\alpha_k d_k,\tau_k) \geq \eta\alpha_k\Delta l(x_k,\tau_k,g_k,d_k)$.

Proof. For all $k\in\mathbb{N}$, it follows by the definition of $\alpha_k^{\rm suff}$ that (recall (20))
$$\phi(x_k+\alpha d_k,\tau_k) - \phi(x_k,\tau_k) \leq -\eta\alpha\Delta l(x_k,\tau_k,g_k,d_k)\quad\text{for all }\alpha\in[0,\alpha_k^{\rm suff}].$$
If $\|u_k\|_2^2 \geq \chi_k\|v_k\|_2^2$, then the only way that $\alpha_k > \alpha_k^{\rm suff}$ is if
$$\frac{2(1-\eta)\beta\xi_k\tau_k}{\tau_k L+\Gamma} > \min\left\{\frac{2(1-\eta)\beta\Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L+\Gamma)\|d_k\|_2^2},\,1\right\}.$$
By (23), the left-hand side of this inequality is less than 1, meaning $\alpha_k > \alpha_k^{\rm suff}$ only if
$$\frac{2(1-\eta)\beta\xi_k\tau_k}{\tau_k L+\Gamma} > \frac{2(1-\eta)\beta\Delta l(x_k,\tau_k,g_k,d_k)}{(\tau_k L+\Gamma)\|d_k\|_2^2} \iff \xi_k\tau_k > \frac{\Delta l(x_k,\tau_k,g_k,d_k)}{\|d_k\|_2^2}.$$
However, this is not true since $\xi_k \leq \xi_k^{\rm trial}$ for all $k\in\mathbb{N}$. Following a similar argument for the case when $\|u_k\|_2^2 < \chi_k\|v_k\|_2^2$, the desired conclusion follows.
For our purposes going forward, let us define the shifted merit function $\tilde\phi : \mathbb{R}^n\times\mathbb{R}_{\geq 0}\to\mathbb{R}$ by $\tilde\phi(x,\tau) = \tau(f(x) - f_{\inf}) + \|c(x)\|_2$.

Lemma 19. For all $k\in\mathbb{N}$, it holds that $\tilde\phi(x_k,\tau_k) - \tilde\phi(x_{k+1},\tau_{k+1}) \geq \eta\alpha_k\Delta l(x_k,\tau_k,g_k,d_k)$.

Proof. For arbitrary $k\in\mathbb{N}$, it follows from Lemma 18 that
$$\begin{aligned}\tau_{k+1}(f(x_k+\alpha_k d_k) - f_{\inf}) + \|c(x_k+\alpha_k d_k)\|_2 &\leq \tau_k(f(x_k+\alpha_k d_k) - f_{\inf}) + \|c(x_k+\alpha_k d_k)\|_2\\ &\leq \tau_k(f(x_k) - f_{\inf}) + \|c_k\|_2 - \eta\alpha_k\Delta l(x_k,\tau_k,g_k,d_k),\end{aligned}$$
from which the desired conclusion follows.
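Although not stated in this form in the text, summing the inequality of Lemma 19 and using $\tilde\phi\geq 0$ (which holds since $f(x)\geq f_{\inf}$ under Assumption 1) yields a telescoped bound that underlies the two lemmas that follow:
$$\eta\sum_{k=0}^{K}\alpha_k\,\Delta l(x_k,\tau_k,g_k,d_k) \leq \tilde\phi(x_0,\tau_0) - \tilde\phi(x_{K+1},\tau_{K+1}) \leq \tilde\phi(x_0,\tau_0)\quad\text{for all }K\in\mathbb{N}.$$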
We now prove our first main result of this appendix.
Lemma 20. The sequence $\{\|J_k^T c_k\|_2\}$ vanishes. Moreover, if there exist $k_J\in\mathbb{N}$ and $\sigma_J\in\mathbb{R}_{>0}$ such that the singular values of $J_k$ are bounded below by $\sigma_J$ for all $k\geq k_J$, then $\{\|c_k\|_2\}$ vanishes.

Proof. Let $\gamma\in\mathbb{R}_{>0}$ be arbitrary. Our aim is to prove that the number of iterations with $x_k\in X_\gamma$ (recall (32)) is finite. Since $\gamma$ has been chosen arbitrarily in $\mathbb{R}_{>0}$, the conclusion will follow. By Lemma 15 and the fact that $\{\beta_k\}$ is chosen as a constant sequence, it follows that there exists $\alpha\in\mathbb{R}_{>0}$ such that $\alpha_k\geq\alpha$ for all $k\in K_\gamma$ (regardless of whether the search direction is tangentially or normally dominated). Hence, using Lemmas 1 and 19, it follows that
$$\tilde\phi(x_k,\tau_k) - \tilde\phi(x_{k+1},\tau_{k+1}) \geq \eta\alpha\Delta l(x_k,\tau_k,g_k,d_k) \geq \eta\alpha\sigma(\|c_k\|_2 - \|c_k + J_k v_k\|_2) \geq \eta\alpha\sigma\kappa_v\kappa_c^{-1}\gamma^2.$$
Hence, the desired conclusion follows since $\{\tilde\phi(x_k,\tau_k)\}$ is monotonically nonincreasing by Lemma 19 and is bounded below under Assumption 1.
We now show a consequence of the merit parameter eventually remaining constant.
Lemma 21. If there exist $k_\tau\in\mathbb{N}$ and $\tau_{\min}\in\mathbb{R}_{>0}$ such that $\tau_k = \tau_{\min}$ for all $k\geq k_\tau$, then
$$0 = \lim_{k\to\infty}\|u_k\|_2 = \lim_{k\to\infty}\|d_k\|_2 = \lim_{k\to\infty}\|g_k + J_k^T y_k\|_2 = \lim_{k\to\infty}\|Z_k^T g_k\|_2.$$

Proof. Under Assumption 1 and the conditions of the lemma, Lemmas 15 and 19 imply that $\{\Delta l(x_k,\tau_k,g_k,d_k)\}\to 0$, which with (14) and Lemma 1 implies that $\{\|u_k\|_2\}\to 0$, $\{\|v_k\|_2\}\to 0$, and $\{\|J_k^T c_k\|_2\}\to 0$. The remainder of the conclusion follows from Assumption 3 and (9).
The proof of Theorem 1 can now be completed.
Proof of Theorem 1. The result follows from Lemmas 8, 20, and 21.
B Total Probability Result
In this appendix, we prove a formal version of Theorem 4, which is stated at the end of this appendix as Theorem 5. Toward this end, we formalize the quantities generated by Algorithm 1 as a stochastic process, namely,
$$\{(X_k, G_k, V_k, U_k, U_k^{\rm true}, D_k, D_k^{\rm true}, Y_k, Y_k^{\rm true}, T_k, T_k^{\rm trial,true}, X_k, Z_k, \Xi_k, A_k)\},$$
where, for all $k\in\mathbb{N}$, we denote the primal iterate as $X_k$, the stochastic gradient estimate as $G_k$, the normal search direction as $V_k$, the tangential search direction as $U_k$, the "true" tangential search direction as $U_k^{\rm true}$, the search direction as $D_k$, the "true" search direction as $D_k^{\rm true}$, the Lagrange multiplier estimate as $Y_k$, the "true" Lagrange multiplier estimate as $Y_k^{\rm true}$, the merit parameter as $T_k$, the "true" trial merit parameter as $T_k^{\rm trial,true}$, the curvature parameter as $X_k$, the curvature threshold parameter as $Z_k$, the ratio parameter as $\Xi_k$, and the step size as $A_k$. A realization of the $k$th element of this process are the quantities that have appeared throughout the paper, namely, $(x_k, g_k, v_k, u_k, u_k^{\rm true}, d_k, d_k^{\rm true}, y_k, y_k^{\rm true}, \tau_k, \tau_k^{\rm trial,true}, \chi_k, \zeta_k, \xi_k, \alpha_k)$. Algorithm 1's behavior is dictated entirely by the initial conditions (i.e., initial point and parameter values) as well as the sequence of stochastic gradient estimates; i.e., assuming for simplicity that the initial conditions are predetermined, a realization of $\{G_0,\dots,G_{k-1}\}$ determines the realizations of
$$\{X_j\}_{j=1}^{k}\quad{\rm and}\quad\{(V_j, U_j, U_j^{\rm true}, D_j, D_j^{\rm true}, Y_j, Y_j^{\rm true}, T_j, T_j^{\rm trial,true}, X_j, Z_j, \Xi_j, A_j)\}_{j=0}^{k-1}.$$
In the process of proving our main result (Theorem 5), we prove a set of lemmas about the behavior of the merit parameter sequence after a finite number of iterations. Specifically, we consider the behavior of Algorithm 1 when terminated at $k = k_{\max}\in\mathbb{N}$. With this consideration, we define a tree with a depth bounded by $k_{\max}$, which will be integral to our arguments in this section. The proof of Theorem 5 ultimately considers the behavior of the algorithm as $k_{\max}\to\infty$.
Let $\mathbb{I}[\cdot]$ denote the indicator function of an event and, for all $k\in[k_{\max}]$, define the random variables
$$Q_k := \mathbb{I}[T_k^{\rm trial,true} < T_{k-1}]\quad{\rm and}\quad W_k := \sum_{i=0}^{k-1}\mathbb{I}[T_i < T_{i-1}].$$
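To make the bookkeeping concrete, the following sketch (illustrative only; the names and index conventions are ours) computes the signature entries $q_i$ and $w_i$ from realized merit parameter sequences:

```python
def signature(tau_trial_true, tau):
    """Given realizations tau = (tau_{-1}, tau_0, ..., tau_k) and
    tau_trial_true = (tau^{trial,true}_0, ..., tau^{trial,true}_k), return
    q_i = I[tau^{trial,true}_i < tau_{i-1}] and the running counts
    w_i = sum_{j < i} I[tau_j < tau_{j-1}]. Here tau[0] stores tau_{-1}."""
    q = [int(t < tau[i]) for i, t in enumerate(tau_trial_true)]
    w, count = [], 0
    for i in range(len(tau_trial_true)):
        w.append(count)
        if tau[i + 1] < tau[i]:  # did iteration i decrease the merit parameter?
            count += 1
    return q, w
```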
Accordingly, for any realization of a run of Algorithm 1 and any $k\in[k_{\max}]$, the realization $(q_k, w_k)$ of $(Q_k, W_k)$ is determined at the beginning of iteration $k$. The signature of a realization up to iteration $k$ is $(q_0,\dots,q_k,w_0,\dots,w_k)$, which encodes all of the pertinent information regarding the behavior of the merit parameter sequence and these indicators up to the start of iteration $k$.

We use the set of all signatures to define a tree whereby each node contains a subset of all realizations of the algorithm. To construct the tree, we denote the root node by $N(q_0,w_0)$, where $q_0$ is the indicator of the event $\tau_0^{\rm trial,true} < \tau_{-1}$, which is deterministic based on the initial conditions of the algorithm, and $w_0 = 0$. All realizations of the algorithm follow the same initialization, so $q_0$ and $w_0$ are in the signature of every realization. Next, we define a node $N(q_{[k]},w_{[k]})$ at depth $k\in[k_{\max}]$ (where the root node has a depth of 0) in the tree as the set of all realizations of the algorithm for which the signature of the realization up to iteration $k$ is $(q_0,\dots,q_k,w_0,\dots,w_k)$. We define the edges in the tree by connecting nodes at adjacent levels, where node $N(q_{[k]},w_{[k]})$ is connected to node $N(q_{[k]},q_{k+1},w_{[k]},w_{k+1})$ for any $q_{k+1}\in\{0,1\}$ and $w_{k+1}\in\{w_k,w_k+1\}$.
Notationally, since the behavior of a realization of the algorithm up to iteration $k\in\mathbb{N}$ is completely determined by the initial conditions and the realization of $G_{[k-1]}$, we say that a realization described by $G_{[k-1]}$ belongs in node $N(q_{[k]},w_{[k]})$ by writing that $G_{[k-1]}\in N(q_{[k]},w_{[k]})$. The initial condition, denoted for consistency as $G_{[-1]}\in N(q_0,w_0)$, occurs with probability one. Based on the description above, the nodes of our tree satisfy the property that for any $k\geq 2$, the event $G_{[k-1]}\in N(q_{[k]},w_{[k]})$ occurs if and only if
$$Q_k = q_k,\quad W_k = w_k,\quad{\rm and}\quad G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]}).\tag{44}$$
Similar to Section 4.3, we consider the following event, under which the true trial merit parameter sequence $\{T_k^{\rm trial,true}\}$ is bounded below by a positive real number.

Definition 2. For some $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$, the event $E_\tau(\tau^{\rm trial,true}_{\min})$ occurs if and only if $T_k^{\rm trial,true}\geq\tau^{\rm trial,true}_{\min}$ for all $k\in\mathbb{N}$ in all realizations of a run of the algorithm. When this event occurs, it follows that if $W_{\bar k} = \sum_{i=0}^{\bar k-1}\mathbb{I}[T_i < T_{i-1}]\geq s(\tau^{\rm trial,true}_{\min})$ (see (36)) in a run, then for all $k\geq\bar k$ in the run it follows that $T_{k-1}\leq\tau^{\rm trial,true}_{\min}\leq T_k^{\rm trial,true}$.

We are now prepared to state the assumption under which Theorem 5 is proved.

Assumption 6. For some $\tau^{\rm trial,true}_{\min}\in\mathbb{R}_{>0}$, the event $E_\tau(\tau^{\rm trial,true}_{\min})$ occurs. In addition, $\{X_k\}$, $\{Z_k\}$, and $\{\Xi_k\}$ are constant. Lastly, there exists $p_\tau\in(0,1]$ such that, for all $k\in[k_{\max}]$, one finds
$$\mathbb{P}\big[T_k < T_{k-1}\,\big|\,E_\tau(\tau^{\rm trial,true}_{\min}),\ G_{[k-1]}\in N(q_{[k]},w_{[k]}),\ T_k^{\rm trial,true} < T_{k-1}\big]\geq p_\tau.\tag{45}$$

Intuitively, equation (45) states that, conditioned on $E_\tau(\tau^{\rm trial,true}_{\min})$, the behavior of the algorithm up to the beginning of iteration $k$, and $Q_k = 1$, the probability that the merit parameter is decreased in iteration $k$ is at least $p_\tau$. For simplicity of notation, henceforth we define $E := E_\tau(\tau^{\rm trial,true}_{\min})$ and $s_{\max} := s(\tau^{\rm trial,true}_{\min})$.
We remark that if (38a) holds for all realizations of Algorithm 1 -- as is the case when the distribution of the stochastic gradients satisfies a mild form of symmetry (see [1, Example 3.17] for a simple example) -- then (45) holds by Lemma 16. Our main result, Theorem 5, essentially shows that the probability that $T_k^{\rm trial,true} < T_k$ occurs infinitely often is zero. Toward proving this result, we first prove a bound on the probability that $T_k^{\rm trial,true} < T_k$ occurs at least $J$ times for any $J\in\mathbb{N}$ such that $J > \frac{s_{\max}}{p_\tau} + 1$. Given such a $J$, we can define a number of important sets of nodes in the tree. First, let
$$L_{\rm good} := \Big\{N(q_{[k]},w_{[k]}) : \sum_{i=0}^{k}q_i < J \ \wedge\ (w_k = s_{\max} \vee k = k_{\max})\Big\}$$
be the set of nodes at which the sum of the elements of $q_{[k]}$ is sufficiently small (less than $J$) and either $w_k$ has reached $s_{\max}$ or $k$ has reached $k_{\max}$. Second, let
$$L_{\rm bad} := \Big\{N(q_{[k]},w_{[k]}) : \sum_{i=0}^{k}q_i \geq J\Big\}$$
be the nodes in the complement of $L_{\rm good}$ at which the sum of the elements of $q_{[k]}$ is at least $J$. Going forward, we restrict attention to the tree defined by the root node and all paths from the root node that terminate at a node contained in $L_{\rm good}\cup L_{\rm bad}$. From this restriction and the definitions of $L_{\rm good}$ and $L_{\rm bad}$, the tree has finite depth with the elements of $L_{\rm good}\cup L_{\rm bad}$ being leaves.
Let us now define relationships between nodes. The parent of a node is defined as
$$P(N(q_{[k]},w_{[k]})) = N(q_{[k-1]},w_{[k-1]}).$$
On the other hand, the children of node $N(q_{[k]},w_{[k]})$ are defined as
$$C(N(q_{[k]},w_{[k]})) = \begin{cases}\{N(q_{[k]},q_{k+1},w_{[k]},w_{k+1})\} & \text{if } N(q_{[k]},w_{[k]})\notin L_{\rm good}\cup L_{\rm bad},\\ \emptyset & \text{otherwise.}\end{cases}$$
Under these definitions, the paths down the tree terminate at nodes in $L_{\rm good}\cup L_{\rm bad}$, reaffirming that these nodes are the leaves of the tree. For convenience in the rest of our discussions, let $C(\emptyset) = \emptyset$. We define the height of node $N(q_{[k]},w_{[k]})$ as the length of the longest path from $N(q_{[k]},w_{[k]})$ to a leaf node, i.e., the height is denoted as
$$h(N(q_{[k]},w_{[k]})) := \min\{j\in\mathbb{N}\setminus\{0\} : C^j(N(q_{[k]},w_{[k]})) = \emptyset\} - 1,$$
where $C^j(N(q_{[k]},w_{[k]}))$ is shorthand for applying the mapping $C(\cdot)$ consecutively $j$ times. From this definition, $h(N(q_{[k]},w_{[k]})) = 0$ for all $N(q_{[k]},w_{[k]})\in L_{\rm good}\cup L_{\rm bad}$.
Finally, let us define the event $E_{{\rm bad},k_{\max},J}$ as the event that for some $j\in[k_{\max}]$ one finds
$$\sum_{i=0}^{j}Q_i = \sum_{i=0}^{j}\mathbb{I}[T_i^{\rm trial,true} < T_{i-1}] \geq J.\tag{46}$$
Our first goal in this section is to find a bound on the probability of this event occurring. We will then utilize this bound to prove Theorem 5. As a first step towards bounding the probability of $E_{{\rm bad},k_{\max},J}$, we prove the following result about the leaf nodes of the tree.
Lemma 22. For any $k\in[k_{\max}]$, $J\in\mathbb{N}$, and $(q_{[k]},w_{[k]})$ with $N(q_{[k]},w_{[k]})\in L_{\rm good}$, one finds
$$\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] = 0.$$
On the other hand, for any $k\in[k_{\max}]$, $J\in\mathbb{N}$, and $(q_{[k]},w_{[k]})$ with $N(q_{[k]},w_{[k]})\in L_{\rm bad}$, one finds
$$\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq \prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).$$

Proof. Consider arbitrary $k\in[k_{\max}]$ and $J\in\mathbb{N}$ as well as an arbitrary pair $(q_{[k]},w_{[k]})$ such that $N(q_{[k]},w_{[k]})\in L_{\rm good}$. By the definition of $L_{\rm good}$, it follows that $\sum_{i=0}^{k}q_i < J$. Then, by (44),
$$\mathbb{P}\Big[\sum_{i=0}^{k}Q_i\geq J\,\Big|\,E,\ G_{[k-1]}\in N(q_{[k]},w_{[k]})\Big] = \mathbb{P}\Big[\sum_{i=0}^{k}q_i\geq J\,\Big|\,E,\ G_{[k-1]}\in N(q_{[k]},w_{[k]})\Big] = 0.$$
Therefore, for any $j\in\{1,\dots,k\}$, one finds from conditional probability that
$$\mathbb{P}\big[G_{[j-1]}\in N(q_{[j]},w_{[j]})\wedge\text{(46) holds}\mid E\big] = \mathbb{P}\big[\text{(46) holds}\mid E,\ G_{[j-1]}\in N(q_{[j]},w_{[j]})\big]\cdot\mathbb{P}\big[G_{[j-1]}\in N(q_{[j]},w_{[j]})\mid E\big] = 0.$$
In addition, (46) cannot hold for $j = 0$ since $\mathbb{I}[\tau_0^{\rm trial,true} < \tau_{-1}] = q_0 < J$ by the definition of $L_{\rm good}$. Hence, along with the conclusion above, it follows that $E_{{\rm bad},k_{\max},J}$ does not occur in any realization whose signature up to iteration $j\in\{1,\dots,k\}$ falls into a node along any path from the root to $N(q_{[k]},w_{[k]})$. Now, by the definition of $L_{\rm good}$, at least one of $w_k = s_{\max}$ or $k = k_{\max}$ holds. Let us consider each case in turn. If $k = k_{\max}$, then it follows by the preceding arguments that
$$\mathbb{P}\Big[\sum_{i=0}^{k_{\max}}Q_i < J\,\Big|\,E,\ G_{[k-1]}\in N(q_{[k]},w_{[k]})\Big] = 1.$$
Otherwise, if $w_k = s_{\max}$, then it follows by the definition of $s_{\max}$ that $T_{k-1}\leq\tau^{\rm trial,true}_{\min}$, so that $Q_i = \mathbb{I}[T_i^{\rm trial,true} < T_{i-1}] = 0$ holds for all $i\in\{k,\dots,k_{\max}\}$, and therefore the equation above again follows. Overall, it follows that $\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] = 0$, as desired.

Now consider arbitrary $k\in[k_{\max}]$ and $J\in\mathbb{N}$ as well as an arbitrary pair $(q_{[k]},w_{[k]})$ with $N(q_{[k]},w_{[k]})\in L_{\rm bad}$. One finds that
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] &\leq \mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\mid E] = \mathbb{P}[\text{(44) holds}\mid E]\\ &= \mathbb{P}[Q_k=q_k\mid E,W_k=w_k,G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]})]\cdot\mathbb{P}[W_k=w_k\wedge G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]})\mid E]\\ &= \mathbb{P}[Q_k=q_k\mid E,W_k=w_k,G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]})]\cdot\mathbb{P}[W_k=w_k\mid E,G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]})]\cdot\mathbb{P}[G_{[k-2]}\in N(q_{[k-1]},w_{[k-1]})\mid E]\\ &= \mathbb{P}[G_{[-1]}\in N(q_0,w_0)]\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big),\end{aligned}$$
which, since $\mathbb{P}[G_{[-1]}\in N(q_0,w_0)] = 1$, proves the remainder of the result.
Next, we show that the probability of the occurrence of $E_{{\rm bad},k_{\max},J}$ at any node in the tree can be bounded in terms of the probability of the sum of a set of independent Bernoulli random variables being less than a threshold defined by $s_{\max}$.
Lemma 23. For any $k\in[k_{\max}]$, $J\in\mathbb{N}$, and $(q_{[k]},w_{[k]})$ with $N(q_{[k]},w_{[k]})\notin L_{\rm good}$, let
$$\psi_J(q_{[k]}) = J - 1 - \sum_{i=0}^{k}q_i.\tag{47}$$
One finds that
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big)\\ &\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k-1]})}Z_j\leq s_{\max}-w_k\Big],\end{aligned}\tag{48}$$
where $\{Z_j\}$ are independent Bernoulli random variables with $\mathbb{P}[Z_j = 1] = p_\tau$ for all $j\in\mathbb{N}$.
Proof. Consider any $(q_{[k]},w_{[k]})$ with $h(N(q_{[k]},w_{[k]})) = 0$. Since $N(q_{[k]},w_{[k]})\notin L_{\rm good}$, it follows that $N(q_{[k]},w_{[k]})\in L_{\rm bad}$. Then, by the definition of $L_{\rm bad}$, it follows that $\sum_{i=0}^{k}q_i\geq J$. In addition, since $C(N(q_{[k]},w_{[k]})) = \emptyset$ for any node in $L_{\rm bad}$, it follows that $P(N(q_{[k]},w_{[k]}))\notin L_{\rm bad}$, which implies that $\sum_{i=0}^{k-1}q_i = J - 1$, which implies that $\psi_J(q_{[k-1]}) = 0$. Therefore, overall, the result holds for any $(q_{[k]},w_{[k]})$ with $h(N(q_{[k]},w_{[k]})) = 0$ by Lemma 22.

We prove the rest of the result by induction on the height of the node. We note that the base case, i.e., when $h(N(q_{[k]},w_{[k]})) = 0$, holds by the above argument. Now, assume that (48) holds for any $(q_{[k]},w_{[k]})$ with $N(q_{[k]},w_{[k]})\notin L_{\rm good}$ such that $h(N(q_{[k]},w_{[k]}))\leq\hat h$. Consider arbitrary $(q_{[k]},w_{[k]})$ such that $N(q_{[k]},w_{[k]})\notin L_{\rm good}$ and $h(N(q_{[k]},w_{[k]})) = \hat h + 1$. By the definition of $C$, one finds
$$\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] = \sum_{\{(q_{k+1},w_{k+1}) : N(q_{[k+1]},w_{[k+1]})\in C(N(q_{[k]},w_{[k]}))\}}\mathbb{P}[G_{[k]}\in N(q_{[k+1]},w_{[k+1]})\wedge E_{{\rm bad},k_{\max},J}\mid E].$$
Then, by the definition of $q_{[k]}$ and $w_{[k]}$, we can enumerate the children of $N(q_{[k]},w_{[k]})$ as
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] ={}& \mathbb{P}[G_{[k]}\in N(q_{[k]},0,w_{[k]},w_k)\wedge E_{{\rm bad},k_{\max},J}\mid E] + \mathbb{P}[G_{[k]}\in N(q_{[k]},0,w_{[k]},w_k+1)\wedge E_{{\rm bad},k_{\max},J}\mid E]\\ &+ \mathbb{P}[G_{[k]}\in N(q_{[k]},1,w_{[k]},w_k)\wedge E_{{\rm bad},k_{\max},J}\mid E] + \mathbb{P}[G_{[k]}\in N(q_{[k]},1,w_{[k]},w_k+1)\wedge E_{{\rm bad},k_{\max},J}\mid E].\end{aligned}$$
Now, noting that all children of $N(q_{[k]},w_{[k]})$ have a height that is at most $\hat h$, we apply the induction hypothesis four times to obtain
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \Bigg(\mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big]\\ &+ \mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big]\\ &+ \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,3}\leq s_{\max}-w_k\Big]\\ &+ \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,4}\leq s_{\max}-w_k-1\Big]\Bigg)\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big),\end{aligned}$$
where $Z_{j,p}$ for all $p\in\{1,\dots,4\}$ and $j\in\{1,\dots,\psi_J(q_{[k]})\}$ are four sets of independent Bernoulli random variables with $\mathbb{P}[Z_{j,p}=1]=p_\tau$. Now, by the definitions of $Z_{j,1}$, $Z_{j,2}$, $Z_{j,3}$, and $Z_{j,4}$, it follows that
$$\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big] = \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,3}\leq s_{\max}-w_k\Big]\quad{\rm and}\quad\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big] = \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,4}\leq s_{\max}-w_k-1\Big].$$
Therefore, it follows that
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \Big(\big(\mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\big)\\ &\quad\cdot\mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big]\\ &+ \big(\mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\big)\\ &\quad\cdot\mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big]\Big)\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).\end{aligned}$$
Now, by the law of total probability, it follows that
$$1 = \mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k,G_{[k-1]}\in N(q_{[k]},w_{[k]})]$$
and
$$1 = \mathbb{P}[Q_{k+1}=0\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[Q_{k+1}=1\mid E,W_{k+1}=w_k+1,G_{[k-1]}\in N(q_{[k]},w_{[k]})].$$
Thus,
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \Big(\mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big]\\ &+ \mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})]\cdot\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big]\Big)\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).\end{aligned}\tag{49}$$
We proceed by considering two cases. First, suppose $q_k = 1$. By Assumption 6, it follows that
$$\mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})] = \mathbb{P}[T_k < T_{k-1}\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]}),\ T_k^{\rm trial,true} < T_{k-1}] \geq p_\tau.$$
Additionally, using the law of total probability, we have
$$1 = \mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})].$$
Therefore, it follows that
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \max_{p\in[p_\tau,1]}\Big\{(1-p)\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big] + p\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big]\Big\}\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).\end{aligned}\tag{50}$$
In addition, by the definition of $Z_{j,1}$ and $Z_{j,2}$, one finds that
$$\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big] \leq \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big].$$
Therefore, it follows that the max in (50) is given by $p = p_\tau$. Thus,
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \Big((1-p_\tau)\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big] + p_\tau\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k-1\Big]\Big)\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big),\end{aligned}$$
where, by the definitions of $Z_{j,1}$ and $Z_{j,2}$, we have used the fact that
$$\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k-1\Big] = \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big].$$
Now, for all $j\in\{1,\dots,\psi_J(q_{[k]})\}$, define $Z_j = Z_{j,1}$ and let $Z_{\psi_J(q_{[k]})+1}$ be a Bernoulli random variable with $\mathbb{P}[Z_{\psi_J(q_{[k]})+1}=1]=p_\tau$. Then, it follows that
$$\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})+1}Z_j\leq s_{\max}-w_k\Big]\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).$$
This proves the result in this case by noting that $q_k = 1$ implies
$$\psi_J(q_{[k]}) + 1 = J - 1 - \sum_{i=0}^{k}q_i + 1 = J - 1 - \sum_{i=0}^{k-1}q_i = \psi_J(q_{[k-1]}).$$
Next, consider the case where $q_k = 0$. Recalling that
$$1 = \mathbb{P}[W_{k+1}=w_k\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})] + \mathbb{P}[W_{k+1}=w_k+1\mid E,G_{[k-1]}\in N(q_{[k]},w_{[k]})],$$
it follows from (49) that
$$\begin{aligned}\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq{}& \max_{p\in[0,1]}\Big\{(1-p)\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big] + p\,\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big]\Big\}\\ &\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).\end{aligned}\tag{51}$$
Similar to before, noting that
$$\mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,2}\leq s_{\max}-w_k-1\Big] \leq \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_{j,1}\leq s_{\max}-w_k\Big],$$
it follows that the max in (51) is given by $p = 0$, so, with $Z_j = Z_{j,1}$ for all $j\in\{1,\dots,\psi_J(q_{[k]})\}$,
$$\mathbb{P}[G_{[k-1]}\in N(q_{[k]},w_{[k]})\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq \mathbb{P}\Big[\sum_{j=1}^{\psi_J(q_{[k]})}Z_j\leq s_{\max}-w_k\Big]\cdot\prod_{i=1}^{k}\big(\mathbb{P}[Q_i=q_i\mid E,W_i=w_i,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\cdot\mathbb{P}[W_i=w_i\mid E,G_{[i-2]}\in N(q_{[i-1]},w_{[i-1]})]\big).$$
The result follows from this inequality and the fact that $\psi_J(q_{[k]}) = \psi_J(q_{[k-1]})$ since $q_k = 0$.
We now apply Lemma 23 to obtain a high probability bound.
Lemma 24. For any $J > \frac{s_{\max}}{p_\tau} + 1$, it follows that
$$\mathbb{P}\Big[\sum_{k=0}^{k_{\max}}\mathbb{I}[T_k^{\rm trial,true} < T_k] \geq J\,\Big|\,E\Big] \leq e^{-\frac{p_\tau(J-1)}{2}\big(1-\frac{s_{\max}}{p_\tau(J-1)}\big)^2}.$$

Proof. Recalling that the initial condition for the tree, $G_{[-1]}\in N(q_0,w_0)$, occurs with probability one, by Lemma 23, it follows that there exist $J-1$ independent Bernoulli random variables $Z_j$ with $\mathbb{P}[Z_j=1]=p_\tau$ for all $j\in\{1,\dots,J-1\}$ such that
$$\mathbb{P}[E_{{\rm bad},k_{\max},J}\mid E] = \mathbb{P}[G_{[-1]}\in N(q_0,w_0)\wedge E_{{\rm bad},k_{\max},J}\mid E] \leq \mathbb{P}\Big[\sum_{j=1}^{J-1}Z_j\leq s_{\max}\Big].$$
Let $\mu := \mathbb{E}[\sum_{j=1}^{J-1}Z_j] = p_\tau(J-1)$ and $\rho := 1 - s_{\max}/\mu$. Noting that $\rho\in(0,1)$ by the definition of $J$, by the multiplicative form of Chernoff's bound,
$$\mathbb{P}\Big[\sum_{j=1}^{J-1}Z_j\leq s_{\max}\Big] = \mathbb{P}\Big[\sum_{j=1}^{J-1}Z_j\leq(1-\rho)\mu\Big] \leq e^{-\frac{\mu(1-s_{\max}/\mu)^2}{2}} = e^{-\frac{p_\tau(J-1)}{2}\big(1-\frac{s_{\max}}{p_\tau(J-1)}\big)^2}.$$
For all $k\in[k_{\max}]$, we have $T_k\leq T_{k-1}$. Thus, by the definition of $E_{{\rm bad},k_{\max},J}$,
$$\mathbb{P}\Big[\sum_{k=0}^{k_{\max}}\mathbb{I}[T_k^{\rm trial,true} < T_k]\geq J\,\Big|\,E\Big] \leq \mathbb{P}[E_{{\rm bad},k_{\max},J}\mid E],$$
which is the desired conclusion.

Theorem 5. Suppose that Assumption 6 holds. Then,
$$\mathbb{P}\Big[\sum_{k=0}^{\infty}\mathbb{I}[T_k^{\rm trial,true} < T_k] < \infty\,\Big|\,E\Big] = 1.\tag{53}$$

Proof. By Lemma 24, for any $k_{\max}\in\mathbb{N}\setminus\{0\}$ and $J > \frac{s_{\max}}{p_\tau}+1$, it follows that
$$\mathbb{P}\Big[\sum_{k=0}^{k_{\max}}\mathbb{I}[T_k^{\rm trial,true} < T_k]\geq J\,\Big|\,E\Big] \leq e^{-\frac{p_\tau(J-1)}{2}\big(1-\frac{s_{\max}}{p_\tau(J-1)}\big)^2}.$$
For each $k_{\max}$, let $A_{k_{\max}}$ denote the event that $\sum_{k=0}^{k_{\max}}\mathbb{I}[T_k^{\rm trial,true} < T_k]\geq J$. It follows from this definition that $A_{k_{\max}}\subseteq A_{k_{\max}+1}$ for any $k_{\max}\in\mathbb{N}\setminus\{0\}$. Therefore, by the properties of an increasing sequence of events (see, for example, [25, Section 1.5]),
$$\mathbb{P}\Big[\sum_{k=0}^{\infty}\mathbb{I}[T_k^{\rm trial,true} < T_k]\geq J\,\Big|\,E\Big] \leq e^{-\frac{p_\tau(J-1)}{2}\big(1-\frac{s_{\max}}{p_\tau(J-1)}\big)^2}.$$
Now, for each $J$, let $A_J$ denote the event that $\sum_{k=0}^{\infty}\mathbb{I}[T_k^{\rm trial,true} < T_k] < J$. From the definition of $A_J$, it follows that $A_J\subseteq A_{J+1}$ for any $J > \frac{s_{\max}}{p_\tau}+1$. Thus, as above,
$$\mathbb{P}\Big[\sum_{k=0}^{\infty}\mathbb{I}[T_k^{\rm trial,true} < T_k] < \infty\,\Big|\,E\Big] = \lim_{J\to\infty}\mathbb{P}\Big[\sum_{k=0}^{\infty}\mathbb{I}[T_k^{\rm trial,true} < T_k] < J\,\Big|\,E\Big] \geq \lim_{J\to\infty}\Big(1 - e^{-\frac{p_\tau(J-1)}{2}\big(1-\frac{s_{\max}}{p_\tau(J-1)}\big)^2}\Big) = 1,$$
which is the desired conclusion.
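As a sanity check on the form of this bound, it can be evaluated numerically; the sketch below uses hypothetical values of $p_\tau$, $s_{\max}$, and $J$ chosen only for illustration:

```python
import math

def chernoff_bound(p_tau, s_max, J):
    """Evaluate the tail bound of Lemma 24,
    exp(-(p_tau (J-1)/2) * (1 - s_max/(p_tau (J-1)))^2),
    which is valid for J > s_max / p_tau + 1."""
    mu = p_tau * (J - 1)
    assert mu > s_max, "requires J > s_max / p_tau + 1"
    return math.exp(-0.5 * mu * (1.0 - s_max / mu) ** 2)

print(chernoff_bound(p_tau=0.5, s_max=10, J=101))  # ~1.1e-7
```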
Table 1: Names and sizes of datasets. (Source: [3].)

dataset     | dimension (n) | datapoints (N)
a9a         | 123           | 32,561
australian  | 14            | 690
heart       | 13            | 270
ijcnn1      | 22            | 49,990
ionosphere  | 34            | 351
madelon     | 500           | 2,000
mushrooms   | 112           | 8,124
phishing    | 68            | 11,055
sonar       | 60            | 208
splice      | 60            | 1,000
w8a         | 300           | 49,749
Table 3: Average feasibility and stationarity errors, along with 95% confidence intervals, when our stochastic SQP method and a stochastic subgradient method are employed to solve logistic regression problems with linear constraints and a squared $\ell_2$-norm constraint. The results for the best-performing algorithm are marked with an asterisk (*).

dataset    | batch | Subgradient Feasibility | Subgradient Stationarity | SQP Feasibility        | SQP Stationarity
a9a        | 16    | 4.62e-03 ± 3.27e-04     | 1.24e-01 ± 7.52e-02      | *5.52e-05 ± 5.04e-09   | *6.07e-03 ± 2.32e-05
a9a        | 128   | 4.27e-03 ± 3.92e-04     | 1.90e-01 ± 3.03e-03      | *6.38e-05 ± 1.12e-08   | *4.40e-03 ± 1.41e-05
australian | 16    | 1.51e-01 ± 1.07e-05     | 1.51e-01 ± 1.07e-05      | *1.52e-04 ± 5.58e-06   | *5.65e-03 ± 3.73e-05
australian | 128   | 3.96e-01 ± 1.87e-04     | 3.96e-01 ± 1.87e-04      | *3.83e-04 ± 5.45e-05   | *1.68e-02 ± 3.29e-03
heart      | 16    | 1.57e+00 ± 5.76e-01     | 2.86e+01 ± 1.00e+01      | *9.29e-01 ± 3.47e-02   | *2.65e+01 ± 1.81e+01
heart      | 128   | *1.33e+00 ± 6.69e-01    | *1.69e+01 ± 2.23e+00     | 1.88e+00 ± 1.42e-01    | 2.93e+00 ± 1.26e+00
ijcnn1     | 16    | 5.36e-02 ± 9.37e-07     | 5.36e-02 ± 9.37e-07      | *3.70e-02 ± 9.24e-05   | *4.60e-02 ± 8.32e-03
ijcnn1     | 128   | 5.41e-02 ± 1.04e-06     | 5.41e-02 ± 1.04e-06      | *3.64e-02 ± 1.06e-04   | *3.64e-02 ± 1.06e-04
ionosphere | 16    | 3.35e-01 ± 1.06e-03     | 3.35e-01 ± 1.06e-03      | *5.79e-03 ± 1.44e-04   | *1.21e-02 ± 4.96e-03
ionosphere | 128   | 8.70e-01 ± 1.43e-03     | 8.70e-01 ± 1.43e-03      | *5.92e-03 ± 2.18e-05   | *4.31e-02 ± 3.52e-04
madelon    | 16    | 2.66e+00 ± 6.84e-01     | 3.86e+01 ± 3.28e+01      | *3.74e-01 ± 8.55e-02   | *4.70e-01 ± 3.27e-02
madelon    | 128   | *2.21e+01 ± 4.90e-01    | *4.77e+01 ± 4.84e+00     | 7.21e+01 ± 5.28e+00    | 7.21e+01 ± 5.28e+00
mushrooms  | 16    | 1.01e-01 ± 5.79e-05     | 1.55e-01 ± 8.22e-06      | *4.06e-04 ± 8.76e-09   | *4.65e-03 ± 3.65e-05
mushrooms  | 128   | 9.72e-01 ± 9.94e-06     | 9.72e-01 ± 9.94e-06      | *6.96e-04 ± 1.52e-09   | *3.34e-03 ± 2.35e-07
phishing   | 16    | 1.30e-01 ± 1.61e-06     | 1.30e-01 ± 1.61e-06      | *3.65e-05 ± 2.44e-08   | *8.17e-03 ± 2.43e-05
phishing   | 128   | 1.53e-01 ± 3.37e-08     | 1.53e-01 ± 3.37e-08      | *1.26e-04 ± 3.30e-09   | *8.45e-04 ± 2.73e-07
sonar      | 16    | 6.45e-01 ± 5.62e-04     | 6.45e-01 ± 5.62e-04      | *3.38e-03 ± 8.81e-06   | *1.48e-02 ± 2.58e-04
sonar      | 128   | 5.04e+00 ± 4.44e-03     | 5.04e+00 ± 4.44e-03      | *5.71e-03 ± 8.61e-06   | *2.16e-02 ± 8.48e-05
splice     | 16    | *1.96e-03 ± 1.78e-04    | 4.94e-01 ± 7.35e-03      | 3.96e-03 ± 7.12e-07    | *1.03e-02 ± 1.14e-05
splice     | 128   | 1.40e+00 ± 7.90e-05     | 1.40e+00 ± 7.90e-05      | *5.52e-03 ± 3.72e-06   | *1.04e-02 ± 1.06e-04
w8a        | 16    | 1.32e-02 ± 6.83e-04     | 1.15e-01 ± 1.33e-02      | *2.15e-04 ± 2.24e-09   | *1.83e-03 ± 8.90e-07
w8a        | 128   | 5.35e-02 ± 7.79e-02     | 1.33e-01 ± 1.74e-07      | *1.67e-04 ± 6.01e-09   | *1.00e-03 ± 1.01e-06
https://github.com/frankecurtis/StochasticSQP
Acknowledgments.
| [
"https://github.com/frankecurtis/StochasticSQP"
] |
[
"Rate-Splitting Multiple Access for Downlink MIMO: A Generalized Power Iteration Approach",
"Rate-Splitting Multiple Access for Downlink MIMO: A Generalized Power Iteration Approach"
] | [
"Jeonghun Park [email protected] ",
"Jinseok Choi [email protected] ",
"Namyoon Lee [email protected] ",
"Wonjae Shin [email protected] ",
"H Vincent Poor ",
"\nSchool of Electronics Engineering\nwith Department of Electrical Engineering\nKyungpook National University\nSouth Korea\n",
"\nwith Department of Electrical Engineering, POSTECH\nwith Department of Electrical and Computer Engineering\nUlsan National Institute of Science and Technology\nSouth Korea, South Korea\n",
"\nwith Department of Electrical and Computer Engineering\nAjou University\nSouth Korea\n",
"\nPrinceton University\nPrincetonNJUSA\n"
] | [
"School of Electronics Engineering\nwith Department of Electrical Engineering\nKyungpook National University\nSouth Korea",
"with Department of Electrical Engineering, POSTECH\nwith Department of Electrical and Computer Engineering\nUlsan National Institute of Science and Technology\nSouth Korea, South Korea",
"with Department of Electrical and Computer Engineering\nAjou University\nSouth Korea",
"Princeton University\nPrincetonNJUSA"
] | [] | Rate-splitting multiple access (RSMA) is a general multiple access scheme for downlink multiantenna systems embracing both classical spatial division multiple access and more recent non-orthogonal multiple access. Finding a linear precoding strategy that maximizes the sum spectral efficiency of RSMA is a challenging yet significant problem. In this paper, we put forth a novel precoder design framework that jointly finds the linear precoders for the common and private messages for RSMA. Our approach is first to approximate the non-smooth minimum function part in the sum spectral efficiency of RSMA using a LogSumExp technique. Then, we reformulate the sum spectral efficiency maximization problem as a form of the log-sum of Rayleigh quotients to convert it into a tractable form. By interpreting the first-order optimality condition of the reformulated problem as an eigenvector-dependent nonlinear eigenvalue problem, we reveal that the leading eigenvector of the derived optimality condition is a local optimal solution. To find the leading eigenvector, we propose an algorithm inspired by a power iteration.Simulation results show that the proposed RSMA transmission strategy provides significant improvement in the sum spectral efficiency compared to the state-of-the-art RSMA transmission methods.Index TermsRate-splitting multiple access (RSMA), multi-user MIMO, imperfect channel state information (CSI), sum spectral efficiency maximization, generalized power iteration. J. Park is with the | 10.1109/twc.2022.3205480 | [
"https://arxiv.org/pdf/2108.06844v2.pdf"
] | 237,091,814 | 2108.06844 | da3757849df8f96ee0bae19f647016e6e9de5657 |
Rate-Splitting Multiple Access for Downlink MIMO: A Generalized Power Iteration Approach
2 Jun 2022
Jeonghun Park [email protected]
Jinseok Choi [email protected]
Namyoon Lee [email protected]
Wonjae Shin [email protected]
H Vincent Poor
School of Electronics Engineering
with Department of Electrical Engineering
Kyungpook National University
South Korea
with Department of Electrical Engineering, POSTECH
with Department of Electrical and Computer Engineering
Ulsan National Institute of Science and Technology
South Korea, South Korea
with Department of Electrical and Computer Engineering
Ajou University
South Korea
Princeton University
PrincetonNJUSA
Rate-splitting multiple access (RSMA) is a general multiple access scheme for downlink multi-antenna systems embracing both classical spatial division multiple access and more recent non-orthogonal multiple access. Finding a linear precoding strategy that maximizes the sum spectral efficiency of RSMA is a challenging yet significant problem. In this paper, we put forth a novel precoder design framework that jointly finds the linear precoders for the common and private messages for RSMA. Our approach is first to approximate the non-smooth minimum function part in the sum spectral efficiency of RSMA using a LogSumExp technique. Then, we reformulate the sum spectral efficiency maximization problem as a form of the log-sum of Rayleigh quotients to convert it into a tractable form. By interpreting the first-order optimality condition of the reformulated problem as an eigenvector-dependent nonlinear eigenvalue problem, we reveal that the leading eigenvector of the derived optimality condition is a local optimal solution. To find the leading eigenvector, we propose an algorithm inspired by a power iteration. Simulation results show that the proposed RSMA transmission strategy provides significant improvement in the sum spectral efficiency compared to the state-of-the-art RSMA transmission methods.

Index Terms: Rate-splitting multiple access (RSMA), multi-user MIMO, imperfect channel state information (CSI), sum spectral efficiency maximization, generalized power iteration.
I. INTRODUCTION
Multi-user multiple-input multiple-output (MU-MIMO) downlink transmissions can provide extensive gains in spectral efficiency by serving multiple users with a shared time-frequency resource [2]-[4]. Assuming perfect channel state information at the transmitter (CSIT), a transmitter is able to send information symbols along multiple linear precoding vectors to different users simultaneously while mitigating inter-user interference. In practice, however, the theoretical gains of downlink MU-MIMO transmissions can largely vanish due to inaccuracies in the CSIT. For example, in frequency division duplex (FDD) systems, the downlink channel has to be estimated at the receiver first and sent back to the transmitter via a finite-rate feedback link [5], [6], wherein quantization error on the CSIT is inevitable. For this reason, in order to attain the MU-MIMO spectral efficiency gains in practice, it is crucial to design a downlink MU-MIMO transmission strategy that achieves high spectral efficiency under imperfect CSIT.
Rate-splitting multiple access (RSMA) is a robust downlink multiple access technique, especially when a transmitter has inaccurate knowledge of the downlink CSI. Unlike the conventional spatial division multiple access (SDMA), in RSMA [7]-[10], the transmitter harnesses the rate-splitting strategy that breaks user messages into common and private parts in order to dynamically manage interference caused by imperfect CSIT. The transmitter constructs a common message by jointly encoding the common parts of the users' split messages. The rate for this common message is carefully controlled so that all the users can decode it. The transmitter also encodes the private parts of the users' messages to generate private information symbols. Then, the transmitter sends the common and private information symbols along linear precoding vectors in a non-orthogonal manner. Each user decodes and eliminates the common message by performing successive interference cancellation (SIC) while treating the residual interference as noise. It then decodes the desired private message. Thanks to the rate-splitting encoding and SIC decoding, RSMA has been shown to outperform dirty paper coding (DPC) when imperfect CSIT is given [11].
To clearly understand the gains of RSMA over SDMA, it is instructive to consider a simple case of a two-user multi-antenna broadcast channel with imperfect CSIT. From an information-theoretic viewpoint, when applying linear precoding with imperfect CSIT in a two-user multi-antenna broadcast channel, the channel can be interpreted as a virtual two-user interference channel with transmitter cooperation, in which the channel gains of desired and interfering links are determined by the precoding vectors and the channel vectors. In this equivalent interference channel, the quasi-optimal transmission strategy is the Han-Kobayashi scheme [12], i.e., splitting messages into common and private parts and allocating the power according to the relative channel gains between the interfering and the desired link [13]. Motivated by this, RSMA mimics this near capacity-achieving strategy in downlink MIMO. (A part of this paper was presented at the Workshop on Rate-Splitting (Multiple Access) for Beyond 5G at the 2021 IEEE Wireless Communications and Networking Conference, Nanjing, China [1].)
To reap the spectral efficiency gains of RSMA in downlink MIMO, it is essential to find the optimal linear precoding solution, yet finding such a precoder is challenging. Unlike the sum spectral efficiency maximization problem for SDMA, which relies on private messages only, the problem for RSMA has an additional unique challenge induced by the common message rate, which is the minimum of the achievable rates for the common message over all users. This minimum function is non-smooth, making the sum spectral efficiency maximization problem for RSMA challenging to solve. In this work, we put forth a new approach for designing a precoder to maximize the sum spectral efficiency of multi-antenna RSMA with imperfect CSIT.
A. Related Works
Recently, to cope with imperfect CSIT in downlink MIMO systems, the idea of rate-splitting has been actively re-explored as a multiple access technique, i.e., RSMA [14]. In [15], it was shown that RSMA provides sum degrees-of-freedom gains in multi-antenna broadcast channels with erroneous CSIT. Exploiting the idea of [15], in [16], the achievable spectral efficiency was analyzed while fixing the precoder for the common message as a random precoder and the precoder for the private messages as zero-forcing (ZF).
Besides the theoretical analysis, several prior works have developed practical linear precoding designs for RSMA multi-antenna systems. In [7], a linear precoding design was proposed based on the weighted minimum mean square error (WMMSE) approach [3]. Specifically, a non-convex original problem was transformed into a quadratically constrained quadratic program (QCQP) by using the equivalence between the sum spectral efficiency maximization and the sum mean square error (MSE) minimization problems. Subsequently, an interior point method was used to solve the QCQP. Employing the same idea, in [8], [17], a max-min fairness problem with RSMA was addressed. In [10], a linear precoding method for general RSMA was proposed by exploiting a concave-convex procedure (CCCP) that successively approximates the original problem by convex forms. To evaluate the performance of RSMA in practical settings, e.g., finite constellations and implementable channel coding, [18] performed a link-level simulation in RSMA downlink MIMO systems. In [19], considering a single-antenna downlink channel, a power control method was proposed by incorporating the SIC constraint. In [20], considering downlink massive MIMO, it was shown that hierarchical RSMA, which uses multiple-layer partial common messages, can be well harmonized with massive MIMO systems thanks to its spatial covariance separability [21]. Beyond the sum spectral efficiency maximization in downlink, other variants also exist. For example, RSMA for energy efficiency maximization [22], RSMA with hardware impairments [23], RSMA in a joint MIMO radar and communication system [24], and RSMA in uplink channels [25] have been studied in the context of optimization for RSMA.
Further, multi-antenna RSMA in interference channels [26] was also presented.
A key obstacle of the RSMA linear precoding design arises from the common message rate that should be determined as the minimum of all the achievable rates. To resolve this, the conventional methods use convex relaxation. Namely, an original non-convex problem is relaxed into a convex problem first, and then this convexified problem is put into an off-the-shelf optimization toolbox such as CVX to obtain a solution. A limitation of such approaches is that the optimization toolbox is hard to implement in practical hardware due to its extremely high complexity [27].
For this reason, the existing precoding optimization methods for RSMA are hardly used in practice. Therefore, this paper proposes a new optimization framework for MIMO RSMA that outperforms the existing methods in terms of both complexity and performance.
B. Contributions
This paper proposes a new approach for linear precoding optimization in downlink MIMO with RSMA. The contributions of this paper are listed as follows.
• Considering an imperfect CSIT model, in which the CSI error statistic is modeled as complex Gaussian with zero mean and a certain covariance matrix, we derive a lower bound on the instantaneous sum spectral efficiency for RSMA. In contrast to the sum spectral efficiency maximization for SDMA with imperfect CSIT, this lower bound entails the non-smooth minimum function for the common message rate. To convert the non-smooth minimum function into a tractable form, we adopt the LogSumExp technique, which tightly approximates the minimum function by a smooth function. Then, by stacking all optimization variables (precoding vectors) into a higher-dimensional vector, we reformulate the lower bound of the instantaneous sum spectral efficiency for RSMA into a tractable non-convex function in the form of the log-sum of Rayleigh quotients.
• Using the derived lower bound with the smooth function approximation, we establish the first-order optimality condition for the sum spectral efficiency maximization problem. Remarkably, it is shown that the derived condition is cast as an eigenvector-dependent nonlinear eigenvalue problem [28], where the optimization variable behaves as an eigenvector, and the objective function behaves as an eigenvalue. Accordingly, we reveal that if we find the leading eigenvector that ensures the derived optimality condition, the best local optimal solution is obtained, maximizing the approximate lower bound of the instantaneous sum spectral efficiency for RSMA.
• To obtain the leading eigenvector of the derived condition, we put forth a novel algorithm inspired by a power iteration, referred to as generalized power iteration for rate-splitting (GPI-RS). Adopting the conventional power iteration principle, the idea of GPI-RS is to compute the leading eigenvector iteratively. The solution obtained by GPI-RS jointly provides the precoding directions and power allocation for the common and private messages.
Notably, we do not rely on CVX in the proposed algorithm; thereby, it is more amenable to implementation in practical hardware. In addition, the computational complexity is lower than that of the existing WMMSE-based method [7]. Later, we also generalize the proposed GPI-RS to the case in which multiple-layer RSMA is used. In multiple-layer RSMA, not only the common message but also partial common messages, each of which encodes the messages of a subset of the users, are jointly used. We show that the proposed method suitably extends to this case.
• Simulation results show that the proposed GPI-RS provides spectral efficiency gains over the existing methods, including the conventional convex relaxation-based WMMSE method [7] in various system environments. To be specific, the proposed GPI-RS provides around 20% sum spectral efficiency gains, while consuming only 6 ∼ 7% of the computation time compared to the conventional method. Further, we empirically confirm that the GPI-RS converges well.
Notation: The superscripts $(\cdot)^{\sf T}$, $(\cdot)^{\sf H}$, and $(\cdot)^{-1}$ denote the transpose, Hermitian transpose, and matrix inverse, respectively. $\mathbf{I}_N$ is the identity matrix of size $N \times N$. Assuming that $\mathbf{A}_1, \dots, \mathbf{A}_K \in \mathbb{C}^{N \times N}$, $\mathbf{A} = {\rm blkdiag}(\mathbf{A}_1, \dots, \mathbf{A}_k, \dots, \mathbf{A}_K)$ is a block-diagonal matrix concatenating $\mathbf{A}_1, \dots, \mathbf{A}_K$.
II. SYSTEM MODEL
A. Channel Model
We consider a single-cell downlink MU-MIMO system, where a base station (BS) equipped with $N$ antennas serves $K$ single-antenna users. We denote the user set as $\mathcal{K} = \{1, \cdots, K\}$. The channel vector between the BS and user $k$ is denoted by $\mathbf{h}_k \in \mathbb{C}^{N}$ for $k \in \mathcal{K}$, where $\mathbf{h}_k$ is generated based on the spatial covariance matrix $\mathbf{R}_k$, i.e., $\mathbf{R}_k = \mathbb{E}[\mathbf{h}_k \mathbf{h}_k^{\sf H}]$. For constructing the channel covariance matrix, we adopt the one-ring model [21]. Specifically, we assume that the BS is equipped with a uniform circular array with radius $\lambda D$, where $\lambda$ denotes a signal wavelength. Then the channel correlation coefficient between the $m$-th antenna and the $n$-th antenna corresponding to user $k$ is defined as
$$[\mathbf{R}_k]_{m,n} = \frac{1}{2\Delta_k} \int_{\theta_k - \Delta_k}^{\theta_k + \Delta_k} e^{-j \frac{2\pi}{\lambda} \boldsymbol{\Psi}(\alpha) (\mathbf{r}_m - \mathbf{r}_n)} \, d\alpha, \tag{1}$$
where $\theta_k$ is the angle-of-arrival (AoA) of user $k$, $\Delta_k$ is the angular spread of user $k$, $\boldsymbol{\Psi}(\alpha) = [\cos(\alpha), \sin(\alpha)]$, and $\mathbf{r}_m$ is the position vector of the $m$-th antenna. By employing the Karhunen-Loeve model as in [20], [21], the channel vector $\mathbf{h}_k$ is represented as
$$\mathbf{h}_k = \mathbf{U}_k \boldsymbol{\Lambda}_k^{\frac{1}{2}} \mathbf{g}_k, \tag{2}$$
where $\boldsymbol{\Lambda}_k \in \mathbb{C}^{r_k \times r_k}$ is a diagonal matrix that contains the $r_k$ non-zero eigenvalues of $\mathbf{R}_k$, $\mathbf{U}_k \in \mathbb{C}^{N \times r_k}$ is a collection of the eigenvectors of $\mathbf{R}_k$ corresponding to the eigenvalues in $\boldsymbol{\Lambda}_k$, and $\mathbf{g}_k \in \mathbb{C}^{r_k}$ is an independent and identically distributed channel vector. We assume that each element of $\mathbf{g}_k$ is drawn from $\mathcal{CN}(0, 1)$. We consider a block fading model, where $\mathbf{g}_k$ remains constant within one transmission block and changes independently over consecutive transmission blocks.
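To make the channel generation concrete, the following Python sketch (our illustration, not code from the paper) numerically evaluates the one-ring covariance in (1) and then draws a realization through the Karhunen-Loeve representation (2). The UCA radius and all numerical values (`radius`, `theta`, `delta`) are illustrative assumptions.

```python
import numpy as np

def one_ring_covariance(N, theta, delta, wavelength=1.0, radius=0.5):
    """One-ring spatial covariance (1) for a UCA, by numerical integration.
    theta: AoA (rad); delta: angular spread (rad); radius is an assumed UCA radius."""
    phi = 2 * np.pi * np.arange(N) / N
    r = radius * np.stack([np.cos(phi), np.sin(phi)], axis=1)      # antenna positions r_n
    alphas = np.linspace(theta - delta, theta + delta, 400)        # integration grid
    R = np.zeros((N, N), dtype=complex)
    for a in alphas:
        psi = np.array([np.cos(a), np.sin(a)])                     # wave direction Psi(alpha)
        steer = np.exp(-1j * 2 * np.pi / wavelength * (r @ psi))   # per-antenna phase
        R += np.outer(steer, steer.conj())                         # [m,n] = e^{-j2pi/λ Psi(r_m - r_n)}
    return R / len(alphas)

def sample_channel(R, rng):
    """Karhunen-Loeve sample h = U Lambda^{1/2} g with g ~ CN(0, I), as in (2)."""
    eigval, U = np.linalg.eigh(R)
    keep = eigval > 1e-10 * eigval.max()                           # non-zero eigenvalues only
    U, lam = U[:, keep], eigval[keep]
    g = (rng.standard_normal(lam.size) + 1j * rng.standard_normal(lam.size)) / np.sqrt(2)
    return U @ (np.sqrt(lam) * g)

rng = np.random.default_rng(0)
R = one_ring_covariance(N=6, theta=np.pi / 4, delta=np.pi / 6)
h = sample_channel(R, rng)
```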
We clarify that the applicability of our method does not depend on particular channel model assumptions. We consider the one-ring model in this paper because it is one of the widely used channel models that suitably captures spatial covariance structures of MIMO channels. The proposed method can be applied in any channel model.
B. CSIT Acquisition Model
This subsection explains the CSIT estimation and the error model used throughout this paper.
We assume perfect channel state information at the receiver (CSIR), which can be achieved via downlink pilots planted in the data packet, as described in LTE and 5G NR. In contrast to CSIR, the BS has to estimate the CSIT, so that only imperfect knowledge of the CSIT is available. Generally, two approaches are known for estimating the CSIT, namely linear MMSE (LMMSE) estimation and limited feedback. LMMSE yields the optimal CSIT estimation performance, provided that the channel is Gaussian distributed. Nonetheless, LMMSE can only be used when channel reciprocity holds. In contrast, limited feedback can be employed in any environment. In this paper, we focus on LMMSE, but note that our method is also applicable with limited feedback.
As mentioned above, LMMSE can be exploited when the BS can use channel reciprocity.
Specifically, assuming that the uplink and downlink channels are reciprocal, the BS estimates the CSIT from uplink training sent by the users using MMSE estimation [29]. For this reason, LMMSE is adequate for time division duplex (TDD) systems, where channel reciprocity holds. Using LMMSE, the estimated CSIT is expressed as
$$\hat{\mathbf{h}}_k = \mathbf{h}_k - \mathbf{e}_k, \tag{3}$$
where $\mathbf{e}_k$ is the CSIT estimation error vector. Since $\mathbf{h}_k$ is Gaussian distributed, $\hat{\mathbf{h}}_k$ and $\mathbf{e}_k$ are also Gaussian and independent of each other. The error covariance is obtained as
$$\boldsymbol{\Phi}_k = \mathbb{E}[\mathbf{e}_k \mathbf{e}_k^{\sf H}] = \mathbf{R}_k - \mathbf{R}_k \left( \mathbf{R}_k + \frac{\sigma^2}{\tau_{\rm ul} p_{\rm ul}} \mathbf{I}_N \right)^{-1} \mathbf{R}_k, \tag{4}$$
where $\tau_{\rm ul}$ and $p_{\rm ul}$ are the uplink training length and the uplink training transmit power, respectively. As the uplink training length and power increase to infinity, the error covariance $\boldsymbol{\Phi}_k \to \mathbf{0}$ and the CSIT error $\mathbf{e}_k$ vanishes; then the BS has perfect CSIT.
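As a numerical sanity check on (4), the following minimal sketch evaluates the LMMSE error covariance for a toy exponential-correlation covariance (an assumption in place of the one-ring model) and verifies that it shrinks to zero as the training energy $\tau_{\rm ul} p_{\rm ul}$ grows.

```python
import numpy as np

def lmmse_error_covariance(R, tau_ul, p_ul, sigma2=1.0):
    """Error covariance Phi = R - R (R + sigma^2/(tau_ul p_ul) I)^{-1} R from (4)."""
    N = R.shape[0]
    M = R + (sigma2 / (tau_ul * p_ul)) * np.eye(N)
    return R - R @ np.linalg.solve(M, R)

# Toy covariance (exponential correlation) just to exercise the formula.
N = 6
R = np.array([[0.9 ** abs(m - n) for n in range(N)] for m in range(N)], dtype=complex)
for tp in [0.1, 1.0, 10.0, 100.0]:
    Phi = lmmse_error_covariance(R, tau_ul=1, p_ul=tp)
    print(tp, np.trace(Phi).real)   # error power decreases toward 0 as training grows
```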
C. RSMA Signal Model
Using RSMA, the message for user $k$ is split into a common message part $s_{c,k}$ and a private message $s_k$. The common message parts $s_{c,k}$ from all users are combined and jointly encoded into the common message $s_c$. The common message $s_c$ is drawn from a public codebook, so that any user associated with the BS can decode it. On the contrary, each private message comes from an individual codebook; therefore, it is only decodable by its intended user.

One common message and $K$ private messages are linearly precoded and then superimposed, so that the transmit signal $\mathbf{x} \in \mathbb{C}^N$ is given by
$$\mathbf{x} = \mathbf{f}_c s_c + \sum_{k=1}^{K} \mathbf{f}_k s_k, \tag{5}$$
where $\mathbf{f}_c \in \mathbb{C}^N$ and $\mathbf{f}_k \in \mathbb{C}^N$ are the precoding vectors for the common and private messages, respectively, with the transmit power constraint $\|\mathbf{f}_c\|^2 + \sum_{k=1}^{K} \|\mathbf{f}_k\|^2 \le 1$. We note that not only the direction of each message, but also the power allocated to each message, is controlled by the precoding vectors. For example, if $\mathbf{f}_c = \mathbf{0}$, then no common message is delivered, so our RSMA signal model reduces to conventional SDMA.
The received signal at user $k$ for $k \in \mathcal{K}$ is written as
$$y_k = \mathbf{h}_k^{\sf H} \mathbf{f}_c s_c + \mathbf{h}_k^{\sf H} \mathbf{f}_k s_k + \sum_{\ell=1, \ell \neq k}^{K} \mathbf{h}_k^{\sf H} \mathbf{f}_\ell s_\ell + n_k, \tag{6}$$
where $n_k \sim \mathcal{CN}(0, \sigma^2)$ is additive white Gaussian noise. We also assume that $s_c$ and $s_k$ are drawn from independent Gaussian codebooks, i.e., $s_c, s_k \sim \mathcal{CN}(0, P)$.
D. Performance Characterization
To characterize the performance of RSMA, we first explain the decoding process performed by each user. Each user first decodes the common message c by treating all the other private messages as noise. Once the common message is successfully decoded, using SIC, the users remove the common message from the received signal and decode the private messages with a reduced amount of interference.
To successfully perform SIC, the common message $s_c$ should be decodable by every user without any error. To this end, the code rate of the common message $s_c$ is set as the minimum of the ergodic spectral efficiencies among the users. Accordingly, under the premise that the BS has imperfect CSIT, i.e., $\mathbf{h}_k = \hat{\mathbf{h}}_k + \mathbf{e}_k$ for $k \in \mathcal{K}$, the ergodic spectral efficiency of the common message is obtained as [4], [7]
$$R_c = \min_{k \in \mathcal{K}} \mathbb{E}_{\{\mathbf{h}_k\}} \left[ \log_2 \left( 1 + \frac{|\mathbf{h}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \right] = \min_{k \in \mathcal{K}} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}} \left[ \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{|\mathbf{h}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \,\middle|\, \hat{\mathbf{h}}_k \right] \right], \tag{7}$$
where, in (7), the inner expectation is taken over the randomness associated with the CSIT error $\mathbf{e}_k$. Assuming that the channel code length spans an infinite number of channel blocks and we set the channel coding rate of the common message $s_c$ less than or equal to $R_c$, no decoding error for $s_c$ occurs. Similar to (7), the ergodic spectral efficiency of the private message after cancelling the common message $s_c$ is obtained as [4], [7]
$$R_k = \mathbb{E}_{\{\hat{\mathbf{h}}_k\}} \left[ \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{|\mathbf{h}_k^{\sf H} \mathbf{f}_k|^2}{\sum_{\ell=1, \ell \neq k}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \,\middle|\, \hat{\mathbf{h}}_k \right] \right]. \tag{8}$$
Since we assume that the common message is successfully eliminated, we observe that there is no interference from the common message in (8). Under the assumption that the channel code length spans an infinite number of channel blocks and the channel coding rate of the private message is less than or equal to , the users can successfully decode .
Our main goal is to optimize the precoders using imperfect knowledge of the CSIT in each fading block. For this reason, we focus on one particular fading block without loss of generality, which allows us to assume that $\hat{\mathbf{h}}_k$, $k \in \mathcal{K}$, is given. In a certain fading block, we can define the instantaneous spectral efficiency. Specifically, the instantaneous spectral efficiency of the common message achieved at user $k$ is defined as
$$R_c^{\rm ins.}(k) = \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{|\mathbf{h}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \,\middle|\, \hat{\mathbf{h}}_k \right]. \tag{9}$$
The instantaneous spectral efficiency differs from the ergodic spectral efficiency. On the one hand, the ergodic spectral efficiency is the long-term performance that can be achieved when the channel code length spans very long channel blocks. On the other hand, the instantaneous spectral efficiency is the short-term rate expression that accounts for the channel estimation error effect per channel realization. Considering multiple fading blocks, the instantaneous spectral efficiency and the ergodic spectral efficiency are connected as $R_c = \min_{k \in \mathcal{K}} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}}[R_c^{\rm ins.}(k)]$. Unfortunately, however, (9) is not tractable. The main challenge is that no closed form exists for the expectation over the CSIT error. To address this, we characterize a lower bound by adopting a similar approach to [4]. We rewrite the received signal (6) with the CSIT error term as follows:
$$y_k = \mathbf{h}_k^{\sf H} \mathbf{f}_c s_c + \sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{f}_\ell s_\ell + n_k \overset{(a)}{=} \hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c s_c + \sum_{\ell=1}^{K} \hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell s_\ell + \underbrace{\mathbf{e}_k^{\sf H} \mathbf{f}_c s_c + \sum_{\ell=1}^{K} \mathbf{e}_k^{\sf H} \mathbf{f}_\ell s_\ell}_{(b)} + n_k, \tag{10}$$
where (a) follows from $\mathbf{h}_k = \hat{\mathbf{h}}_k + \mathbf{e}_k$. The term (b) is correlated with the common message $s_c$, yet it is not tractable due to the CSIT estimation error $\mathbf{e}_k$. To resolve this, employing the concept of generalized mutual information, we treat (b) as independent Gaussian noise, which is the worst case for the mutual information. Then a lower bound on the instantaneous spectral efficiency is obtained as
$$R_c^{\rm ins.}(k) \overset{(c)}{\ge} \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + |\mathbf{e}_k^{\sf H} \mathbf{f}_c|^2 + \sum_{\ell=1}^{K} |\mathbf{e}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \right]$$
$$\overset{(d)}{\ge} \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \mathbf{f}_c^{\sf H} \mathbb{E}[\mathbf{e}_k \mathbf{e}_k^{\sf H}] \mathbf{f}_c + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \mathbb{E}[\mathbf{e}_k \mathbf{e}_k^{\sf H}] \mathbf{f}_\ell + \sigma^2/P} \right)$$
$$\overset{(e)}{=} \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \mathbf{f}_c^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_c + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_\ell + \sigma^2/P} \right) = \bar{R}_c^{\rm ins.}(k), \tag{11}$$
where (c) comes from treating (b) in (10) as independent Gaussian noise, (d) follows from Jensen's inequality, and (e) follows from the definition $\boldsymbol{\Phi}_k = \mathbb{E}[\mathbf{e}_k \mathbf{e}_k^{\sf H}]$. The lower bound on the instantaneous spectral efficiency, denoted $\bar{R}_c^{\rm ins.}(k)$, is derived in closed form, so it can be handled easily. Finally, we take another lower bound on the ergodic spectral efficiency. The ergodic spectral efficiency of the common message is represented with $\bar{R}_c^{\rm ins.}(k)$ as
$$R_c = \min_{k \in \mathcal{K}} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}} \left[ R_c^{\rm ins.}(k) \right] \ge \min_{k \in \mathcal{K}} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}} \left[ \bar{R}_c^{\rm ins.}(k) \right] \overset{(f)}{\ge} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}_{k \in \mathcal{K}}} \left[ \min_{k \in \mathcal{K}} \bar{R}_c^{\rm ins.}(k) \right], \tag{12}$$
where (f) follows from the fact that moving the minimum operator inside the expectation does not increase the value. We take (12) as our main objective function for the common message rate.
Next, we characterize a lower bound on the instantaneous spectral efficiency of the private message. We first define the instantaneous spectral efficiency of the private message in a certain fading block as $R_k^{\rm ins.}$. Using a similar technique to the common message case, we derive a lower bound on $R_k^{\rm ins.}$ as
$$R_k^{\rm ins.} \ge \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_k|^2}{\sum_{\ell=1, \ell \neq k}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \sum_{\ell=1}^{K} |\mathbf{e}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2/P} \right) \right] \ge \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_k|^2}{\sum_{\ell=1, \ell \neq k}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_\ell + \sigma^2/P} \right) = \bar{R}_k^{\rm ins.}. \tag{13}$$
Considering multiple fading blocks, the obtained lower bound on the instantaneous spectral efficiency $\bar{R}_k^{\rm ins.}$ is connected to the ergodic spectral efficiency as follows:
$$R_k = \mathbb{E}_{\{\hat{\mathbf{h}}_k\}}[R_k^{\rm ins.}] \ge \mathbb{E}_{\{\hat{\mathbf{h}}_k\}}[\bar{R}_k^{\rm ins.}]. \tag{14}$$
Combining (12) and (14), we finally obtain the following lower bound on the ergodic sum spectral efficiency $R_\Sigma$:
$$R_\Sigma \ge \bar{R}_\Sigma = \mathbb{E}_{\{\hat{\mathbf{h}}_k\}_{k \in \mathcal{K}}} \left[ \min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\} + \sum_{k=1}^{K} \bar{R}_k^{\rm ins.} \right]. \tag{15}$$
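For concreteness, here is a sketch (ours, following the notation of (11), (13), and (15)) that evaluates the closed-form instantaneous lower bounds for given precoders. All numerical inputs in the usage snippet are arbitrary assumptions.

```python
import numpy as np

def rsma_lower_bounds(h_hat, Phi, f_c, F, sigma2=1.0, P=1.0):
    """Instantaneous part of (15): min_k (11) + sum_k (13).
    h_hat: (K, N) estimated channels; Phi: (K, N, N) error covariances;
    f_c: (N,) common precoder; F: (K, N) private precoders."""
    K = h_hat.shape[0]
    Rc, Rp = np.zeros(K), np.zeros(K)
    for k in range(K):
        hk, Pk = h_hat[k], Phi[k]
        sig = np.abs(hk.conj() @ F.T) ** 2                         # |h_k^H f_l|^2 for all l
        reg_c = np.real(f_c.conj() @ Pk @ f_c)                     # f_c^H Phi_k f_c
        reg_p = np.real(sum(F[l].conj() @ Pk @ F[l] for l in range(K)))
        # Common-message bound (11): all private streams are interference.
        Rc[k] = np.log2(1 + np.abs(hk.conj() @ f_c) ** 2
                        / (sig.sum() + reg_c + reg_p + sigma2 / P))
        # Private-message bound (13): common message removed by SIC.
        Rp[k] = np.log2(1 + sig[k] / (sig.sum() - sig[k] + reg_p + sigma2 / P))
    return Rc.min() + Rp.sum()

rng = np.random.default_rng(1)
K, N = 4, 6
h_hat = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
Phi = np.stack([0.01 * np.eye(N) for _ in range(K)]).astype(complex)
f_all = rng.standard_normal((K + 1, N)) + 1j * rng.standard_normal((K + 1, N))
f_all /= np.linalg.norm(f_all)                                     # power constraint (22)
print(rsma_lower_bounds(h_hat, Phi, f_all[0], f_all[1:]))
```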
Now we are ready to formulate our main problem.
Remark 1 (Comparison to [10]). Similar to our lower bound, [10] also proposed a lower bound on the instantaneous spectral efficiencies by incorporating the CSIT estimation error. To be specific, [10] rewrote the instantaneous spectral efficiency of the common message as
$$R_c^{\rm ins.}(k) = \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( 1 + \frac{\mathbf{h}_k^{\sf H} \mathbf{Q}_c \mathbf{h}_k}{\sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{Q}_\ell \mathbf{h}_k + \sigma^2/P} \right) \,\middle|\, \hat{\mathbf{h}}_k \right]$$
$$= \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( \mathbf{h}_k^{\sf H} \mathbf{Q}_c \mathbf{h}_k + \sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{Q}_\ell \mathbf{h}_k + \sigma^2/P \right) \right] - \mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( \sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{Q}_\ell \mathbf{h}_k + \sigma^2/P \right) \right], \tag{16}$$
where $\mathbf{Q}_c = \mathbf{f}_c \mathbf{f}_c^{\sf H}$ and $\mathbf{Q}_k = \mathbf{f}_k \mathbf{f}_k^{\sf H}$. By using Jensen's inequality and the fact that the CSIT estimation error has zero mean, we obtain an upper bound on the second term in (16) as
$$\mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( \sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{Q}_\ell \mathbf{h}_k + \sigma^2/P \right) \right] \le \log_2 \left( \sum_{\ell=1}^{K} \left( \hat{\mathbf{h}}_k^{\sf H} \mathbf{Q}_\ell \hat{\mathbf{h}}_k + \sigma_e^2 \, {\rm tr}(\mathbf{Q}_\ell) \right) + \sigma^2/P \right). \tag{17}$$
Additionally, [10] proved that the first term in (16) is non-decreasing with the CSIT error power, so we get a lower bound by putting $\mathbf{e}_k = \mathbf{0}$, yielding
$$\mathbb{E}_{\{\mathbf{e}_k\}} \left[ \log_2 \left( \mathbf{h}_k^{\sf H} \mathbf{Q}_c \mathbf{h}_k + \sum_{\ell=1}^{K} \mathbf{h}_k^{\sf H} \mathbf{Q}_\ell \mathbf{h}_k + \sigma^2/P \right) \right] \ge \log_2 \left( \hat{\mathbf{h}}_k^{\sf H} \mathbf{Q}_c \hat{\mathbf{h}}_k + \sum_{\ell=1}^{K} \hat{\mathbf{h}}_k^{\sf H} \mathbf{Q}_\ell \hat{\mathbf{h}}_k + \sigma^2/P \right). \tag{18}$$
Combining (17) and (18), [10] claimed that we can form a lower bound as
$$R_c^{\rm ins.}(k) \ge \log_2 \left( 1 + \frac{\hat{\mathbf{h}}_k^{\sf H} \mathbf{Q}_c \hat{\mathbf{h}}_k}{\sum_{\ell=1}^{K} \left( \hat{\mathbf{h}}_k^{\sf H} \mathbf{Q}_\ell \hat{\mathbf{h}}_k + \sigma_e^2 \, {\rm tr}(\mathbf{Q}_\ell) \right) + \sigma^2/P} \right) = \tilde{R}_c^{\rm ins.}(k). \tag{19}$$
The lower bound (19) is closely related to our lower bound. Rewriting (19), we have
$$\tilde{R}_c^{\rm ins.}(k) = \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \cdot \sigma_e^2 \mathbf{I} \cdot \mathbf{f}_\ell + \sigma^2/P} \right). \tag{20}$$
Comparing $\tilde{R}_c^{\rm ins.}(k)$ in (20) to our lower bound $\bar{R}_c^{\rm ins.}(k)$ in (11) under the assumption that $\boldsymbol{\Phi}_k = \sigma_e^2 \mathbf{I}$, (20) seems to be tighter since the denominator of the SINR in (20) does not include the term $\mathbf{f}_c^{\sf H} \cdot \sigma_e^2 \mathbf{I} \cdot \mathbf{f}_c$. Nonetheless, the lower bound (20) is limited in its applicability. This is because the technique used to derive (20) cannot be applied when $\boldsymbol{\Phi}_k \neq \sigma_e^2 \mathbf{I}$, i.e., when the CSIT error is spatially correlated. Since MIMO channels typically have particular spatial correlation structures, our lower bound is more appropriate for general cases.
E. Problem Formulation
We aim to maximize $\bar{R}_\Sigma$ in (15). In our setup, maximizing $\bar{R}_\Sigma$ is equivalent to maximizing $\min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\} + \sum_{k=1}^{K} \bar{R}_k^{\rm ins.}$ per each fading block, wherein the BS is able to calculate $\bar{R}_k^{\rm ins.}$ and $\min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\}$ in closed form by using the estimated CSIT. Accordingly, we formulate the optimization problem as follows:
$$\underset{\mathbf{f}_c, \mathbf{f}_1, \cdots, \mathbf{f}_K}{\text{maximize}} \quad \min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\} + \sum_{k=1}^{K} \bar{R}_k^{\rm ins.} \tag{21}$$
$$\text{subject to} \quad \|\mathbf{f}_c\|^2 + \sum_{k=1}^{K} \|\mathbf{f}_k\|^2 \le 1. \tag{22}$$
We tackle (21) as our main problem. Finding the global solution of (21) is infeasible due to its non-convexity and non-smoothness.
III. EXISTING APPROACH: WMMSE
In this section, we briefly introduce the existing WMMSE approach [7] for the sum spectral efficiency maximization in an RSMA scenario. We focus on two points: (i) how to solve a non-convex optimization problem and (ii) how to incorporate the CSIT estimation error. Then we explain the main features distinguishing the proposed method.
We first explain how to relax a non-convex problem. To solve a non-convex sum spectral efficiency maximization problem, [7] adopted a well-known WMMSE relaxation technique [3].
Specifically, let $\hat{s}_c(k)$ and $\hat{s}_k$ denote the estimates of $s_c(k)$ and $s_k$, where $s_c(k)$ is the common message received at user $k$. With scalar equalizers $g_c(k)$ and $g_k$, we define the MSEs of the common message received at user $k$, $\varepsilon_c(k)$, and of the private message, $\varepsilon_k$, as
$$\varepsilon_c(k) = \mathbb{E}[|\hat{s}_c(k) - s_c|^2] = |g_c(k)|^2 T_c(k) - 2{\rm Re}\{g_c(k) \mathbf{h}_k^{\sf H} \mathbf{f}_c\} + 1, \tag{23}$$
$$\varepsilon_k = \mathbb{E}[|\hat{s}_k - s_k|^2] = |g_k|^2 T_k - 2{\rm Re}\{g_k \mathbf{h}_k^{\sf H} \mathbf{f}_k\} + 1, \tag{24}$$
where $T_c(k) = |\mathbf{h}_k^{\sf H} \mathbf{f}_c|^2 + \sum_{\ell=1}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2$ and $T_k = |\mathbf{h}_k^{\sf H} \mathbf{f}_k|^2 + \sum_{\ell=1, \ell \neq k}^{K} |\mathbf{h}_k^{\sf H} \mathbf{f}_\ell|^2 + \sigma^2$. The minimum MSEs, obtained with the MMSE equalizers, are $\varepsilon_c^{\rm MMSE}(k) = T_c(k)^{-1}(T_c(k) - |\mathbf{h}_k^{\sf H} \mathbf{f}_c|^2)$ and $\varepsilon_k^{\rm MMSE} = T_k^{-1}(T_k - |\mathbf{h}_k^{\sf H} \mathbf{f}_k|^2)$.
Then the augmented WMSEs are given by
$$\xi_c(k) = u_c(k) \varepsilon_c(k) - \log_2(u_c(k)) = \mathbf{f}_c^{\sf H} u_c(k) |g_c(k)|^2 \mathbf{h}_k \mathbf{h}_k^{\sf H} \mathbf{f}_c + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} u_c(k) |g_c(k)|^2 \mathbf{h}_k \mathbf{h}_k^{\sf H} \mathbf{f}_\ell - 2{\rm Re}\{u_c(k) g_c(k) \mathbf{h}_k^{\sf H} \mathbf{f}_c\} + \sigma^2 u_c(k) |g_c(k)|^2 + u_c(k) - \log_2(u_c(k)), \tag{25}$$
$$\xi_k = u_k \varepsilon_k - \log_2(u_k) = \mathbf{f}_k^{\sf H} u_k |g_k|^2 \mathbf{h}_k \mathbf{h}_k^{\sf H} \mathbf{f}_k + \sum_{\ell=1, \ell \neq k}^{K} \mathbf{f}_\ell^{\sf H} u_k |g_k|^2 \mathbf{h}_k \mathbf{h}_k^{\sf H} \mathbf{f}_\ell - 2{\rm Re}\{u_k g_k \mathbf{h}_k^{\sf H} \mathbf{f}_k\} + \sigma^2 u_k |g_k|^2 + u_k - \log_2(u_k). \tag{26}$$
We note that the optimal weights achieving the minima of $\xi_c(k)$ and $\xi_k$ are obtained as $u_c(k) = 1/\varepsilon_c^{\rm MMSE}(k)$ and $u_k = 1/\varepsilon_k^{\rm MMSE}$. Upon this weight update, by the rate-WMMSE equivalence [3], the sum spectral efficiency is maximized by solving the following WMSE minimization problem:
$$\underset{\mathbf{f}_c, \mathbf{f}_1, \cdots, \mathbf{f}_K, \xi_c}{\text{minimize}} \quad \xi_c + \sum_{k=1}^{K} \xi_k \tag{27}$$
$$\text{subject to} \quad \xi_c(k) \le \xi_c, \ \forall k \in \mathcal{K}, \tag{28}$$
$$\qquad\qquad\quad \|\mathbf{f}_c\|^2 + \sum_{k=1}^{K} \|\mathbf{f}_k\|^2 \le 1. \tag{29}$$
The problem (27) is a QCQP, which can be solved by using CVX. We compute the optimal weights, equalizers, and precoding vectors in an alternating fashion and repeat this process until a certain termination criterion is met.
The presented WMMSE approach assumes perfect CSIT. To take the CSIT estimation error into account, [7] adopted the sample average approximation (SAA) technique. In the SAA technique, we produce $M$ samples of the augmented WMSEs by randomly generating channel vectors $\mathbf{h}_k^{(m)} = \hat{\mathbf{h}}_k + \mathbf{e}_k^{(m)}$ ($\hat{\mathbf{h}}_k$ is given, $\mathbf{e}_k^{(m)}$ is randomly generated) and then calculate the empirical averages of these samples as follows:
$$\bar{\xi}_c(k) \leftarrow \frac{1}{M} \sum_{m=1}^{M} \xi_c^{(m)}(k), \tag{30}$$
$$\bar{\xi}_k \leftarrow \frac{1}{M} \sum_{m=1}^{M} \xi_k^{(m)}. \tag{31}$$
We replace $\xi_c(k)$ and $\xi_k$ by the empirical average augmented WMSEs $\bar{\xi}_c(k)$ and $\bar{\xi}_k$ in (27).
Now we clarify the points distinguishing our method from the WMMSE approach. In the WMMSE approach, CVX is required to solve the WMSE minimization problem (27). Additional effort is needed to implement CVX in FPGA hardware since CVX is not designed to run on real-time hardware [27]. In addition, we observe that the common message rate is controlled by the WMSE constraints (28), where the number of constraints is equal to the number of users $K$. For this reason, the associated computational complexity of the WMMSE approach scales with $K^{3.5}$ [8], [30], resulting in huge computational complexity when there are a large number of users. In contrast, as we will show later, our method does not require CVX to obtain a solution. Further, since we control the common message rate by judiciously approximating the minimum function, the computational complexity scales linearly with $K$. For this reason, our method is far more beneficial when there are many users.
IV. PRECODER OPTIMIZATION WITH GENERALIZED POWER ITERATION
In this section, we explain the key ideas used to solve the optimization problem (21). We first approximate the non-smooth minimum function by a smooth function using the LogSumExp technique. Subsequently, we stack the optimization variables into a higher-dimensional vector to reformulate the problem (21) into a tractable non-convex optimization problem expressed as a function of Rayleigh quotients. By deriving the first-order KKT condition for the reformulated problem, we show that the first-order optimality condition is cast as an eigenvector-dependent nonlinear eigenvalue problem (NEPv) [28], and that finding the leading eigenvector is equivalent to finding the best local optimal point of the reformulated problem. Consequently, to find the leading eigenvector, we propose a computationally efficient generalized power iteration algorithm.
A. Reformulation to a Tractable Form
At first, we approximate the non-smooth minimum function by using the LogSumExp technique. With LogSumExp, the minimum function is approximated as [31]
$$\min_{k=1,\dots,K} \{x_k\} \approx -\alpha \log \left( \frac{1}{K} \sum_{k=1}^{K} \exp\left( -\frac{x_k}{\alpha} \right) \right), \tag{32}$$
where the approximation becomes tight as $\alpha \to +0$. Leveraging (32), we approximate
$$\min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\} \approx -\alpha \log \left( \frac{1}{K} \sum_{k=1}^{K} \exp\left( -\frac{\bar{R}_c^{\rm ins.}(k)}{\alpha} \right) \right). \tag{33}$$
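As a quick numerical illustration of (32)-(33), the following sketch (ours; the toy rate vector is an arbitrary assumption) evaluates the smooth surrogate and shows it approaching the true minimum as $\alpha$ shrinks.

```python
import numpy as np

def smooth_min(x, alpha):
    """LogSumExp surrogate of min(x), cf. (32); tight as alpha -> +0."""
    x = np.asarray(x, dtype=float)
    m = x.min()
    # Subtract the true min before exponentiating for numerical stability.
    return m - alpha * np.log(np.mean(np.exp(-(x - m) / alpha)))

rates = np.array([2.3, 1.1, 1.4, 3.0])
for alpha in [1.0, 0.3, 0.1, 0.01]:
    print(alpha, smooth_min(rates, alpha))   # decreases toward min(rates) = 1.1
```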
To help understand the LogSumExp approximation technique, an illustration is given in Fig. 1, which depicts a landscape of the minimum spectral efficiency between two users ($K = 2$), together with the approximate minimum spectral efficiency obtained by the LogSumExp technique. As shown in the figure, the true maximum value of the non-smooth minimum function is tightly approximated by the LogSumExp surrogate.
Now we rewrite the precoding vectors $\mathbf{f}_c, \mathbf{f}_1, \cdots, \mathbf{f}_K$ as a higher-dimensional stacked vector $\bar{\mathbf{f}} = [\mathbf{f}_c^{\sf T}, \mathbf{f}_1^{\sf T}, \cdots, \mathbf{f}_K^{\sf T}]^{\sf T} \in \mathbb{C}^{(K+1)N \times 1}$. With this, we express the instantaneous spectral efficiency of the common message as
$$\bar{R}_c^{\rm ins.}(k) = \log_2 \left( \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} \right), \tag{34}$$
where
$$\mathbf{A}_c(k) = {\rm blkdiag}\left( (\hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H} + \boldsymbol{\Phi}_k), \cdots, (\hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H} + \boldsymbol{\Phi}_k) \right) + \frac{\sigma^2}{P} \mathbf{I}_{(K+1)N}, \tag{35}$$
$$\mathbf{B}_c(k) = \mathbf{A}_c(k) - {\rm blkdiag}\left( \hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H}, \mathbf{0}, \cdots, \mathbf{0} \right). \tag{36}$$
Note that we implicitly assume $\|\bar{\mathbf{f}}\|^2 = 1$ to obtain (34). This assumption does not hurt optimality since the spectral efficiency is monotonically increasing with the transmit power. It is also worth mentioning that (34) is expressed as a function of Rayleigh quotients.
Similar to this, we also write the spectral efficiency of the private message as
$$\bar{R}_k^{\rm ins.} = \log_2 \left( \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_k \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_k \bar{\mathbf{f}}} \right), \tag{37}$$
where
$$\mathbf{A}_k = {\rm blkdiag}\left( \mathbf{0}, (\hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H} + \boldsymbol{\Phi}_k), \cdots, (\hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H} + \boldsymbol{\Phi}_k) \right) + \frac{\sigma^2}{P} \mathbf{I}_{(K+1)N}, \tag{38}$$
$$\mathbf{B}_k = \mathbf{A}_k - {\rm blkdiag}\big( \mathbf{0}, \cdots, \mathbf{0}, \underbrace{\hat{\mathbf{h}}_k \hat{\mathbf{h}}_k^{\sf H}}_{\text{the } (k+1)\text{-th block}}, \mathbf{0}, \cdots, \mathbf{0} \big). \tag{39}$$
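The block-diagonal structure of (35)-(39) is easy to materialize in code. Below is a sketch (ours, assuming SciPy's `block_diag`); note that sums of these matrices remain block diagonal, which is what later keeps the inversion of $\mathbf{B}_{\rm KKT}$ cheap.

```python
import numpy as np
from scipy.linalg import block_diag

def build_matrices(h_hat, Phi, sigma2=1.0, P=1.0):
    """A_c(k), B_c(k) of (35)-(36) and A_k, B_k of (38)-(39) for the stacked
    precoder f_bar = [f_c; f_1; ...; f_K] in C^{(K+1)N}."""
    K, N = h_hat.shape
    reg = (sigma2 / P) * np.eye((K + 1) * N)
    Z = np.zeros((N, N), dtype=complex)
    Ac, Bc, Ap, Bp = [], [], [], []
    for k in range(K):
        HH = np.outer(h_hat[k], h_hat[k].conj())                 # h_k h_k^H
        Hk = HH + Phi[k]                                         # channel plus CSIT-error term
        A = block_diag(*([Hk] * (K + 1))) + reg                  # (35)
        B = A.copy(); B[:N, :N] -= HH                            # (36): drop desired f_c part
        Apk = block_diag(Z, *([Hk] * K)) + reg                   # (38): common removed by SIC
        Bpk = Apk.copy()
        s = (k + 1) * N
        Bpk[s:s + N, s:s + N] -= HH                              # (39): drop desired f_k part
        Ac.append(A); Bc.append(B); Ap.append(Apk); Bp.append(Bpk)
    return Ac, Bc, Ap, Bp
```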
With this Rayleigh quotient representation, the problem (21) is transformed into
$$\underset{\bar{\mathbf{f}}}{\text{maximize}} \quad -\alpha \log \left( \frac{1}{K} \sum_{k=1}^{K} \exp\left( -\frac{1}{\alpha} \log_2 \left( \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} \right) \right) \right) + \sum_{k=1}^{K} \log_2 \left( \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_k \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_k \bar{\mathbf{f}}} \right) \tag{40}$$
$$\text{subject to} \quad \|\bar{\mathbf{f}}\|^2 = 1. \tag{41}$$
We note that in (40), the obtained precoding vector $\bar{\mathbf{f}}$ can always be normalized by dividing the numerator and the denominator of each Rayleigh quotient by $\|\bar{\mathbf{f}}\|$ without affecting the objective function. Thanks to this feature, the constraint (41) can be dropped from (40). Now we are ready to tackle the problem (40).
B. First-Order Optimality Condition
To approach the solution of the transformed problem (40), we derive a first-order optimality condition of (40). The following lemma shows the main result in this subsection.
Lemma 1. The first-order optimality condition of the optimization problem (40) is satisfied if the following holds:
$$\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}) \bar{\mathbf{f}} = \lambda(\bar{\mathbf{f}}) \bar{\mathbf{f}}, \tag{42}$$
where
$$\mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}) = \lambda_{\rm num}(\bar{\mathbf{f}}) \times \sum_{k=1}^{K} \left[ \frac{\exp\left( -\frac{1}{\alpha} \log_2 \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} \right)}{\sum_{\ell=1}^{K} \exp\left( -\frac{1}{\alpha} \log_2 \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(\ell) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(\ell) \bar{\mathbf{f}}} \right)} \cdot \frac{\mathbf{A}_c(k)}{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}} + \frac{\mathbf{A}_k}{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_k \bar{\mathbf{f}}} \right], \tag{43}$$
$$\mathbf{B}_{\rm KKT}(\bar{\mathbf{f}}) = \lambda_{\rm den}(\bar{\mathbf{f}}) \times \sum_{k=1}^{K} \left[ \frac{\exp\left( -\frac{1}{\alpha} \log_2 \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} \right)}{\sum_{\ell=1}^{K} \exp\left( -\frac{1}{\alpha} \log_2 \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(\ell) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(\ell) \bar{\mathbf{f}}} \right)} \cdot \frac{\mathbf{B}_c(k)}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} + \frac{\mathbf{B}_k}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_k \bar{\mathbf{f}}} \right], \tag{44}$$
with
$$\lambda(\bar{\mathbf{f}}) = \left( \frac{1}{K} \sum_{k=1}^{K} \exp\left( -\frac{1}{\alpha} \log_2 \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_c(k) \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_c(k) \bar{\mathbf{f}}} \right) \right)^{-\alpha} \times \prod_{k=1}^{K} \frac{\bar{\mathbf{f}}^{\sf H} \mathbf{A}_k \bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\sf H} \mathbf{B}_k \bar{\mathbf{f}}} = \frac{\lambda_{\rm num}(\bar{\mathbf{f}})}{\lambda_{\rm den}(\bar{\mathbf{f}})}. \tag{45}$$
Proof. See Appendix A.
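For reference, a sketch of evaluating (43)-(44) at a given stacked precoder follows (ours, reusing `build_matrices` from the earlier sketch). As an implementation simplification of our own, the scalar prefactors $\lambda_{\rm num}(\bar{\mathbf{f}})$ and $\lambda_{\rm den}(\bar{\mathbf{f}})$ are dropped: they only scale the eigenvalue, and the subsequent power-iteration step renormalizes the vector anyway.

```python
import numpy as np

def kkt_matrices(fbar, Ac, Bc, Ap, Bp, alpha):
    """A_KKT and B_KKT of (43)-(44) at fbar, up to the scalar prefactors.
    Ac/Bc/Ap/Bp are the per-user matrix lists of (35)-(39)."""
    quad = lambda M: np.real(fbar.conj() @ M @ fbar)             # f^H M f
    K = len(Ac)
    Rc = np.array([np.log2(quad(Ac[k]) / quad(Bc[k])) for k in range(K)])
    w = np.exp(-Rc / alpha)
    w /= w.sum()                                                 # LogSumExp softmin weights
    A = sum(w[k] * Ac[k] / quad(Ac[k]) + Ap[k] / quad(Ap[k]) for k in range(K))
    B = sum(w[k] * Bc[k] / quad(Bc[k]) + Bp[k] / quad(Bp[k]) for k in range(K))
    return A, B
```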
Now we interpret the derived optimality condition (42). We first observe that if a precoding vector $\bar{\mathbf{f}}$ satisfies the condition (42), then it also satisfies the first-order optimality condition, which means that the corresponding $\bar{\mathbf{f}}$ is a stationary point of the problem (40) whose gradient is zero. If the problem (40) has multiple stationary points, multiple $\bar{\mathbf{f}}$ satisfying (42) may exist. Next, we see that (42) takes the form of an eigenvector problem for the matrix $\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}})$. More rigorously, (42) is cast as a NEPv [28]. As described in [28], a NEPv is a generalized version of an eigenvalue problem in which the matrix can change depending on the eigenvector in a nonlinear fashion. In our case, the matrix $\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}})$ is a nonlinear function of the eigenvector $\bar{\mathbf{f}}$. Crucially, in the formulated NEPv (42), the eigenvalue $\lambda(\bar{\mathbf{f}})$ is equivalent to the objective function of the problem (40). Accordingly, if we find the leading eigenvector of the NEPv (42), it maximizes the objective function among all eigenvectors. Eventually, since (42) holds for any eigenvector, finding the leading eigenvector of the NEPv (42) is equivalent to finding the local optimal point that maximizes the objective function of (40) and has zero gradient. This leads to the following proposition.

Proposition 1. Denoting the local optimal point of the problem (40) by $\bar{\mathbf{f}}^\star$, $\bar{\mathbf{f}}^\star$ is the leading eigenvector of $\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^\star) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^\star)$ satisfying
$$\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^\star) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^\star) \bar{\mathbf{f}}^\star = \lambda^\star \bar{\mathbf{f}}^\star, \tag{46}$$
where $\lambda^\star$ is the corresponding eigenvalue.
Algorithm 1 GPI-RS
initialize: $\bar{\mathbf{f}}^{(0)} = $ MRT precoder. Set the iteration count $t = 1$.
while $\|\bar{\mathbf{f}}^{(t)} - \bar{\mathbf{f}}^{(t-1)}\| > \epsilon$ do
    Construct the matrices $\mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)})$ and $\mathbf{B}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)})$ by using (43) and (44).
    Update $\bar{\mathbf{f}}^{(t)} \leftarrow \frac{\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^{(t-1)}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)}) \bar{\mathbf{f}}^{(t-1)}}{\|\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^{(t-1)}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)}) \bar{\mathbf{f}}^{(t-1)}\|}$.
    $t \leftarrow t + 1$.
end while
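A direct Python transcription of Algorithm 1 is sketched below (ours; it reuses the `kkt_matrices` and `build_matrices` helpers from the earlier sketches, and the MRT-style initialization in the usage comments is our own choice of a concrete starting point).

```python
import numpy as np

def gpi_rs(Ac, Bc, Ap, Bp, f0, alpha, tol=1e-6, max_iter=200):
    """Generalized power iteration for rate-splitting (Algorithm 1):
    iterate f <- normalize(B_KKT(f)^{-1} A_KKT(f) f) until the update stalls."""
    f = f0 / np.linalg.norm(f0)
    for _ in range(max_iter):
        A, B = kkt_matrices(f, Ac, Bc, Ap, Bp, alpha)
        f_new = np.linalg.solve(B, A @ f)            # B_KKT^{-1} A_KKT f
        f_new /= np.linalg.norm(f_new)               # keep ||f|| = 1, cf. (41)
        if np.linalg.norm(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# Hypothetical usage:
#   Ac, Bc, Ap, Bp = build_matrices(h_hat, Phi)
#   f0 = np.concatenate([h_hat.sum(axis=0), h_hat.reshape(-1)])  # an MRT-style start
#   f_star = gpi_rs(Ac, Bc, Ap, Bp, f0, alpha=0.5)
```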
Finding $\bar{\mathbf{f}}^\star$ is, however, not straightforward due to the intertwined nature of the problem. In the next subsection, we propose a novel method called GPI-RS, which obtains the leading eigenvector of the matrix $\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}})$ in a computationally efficient fashion.
C. Generalized Power Iteration for Rate-Splitting
The basic process of the proposed GPI-RS follows that of the conventional power iteration. Given $\bar{\mathbf{f}}^{(t-1)}$ obtained in the $(t-1)$-th iteration, we construct the matrices $\mathbf{B}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)})$ and $\mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)})$ using (43) and (44). Then, we update the precoding vector for the current iteration as
$$\bar{\mathbf{f}}^{(t)} \leftarrow \frac{\mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^{(t-1)}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)}) \bar{\mathbf{f}}^{(t-1)}}{\left\| \mathbf{B}_{\rm KKT}^{-1}(\bar{\mathbf{f}}^{(t-1)}) \mathbf{A}_{\rm KKT}(\bar{\mathbf{f}}^{(t-1)}) \bar{\mathbf{f}}^{(t-1)} \right\|}. \tag{47}$$
We repeat this process until the convergence criterion is met. In this paper, we use $\|\bar{\mathbf{f}}^{(t)} - \bar{\mathbf{f}}^{(t-1)}\| < \epsilon$ for small enough $\epsilon$. We summarize this process in Algorithm 1. For an initial point $\bar{\mathbf{f}}^{(0)}$, we use maximum ratio transmission (MRT), which works well in the later simulations.

Remark 3 (Computational complexity). Since the matrices in (43)-(44) inherit the block-diagonal structure of (35)-(39), the complexity of the GPI-RS per iteration scales as $\mathcal{O}(K N^3)$ when $K$ and $N$ increase with the same order. We note that this is substantially small compared to the existing methods. For example, the conventional WMMSE methods based on QCQP [7], [32] need a complexity order of $\mathcal{O}((KN)^{3.5})$ [8], [30]. Further, the CCCP based method [10] is associated with an even higher complexity order. In particular, it is noteworthy that the proposed GPI-RS has linear-order complexity in the number of users $K$, which makes the proposed GPI-RS advantageous when there are a large number of users. Additionally, our algorithm is easy to implement in practice in that CVX is not needed to obtain a solution. We note that the computational complexity of the proposed method can be further reduced by adopting matrix inversion approximation techniques such as Chebyshev iteration [33] or Neumann series [34], or by limiting the maximum number of iterations in the proposed method.
Remark 4 (Selection of the parameter $\alpha$). Even though a small $\alpha$ is desirable since it provides an accurate approximation in the LogSumExp technique, using too small an $\alpha$ may cause the algorithm to diverge. Analytically identifying the optimal $\alpha$, however, is very challenging. To find a proper $\alpha$ numerically, we can modify the GPI-RS to obtain the smallest $\alpha$ that makes the GPI-RS algorithm converge. Specifically, we start the GPI-RS with a small $\alpha$. If the iteration loop of the GPI-RS does not converge within a predetermined number of iterations, then we terminate the loop, increase $\alpha$, and restart the algorithm. We repeat this process until the algorithm converges within the predetermined number of iterations. We can empirically adapt the starting value and the increment depending on the system configuration to reduce the runtime.
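The outer loop of Remark 4 can be sketched as follows (ours, reusing `gpi_rs` and `kkt_matrices` from the sketches above). The convergence re-check via one extra fixed-point step is our own heuristic for detecting whether the inner loop stalled rather than converged.

```python
import numpy as np

def gpi_rs_adaptive(Ac, Bc, Ap, Bp, f0, alpha0=0.1, step=0.5, budget=50):
    """Start from a small alpha and enlarge it until GPI-RS converges
    within the iteration budget, as described in Remark 4."""
    alpha = alpha0
    while True:
        f = gpi_rs(Ac, Bc, Ap, Bp, f0, alpha, max_iter=budget)
        # One extra fixed-point step: small change means the inner loop converged.
        A, B = kkt_matrices(f, Ac, Bc, Ap, Bp, alpha)
        g = np.linalg.solve(B, A @ f)
        g /= np.linalg.norm(g)
        if np.linalg.norm(g - f) < 1e-6:
            return f, alpha
        alpha += step                                 # enlarge alpha and retry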
| 1 | > | 2 | ≥ · · · ≥ | |, we use M = =1 x to derive Mq ( ) = =1 x = 1 1 x 1 + =2 1 1 x ( ) .(48)
As → ∞, (a) vanishes; thereby the remaining term converges to the leading eigenvector x 1 .
As presented in Proposition 1, our problem generalizes a conventional eigenvalue problem by considering an eigenvector-dependent matrix $\mathbf{M}(\mathbf{x})$, known as a NEPv [28]. That is to say, we aim to identify $\mathbf{x}_1$ that fulfills $\mathbf{M}(\mathbf{x}_1) \mathbf{x}_1 = \lambda_1 \mathbf{x}_1$, where $\lambda_1$ is the maximum eigenvalue and $\mathbf{M}(\mathbf{x}) = \mathbf{B}_{\rm KKT}^{-1}(\mathbf{x}) \mathbf{A}_{\rm KKT}(\mathbf{x})$ in our case. For convenience, denote $\mathbf{M}(\mathbf{x}) \mathbf{x} = \boldsymbol{\phi}(\mathbf{x})$. Then, for an arbitrary vector $\mathbf{x}$, the Taylor expansion of $\boldsymbol{\phi}(\mathbf{x})$ at $\mathbf{x}_1$ leads to
$$\boldsymbol{\phi}(\mathbf{x})^{\sf H} \mathbf{x} = \boldsymbol{\phi}(\mathbf{x}_1)^{\sf H} \mathbf{x} + (\mathbf{x} - \mathbf{x}_1)^{\sf H} \nabla \boldsymbol{\phi}(\mathbf{x}_1) \mathbf{x} + o(\|\mathbf{x} - \mathbf{x}_1\|). \tag{49}$$
On one hand, we have
$$\left| \boldsymbol{\phi}(\mathbf{x})^{\sf H} \mathbf{x}_1 \right|^2 = \left( \lambda_1 + O(\|\mathbf{x} - \mathbf{x}_1\|) \right)^2 \tag{50}$$
due to the fact that $\mathbf{M}(\mathbf{x}_1) \mathbf{x}_1 = \lambda_1 \mathbf{x}_1$. On the other hand, assuming that $\{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$ is a set of orthonormal basis vectors where $\mathbf{v}_1 = \mathbf{x}_1$, we also have
$$\sum_{i=2}^{n} \left| \boldsymbol{\phi}(\mathbf{x})^{\sf H} \mathbf{v}_i \right|^2 \le \sum_{i=2}^{n} \left[ \lambda_i^2 (\mathbf{x}^{\sf H} \mathbf{v}_i)^2 + 2 \lambda_i (\mathbf{x}^{\sf H} \mathbf{v}_i) O(\|\mathbf{x} - \mathbf{x}_1\|) + O(\|\mathbf{x} - \mathbf{x}_1\|)^2 \right] \tag{51}$$
$$\le \left( \lambda_2 \|\mathbf{x} - \mathbf{x}_1\| + O(\|\mathbf{x} - \mathbf{x}_1\|) \right)^2. \tag{52}$$
Under the premise that $|\lambda_1| > |\lambda_2| \ge |\lambda_i|$, $\forall i \neq 1, 2$, by iteratively projecting $\mathbf{x}$ onto $\mathbf{M}(\mathbf{x})$ with the GPI-RS algorithm, each component corresponding to the non-leading eigenvectors $\mathbf{x}_2, \cdots, \mathbf{x}_n$ vanishes. Accordingly, the GPI-RS converges to the leading eigenvector $\mathbf{x}_1$.
V. GENERALIZATION TO MULTIPLE-LAYER RATE SPLITTING
This section discusses how to generalize the proposed framework to multiple-layer RSMA.
As mentioned in [9], if groups of users are located within multiple clusters, it is beneficial to exploit partial common messages, each of which includes the messages of a subset of the users. Employing multiple-layer RSMA, the transmit signal $\mathbf{x}$ is given by
$$\mathbf{x} = \mathbf{f}_c s_c + \sum_{g=1}^{G} \mathbf{f}_{c,\mathcal{K}_g} s_{c,\mathcal{K}_g} + \sum_{k=1}^{K} \mathbf{f}_k s_k, \tag{53}$$
where the partial common message $s_{c,\mathcal{K}_g}$ is decoded by the users included in $\mathcal{K}_g \subset \mathcal{K}$. If $G = 0$, i.e., there is no partial common message, then (53) reduces to the 1-layer RSMA (5). Assuming $\mathcal{K}_g \cap \mathcal{K}_{g'} = \emptyset$ for $g \neq g'$, user $k$ for $k \in \mathcal{K}_g$ decodes the messages in the following order: $s_c \to s_{c,\mathcal{K}_g} \to s_k$. To guarantee that the partial common message $s_{c,\mathcal{K}_g}$ is successfully decoded by the users in $\mathcal{K}_g$, the information rate of $s_{c,\mathcal{K}_g}$ is determined as
$$R_{c,\mathcal{K}_g} = \min_{k \in \mathcal{K}_g} \mathbb{E}_{\{\hat{\mathbf{h}}_k\}} \left[ R_{c,\mathcal{K}_g}^{\rm ins.}(k) \right]. \tag{54}$$
By using the same technique presented in Section II-D, we derive a lower bound as
$$R_{c,\mathcal{K}_g} \ge \mathbb{E}_{\{\hat{\mathbf{h}}_k\}_{k \in \mathcal{K}_g}} \left[ \min_{k \in \mathcal{K}_g} \bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k) \right], \tag{55}$$
where
$$\bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k) = \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_{c,\mathcal{K}_g}|^2}{\sum_{g'=1, g' \neq g}^{G} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_{c,\mathcal{K}_{g'}}|^2 + \sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \sum_{g'=1}^{G} \mathbf{f}_{c,\mathcal{K}_{g'}}^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_{c,\mathcal{K}_{g'}} + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_\ell + \sigma^2/P} \right). \tag{56}$$
We observe that the SINR in (56) does not include interference from the common message $s_c$, since we assume that the common message is already decoded and eliminated via SIC. Similarly, we also characterize lower bounds on the instantaneous spectral efficiencies of the common message and the private message as follows:
$$\bar{R}_c^{\rm ins.}(k) = \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_c|^2}{\sum_{g=1}^{G} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_{c,\mathcal{K}_g}|^2 + \sum_{\ell=1}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \mathbf{f}_c^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_c + \sum_{g=1}^{G} \mathbf{f}_{c,\mathcal{K}_g}^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_{c,\mathcal{K}_g} + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_\ell + \sigma^2/P} \right), \tag{57}$$
$$\bar{R}_k^{\rm ins.} = \log_2 \left( 1 + \frac{|\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_k|^2}{\sum_{g'=1, g' \neq g}^{G} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_{c,\mathcal{K}_{g'}}|^2 + \sum_{\ell=1, \ell \neq k}^{K} |\hat{\mathbf{h}}_k^{\sf H} \mathbf{f}_\ell|^2 + \sum_{g'=1, g' \neq g}^{G} \mathbf{f}_{c,\mathcal{K}_{g'}}^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_{c,\mathcal{K}_{g'}} + \sum_{\ell=1}^{K} \mathbf{f}_\ell^{\sf H} \boldsymbol{\Phi}_k \mathbf{f}_\ell + \sigma^2/P} \right), \tag{58}$$
for $k \in \mathcal{K}_g$. Accordingly, the sum spectral efficiency maximization problem for multiple-layer RSMA is formulated as
$$\underset{\mathbf{f}_c, \{\mathbf{f}_{c,\mathcal{K}_g}\}, \mathbf{f}_1, \cdots, \mathbf{f}_K}{\text{maximize}} \quad \min_{k \in \mathcal{K}} \{\bar{R}_c^{\rm ins.}(k)\} + \sum_{g=1}^{G} \min_{k \in \mathcal{K}_g} \{\bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k)\} + \sum_{k=1}^{K} \bar{R}_k^{\rm ins.} \tag{59}$$
$$\text{subject to} \quad \|\mathbf{f}_c\|^2 + \sum_{g=1}^{G} \|\mathbf{f}_{c,\mathcal{K}_g}\|^2 + \sum_{k=1}^{K} \|\mathbf{f}_k\|^2 \le 1. \tag{60}$$
To obtain a solution of (59), we reformulate (59) by following our approach used in the 1-layer RSMA. We first approximate $\min_{k \in \mathcal{K}_g} \{\bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k)\}$ by using the LogSumExp technique as
$$\min_{k \in \mathcal{K}_g} \{\bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k)\} \approx -\alpha \log \left( \frac{1}{|\mathcal{K}_g|} \sum_{k \in \mathcal{K}_g} \exp\left( -\frac{\bar{R}_{c,\mathcal{K}_g}^{\rm ins.}(k)}{\alpha} \right) \right). \tag{61}$$
Following our approach, we define the high-dimensional stacked precoding vector $\bar{\mathbf{f}}$ as
$$\bar{\mathbf{f}} = [\mathbf{f}_c^{\sf T}, \mathbf{f}_{c,\mathcal{K}_1}^{\sf T}, \cdots, \mathbf{f}_{c,\mathcal{K}_G}^{\sf T}, \mathbf{f}_1^{\sf T}, \cdots, \mathbf{f}_K^{\sf T}]^{\sf T} \in \mathbb{C}^{(K+G+1)N \times 1}. \tag{62}$$
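The stacking in (62) is mechanical; a minimal sketch (ours, with all numerical values illustrative):

```python
import numpy as np

def stack_precoders(f_c, f_groups, F):
    """Stacked precoder (62) for G-group 2nd-layer RSMA:
    f_bar = [f_c; f_{c,K_1}; ...; f_{c,K_G}; f_1; ...; f_K] in C^{(K+G+1)N}."""
    return np.concatenate([f_c] + list(f_groups) + list(F))

# Example: K = 4 users, G = 2 groups, N = 6 antennas.
rng = np.random.default_rng(4)
N, K, G = 6, 4, 2
f_c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
f_groups = rng.standard_normal((G, N)) + 1j * rng.standard_normal((G, N))
F = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
f_bar = stack_precoders(f_c, f_groups, F)
f_bar /= np.linalg.norm(f_bar)                         # power constraint (60)
assert f_bar.size == (K + G + 1) * N
```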
VI. NUMERICAL RESULTS
In this section, we evaluate the sum spectral efficiency performance to demonstrate the proposed GPI-RS. For the baseline methods, we consider the following:
• MRT: The precoding vectors are designed by matching the estimated channel vectors. Specifically, we have $\mathbf{f}_k = \hat{\mathbf{h}}_k$, $k \in \mathcal{K}$, and $\mathbf{f}_c = \mathbf{0}$.
• RZF: The precoding vectors are designed following the ZF rule, while regularizing it depending on the SNR:
$$\mathbf{f}_k = \left( \hat{\mathbf{H}} \hat{\mathbf{H}}^{\sf H} + \frac{\sigma^2}{P} \mathbf{I} \right)^{-1} \hat{\mathbf{h}}_k, \quad k \in \mathcal{K}, \qquad \mathbf{f}_c = \mathbf{0},$$
where $\hat{\mathbf{H}} = [\hat{\mathbf{h}}_1, \cdots, \hat{\mathbf{h}}_K]$. As the SNR goes to infinity, RZF becomes equal to ZF. A sketch of the MRT and RZF baselines is given after this list.
• Sum SE Max with no RS: In this method, we use the method proposed in [4] to maximize the sum spectral efficiency using classical SDMA without considering RSMA. Note that we do not incorporate the CSIT estimation error into this method, i.e., we treat the estimated channel vector as the true channel in this method.
• WMMSE-SAA: This case indicates the WMMSE method with the SAA technique [7].
We use 1000 samples for the SAA technique. A detailed description of this approach is presented in Section III.
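As referenced in the RZF bullet above, here is a sketch of the two closed-form baselines (ours; the $\sigma^2/P$ regularizer in RZF is our assumption for the SNR-dependent regularization, and the Frobenius-norm scaling is one way to satisfy the total power constraint):

```python
import numpy as np

def mrt_precoders(h_hat):
    """MRT baseline: f_k matched to the channel estimate, no common message."""
    F = h_hat.copy()
    return F / np.linalg.norm(F)          # Frobenius norm: sum_k ||f_k||^2 = 1

def rzf_precoders(h_hat, sigma2, P):
    """RZF baseline: f_k = (H H^H + (sigma^2/P) I)^{-1} h_k, no common message."""
    N = h_hat.shape[1]
    G = h_hat.T @ h_hat.conj()            # sum_k h_k h_k^H  (N x N)
    F = np.linalg.solve(G + (sigma2 / P) * np.eye(N), h_hat.T).T
    return F / np.linalg.norm(F)
```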
In what follows, we present the simulation results.
Ergodic Sum Spectral Efficiency per SNR: First, we compare the ergodic sum spectral efficiency of the proposed GPI-RS and the other baseline methods. The basic simulation setups are explained in the caption of Fig. 2. For updating $\alpha$, we set the initial value to 0.1 if SNR < 15 dB, and 0.5 otherwise. We note that this initial setup is designed empirically. If the GPI-RS loop is not terminated within 50 iterations, we increase $\alpha$ by 0.5 and repeat the algorithm. As shown in Fig. 2, the proposed GPI-RS provides meaningful spectral efficiency gains over the baseline methods in both the 6 × 4 and the 12 × 8 cases. In particular, compared to the WMMSE-SAA method at SNR = 40 dB, the GPI-RS obtains around 24% gains in the 6 × 4 case and 19% gains in the 12 × 8 case. We observe that considerable gains are achieved in both cases by using the proposed method. The rationale for the performance gains is twofold. First, the GPI-RS can reach the best local optimal point by the NEPv principle, while the WMMSE-SAA approach cannot guarantee the best local optimum. Second, we incorporate the CSIT estimation error into our performance characterization in a rigorous way, while the WMMSE-SAA relies on randomly generated samples. These two features bring the gains of the proposed GPI-RS method. Especially, the second point sheds light on why the proposed GPI-RS obtains more significant performance gains in the high SNR regime. In the high SNR regime, the performance is mainly determined by the interference induced by inaccurate CSIT, as the noise becomes negligible. For this reason, to achieve high spectral efficiency, rigorous treatment of the CSIT error is required. The SAA, used in the WMMSE-SAA, relies on randomly generated samples instead of incorporating the CSIT estimation error effects into the spectral efficiency expression. Compared to this, the GPI-RS explicitly incorporates the CSIT estimation error effects into its design by deriving a rigorous lower bound. This difference causes larger performance gaps in the high SNR regime. Additionally, by comparing with the sum SE max with no RS case, we see that the spectral efficiency gains of the proposed GPI-RS increase as the SNR
increases. This indicates that the common message rate portion increases in the high SNR regime, which matches our intuition on RSMA.
In Fig. 3, we also illustrate the ergodic sum spectral efficiency for the 32 × 16 case. As above, we observe that the GPI-RS provides significant gains over the baseline methods in Fig. 3. Specifically, at SNR = 40 dB, the GPI-RS offers about 30% gains compared to the sum spectral efficiency maximization method without RSMA. This demonstrates that the proposed GPI-RS works well in regimes with large numbers of BS antennas and users.
Ergodic Sum Spectral Efficiency per CSIT Accuracy: Now, we investigate the sum spectral efficiency depending on the CSIT accuracy. We depict the sum spectral efficiency for increasing $\tau_{\rm ul} p_{\rm ul}$ in Fig. 4. Note that the CSIT estimation accuracy increases as $\tau_{\rm ul} p_{\rm ul}$ increases. In Fig. 4, we observe that the relative performance gains of the GPI-RS over the WMMSE-SAA increase as the CSIT becomes more accurate. For instance, if $\tau_{\rm ul} p_{\rm ul} = 0.1$, the WMMSE-SAA outperforms the GPI-RS by 5%, while if $\tau_{\rm ul} p_{\rm ul} = 8$, the GPI-RS outperforms the WMMSE-SAA by 27%. This is because, as the CSIT becomes more accurate, the regularization term of our lower bound also becomes accurate, so that our lower bound becomes tight. Then the CSIT estimation error is suitably reflected in the GPI-RS design.
Convergence: We depict the convergence behavior. The update process is the same as above: the initial value is 0.1 if SNR < 15dB and 0.5 for the rest of the cases, and we increase it by 0.5 every 50 iterations. We recall that this is for finding the smallest value that guarantees convergence.
In Fig. 5, we observe that the GPI-RS converges well with a small value in the low SNR regime. In contrast, in the high SNR regime, we need to tune the parameter to make the GPI-RS converge. For instance, when SNR = 20dB, the parameter needs to be updated one time until convergence, while when SNR = 40dB, it needs to be updated two times until convergence. Through this observation, we numerically confirm that, using the presented update method, the convergence of the proposed GPI-RS is guaranteed well.

Computation Time: As a complement to the complexity analysis in Remark 3, we compare numerical MATLAB computation times between the proposed GPI-RS and the WMMSE-SAA in Table I. The simulation setups are equivalent to those used to produce Fig. 2. As shown in Table I, the proposed GPI-RS consumes only 7.4% of the computation time of the WMMSE-SAA in the 6 × 4 case and 6.6% in the 12 × 8 case on average. This dramatic complexity reduction comes from two sources. First, we approximate the non-smooth minimum function via a LogSumExp technique, by which we avoid having distinct constraints on the common message rate in the optimization problem. Second, by using the GPI-RS, we do not rely on an off-the-shelf optimization toolbox such as CVX. This result indicates that the proposed method is beneficial in terms of computational complexity, not only in an analytical (big-O) sense but also in a practical (numerical computation time) sense. We note that Table I is only a rough measure of relative computational complexity.
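To make the LogSumExp idea concrete, here is a minimal sketch in Python/NumPy (our illustration, not the authors' MATLAB implementation) of how a non-smooth minimum is replaced by a smooth surrogate; the smoothing parameter alpha plays the role of the tuning parameter whose update schedule is described above.

```python
import numpy as np

def smooth_min(x, alpha):
    """LogSumExp surrogate of min(x): -alpha * log(sum(exp(-x / alpha))).

    As alpha -> 0+ the surrogate approaches min(x) from below;
    a larger alpha gives a smoother (easier to optimize) function.
    """
    x = np.asarray(x, dtype=float)
    m = x.min()  # shift by the true minimum for numerical stability
    return m - alpha * np.log(np.exp(-(x - m) / alpha).sum())

rates = np.array([2.3, 1.7, 2.9, 1.9])  # e.g., per-user common-message rates
for alpha in (1.0, 0.5, 0.1):
    print(alpha, smooth_min(rates, alpha))  # approaches min(rates) = 1.7
```

Shrinking alpha tightens the approximation, which mirrors the trade-off between accuracy and convergence behind the update schedule above.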
Multiple-layer RSMA: Next, we evaluate the sum spectral efficiency performance of the multiple-layer RSMA using the generalized GPI-RS. In particular, we assume the 2-layer RSMA scenario in the 6 × 4 case, wherein one common message, two partial common messages, and private messages are transmitted. Specifically, the partial common messages are s_{c,K_1} and s_{c,K_2}, with K_1 = {1, 2} and K_2 = {3, 4}; therefore the partial common message s_{c,K_1} is intended for users 1 and 2 and the partial common message s_{c,K_2} is intended for users 3 and 4. We construct a favorable channel environment for this message setup, in which users 1, 2 and users 3, 4 are clustered in the same location, respectively. The performance result is illustrated in Fig. 6, whose caption includes detailed simulation setups. As shown in Fig. 6, the GPI-RS for 2-layer RSMA achieves the best sum spectral efficiency performance. Specifically, the GPI-RS for 2-layer RSMA has around 17% gains over the GPI-RS for 1-layer RSMA and around 31% gains over the WMMSE-SAA. This confirms the observation of [20] in a more rigorous way.
We also show that the proposed framework is well extended to multiple-layer RSMA.
VII. CONCLUSION
In this paper, we have proposed a novel precoding optimization method for downlink MIMO with RSMA. Aiming to maximize the sum spectral efficiency of the considered system, we have formulated an optimization problem; however, the sum spectral efficiency maximization problem with respect to the linear precoding vectors is infeasible to solve directly due to its non-convexity and non-smoothness. To resolve this, we have approximated the non-smooth minimum function using the LogSumExp technique and reformulated the problem into a tractable form. We have shown that the first-order optimality condition of the reformulated problem is cast as a NEPv. In order to find the leading eigenvector for the derived condition, we have proposed the GPI-RS. We have also extended the GPI-RS to the multiple-layer RSMA scenario. The simulations have demonstrated that the GPI-RS brings significant spectral efficiency gains in various environments while the associated complexity is small compared to the existing WMMSE-SAA method.
For future work, it is promising to consider the finite blocklength regime, where a non-zero decoding error probability is induced [35], [36]. In particular, decoding failure on a common message can cause significant interference in the SINR of private messages; therefore, precoders need to be designed carefully to maximize the spectral efficiency. In addition, a design for physical layer security [37], [38] with RSMA is of interest. Considering RSMA in terahertz line-of-sight MIMO environments [39] is also promising.
APPENDIX A PROOF OF LEMMA 1
We first derive the KKT condition of the problem (40). The corresponding Lagrangian function is defined as
\mathcal{L}(\bar{\mathbf{f}}) = \log\left(\sum_{k=1}^{K} \exp\left(\log_2\left(\frac{\bar{\mathbf{f}}^{\mathsf H}\mathbf{A}_c(k)\,\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf H}\mathbf{B}_c(k)\,\bar{\mathbf{f}}}\right)^{-1/\alpha}\right)\right)^{-\alpha} + \sum_{k=1}^{K}\log_2\frac{\bar{\mathbf{f}}^{\mathsf H}\mathbf{A}_k\,\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf H}\mathbf{B}_k\,\bar{\mathbf{f}}}. \quad (73)
The inner expectation (E_{e}[·]) is taken over the CSIT estimation error within one particular coherence block, and the outer expectation is taken over the randomness associated with the imperfect knowledge of the channel fading process (E_{ĥ}[·]).
as independent Gaussian noise, (d) follows from Jensen's inequality, and (e) comes from the CSIT error covariance E[e e^H].
Fig. 1. An illustration of the comparison between the approximate maximum obtained using the LogSumExp technique and the true maximum of the non-smooth minimum function.
Remark 2 (Joint power control and beamforming). The proposed GPI-RS identifies the leading eigenvector f̄★ that maximizes (42). Since the vector f̄ is constructed by stacking all the precoding vectors corresponding to each message, the power allocation and the beamforming direction of each message are jointly identified within the found vector. For example, if f̄★(1 : M) = 0, this means f_c = 0 in the obtained solution. Then the common message is not assigned any transmit power, so we do not use RSMA and fall back to classical SDMA. Thanks to this feature, the proposed GPI-RS automatically determines the message setup depending on channel conditions, so that there is no need for a separate process to decide whether to use a common message.
Remark 3 (Algorithm complexity). The total computational complexity of the proposed GPI-RS is dominated by the calculation of B_KKT^{-1}(f̄). The matrix B_KKT(f̄) is the sum of block-diagonal matrices as presented in (44). Specifically, K + 1 submatrices of size M × M are concatenated, so that the total size is (K + 1)M × (K + 1)M. For this reason, the inverse matrix B_KKT^{-1}(f̄) is obtained by computing the inverse of each submatrix, and this requires complexity of order O((K + 1)M³/3). As a result, the complexity of the proposed GPI-RS per iteration is of order O((K + 1)M³/3).
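The cost saving in Remark 3 comes entirely from the block-diagonal structure. The following sketch (Python/NumPy; the block contents are made-up illustrative data) inverts a (K + 1)M × (K + 1)M block-diagonal matrix block by block, at O((K + 1)M³) instead of O(((K + 1)M)³).

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
M, K = 4, 3  # antenna count and user count (illustrative values)

# K+1 well-conditioned M x M blocks (identity shift keeps them invertible).
blocks = [rng.standard_normal((M, M)) + 5 * np.eye(M) for _ in range(K + 1)]
B = block_diag(*blocks)

# Invert block by block instead of inverting the full matrix at once.
B_inv = block_diag(*[np.linalg.inv(blk) for blk in blocks])

print(np.allclose(B @ B_inv, np.eye((K + 1) * M)))  # True
```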
Remark 5 (Principle of GPI-RS). We explain the principle of the GPI-RS algorithm through the conventional power iteration. In the conventional power iteration, we obtain the leading eigenvector of a matrix M ∈ C^{N×N} by iteratively calculating q^{(t+1)} = M q^{(t)} / ||M q^{(t)}||, starting from an initial vector q^{(0)}.
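For reference, a minimal Python/NumPy sketch of the conventional power iteration described in Remark 5 is given below; in GPI-RS the matrix would additionally be recomputed from the current iterate at every step, which is the eigenvector-dependent (NEPv) ingredient.

```python
import numpy as np

def power_iteration(M, tol=1e-9, max_iter=1000, seed=0):
    """Return the leading eigenvector of M via q <- M q / ||M q||."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(M.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(max_iter):
        q_next = M @ q
        q_next /= np.linalg.norm(q_next)
        if np.linalg.norm(q_next - q) < tol:  # residual, as in Fig. 5(a)
            return q_next
        q = q_next
    return q

A = np.array([[4.0, 1.0], [1.0, 3.0]])
q = power_iteration(A)
print(q, q @ A @ q)  # Rayleigh quotient approaches the largest eigenvalue
```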
Fig. 2. The sum spectral efficiency comparison per SNR. The simulation setup is: in (a), M = 6, K = 4, ul ul = 4 and σ² = 1; in (b), M = 12, K = 8, ul ul = 4 and σ² = 1. In both cases, each user's location is randomly determined, so that the AoA is drawn from the uniform distribution. The angular spread is fixed as Δ_k = π/6 for k ∈ K.
Fig. 3. The sum spectral efficiency comparison per SNR assuming M = 32, K = 16. The other setups are the same as in Fig. 2.
Fig. 4. The sum spectral efficiency comparison per CSIT accuracy. The simulation setup is: M = 6, K = 4, and σ² = 1. The AoA is drawn from the uniform distribution. The angular spread is Δ_k = π/6 for k ∈ K.
Fig. 5. The convergence behavior of the proposed GPI-RS per iteration. In (a), the residual is depicted; in (b), the sum spectral efficiency is drawn. The simulation setup is: M = 6, K = 4, ul ul = 4 and σ² = 1. The AoA is drawn from the uniform distribution and the angular spread is Δ_k = π/6 for k ∈ K. The residual is defined as ||f̄^{(t)} − f̄^{(t−1)}||.
Fig. 6. The sum spectral efficiency comparison per CSIT accuracy. The simulation setup is: M = 6, K = 4, ul ul = 4, and σ² = 1. We assume that users 1, 2 and users 3, 4 are clustered in the same location: θ_k = π/3 for k ∈ {1, 2} and θ_k = 2π/3 for k ∈ {3, 4}. The angular spread is fixed as Δ_k = π/6 for k ∈ K.
[10] assumed a particular scenario of CSIT estimation, namely h_k = ĥ_k + e_k, where E[e_k e_k^H] = σ_e² I and E[e_k] = 0. We note that this is a special case of our CSIT estimation model. Under this premise, the instantaneous spectral efficiency of the common message achieved at user k, R_c^{ins.}(k), is expressed as a function of the SINRs

SINR_c^{ins.}(k) = |h_k^H f_c|² / (Σ_{ℓ=1}^{K} |h_k^H f_ℓ|² + σ²),  SINR_k^{ins.} = |h_k^H f_k|² / (Σ_{ℓ≠k} |h_k^H f_ℓ|² + σ²).

Note that here we exchange the power assumption between the messages and the precoding vectors, i.e., E[|s_c|²] = E[|s_k|²] = 1 and ||f_c||² + Σ_{k=1}^{K} ||f_k||² ≤ P, for ease of description. This does not change the SINR. The minimum MSEs are achieved when the equalizers are chosen as f_c^H h_k and f_k^H h_k scaled by the inverse of the corresponding received signal power, providing the following MMSEs MMSE_c(k) and MMSE_k.
TABLE I
AVERAGE MATLAB CPU TIME (SEC)

Setup | Proposed GPI-RS | WMMSE-SAA | Comparison (%)
6 × 4 | 2.62 | 35.12 | 7.4%
12 × 8 | 11.24 | 170.29 | 6.6%
This completes the proof.
The higher-dimensional precoding vector f̄ in (62) allows us to represent the spectral efficiency expression of the partial common message s_{c,K} as a Rayleigh quotient form. We also represent the instantaneous spectral efficiencies of the common and private messages adequately by considering the partial common message. With this representation, the problem (59) is converted to (66). To apply the GPI-RS algorithm to (66), we derive the optimality condition for (66) in the following corollary.

Corollary 1. The first-order optimality condition of the optimization problem (66) is satisfied if (69) and (70) hold.

Proof. It can be proven by extending Lemma 1.

With Corollary 1, we apply the GPI-RS described in Algorithm 1 by using (69) and (70) instead of (43) and (44).
[1] J. Park, J. Choi, N. Lee, W. Shin, and H. V. Poor, "Sum spectral efficiency optimization for rate splitting in downlink MU-MISO: A generalized power iteration approach," in Proc. IEEE Wireless Commun. and Netw. Conf. Workshop, 2021, pp. 1-6.
[2] G. Caire and S. Shamai, "On the achievable throughput of a multiantenna Gaussian broadcast channel," IEEE Trans. Inf. Theory, vol. 49, no. 7, pp. 1691-1706, Jul. 2003.
[3] S. S. Christensen, R. Agarwal, E. D. Carvalho, and J. M. Cioffi, "Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design," IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 4792-4799, Dec. 2008.
[4] J. Choi, N. Lee, S. Hong, and G. Caire, "Joint user selection, power allocation, and precoding design with imperfect CSIT for multi-cell MU-MIMO downlink systems," IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 162-176, 2020.
[5] N. Jindal, "MIMO broadcast channels with finite-rate feedback," IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 5045-5060, 2006.
[6] J. Park, N. Lee, J. G. Andrews, and R. W. Heath, "On the optimal feedback rate in interference-limited multi-antenna cellular systems," IEEE Trans. Wireless Commun., vol. 15, no. 8, pp. 5748-5762, 2016.
[7] H. Joudeh and B. Clerckx, "Sum-rate maximization for linearly precoded downlink multiuser MISO systems with partial CSIT: A rate-splitting approach," IEEE Trans. Commun., vol. 64, no. 11, pp. 4847-4861, 2016.
[8] H. Joudeh and B. Clerckx, "Robust transmission in downlink multiuser MISO systems: A rate-splitting approach," IEEE Trans. Signal Process., vol. 64, no. 23, pp. 6227-6242, 2016.
[9] M. Dai, B. Clerckx, D. Gesbert, and G. Caire, "A rate splitting strategy for massive MIMO with imperfect CSIT," IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 4611-4624, 2016.
[10] Z. Li, C. Ye, Y. Cui, S. Yang, and S. Shamai, "Rate splitting for multi-antenna downlink: Precoder design and practical implementation," IEEE J. Sel. Areas Commun., vol. 38, no. 8, pp. 1910-1924, 2020.
[11] Y. Mao and B. Clerckx, "Beyond dirty paper coding for multi-antenna broadcast channel with partial CSIT: A rate-splitting approach," IEEE Trans. Commun., vol. 68, no. 11, pp. 6775-6791, 2020.
[12] T. Han and K. Kobayashi, "A new achievable rate region for the interference channel," IEEE Trans. Inf. Theory, vol. 27, no. 1, pp. 49-60, 1981.
[13] R. H. Etkin, D. N. C. Tse, and H. Wang, "Gaussian interference channel capacity to within one bit," IEEE Trans. Inf. Theory, vol. 54, no. 12, pp. 5534-5562, 2008.
[14] B. Clerckx, H. Joudeh, C. Hao, M. Dai, and B. Rassouli, "Rate splitting for MIMO wireless networks: A promising PHY-layer strategy for LTE evolution," IEEE Commun. Mag., vol. 54, no. 5, pp. 98-105, 2016.
[15] S. Yang, M. Kobayashi, D. Gesbert, and X. Yi, "Degrees of freedom of time correlated MISO broadcast channel with delayed CSIT," IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 315-328, 2013.
[16] C. Hao, Y. Wu, and B. Clerckx, "Rate analysis of two-receiver MISO broadcast channel with finite rate feedback: A rate-splitting approach," IEEE Trans. Commun., vol. 63, no. 9, pp. 3232-3246, 2015.
[17] A. Z. Yalçın and Y. Yapıcı, "Max-min fair beamforming for cooperative multigroup multicasting with rate-splitting," IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 254-268, 2021.
[18] O. Dizdar, Y. Mao, W. Han, and B. Clerckx, "Rate-splitting multiple access for downlink multi-antenna communications: Physical layer design and link-level simulations," in Proc. IEEE Int. Symp. Pers., Indoor Mobile Radio Commun., 2020, pp. 1-6.
[19] Z. Yang, M. Chen, W. Saad, and M. Shikh-Bahaei, "Downlink sum-rate maximization for rate splitting multiple access (RSMA)," in Proc. IEEE Int. Conf. Commun., 2020, pp. 1-6.
[20] M. Dai, B. Clerckx, D. Gesbert, and G. Caire, "A rate splitting strategy for massive MIMO with imperfect CSIT," IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 4611-4624, Jul. 2016.
[21] A. Adhikary, J. Nam, J. Ahn, and G. Caire, "Joint spatial division and multiplexing: The large-scale array regime," IEEE Trans. Inf. Theory, vol. 59, no. 10, pp. 6441-6463, Oct. 2013.
[22] Y. Mao, B. Clerckx, and V. O. K. Li, "Rate-splitting for multi-antenna non-orthogonal unicast and multicast transmission: Spectral and energy efficiency analysis," IEEE Trans. Commun., vol. 67, no. 12, pp. 8754-8770, 2019.
[23] A. Papazafeiropoulos, B. Clerckx, and T. Ratnarajah, "Rate-splitting to mitigate residual transceiver hardware impairments in massive MIMO systems," IEEE Trans. Veh. Technol., vol. 66, no. 9, pp. 8196-8211, 2017.
[24] C. Xu, B. Clerckx, S. Chen, Y. Mao, and J. Zhang, "Rate-splitting multiple access for multi-antenna joint radar and communications," IEEE J. Sel. Areas Commun., pp. 1-1, 2021.
[25] J. Zeng, T. Lv, W. Ni, R. P. Liu, N. C. Beaulieu, and Y. J. Guo, "Ensuring max-min fairness of UL SIMO-NOMA: A rate splitting approach," IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11080-11093, 2019.
[26] C. Hao and B. Clerckx, "MISO networks with imperfect CSIT: A topological rate-splitting approach," IEEE Trans. Commun., vol. 65, no. 5, pp. 2164-2179, 2017.
[27] J. Krivochiza, J. Merlano Duncan, S. Andrenacci, S. Chatzinotas, and B. Ottersten, "FPGA acceleration for computationally efficient symbol-level precoding in multi-user multi-antenna communication systems," IEEE Access, vol. 7, pp. 15509-15520, 2019.
[28] Y. Cai, L.-H. Zhang, Z. Bai, and R.-C. Li, "On an eigenvector-dependent nonlinear eigenvalue problem," SIAM J. Matrix Anal. Appl., vol. 39, no. 3, pp. 1360-1382, 2018.
[29] H. Yin, D. Gesbert, M. Filippou, and Y. Liu, "A coordinated approach to channel estimation in large-scale multiple-antenna systems," IEEE J. Sel. Areas Commun., vol. 31, no. 2, pp. 264-273, 2013.
[30] P. Patil, B. Dai, and W. Yu, "Hybrid data-sharing and compression strategy for downlink cloud radio access network," IEEE Trans. Commun., vol. 66, no. 11, pp. 5370-5384, Nov. 2018.
[31] C. Shen and H. Li, "On the dual formulation of boosting algorithms," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2216-2231, 2010.
[32] H. Joudeh and B. Clerckx, "Rate-splitting for max-min fair multigroup multicast beamforming in overloaded systems," IEEE Trans. Wireless Commun., vol. 16, no. 11, pp. 7276-7289, Nov. 2017.
[33] G. Peng, L. Liu, P. Zhang, S. Yin, and S. Wei, "Low-computing-load, high-parallelism detection method based on Chebyshev iteration for massive MIMO systems with VLSI architecture," IEEE Trans. Signal Process., vol. 65, no. 14, pp. 3775-3788, 2017.
[34] D. Zhu, B. Li, and P. Liang, "On the matrix inversion approximation based on Neumann series in massive MIMO systems," in Proc. IEEE Int. Conf. Commun. (ICC), 2015, pp. 1763-1769.
[35] J. Choi and J. Park, "MIMO design for Internet-of-Things: Joint optimization of spectral efficiency and error probability in finite blocklength regime," IEEE Internet of Things J., pp. 1-1, 2021.
[36] Y. Polyanskiy, H. V. Poor, and S. Verdu, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
[37] K. Lee, J. Choi, D. K. Kim, and J. Park, "Secure transmission for hierarchical information accessibility in downlink MU-MIMO," ArXiv, 2021. [Online]. Available: https://arxiv.org/abs/2109.07727
[38] J. Choi and J. Park, "Sum secrecy spectral efficiency maximization in downlink MU-MIMO: Colluding eavesdroppers," IEEE Trans. Veh. Technol., vol. 70, no. 1, pp. 1051-1056, 2021.
[39] H. Do, S. Cho, J. Park, H.-J. Song, N. Lee, and A. Lozano, "Terahertz line-of-sight MIMO communication: Theory and practical challenges," IEEE Commun. Mag., vol. 59, no. 3, pp. 104-109, 2021.
| [] |
[
"The set of stable indices of 0-1 matrices with a given order",
"The set of stable indices of 0-1 matrices with a given order"
] | [
"Zhibing Chen \nCollege of Mathematics and Statistics\nShenzhen University\n518060ShenzhenChina\n",
"Zejun Huang \nCollege of Mathematics and Statistics\nShenzhen University\n518060ShenzhenChina\n"
] | [
"College of Mathematics and Statistics\nShenzhen University\n518060ShenzhenChina",
"College of Mathematics and Statistics\nShenzhen University\n518060ShenzhenChina"
] | [] | The stable index of a 0-1 matrix A is defined to be the smallest integer k such that A k+1 is not a 0-1 matrix if such an integer exists; otherwise the stable index of A is defined to be infinity. We characterize the set of stable indices of 0-1 matrices with a given order.The stable index of 0-1 matrices is closely related to directed walks of digraphs. Digraphs in this paper allow loops but do not allow multiple arcs. We follow the terminology on digraphs in[1]. Directed paths, directed cycles and directed walks will be abbreviated as paths, cycles and walks, respectively. A path (walk) with initial vertex u and terminal vertex v is called a uv-path (uv-walk). A walk (path, cycle) of length k is called a k-walk (k-path, k-cycle). The number of vertices in a digraph is called its order. Denote by M n {0, 1} the set of 0-1 matrices of order n. Given a matrix A = (a ij ) ∈ M n {0, 1}, we define its digraph as D(A) = (V, A) with the vertex set V = {1, 2, . . . , n} and the arc set A = {(i, j) : a ij = 1, 1 ≤ i, j ≤ n}. Conversely, given a digraph D = (V, A)with a vertex set V = {v 1 , v 2 , . . . , v n } and an arc set A, its adjacency matrix is defined asGiven A ∈ M n {0, 1}, the (i, j)-entry of A k equals t if and only if D(A) has exactly t distinct ij-walks of length k. | 10.1080/03081087.2023.2199190 | [
"https://export.arxiv.org/pdf/2111.01670v3.pdf"
] | 248,227,817 | 2111.01670 | afc0abdc2cdb95d48021a68c5f31878d616c313e |
The set of stable indices of 0-1 matrices with a given order
Zhibing Chen
College of Mathematics and Statistics
Shenzhen University
518060ShenzhenChina
Zejun Huang
College of Mathematics and Statistics
Shenzhen University
518060ShenzhenChina
The set of stable indices of 0-1 matrices with a given order
0-1 matrix; digraph; stable index; walk. Mathematics Subject Classification: 05C50, 05C20, 15A99
The stable index of a 0-1 matrix A is defined to be the smallest integer k such that A^{k+1} is not a 0-1 matrix if such an integer exists; otherwise the stable index of A is defined to be infinity. We characterize the set of stable indices of 0-1 matrices with a given order. The stable index of 0-1 matrices is closely related to directed walks of digraphs. Digraphs in this paper allow loops but do not allow multiple arcs. We follow the terminology on digraphs in [1]. Directed paths, directed cycles and directed walks will be abbreviated as paths, cycles and walks, respectively. A path (walk) with initial vertex u and terminal vertex v is called a uv-path (uv-walk). A walk (path, cycle) of length k is called a k-walk (k-path, k-cycle). The number of vertices in a digraph is called its order. Denote by M n {0, 1} the set of 0-1 matrices of order n. Given a matrix A = (a ij ) ∈ M n {0, 1}, we define its digraph as D(A) = (V, A) with the vertex set V = {1, 2, . . . , n} and the arc set A = {(i, j) : a ij = 1, 1 ≤ i, j ≤ n}. Conversely, given a digraph D = (V, A) with a vertex set V = {v 1 , v 2 , . . . , v n } and an arc set A, its adjacency matrix is defined as A D = (a ij ) with a ij = 1 if (v i , v j ) ∈ A and a ij = 0 otherwise. Given A ∈ M n {0, 1}, the (i, j)-entry of A^k equals t if and only if D(A) has exactly t distinct ij-walks of length k.
Introduction and Main Results
Combinatorial problems on the powers of nonnegative matrices form an interesting topic in matrix theory. One of the classical problems in this topic is the characterization of the exponents of primitive matrices with a given order. Let A be a nonnegative matrix. Denote by ρ(A) the spectral radius of A. By the famous Perron-Frobenius theorem we know that ρ(A) is an eigenvalue of A. If A has no other eigenvalue of modulus ρ(A), then A is called primitive. Frobenius proved that a nonnegative square matrix is primitive if and only if there is a positive integer k such that A^k is positive entrywise. The exponent of a primitive matrix is the smallest positive integer k such that A^k is positive. A natural and interesting problem is which numbers are the exponents of primitive matrices with a given order, which has been solved by Wielandt [6], Dulmage and Mendelsohn [3], Lewin and Vitek [4], Shao [5] and Zhang [7]. It turns out that the set of exponents of primitive matrices of order n is not a consecutive set in the interval [1, (n − 1)² + 1], which is quite surprising.
Analogous to the exponent of primitive matrices, Chen, Huang and Yan [2] introduced the concept of stable index for 0-1 matrices as follows.
Definition. The stable index of a 0-1 matrix A, denoted by θ(A), is defined to be the smallest integer k such that A k+1 is not a 0-1 matrix if such an integer exists; otherwise the stable index of A is defined to be infinity.
Using adjacency matrices, we have an equivalent definition for the stable index of digraphs as follows.
Definition. The stable index of a digraph D, denoted by θ(D), is defined to be the smallest integer k such that D contains at least two distinct (k + 1)-walks with the same initial vertex and terminal vertex if such an integer exists; otherwise the stable index of D is defined to be infinity.
Given a digraph D, by the above definitions we have θ(D) = θ(A D ). Conversely, given a square 0-1 matrix A, we have θ(A) = θ(D(A)).
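As a numerical companion to these definitions, the following sketch (in Python/NumPy; our own illustration, not from [2]) computes θ(A) directly from the definition by forming successive powers of A and stopping at the first power that is not a 0-1 matrix.

```python
import numpy as np

def stable_index(A, k_max=10_000):
    """Return the smallest k with A^(k+1) not a 0-1 matrix, or None if
    no such k is found up to k_max (theta(A) is then 'infinite' in practice)."""
    A = np.asarray(A, dtype=np.int64)
    P = A.copy()
    for k in range(1, k_max + 1):
        P = P @ A                 # now P = A^(k+1)
        if (P > 1).any():         # entries count walks; >1 means repeated walks
            return k
    return None

C3 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])  # 3-cycle: all powers stay 0-1
print(stable_index(C3, k_max=100))                # None, i.e. theta = infinity

L3 = C3.copy()
L3[1, 0] = 1                                      # add a 2-cycle to the 3-cycle
print(stable_index(L3))                           # 3, a finite stable index
```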
Given a positive integer n, we denote by s(n) the maximum finite stable index of 0-1 matrices of order n. The precise value of s(n) was determined in [2]. For n ≤ 6, we have s(2) = 1, s(3) = 3, s(4) = 4, s(5) = 6, s(6) = 7; see [2]. For n ≥ 7, we have the following.

Theorem 1 ([2]). Let n ≥ 7 be an integer. Then
s(n) = \begin{cases} (n^2 - 1)/4, & \text{if } n \text{ is odd}, \\ (n^2 - 4)/4, & \text{if } n \equiv 0 \pmod 4, \\ (n^2 - 16)/4, & \text{if } n \equiv 2 \pmod 4. \end{cases} \quad (1)
Similar to the exponent of primitive matrices, we are interested in the following natural problem.
Problem 2. Given a positive integer n, which numbers can be the stable indices of 0-1 matrices (digraphs) of order n?
We solve this problem in this paper. Denote by Z + the set of positive integers and by LCM(p, q) the least common multiple of two integers p and q. For k ∈ Z + , we use [k] to represent the set {1, 2, . . . , k}. Let Θ(n) be the set of stable indices of 0-1 matrices of order n. Then Θ(n) is also the set of stable indices of digraphs with n vertices. Our main result is as follows.
Theorem 3. Let n ≥ 7 be an integer. Then
Θ(n) = [s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + } ∪ {∞}.
For the sake of convenience, we prove Theorem 3 for digraphs.
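The following sketch (our own illustration in Python) evaluates the finite part of Θ(n) via the right-hand side of Theorem 3, using Eq. (1) for s(n) and the small-order values quoted above, and lists the gaps inside [s(n)].

```python
from math import lcm

def s(n):
    """Maximum finite stable index: values for n <= 6 from the text, Eq. (1) for n >= 7."""
    table = {2: 1, 3: 3, 4: 4, 5: 6, 6: 7}
    if n in table:
        return table[n]
    if n % 2 == 1:
        return (n * n - 1) // 4
    if n % 4 == 0:
        return (n * n - 4) // 4
    return (n * n - 16) // 4          # n = 2 (mod 4)

def theta_set_finite(n):
    """Finite part of Theta(n) for n >= 7, per Theorem 3."""
    lcms = {lcm(p, n - p) for p in range(1, n)}
    return set(range(1, s(n - 1) + 2)) | lcms

for n in (7, 8, 10, 11):
    fin = theta_set_finite(n)
    gaps = sorted(set(range(1, s(n) + 1)) - fin)
    print(n, "s(n) =", s(n), "gaps in [s(n)]:", gaps)  # empty for n = 10
```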
Proof of Theorem 3
Two digraphs D 1 = (V 1 , A 1 ) and D 2 = (V 2 , A 2 ) are isomorphic, written D 1 ∼ = D 2 , if there is a bijection σ : V 1 → V 2 such that (u, v) ∈ A 1 if and only if (σ(u), σ(v)) ∈ A 2 . It is clear that two digraphs are isomorphic if and only if their adjacency matrices are permutation similar. We say a digraph D contains a copy of H if D has a subgraph isomorphic to H.
In a digraph D, if there is a walk from u to v for all u, v ∈ V(D), then D is said to be strongly connected. A digraph is strongly connected if and only if its adjacency matrix is irreducible.
Denote by − → C k a k-cycle. Given an integer k ≥ 2 and two disjoint cycles − → C p and − → C q , let − → g (p, k, q) be the digraph obtained by adding a (k − 1)-path from a vertex of − → C p to a vertex of − → C q , which has the following diagram. When k = 2, we abbreviate − → g (p, k, q) as − → g (p, q).

[Diagram: − → g (p, k, q), the cycle − → C p joined to the cycle − → C q by a path v 1 v 2 · · · v k .]
It is clear that
θ( − → g (p, k, q)) = LCM(p, q) + k − 2.
Denote by C p the adjacency matrix of the p-cycle v 1 · · · v p v 1 , which is

C_p = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ 1 & & & 0 \end{pmatrix}.
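The identity θ( − → g (p, k, q)) = LCM(p, q) + k − 2 can be checked numerically. Below is a self-contained Python/NumPy sketch (our own illustration) that builds the adjacency matrix of − → g (p, k, q) from cycle matrices like the one above and compares the computed stable index with the formula.

```python
import numpy as np
from math import lcm

def stable_index(A, k_max=10_000):
    P = np.array(A, dtype=np.int64)
    A = P.copy()
    for k in range(1, k_max + 1):
        P = P @ A
        if (P > 1).any():
            return k
    return None

def g(p, k, q):
    """Adjacency matrix of g(p, k, q): C_p and C_q joined by a (k-1)-path."""
    n = p + q + k - 2                         # p-cycle, q-cycle, k-2 path vertices
    A = np.zeros((n, n), dtype=np.int64)
    for i in range(p):                        # cycle C_p on vertices 0..p-1
        A[i, (i + 1) % p] = 1
    for j in range(q):                        # cycle C_q on vertices p..p+q-1
        A[p + j, p + (j + 1) % q] = 1
    chain = [0] + list(range(p + q, n)) + [p]  # (k-1)-path from C_p to C_q
    for a, b in zip(chain, chain[1:]):
        A[a, b] = 1
    return A

p, k, q = 3, 4, 5
print(stable_index(g(p, k, q)), lcm(p, q) + k - 2)   # both equal 17
```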
We need the following lemmas.
Lemma 4. Let D be a strongly connected digraph of order n ≥ 2. If D is a cycle, then θ(D) = ∞; otherwise, we have θ(D) ≤ n.

Proof. Recall that θ(D) = θ(A D ). Since D is strongly connected if and only if A D is irreducible, the result follows from [2, Lemma 3].

Lemma 5. Let A = \begin{pmatrix} C_p & X \\ 0 & C_q \end{pmatrix} ∈ M p+q {0, 1}, where X has exactly k nonzero entries. Then

θ(A) ≤ ⌈pq/k⌉.
Proof. Notice that
A^m = \begin{pmatrix} C_p^m & \sum_{k=0}^{m-1} C_p^k X C_q^{m-1-k} \\ 0 & C_q^m \end{pmatrix} \equiv \begin{pmatrix} C_p^m & B \\ 0 & C_q^m \end{pmatrix}
for all positive integer m. Since C p k XC q m−1−k only changes the positions of X's entries, each C p k XC q m−1−k has exactly k entries equal to 1. So the summation of all entries in B
is km. If m > ⌈pq/k⌉, then km > pq, which implies that B is not a 0-1 matrix. Therefore, θ(A) ≤ ⌈pq/k⌉.

Corollary 6. Let p, q, n be positive integers such that p + q = n. Suppose D is a digraph of order n containing a copy of − → g (p, q). If θ(D) > max{n, ⌈pq/2⌉}, then D ≅ − → g (p, q).
Proof. Since D contains a copy of − → g (p, q), we may assume
A_D = \begin{pmatrix} C_p + A_{11} & A_{12} \\ A_{21} & C_q + A_{22} \end{pmatrix},

where A 12 ≠ 0.

If A 21 ≠ 0, then A D is irreducible. Applying Lemma 4 we have θ(D) ≤ n, a contradiction. Therefore, we have A 21 = 0.

If A 11 ≠ 0, then since C p + A 11 is irreducible, we have θ(D) ≤ θ(C p + A 11 ) ≤ p, a contradiction. Hence, A 11 = 0. Similarly, we have A 22 = 0.
Now applying Lemma 5, we can deduce that A 12 has exactly one nonzero entry. Therefore,
D ≅ − → g (p, q).

Lemma 7. Suppose p, q > 0 are relatively prime numbers. Let kq ≡ x k (mod p) with 0 ≤ x k ≤ p − 1, for 1 ≤ k ≤ p − 1. Then

{x 1 , x 2 , . . . , x p−1 } = {1, 2, . . . , p − 1}.

Proof. It suffices to prove x u ≠ x v for all u > v. Otherwise, suppose there exist u, v ∈ {1, 2, . . . , p − 1} such that u > v and x u = x v . Then (u − v)q ≡ x u − x v ≡ 0 (mod p), i.e., (u − v)q is divisible by p. Since (p, q) = 1, we have p | u − v, which contradicts the fact that 1 ≤ u − v ≤ p − 2.
This completes the proof.
Corollary 8. Suppose p, q are relatively prime numbers such that p > q > 1. Then min{u : there exists a nonnegative integer v such that p − q = uq − vp} = p − 1.
Proof. Notice that (p − 1)q ≡ p − q (mod p). Applying Lemma 7 we have the conclusion.
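Both Lemma 7 and Corollary 8 are easy to check by brute force; the following short Python sketch (our own illustration) does so for a sample coprime pair.

```python
from math import gcd

def check_lemma7(p, q):
    assert gcd(p, q) == 1
    xs = {(k * q) % p for k in range(1, p)}
    return xs == set(range(1, p))     # Lemma 7: residues hit 1..p-1 exactly

def min_u(p, q):
    """Smallest u >= 0 with p - q = u*q - v*p for some integer v >= 0."""
    u = 0
    while True:
        if (u * q - (p - q)) % p == 0 and u * q >= p - q:
            return u
        u += 1

print(check_lemma7(7, 3))             # True
print(min_u(7, 3), 7 - 1)             # 6 6, as Corollary 8 predicts (p - 1)
```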
Given two walks − → w 1 and − → w 2 in a digraph D, we denote by − → w 1 ∪ − → w 2 the union of − → w 1 and − → w 2 , i.e., the subgraph of D with vertex set V( − → w 1 ) ∪ V( − → w 2 ) and arc set A( − → w 1 ) ∪ A( − → w 2 ).
From the proof of [2, Theorem 1] we have the following lemma. For completeness we restate its proof.
Lemma 9.
[2] Suppose a digraph D of order n contains two distinct walks − → w 1 and − → w 2 with the same length from x to y for some vertices x and y. If − → w 1 ∪ − → w 2 contains at most one cycle, then θ(D) < n − 1.
Proof. If − → w 1 ∪ − → w 2 is acyclic, then both − → w 1 and − → w 2 are directed paths with length less than n − 1. Therefore, θ(D) < n − 1.
Now suppose − → w 1 ∪ − → w 2 contains exactly one cycle − → C p . If only one of the two walks, say − → w 1 , contains copies of − → C p , then − → w 2 is a directed path with length less than n, which implies θ(D) < n − 1. If both − → w 1 and − → w 2 contain copies of − → C p , then by deleting the same number of copies of − → C p in both − → w 1 and − → w 2 , we can get two distinct walks with the same length from
x to y such that at least one of them contains no cycle. Again, we have θ(D) < n − 1.
For the sake of simplicity, we denote by
[a, b] = {x ∈ Z + : a ≤ x ≤ b}.
Now we are ready to present the proof of our main result.
Proof of Theorem 3. We first show
Θ(n) ⊆ [s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + } ∪ {∞}.(2)
Suppose D has finite stable index r. Then D has two distinct (r + 1)-walks − → w 1 and − → w 2 from
x to y for some vertices x, y.
If − → w 1 ∪ − → w 2 contains at most one cycle, then applying Lemma 9 we have θ(D) < n − 1.
If − → w 1 ∪ − → w 2 contains two cycles whose vertex sets have nonempty intersection, then the subgraph D 1 induced by these two cycles is strongly connected. Applying Lemma 4, we have θ(D) ≤ θ(D 1 ) ≤ n. Now suppose − → w 1 ∪ − → w 2 contains two disjoint cycles. We distinguish three cases.

Case 1. − → w 1 ∪ − → w 2 contains a copy of − → g (p, q) with p + q = n. Then applying Corollary 6, we have either θ(D) ≤ max{n, ⌈pq/2⌉} or D ≅ − → g (p, q), while the latter case leads to θ(D) = LCM(p, q). Thus θ(D) ∈ [max{n, ⌈n/2⌉⌊n/2⌋/2}] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + }. Since max{n, ⌈n/2⌉⌊n/2⌋/2} ≤ s(n − 1) + 1, we have
θ(D) ∈ [s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + }.
Case 2. − → w 1 ∪ − → w 2 contains no copy of − → g (p, q) with p + q = n and it contains a copy of − → g (p 1 , k 1 , q 1 ) for some positive integers p 1 , q 1 , k 1 with p 1 + q 1 < n. Let λ 1 (n) = max{θ( − → g (p, k, q)) : p + q + k − 2 ≤ n, p + q ≤ n − 1}.
Note that θ( − → g (p, k, q)) = LCM(p, q) + k − 2 and s(n) = max{LCM(p, q) : p + q = n} for n ≥ 7.
If n = 7, since s(6) = 7, we have λ 1 (7) = θ( − → g (2, 4, 3)) = 8 = s(6) + 1.
For n ≥ 8, since s(x) − x is strictly increasing on the integer variable x when x ≥ 2, we have
θ( − → g (p, k, q)) = LCM(p, q) + k − 2 ≤ s(p + q) + n − p − q ≤ s(n − 1) + 1.
Hence, λ 1 (n) = s(n − 1) + 1 for n ≥ 7 and θ(D) ≤ s(n − 1) + 1.
Case 3. − → w 1 ∪ − → w 2 does not contain any copy of − → g (p, k, q). Then similarly as in the proof of Theorem 1 in [2], − → w 1 ∪ − → w 2 has the following diagram.
[Diagram: the two xy-walks − → w 1 and − → w 2 from x to y, with the cycle − → C p attached at x and the cycle − → C q attached at y.]
Suppose the xy-paths contained in − → w 1 and − → w 2 have lengths l and t, respectively. Then we have

θ(D) = min{l + up : l + up = t + vq for some nonnegative integers u, v} − 1 ≤ min{l + (q − 1)p, t + (p − 1)q} − 1. (3)
Next we show θ(D) ≤ s(n − 1) + 1.
If p = q, then we have θ(D) ≤ max{l, t} < s(n − 1) + 1. Next we assume p > q. Since
l ≤ n − 1 − q, t ≤ n − 1 − p and p + q ≤ n − 2, by (3) we have θ(D) ≤ t + (p − 1)q − 1 ≤ n − 2 − p + (p − 1)q = n − 3 + (p − 1)(q − 1) ≡ h(n, p, q).
If n is odd, then max{h(n, p, q) : p + q ≤ n − 2, p > q} is attained at (p, q) = ((n − 1)/2, (n − 3)/2). Hence, by (1) we have
θ(D) ≤ h(n, p, q) ≤ n − 3 + (n − 3)(n − 5)/4 = (n² − 4n + 3)/4 ≤ s(n − 1) + 1.
If n is even, then max{h(n, p, q) : p + q ≤ n − 2, p > q} is attained at (p, q) = (n/2, n/2 − 2).
Hence, by (1) we have θ(D) ≤ h(n, p, q) ≤ n − 3 + (n/2 − 1)(n/2 − 3) = (n² − 4n)/4 ≤ s(n − 1) + 1.
Combining all the above cases we have (2). Now we prove
[s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + } ∪ {∞} ⊆ Θ(n).(4)
Notice that θ( − → g (p, q)) = LCM(p, q), θ( − → C n ) = ∞ and θ( − → K n ) = 1, where − → K n is the complete digraph of order n. It suffices to verify [2, s(n − 1) + 1] ⊆ Θ(n).
Let G(p, q) be the set of graphs with the following diagrams, where − → C p is a p-cycle
u 1 u 2 · · · u p u 1 , − → C q is a q-cycle v 1 v 2 · · · v q v 1 , the arrows from x to − → C p and − → C q represent two
paths xx 1 · · · x r u 1 and xy 1 · · · y s v 1 , the arrow from
− → C p to − → C q represents an arc u i v j with 1 ≤ i ≤ p, 1 ≤ j ≤ q.
[Diagram: a digraph in G(p, q), the vertex x, the two paths from x to − → C p and to − → C q , and the arc from − → C p to − → C q .]
Let G(p, q, l, t) be the set of graphs in G(p, q) such that the length of the path xx 1 · · · x r u 1 · · · u i v j is l and the length of the path xy 1 · · · y s v 1 · · · v j is t. Then θ(D) = min{l + ap : l + ap = t + bq for some nonnegative integers a, b} − 1 for all D ∈ G(p, q, l, t). We denote by θ(G(p, q, l, t)) the stable index of digraphs in G(p, q, l, t).
If q = p − 1 and l − t = d < p, applying Lemma 7, min{l + ap : l + ap = t + bq for some nonnegative integers a, b} = min{l + ap : l − t = bq − ap for some nonnegative integers a, b}
is attained at (a, b) = (p − d − 1, p − d). Therefore, θ(G(p, p − 1, l, t)) = t − 1 + (p − 1)(p − d).
Denote by f (p, q, d) the set {θ(G(p, q, l, t)) : l − t = d}. If q = p − 1 ≥ 1 and d = 1, then we have 1 ≤ t ≤ ⌈n/2⌉ − 1 in G(p, q, l, t). Hence,

f (p, p − 1, 1) = {w + (p − 1)² : 0 ≤ w ≤ ⌈n/2⌉ − 2} = [(p − 1)², (p − 1)² + ⌈n/2⌉ − 2].

If q = p − 1 ≥ 2 and d = 2, then we have 1 ≤ t ≤ ⌈n/2⌉ − 1 in G(p, q, l, t). Hence,

f (p, p − 1, 2) = {w + (p − 1)(p − 2) : 0 ≤ w ≤ ⌈n/2⌉ − 2} = [(p − 1)(p − 2), (p − 1)(p − 2) + ⌈n/2⌉ − 2].
Notice that
(p − 1)² ≤ (p − 1)(p − 2) + ⌈n/2⌉ − 1 for 2 ≤ p ≤ ⌊n/2⌋, and (p − 1)(p − 2) ≤ (p − 2)² + ⌈n/2⌉ − 1 for 2 ≤ p ≤ ⌊n/2⌋.
We have
⋃_{p=3}^{⌊n/2⌋} ( f (p, p − 1, 1) ∪ f (p, p − 1, 2) ) = [2, (⌊n/2⌋ − 1)² + ⌈n/2⌉ − 2] ≡ T (n),
which implies T (n) ⊆ Θ(n).
Next we distinguish three cases.

Case 1. n is even, say, n = 2m. Then (⌊n/2⌋ − 1)² + ⌈n/2⌉ − 2 = m² − m − 1 and s(n − 1) + 1 = (n² − 2n + 4)/4 = m² − m + 1. Note that θ( − → g (m, m − 1)) = m² − m and θ( − → g (m, 3, m − 1)) = m² − m + 1. We have (5).

Case 2. n ≡ 1 (mod 4), say, n = 4k + 1. Suppose m = 2k. Then (⌊n/2⌋ − 1)² + ⌈n/2⌉ − 2 = m² − m and s(n − 1) + 1 = (n² − 2n + 1)/4 = m². Since m + 1 and m − 1 are relatively prime, applying Corollary 8, we have
f (m + 1, m − 1, 2) = {w + m(m − 1) : 0 ≤ w ≤ m − 2} = [m² − m, m² − 2].
Moreover, since m is even, we have θ( − → g (m + 1, m − 1)) = m² − 1 and θ( − → g (m + 1, 3, m − 1)) = m² = s(n − 1) + 1.
Therefore, we have (5).

Case 3. n ≡ 3 (mod 4), say, n = 4k + 3. Suppose m = 2k + 1. Then (⌊n/2⌋ − 1)² + ⌈n/2⌉ − 2 = m² − m and s(n − 1) + 1 = (n² − 2n − 11)/4 = m² − 3. Since m + 2 and m − 2 are relatively prime, applying Corollary 8 we have f (m + 2, m − 2, 4) = {w + (m + 1)(m − 2) : 0 ≤ w ≤ m − 3} = [m² − m − 2, m² − 5]. On the other hand, θ( − → g (m + 2, m − 2)) = m² − 4 and θ( − → g (m + 2, 3, m − 2)) = m² − 3 = s(n − 1) + 1.
Therefore, we get (5).
Combining (2) and (4), we obtain Θ(n) = [s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + } ∪ {∞}.
This completes the proof.
Remark. Denote by − → L n the union of the k-cycle v 1 v 2 · · · v k v 1 and the 2-cycle v 1 v n v 1 , where k = n if n is odd and k = n − 1 if n is even. Then we have θ( − → L n ) = n for n ≥ 3.
Note that θ( − → K n ) = 1, θ( − → g (1, 2)) = 2, θ( − → g (2, 3)) = 6, θ( − → g (2, 3, 3)) = 7.
We have Θ(n) = [s(n)] ∪ {∞} for 1 ≤ n ≤ 6.
When n ≥ 7, by direct computation we see that Θ(10) = [s(10)] ∪ {∞} and [s(n − 1) + 1] ∪ {LCM(p, q) : p + q = n, p, q ∈ Z + } = [s(n)] for n = 10. So Θ(n) has gaps in the set [s(n)] for n ∈ Z + − ([6] ∪ {10}).
Acknowledgement. The authors are grateful to Professor Xingzhi Zhan for valuable suggestions.
[1] J. A. Bondy and U. S. R. Murty, Graph Theory, GTM 244, Springer, 2008.
[2] Z. Chen, Z. Huang, J. Yan, The stable index of 0-1 matrices, Linear Algebra Appl. 600 (2020) 148-160.
[3] A. L. Dulmage, N. S. Mendelsohn, Gaps in the exponent set of primitive matrices, Illinois J. Math. 8 (1964) 642-656.
[4] M. Lewin, Y. Vitek, A system of gaps in the exponent set of primitive matrices, Illinois J. Math. 25 (1981) 87-98.
[5] J.-Y. Shao, On a conjecture about the exponent set of primitive matrices, Linear Algebra Appl. 65 (1985) 91-123.
[6] H. Wielandt, Unzerlegbare nicht negative Matrizen, Math. Z. 52 (1950) 642-648.
[7] K. M. Zhang, On Lewin and Vitek's conjecture about the exponent set of primitive matrices, Linear Algebra Appl. 96 (1987) 101-108.
| [] |
[
"Spin-dependent sub-GeV Inelastic Dark Matter-electron scattering and Migdal effect: (I). Velocity Independent Operator",
"Spin-dependent sub-GeV Inelastic Dark Matter-electron scattering and Migdal effect: (I). Velocity Independent Operator"
] | [
"Jiwei Li \nDepartment of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina\n",
"Liangliang Su \nDepartment of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina\n",
"Lei Wu \nDepartment of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina\n",
"‡ ",
"Bin Zhu \nDepartment of Physics\nYantai University\n264005YantaiChina\n"
] | [
"Department of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina",
"Department of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina",
"Department of Physics\nInstitute of Theoretical Physics\nNanjing Normal University\n210023NanjingChina",
"Department of Physics\nYantai University\n264005YantaiChina"
] | [] | The ionization signal provide an important avenue of detecting light dark matter. In this work, we consider the sub-GeV inelastic dark matter and use the non-relativistic effective field theory (NR-EFT) to derive the constraints on the spin-dependent DM-electron scattering and DM-nucleus Migdal scattering. Since the recoil electron spectrum of sub-GeV DM is sensitive to tails of galactic DM velocity distributions, we also compare the bounds on corresponding scattering cross sections in Tsallis, Empirical and standard halo models. With the XENON1T data, we find that the exclusion limits of the DM-proton/neutron and DM-electron scattering cross sections for exothermic inelastic DM are much stronger that those for the endothermic inelastic DM. Each limits of the endothermic inelastic DM can differ by an order of magnitude at most in three considered DM velocity distributions. * | 10.1088/1475-7516/2023/04/020 | [
"https://export.arxiv.org/pdf/2210.15474v2.pdf"
] | 253,157,411 | 2210.15474 | b3128b2a613e12c6d9a589c0a547df21d54d0caf |
Spin-dependent sub-GeV Inelastic Dark Matter-electron scattering and Migdal effect: (I). Velocity Independent Operator
12 Apr 2023
Jiwei Li
Department of Physics
Institute of Theoretical Physics
Nanjing Normal University
210023NanjingChina
Liangliang Su
Department of Physics
Institute of Theoretical Physics
Nanjing Normal University
210023NanjingChina
Lei Wu
Department of Physics
Institute of Theoretical Physics
Nanjing Normal University
210023NanjingChina
‡
Bin Zhu
Department of Physics
Yantai University
264005YantaiChina
Spin-dependent sub-GeV Inelastic Dark Matter-electron scattering and Migdal effect: (I). Velocity Independent Operator
The ionization signal provides an important avenue for detecting light dark matter. In this work, we consider the sub-GeV inelastic dark matter and use the non-relativistic effective field theory (NR-EFT) to derive the constraints on the spin-dependent DM-electron scattering and DM-nucleus Migdal scattering. Since the recoil electron spectrum of sub-GeV DM is sensitive to the tails of galactic DM velocity distributions, we also compare the bounds on the corresponding scattering cross sections in the Tsallis, Empirical and standard halo models. With the XENON1T data, we find that the exclusion limits of the DM-proton/neutron and DM-electron scattering cross sections for exothermic inelastic DM are much stronger than those for the endothermic inelastic DM. The limits for the endothermic inelastic DM can differ by up to an order of magnitude among the three considered DM velocity distributions.
Numerous astronomical and cosmological observations have provided evidence for the existence of dark matter (DM) in the universe. However, besides its gravitational interaction, the other physical properties of DM remain a mystery. From the perspective of particle physics, dark matter may be made up of a hypothetical particle that is still undetected. Among the various conjectures, the weakly interacting massive particles (WIMPs) have been widely studied in various experiments.
Direct detection experiments, which attempt to discern signals induced by DM at extremely low backgrounds, have made great progress in the past few years [1][2][3][4][5][6][7][8][9][10][11][12][13]. However, there is no evidence of WIMP dark matter in the typical mass range. This strongly motivates the search for sub-GeV dark matter. However, the low momentum transfer of sub-GeV DM cannot produce an observable nuclear recoil signal in conventional detectors. With the improvements of direct detection experiments, we can access low-mass DM by using ionization events. Such signals can arise from the scattering of electrons with DM [5, 10-14, 18, 19, 27-35], and from secondary effects in the DM-nucleus interactions, such as the Migdal scattering [9,[20][21][22][23][24][25][26][36][37][38][39][40][41]. There have been many studies on DM-electron scattering to date. For instance, in the context of elastic scattering, various operators for spin-dependent (SD) interactions are discussed in Ref. [42] in an effective field theory (EFT). The inelastic dark matter (iDM) model [43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58], originally used to explain the DAMA anomaly, has also been used to study DM-electron scattering with spin-independent interactions to explain the XENON1T excess [34,[57][58][59]. Dent et al. [24] showed some enlightening results on the Migdal effect of inelastic dark matter scattering with nuclei through the spin-independent (SI) interaction. However, there is still much scope for discussion of iDM-electron/Migdal scattering via SD interactions.
In this paper, we will study the ionization signals of sub-GeV inelastic dark matter (iDM), including the Migdal effect and DM-electron scattering. Given the current strong constraints on the spin-independent (SI) cross section, we calculate the spin-dependent (SD) iDM-nucleus/electron scattering. We consider the Lagrangian density L_int ⊃ χ̄′γ^μγ⁵χ N̄γ_μγ⁵N for the axial-vector interaction of DM χ with the Standard Model particle N and derive the operator O₄ = S_χ · S_N; this type of SD interaction is the only one at leading order that is not suppressed by the momentum transfer q. For some models, the SD interaction may still dominate; e.g., for a Dirac DM particle interacting through its anomalous magnetic dipole moment, the SD-like part (dipole-dipole) dominates in certain regions of parameter space [60]. Likewise, when the DM is a Majorana fermion or a real vector boson, the SD interaction can naturally dominate (but this is not always guaranteed) [61]. In the future, if a signal associated with SD interactions is observed, it will largely rule out spinless DM particles.
On the other hand, the velocity distribution function (VDF) of the local DM halo can have a non-negligible impact on direct detection [62][63][64][65][66]. In particular, the electron recoil spectrum is sensitive to the high-velocity tail of the DM halo. As a benchmark distribution, the Standard Halo Model (SHM) is usually adopted [67]; however, it still cannot accurately describe the distribution of DM in the Galaxy [68]. This motivates other alternative halo models for the VDF [69], such as the Tsallis and Empirical models. We will also discuss their impacts on the exclusion limits of iDM-nucleus/electron scattering.
The paper is structured as follows. In Sec. II, we compare the velocity distribution functions for three models: the Standard Halo Model, the Tsallis model and the empirical model. In Sec. III and Sec. IV, we investigate the ionization rates of the spin-dependent scattering of the inelastic dark matter with the nucleus and the electron targets, respectively.
With the available data, we obtain the exclusion limit for spin-dependent inelastic dark matter-nucleus Migdal/electron scattering in three velocity distribution models. Finally, we draw the conclusions in Sec. V.
II. DARK MATTER VELOCITY DISTRIBUTION FUNCTION
In DM direct detection, the astrophysical properties of the local DM halo distribution, such as the local DM density and the mean DM velocity, can significantly change the sensitivity. In particular, the electron spectrum is exceptionally sensitive to the high-velocity tail of the local velocity distribution of dark matter [66,69,70]. The most popular and widely used model in DM direct detection analyses is the standard halo model (SHM), which assumes that DM particles are in an isothermal sphere and obey the isotropic Maxwell-Boltzmann velocity distribution function (VDF). Although its simple analytical form is appealing [68], this model cannot adequately explain the distribution of DM particles in the Galaxy. Consequently, it is important to investigate different velocity distribution models as substitutes for the SHM. Based on the work in Ref. [69], this paper also introduces two additional velocity distribution models: the Tsallis model and an empirical model. We will discuss the effects on DM-target scattering caused by the different VDF models.
In the rest frame of the Galaxy, the SHM is given by
f_{SHM}(\vec{v}) \propto \begin{cases} e^{-|\vec{v}|^2/v_0^2}, & |\vec{v}| \le v_{esc} \\ 0, & |\vec{v}| > v_{esc} \end{cases} \quad (1)
The escape speed of the galaxy limits the speed of DM particles gravitationally bound to our galaxy, so a physical cut-off point is set at the local escape speed v esc , with v 0 as the circular velocity at the Solar position [62]. The rotation curve in this model will be asymptotically flat at large r (i.e. the distance from the centre of the Galaxy), and v 0 is usually regarded as the value of the curve at this point. In the laboratory frame it has the following analytical
form:

f_{SHM}(\vec{v}) = \frac{1}{K}\, e^{-|\vec{v}+\vec{v}_E|^2/v_0^2}\, \Theta(v_{esc} - |\vec{v} + \vec{v}_E|), \quad (2)
where v E is the Earth's Galactic velocity. The velocity distribution of the SHM is truncated at the escape speed v esc through the Heaviside function Θ, with the normalization coefficient
K = v_0^3 \left[ \pi^{3/2}\, \mathrm{Erf}\!\left(\frac{v_{esc}}{v_0}\right) - 2\pi \frac{v_{esc}}{v_0} \exp\!\left(-\frac{v_{esc}^2}{v_0^2}\right) \right], \quad (3)

which results from the normalization condition \int f(\vec{v})\, d^3v = 1.
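As a quick sanity check of Eq. (3), the following Python sketch (our own illustration, using the fiducial parameters quoted below) compares the closed-form K with a direct numerical integration of the truncated Maxwellian.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

v0, vesc = 238.0, 544.0  # km/s, the fiducial values adopted in this paper

# Closed-form normalization K from Eq. (3).
K = v0**3 * (np.pi**1.5 * erf(vesc / v0)
             - 2.0 * np.pi * (vesc / v0) * np.exp(-(vesc / v0) ** 2))

# Numerical check: integrate the truncated Maxwellian over velocity space.
K_num, _ = quad(lambda v: 4.0 * np.pi * v**2 * np.exp(-(v / v0) ** 2), 0.0, vesc)

print(K, K_num)  # the two agree, so f integrates to one
```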
The features of the local VDFs derived from DM cosmological simulations that include baryonic physics are largely consistent with the SHM; however, several studies [62,69,[71][72][73][74] using data from DM-only simulations reveal a significant deviation of the relevant local VDFs from the overall trends of the SHM. These simulations show that different features from the SHM appear, especially in the high-velocity tail of the distribution. One point worth making is that although adding baryons to the simulation makes the process more complex, it is nevertheless essential for approximating the real universe.
Next, we discuss some alternative models in which the VDF of the Tsallis Model (Tsa) [75] can be considered more compatible with the numerical results of N − body simulations that include baryons [76,77]. According to the statistical results of Tsallis, the definition of standard Boltzmann-Gibbs entropy is extended by introducing the entropy index q s , as
follows:

S_{q_s} \equiv \frac{k}{q_s - 1} \sum_i p_i \left(1 - p_i^{\,q_s-1}\right) = -k \sum_i p_i^{\,q_s} \ln_{q_s} p_i, \quad (4)
where p_i is the probability for a particle to be in state i, and \ln_{q_s} p = (p^{\,1-q_s} - 1)/(1 - q_s).
Note that q s is an arbitrary positive real number and that Eq. 4 recovers the standard Boltzmann-Gibbs entropy expression when the limit q s → 1. Then, we can write down the velocity distribution function according to this Tsallis entropy
f_{Tsa}(\vec{v}) \propto \begin{cases} \left[1 - (1-q_s)\frac{v^2}{v_0^2}\right]^{1/(1-q_s)}, & |\vec{v}| < v_{esc} \\ 0, & |\vec{v}| \ge v_{esc} \end{cases} \quad (5)
It is advantageous to use the Tsa model to elucidate the velocity distribution of the DM halo because, in the range q_s < 1, the escape speed is already physically built in, determined by v_{esc} = \sqrt{v_0^2/(1 - q_s)}, without the need for manual truncation; for q_s > 1 the escape speed still needs to be set by hand. Finally, based on the work of Refs. [69,74,78], another alternative model we introduce is an empirical model (Emp). It is derived from hydrodynamical simulations with baryons built on the data of DM-only cosmological simulations [79,80]. In the Galactic rest frame, this empirical model has a velocity distribution of the following form:
f_{Emp}(\vec{v}) \propto \begin{cases} \exp\!\left(-\frac{|\vec{v}|}{v_0}\right)\left(v_{esc}^2 - |\vec{v}|^2\right)^{p}, & |\vec{v}| < v_{esc} \\ 0, & |\vec{v}| \ge v_{esc} \end{cases} \quad (6)
This empirical model is an exponential-based distribution, where p is an adjustable parameter, and following the best-fit parameters for the Eris simulations [69,81], p = 1.5 is set as our fiducial model. The shape of the VDF for this empirical model primarily relies on a proportional relationship, r/r s , the ratio of the VDF's measured radius to the scaled radius of the halo density profile, and the uncertainty of the VDF is also derived from this quantity [74].
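For concreteness, the following Python sketch (our own illustration, with the fiducial parameters used in this paper) implements the unnormalized Tsa and Emp speed distributions of Eqs. (5) and (6) and computes their normalization constants numerically; the v_esc relation for q_s < 1 is the one stated above.

```python
import numpy as np
from scipy.integrate import quad

v0, qs, p_emp = 238.0, 0.809, 1.5          # km/s, entropy index, Emp exponent
vesc_tsa = np.sqrt(v0**2 / (1.0 - qs))     # built-in escape speed for q_s < 1
vesc = 544.0                               # escape speed used for the Emp model

def f_tsa(v):
    x = max(1.0 - (1.0 - qs) * v**2 / v0**2, 0.0)
    return x ** (1.0 / (1.0 - qs))         # vanishes automatically at vesc_tsa

def f_emp(v):
    return np.exp(-v / v0) * max(vesc**2 - v**2, 0.0) ** p_emp

# Normalization constants so that 4*pi * integral of v^2 f(v) dv equals one.
N_tsa = quad(lambda v: 4 * np.pi * v**2 * f_tsa(v), 0, vesc_tsa)[0]
N_emp = quad(lambda v: 4 * np.pi * v**2 * f_emp(v), 0, vesc)[0]
print(vesc_tsa, N_tsa, N_emp)              # vesc_tsa is close to 544 km/s
```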
In Fig. 1, we have depicted with solid lines of various colours the η(v_min) resulting from the three velocity distribution models after performing the integral

η(v_{min}) = \int \frac{f_\chi(\vec{v})}{v}\, \Theta(v - v_{min})\, d^3v.
Here v_min is the minimum incoming DM velocity that causes nuclear recoil, which we will discuss in the next section. In this paper, we adopt the astrophysical parameters suggested by recent work [82]: v_0 = 238 km/s [83,84], v_esc = 544 km/s [85] and v_E = 250 km/s [86] (the Solar peculiar velocity from Ref. [87] and the average galactocentric Earth speed from Ref. [88]), corresponding to q_s = 0.809 for the Tsallis model, and we then compare the η(v_min) values of the different models with the same set of parameters. It can be observed that, as the DM speed increases from low to high, the η(v_min) transition in the Empirical model and the Standard Halo Model appears smoother, whereas the η(v_min) of the Tsallis model is steeper than in the other two models. In the low-speed region, the η(v_min) of the Emp model and of the SHM diverge, although not significantly (that of the Tsa model diverges most from both). However, as the DM speed increases, the Emp curve almost coincides with that of the SHM.
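A simplified sketch of the η(v_min) comparison is given below (Python; our own illustration). For brevity it works in the Galactic rest frame, i.e., it omits the boost by v_E that is included in Fig. 1, so the numbers are only indicative of the relative trends.

```python
import numpy as np
from scipy.integrate import quad

v0, vesc, qs, p_emp = 238.0, 544.0, 0.809, 1.5

def shm(v):
    return np.exp(-(v / v0) ** 2) * (v <= vesc)

def tsa(v):
    x = max(1.0 - (1.0 - qs) * v**2 / v0**2, 0.0)
    return x ** (1.0 / (1.0 - qs))

def emp(v):
    return np.exp(-v / v0) * max(vesc**2 - v**2, 0.0) ** p_emp

def eta(f, vmin):
    """eta(vmin) = integral of f(v)/v * Theta(v - vmin) d^3 v, normalized."""
    norm = quad(lambda v: 4 * np.pi * v**2 * f(v), 0, vesc)[0]
    num = quad(lambda v: 4 * np.pi * v * f(v), vmin, vesc)[0]  # v^2 * (1/v)
    return num / norm

for vmin in (100.0, 300.0, 500.0):
    print(vmin, [round(eta(f, vmin), 4) for f in (shm, tsa, emp)])
```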
In the following discussion, we turn our attention to inelastic dark matter-nucleus Migdal scattering and inelastic dark matter-electron scattering. We will examine the impact of the velocity distribution model discussed above through the electron spectrum induced by these two processes.
III. INELASTIC DARK MATTER-NUCLEUS MIGDAL SCATTERING
We introduce a fermionic dark matter χ with spin 1/2 coupling to a Standard Model (SM) particle N [60,61,[89][90][91]. Assuming inelastic scattering between them, χN → χ′N, a mass splitting δ = m_{χ′} − m_χ occurs between the incoming and outgoing dark matter (more details on the kinematics are discussed below). If we consider that their interaction is via axial-vector-axial-vector couplings, the Lagrangian density at low momentum transfer is
$$\mathcal{L}_{\rm int} \supset \bar\chi' \gamma^\mu \gamma^5 \chi\; \bar N \gamma_\mu \gamma^5 N. \qquad (7)$$
This is the standard spin-dependent interaction in non-relativistic effective field theory (NR-EFT), and it is usually reduced to the product of the two spin operators, −4 S_χ · S_N = −4 O_4. Such a spin-dependent interaction is the only one at leading order that is not suppressed by the momentum transfer q, which may allow us to place stronger constraints on DM-nucleus scattering through SD interactions.
A. Calculations
We begin with the kinematics of inelastic dark matter-nucleon scattering. In general, there are two ways inelasticity can arise [60]: the DM particle of mass m_χ can scatter off the nucleus into a state χ' with mass m_χ' = m_χ + δ, or the nucleon can transition from a low-energy state to an excited state. The latter case has been studied extensively in the literature [92-98], and for the sake of simplicity we do not consider this possibility in this work.
We focus on the process χ(p) + N(k) → χ'(p') + N(k'), where χ and χ' are the dark matter particles in the initial and final states, respectively, and N is a nucleon. In the non-relativistic limit, energy conservation in the center-of-mass (CM) frame gives
$$\frac{1}{2}\mu_N v^2 = \frac{p'^2}{2m_{\chi'}} + \frac{k'^2}{2m_N} + \Delta = \frac{(\vec p + \vec q\,)^2}{2m_{\chi'}} + \frac{(\vec k - \vec q\,)^2}{2m_N} + \Delta, \qquad (8)$$
where μ_N = m_χ m_N/(m_χ + m_N) is the reduced mass of the initial χ−N system and v ≡ p/m_χ − k/m_N is the relative velocity between the DM particle and the nucleon. The momentum transfer is q ≡ p' − p = k − k'. Note that the momentum transfer q is approximately Galilean invariant under NR boosts in inelastic scattering as long as the mass splitting satisfies |δ| ≪ m_χ. Δ denotes the initial kinetic energy lost to inelastic effects. To prepare for the later discussion of the Migdal effect and electron scattering, we write Δ = E_em + δ (for pure nuclear scattering, Δ = δ), where E_em is the electromagnetic energy available to excite the electron.
In this paper we adopt the convention that δ > 0 corresponds to endothermic scattering, while δ < 0 corresponds to exothermic scattering; Δ = 0 corresponds to the usual elastic scattering. Comparing Δ with the initial kinetic energy E_kin of the system, it is apparent from Eq. 8 that the maximum possible value of Δ equals the initial available kinetic energy,
$$E_{\rm kin} \equiv \frac{1}{2}\mu_N v^2 = \Delta_{\max}. \qquad (9)$$
Significantly, the masses of the DM and the nucleon are so large compared to the kinetic energy that scattering is only kinematically allowed in the |Δ| ≪ m_χ regime.
To facilitate our calculations, we set p_i ≡ μ_N v for the initial system momentum; for the final χ'−N system we can write E_f = ½ μ'_N v'² + Δ. In the NR limit we treat Δ/μ^(')_N as a quantity of order O(v²), so that
$$v'^2 = v^2 - \frac{2\Delta}{\mu'_N}, \qquad (10)$$
and the square of the momentum of the final system,
$$p_f^2 = \mu_N'^2\, v'^2 \simeq \mu_N^2 v^2 - 2\mu'_N \Delta. \qquad (11)$$
The transfer momentum q = p_f − p_i is the same in both frames, so we can express the atomic recoil energy in the detector frame as
$$E_R = \frac{\mu_N^2 v^2}{m_N}\left(1 - \cos\theta\,\sqrt{1 - \frac{2\Delta}{\mu_N v^2}}\,\right) - \frac{\mu_N \Delta}{m_N}, \qquad (12)$$
where θ is the DM-nucleon scattering angle in the CM frame. It is worth mentioning that in the derivation above, for μ_N ∼ O(GeV) and Δ ∼ O(keV), one has μ'_N ≈ μ_N = m_χ m_N/(m_χ + m_N). In the numerical calculations, however, we keep the complete expressions (μ'_N ≠ μ_N).
We can also see from Eq. 12 that if the incoming DM has a fixed speed, there are a maximum E_R^max and a minimum E_R^min of the recoil energy, corresponding to θ = π and 0, respectively. Likewise, when the DM particle imparts a given recoil energy to the target nucleus, the incident speed of the DM is kinematically limited. Expressing the momentum transfer q = √(2 m_N E_R) in terms of the recoil energy, we obtain the minimum DM velocity that can cause a given nuclear recoil,
$$v_{\min}(E_R) = \frac{q}{2\mu_N} + \frac{\Delta}{q} = \frac{1}{\mu_N\sqrt{2 m_N E_R}}\left|m_N E_R + \mu_N \Delta\right|. \qquad (13)$$
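A short numerical sketch of Eq. 13 (assuming a xenon target with A = 131; the unit constants are our own bookkeeping) illustrates how the mass splitting shifts v_min:

```python
import numpy as np

C_KMS = 2.998e5     # speed of light in km/s
AMU = 0.9315        # GeV per atomic mass unit

def vmin_of_ER(ER_keV, m_chi_GeV, delta_keV, A=131):
    """Minimum DM speed (km/s) that can produce recoil energy ER, Eq. (13).
    A = 131 assumes a xenon target."""
    mN = A * AMU
    mu = m_chi_GeV * mN / (m_chi_GeV + mN)
    ER, dl = ER_keV * 1e-6, delta_keV * 1e-6        # keV -> GeV
    q = np.sqrt(2.0 * mN * ER)                      # momentum transfer, GeV
    return np.abs(mN * ER + mu * dl) / (mu * q) * C_KMS

# Endothermic splitting raises v_min; exothermic lowers it.
for d in (-5.0, 0.0, 5.0):
    print(f"delta = {d:+.0f} keV:", round(vmin_of_ER(1.0, 1.0, d), 1), "km/s")
```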
Next, we introduce a non-relativistic effective field theory to describe inelastic dark matter-nucleus scattering. Given that the average velocity of DM in the galactic halo is v ∼ O(10⁻³), non-relativistic effective field theory provides a bottom-up framework to study DM direct detection [25,99-105]. This formalism decomposes the interaction of dark matter with the nucleus into two classes of response functions, and it allows us to use pre-calculated nuclear form factors for the relevant interaction operators.
According to the work of Haxton et al. [100-102], who established an EFT for elastic DM-nucleus scattering, one can construct a series of effective operators from four Galilean invariants: the DM particle spin S_χ, the nucleon spin S_N, the momentum transfer i q, and the transverse velocity v^⊥_el ≡ v + q/(2μ_N). In the case of inelastic scattering, however, this last quantity must be modified because of the mass splitting. As indicated by the formalism of Ref. [106], the NR-EFT for inelastic scattering of dark matter is a direct extension of the elastic one: at leading order in the v expansion, the only modification is to v^⊥_el. Kinematically, the mass splitting δ contributes to the incident velocity component perpendicular to the momentum transfer q, so a new Galilean invariant for inelastic scattering is obtained through the
modification
$$\vec v^{\,\perp}_{\rm el} \to \vec v^{\,\perp}_{\rm inel} \equiv \vec v + \frac{\vec q}{2\mu_N} + \frac{\Delta}{|\vec q\,|^2}\,\vec q. \qquad (14)$$
The above equation satisfies q · v^⊥_inel = 0 by energy conservation. The effect of this inelasticity is reflected directly in the DM particle response functions R^{ττ'}_X rather than in the nucleon response functions W^{ττ'}_X. However, the effective spin-dependent operator we are concerned with, O_4 = S_χ · S_N, does not depend on v^⊥_inel, so for our calculations we can still use the nucleon matrix elements from Ref. [102].
For a given Lagrangian, the invariant amplitude for DM-nucleus scattering can be obtained using spherical harmonics and multipole expansions,
$$\mathcal{M} = \sum_{\tau=0,1}\langle j_\chi, M_\chi; j_N, M_N|\, \mathcal{O}_{JM;\tau}(q)\, |j_\chi, M_\chi; j_N, M_N\rangle \equiv \sum_{\tau=0,1}\langle j_\chi, M_\chi; j_N, M_N|\, \sum_{i=1}^{A} \mathcal{O}_{JM}(q\,\vec x_i)\, t^\tau(i)\, |j_\chi, M_\chi; j_N, M_N\rangle. \qquad (15)$$
Here O_{JM;τ}(q) contains the six operators familiar from standard-model electroweak interaction theory: M, Σ', Δ, Σ'', Φ'', and Φ̃'. This is the result obtained by considering only elastic transitions and assuming that the nuclear ground state obeys CP and parity conservation.
According to semi-leptonic electroweak theory [107-109], the only spin-dependent single-particle operators of interest to us are Σ' and Σ'', corresponding to the axial-transverse and axial-longitudinal operators, respectively,
$$\Sigma'_{JM;\tau}(q^2) \equiv -i \sum_{i=1}^{A} \left[\frac{1}{q}\vec\nabla_i \times \vec M^{M}_{JJ}(q\vec x_i)\right]\cdot \vec\sigma(i)\; t^\tau(i),$$
$$\Sigma''_{JM;\tau}(q^2) \equiv \sum_{i=1}^{A} \left[\frac{1}{q}\vec\nabla_i\, M_{JM}(q\vec x_i)\right]\cdot \vec\sigma(i)\; t^\tau(i). \qquad (16)$$
By averaging over initial spins and summing over final spins, we write down the DM-nucleus scattering transition probability,
$$P_{\rm tot} = \frac{1}{2j_\chi+1}\,\frac{1}{2j_N+1}\sum_{\rm spins}|\mathcal M|^2 = \frac{4\pi}{2j_N+1}\sum_{\tau=0,1}\sum_{\tau'=0,1}\left[R^{\tau\tau'}_{\Sigma'}\!\left(v^{\perp 2}_{T,{\rm inel}},\frac{q^2}{m_N^2}\right)W^{\tau\tau'}_{\Sigma'}(q^2)+R^{\tau\tau'}_{\Sigma''}\!\left(v^{\perp 2}_{T,{\rm inel}},\frac{q^2}{m_N^2}\right)W^{\tau\tau'}_{\Sigma''}(q^2)\right], \qquad (17)$$
where j_χ and j_N label the dark matter and nuclear spins, respectively. Eq. 17 expresses the transition probability as a product of DM particle response functions R^{ττ'}_X and nuclear response functions W^{ττ'}_X. The former are bilinear in the EFT coefficients c^τ_i, which cleanly separates the particle physics from the nuclear physics. In the isospin basis c^{τ(')}_i, the DM particle response functions we consider are
$$R^{\tau\tau'}_{\Sigma'}\!\left(v^{\perp 2}_{T,{\rm inel}},\frac{q^2}{m_N^2}\right) = \frac{1}{8}\left[\frac{q^2}{m_N^2}\, v^{\perp 2}_{T,{\rm inel}}\, c^\tau_3 c^{\tau'}_3 + v^{\perp 2}_{T,{\rm inel}}\, c^\tau_7 c^{\tau'}_7\right] + \frac{j_\chi(j_\chi+1)}{12}\left[c^\tau_4 c^{\tau'}_4 + \frac{q^2}{m_N^2}\, c^\tau_9 c^{\tau'}_9 + \frac{v^{\perp 2}_{T,{\rm inel}}}{2}\left(c^\tau_{12} - \frac{q^2}{m_N^2}\, c^\tau_{15}\right)\left(c^{\tau'}_{12} - \frac{q^2}{m_N^2}\, c^{\tau'}_{15}\right) + \frac{q^2}{2 m_N^2}\, v^{\perp 2}_{T,{\rm inel}}\, c^\tau_{14} c^{\tau'}_{14}\right],$$
$$R^{\tau\tau'}_{\Sigma''}\!\left(v^{\perp 2}_{T,{\rm inel}},\frac{q^2}{m_N^2}\right) = \frac{q^2}{4 m_N^2}\, c^\tau_{10} c^{\tau'}_{10} + \frac{j_\chi(j_\chi+1)}{12}\left[c^\tau_4 c^{\tau'}_4 + \frac{q^2}{m_N^2}\left(c^\tau_4 c^{\tau'}_6 + c^\tau_6 c^{\tau'}_4\right) + \frac{q^4}{m_N^4}\, c^\tau_6 c^{\tau'}_6 + v^{\perp 2}_{T,{\rm inel}}\, c^\tau_{12} c^{\tau'}_{12} + \frac{q^2}{m_N^2}\, v^{\perp 2}_{T,{\rm inel}}\, c^\tau_{13} c^{\tau'}_{13}\right]. \qquad (18)$$
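As a sketch of how Eq. 18 is used in practice, the following Python snippet evaluates the isoscalar components of R_Σ' and R_Σ''. For the single operator O_4 studied here (only c_4 nonzero), both responses collapse to j_χ(j_χ+1)/12 · c_4², independent of v^⊥_T; the function signatures and dictionary layout are our own illustrative choices.

```python
import numpy as np

def R_sigma_prime(c, jchi, v_T2, q2_over_mN2):
    """DM response function R_Sigma' of Eq. (18), tau = tau' = 0 component.
    c: dict mapping operator index -> (c^0_i, c^1_i) isospin pair."""
    g = lambda i: c.get(i, (0.0, 0.0))[0]
    jfac = jchi * (jchi + 1.0) / 12.0
    return (0.125 * (q2_over_mN2 * v_T2 * g(3)**2 + v_T2 * g(7)**2)
            + jfac * (g(4)**2 + q2_over_mN2 * g(9)**2
                      + 0.5 * v_T2 * (g(12) - q2_over_mN2 * g(15))**2
                      + 0.5 * q2_over_mN2 * v_T2 * g(14)**2))

def R_sigma_dprime(c, jchi, v_T2, q2_over_mN2):
    """DM response function R_Sigma'' of Eq. (18), tau = tau' = 0 component."""
    g = lambda i: c.get(i, (0.0, 0.0))[0]
    jfac = jchi * (jchi + 1.0) / 12.0
    return (0.25 * q2_over_mN2 * g(10)**2
            + jfac * (g(4)**2 + 2.0 * q2_over_mN2 * g(4) * g(6)
                      + q2_over_mN2**2 * g(6)**2 + v_T2 * g(12)**2
                      + q2_over_mN2 * v_T2 * g(13)**2))

# O_4 only: both responses reduce to j(j+1)/12 * c4^2, independent of v_T2.
c = {4: (1.0e4, 0.0)}
print(R_sigma_prime(c, 0.5, 1e-6, 1e-6), R_sigma_dprime(c, 0.5, 1e-6, 1e-6))
# both print j(j+1)/12 * c4^2 = 0.0625 * 1e8 = 6.25e6 for j_chi = 1/2
```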
The nuclear response functions W^{ττ'}_X are obtained by a multipole expansion and a sum over nuclear states,
$$W^{\tau\tau'}_{\Sigma'}(q^2) = \sum_{J=1,3,\dots}^{\infty} \langle j_N \|\, \Sigma'_{J;\tau}(q)\, \| j_N\rangle\, \langle j_N \|\, \Sigma'_{J;\tau'}(q)\, \| j_N\rangle,$$
$$W^{\tau\tau'}_{\Sigma''}(q^2) = \sum_{J=1,3,\dots}^{\infty} \langle j_N \|\, \Sigma''_{J;\tau}(q)\, \| j_N\rangle\, \langle j_N \|\, \Sigma''_{J;\tau'}(q)\, \| j_N\rangle, \qquad (19)$$
where W^{ττ'}_{Σ'} and W^{ττ'}_{Σ''} receive contributions only from odd multipoles. A more complete formulation of Eqs. 17, 18 and 19 is given in Ref. [102]. The full amplitude and the nuclear responses can be calculated using the package DMFormFactor. Notice that relativistic normalisation is used in Eq. 17 to produce a dimensionless |M|², which is achieved by multiplying by a factor of (4 m_χ m_T)². From the transition probability P_tot, one immediately obtains the differential cross section
$$\frac{d\sigma}{dE_R} = \frac{2 m_T}{4\pi v^2}\, \frac{1}{2j_\chi+1}\, \frac{1}{2j_N+1} \sum_{\rm spins} |\mathcal M|^2. \qquad (20)$$
Next, we turn our attention to the Migdal effect in iDM-nucleus scattering. Based on the work of Refs. [31,36-40,110-112], we briefly review the Migdal effect and present the main formulas needed to calculate the scattering cross section. The Migdal effect is a process of atomic ionization or excitation: in the scattering of a DM particle with a nucleus, the nucleus suddenly receives a transfer momentum q, and the electron cloud cannot 'catch up' instantaneously, which makes it possible to detect the subsequent electromagnetic signatures. The theoretical calculation of the Migdal effect is thus closely related to that of the DM-electron scattering rate.
Kinematically, the formulae for Migdal scattering can be obtained from those of DM-electron scattering by replacing the electron mass m_e with the nuclear mass m_N. To describe the physical Migdal process, following Ref. [36], both the incoming and outgoing DM are assumed to be plane waves, while the outgoing atom is treated as an atom in an excited state, whose ionized electrons belong to the continuum of the atomic Hamiltonian [111]. In this formalism, treating the nucleus and electrons as a single many-particle system allows the transfer momentum q to be regarded as originating from the DM rather than from any specific constituent. From the conservation of energy in Eq. 8, we have
$$E_{\rm em} = \vec q\cdot\vec v - \frac{q^2}{2\mu_N} - \delta, \qquad (21)$$
where E em = E e,f − E e,i is the transfer energy available for scattered electrons. There exists a maximum value of E em , which can be derived from Eq. 9,
$$E^{\max}_{\rm em} \simeq \frac{1}{2}\, m_\chi v^2_{\max} - \delta, \qquad (22)$$
where we assumed m_χ ≪ m_N, and v_max is the maximum incoming DM velocity (in the laboratory frame). This indicates that the maximum value E^max_em is independent of the initially occupied energy level of the Migdal electron and of the target nucleus. We emphasize that these inelastic effects enter mainly through the kinematics.
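A quick numerical illustration of Eq. 22 (a sketch; taking v_max = v_esc + v_E ≈ 794 km/s as the maximal lab-frame speed is our own assumption) shows why exothermic splittings keep light DM above detector thresholds:

```python
# Maximum electromagnetic energy available to the Migdal electron, Eq. (22):
# E_em^max ~ (1/2) m_chi v_max^2 - delta.
C_KMS = 2.998e5
v_max = (544.0 + 250.0) / C_KMS        # assumed maximal lab-frame speed, in c

for m_chi_MeV in (10.0, 100.0, 1000.0):
    for delta_keV in (-1.0, 0.0, 0.5):
        e_max_keV = 0.5 * (m_chi_MeV * 1e3) * v_max**2 - delta_keV
        print(f"m_chi = {m_chi_MeV:6.0f} MeV, delta = {delta_keV:+.1f} keV"
              f" -> E_em^max ~ {e_max_keV:8.4f} keV")
# For m_chi = 10 MeV, elastic scattering offers only ~0.035 keV, while an
# exothermic splitting of -1 keV lifts E_em^max above typical thresholds.
```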
Nevertheless, the dynamics of Migdal and electron scattering differ significantly depending on whether the DM interacts directly with the electrons or with the nucleus. To clarify their connection, we briefly review the reduction of the isolated atom to nuclear recoil and its projection onto the electron cloud [36]. In the non-relativistic limit, we convert the dark matter-nucleus interaction into an interaction potential V_int; the total Hamiltonian of the atom can then be written as
$$H_{\rm tot} = H_A + \frac{\hat p^2_\chi}{2 m_\chi} + V_{\rm int}(\vec x_N - \vec x_\chi), \qquad (23)$$
where x_N and x_χ represent the position operators of the nucleus and the DM, and H_A is the approximate Hamiltonian of the atomic system. The elements of the transition matrix are then derived using reduced atomic eigenstates of H_tot,
$$i T_{FI} \sim F(q_N^2) \times \mathcal M(q_N^2) \times Z_{FI}(\vec q_e) \times i(2\pi)^4\, \delta^4(p_F - p_I). \qquad (24)$$
Here T_FI is decomposed into the nuclear form factor F(q_N²) and the DM-nucleon scattering invariant matrix element M(q_N²), both of which describe the interaction with the nucleons, together with the factor Z_FI associated with the electron-cloud transition. This treatment makes energy-momentum conservation explicit at the level of the invariant amplitude.
Notice that Eq. 24 assumes that the initial-state atom is stationary in the laboratory frame, i.e. v_I = 0. Moreover, V_int is the interaction potential between the nucleus and the DM; it does not contain the position operator x of the electron, so it cannot by itself induce an electron transition. Assume that the momentum q is transferred instantaneously to the nucleus, in which case the entire atom suddenly acquires the velocity v_A ≡ q_e/m_e and leaves its stationary electrostatic potential. At this moment the wave function of the electrons of the moving atom changes, with q_e ≡ (m_e/m_N) q the effective momentum of the electron. Following the method of Ref. [36], electron transitions and nucleon scattering are linked by constructing approximate energy eigenstates of the moving atom through a Galilean transformation with velocity parameter v_A. We now assess the factor Z_FI, which is the sum of three probabilities as given in Ref. [36],
$$\sum_F |Z_{FI}|^2 = |Z_{II}|^2 + |Z_{\rm exc}|^2 + |Z_{\rm ion}|^2. \qquad (25)$$
Here |Z_II|² represents the probability that the electron cloud is unaffected by the nuclear recoil (to O(q_e² r²)), whereas |Z_exc|² and |Z_ion|² denote the probabilities of electron excitation and ionization, respectively. The ionization factor Z_ion(q_e) involves
$$Z_{\rm ion}(q_e) = \langle F|\, e^{\,i\frac{m_e}{m_N}\vec q\cdot\sum_\zeta \vec x^{(\zeta)}} |I\rangle \sim \langle f|\, i\, \vec q_e\cdot \vec x^{(\zeta)}\, |i\rangle, \qquad (26)$$
where m_e and m_N are the masses of the electron and nucleus, respectively. The last term of the above equation keeps the leading order of the Taylor expansion in q_e. We have made approximations by factoring the wave functions of the initial and final electron clouds, |I⟩ and |F⟩, so that only a single electron (with position operator x^(ζ), where x^(ζ) denotes the position of the ζ-th electron in the electron cloud) is involved in the transition between the single-electron states |i⟩ and |f⟩. Furthermore, we can quickly write down the single-electron transition amplitude for the direct interaction of DM with an electron at coordinate x^(η),
$$\langle F|\, e^{\,i\vec q\cdot\sum_\eta \vec x^{(\eta)}} |I\rangle \sim \langle f|\, i\,\vec q\cdot\vec x^{(\eta)}\, |i\rangle. \qquad (27)$$
Comparing Eq. 26 and Eq. 27, we can see that the Migdal effect and DM-electron scattering are very similar in form; the critical difference between them is the transferred momentum, q_e versus q. In the latter case the electron directly receives the momentum lost by the DM, while the momentum q_e received by the electron in the Migdal process is suppressed by a factor of about 10⁻³/A (with A the atomic mass number).
To calculate the electron ionization probability for the Migdal effect and for electron scattering in isolated atoms, we rely on the work of Refs. [19,110,113] to establish their precise relationship. From the dimensionless ionization form factor |f^{nl}_{ion}(k_e, q)|² defined in Refs. [19,113], we can rewrite Eq. 27 as
$$|Z_{\rm ion}|^2 \equiv \left|f^{nl}_{\rm ion}(k_e, q)\right|^2 = \frac{2 k_e^3}{8\pi^3} \times \sum_{l'\,m'\,m} \left|\langle k_e, l', m'|\, e^{i\vec q\cdot\vec x}\, |n, l, m\rangle\right|^2. \qquad (28)$$
This represents a sum over the final-state angular variables l', m' and over the degenerate, occupied initial states. The initial-state wave function of a bound electron in an isolated atom is characterized by the principal quantum number n and the angular momentum quantum number l; the final state is a continuum unbound electron state with momentum k_e = √(2 m_e E_e) and angular quantum numbers l' and m'. We adopt the ionization form factor |f^{nl}_{ion}(k_e, q)|² given in Ref. [114] to derive our results. However, the form factor provided there does not adequately describe the ionization behavior at transfer momenta q below 1 keV, hence we employ the dipole approximation to extend it,
$$\left|f^{nl}_{\rm ion}(k_e, q)\right|^2 = \left(\frac{q}{q_0}\right)^2 \times \left|f^{nl}_{\rm ion}(k_e, q_0)\right|^2. \qquad (29)$$
For the xenon atom, q_0 ≈ 1 keV is usually chosen so that the above approximation holds.
According to the previous description and Ref. [110], we use the parameter q_e to characterize the Migdal ionization probability, which can be expressed as
$$\sum_{n,l}\frac{d}{d\ln E_e}\, p^{c}_{q_e}(n, l \to E_e) = \sum_{n,l}\frac{\pi}{2}\left|f^{nl}_{\rm ion}(k_e, q_e)\right|^2. \qquad (30)$$
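The following sketch combines the dipole extrapolation of Eq. 29 with the Migdal rescaling q → q_e = (m_e/m_N) q entering Eq. 30. The value f2_at_q0 is a placeholder for the tabulated |f^{nl}_{ion}(k_e, q_0)|² of Ref. [114]; the number used below is purely illustrative.

```python
import numpy as np

M_E_KEV = 511.0
Q0_KEV = 1.0                        # reference momentum for xenon, Eq. (29)

def f2_dipole(q_keV, f2_at_q0):
    """|f_ion|^2 extrapolated below q0 with the (q/q0)^2 dipole scaling."""
    return (q_keV / Q0_KEV)**2 * f2_at_q0

def migdal_f2(q_keV, f2_at_q0, A=131):
    """Form factor at the suppressed Migdal momentum q_e = (m_e/m_N) q."""
    m_N_keV = A * 931494.0          # nuclear mass in keV
    q_e = (M_E_KEV / m_N_keV) * q_keV
    return f2_dipole(q_e, f2_at_q0)

q = 50.0e3                          # a 50 MeV nuclear momentum transfer
q_e = (M_E_KEV / (131 * 931494.0)) * q
print("q_e =", round(q_e, 3), "keV (< q0, so the dipole regime applies)")
print("Migdal |f|^2 =", migdal_f2(q, f2_at_q0=1e-2))  # placeholder input
```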
Thus we can derive the ionization differential event rate induced by the Migdal effect in iDM-nucleus scattering,
$$\frac{dR}{dE_R\, dE_{\rm em}\, dv} \simeq \frac{dR_0}{dE_R\, dv} \times \frac{1}{2\pi} \sum_{n,l} \frac{d}{dE_e}\, p^{c}_{q_e}(n, l \to E_e), \qquad (31)$$
where
$$\frac{dR_0}{dE_R} = N_T\, \frac{\rho_\chi}{m_\chi} \int_{v > v_{\min}} \frac{d\sigma}{dE_R}\; v\, f(v)\, d^3v. \qquad (32)$$
This decomposes the rate into the standard elastic DM-nucleus scattering differential rate dR_0/dE_R multiplied by the electron ionization probability, where f(v) is the local velocity distribution function of the DM, ρ_χ is the local DM density (we take ρ_χ ≈ 0.3 GeV/cm³ in our calculations), and N_T is the number density of target nuclei in the detector. The total electromagnetic energy E_em is the sum of the outgoing unbound electron energy E_e and the binding energy E_nl of the corresponding level: E_em = E_e + E_nl. For Migdal scattering, the electron-equivalent energy seen by the detector is
$$E_{\rm det} = Q\, E_R + E_{\rm em} = Q\, E_R + E_e + E_{nl}, \qquad (33)$$
where Q is the quenching factor, which depends on the nuclear recoil energy. The quenching factor differs between target nuclei; a series of measurements exists for xenon [115,116], and here we take a fixed Q = 0.15 [36]. Finally, we obtain the detected energy spectrum,
$$\frac{dR}{dE_{\rm det}\, dv\, dE_R\, dE_{\rm em}} \simeq \frac{dR}{dE_R\, dE_{\rm em}\, dv} \times \delta(E_{\rm det} - Q E_R - E_{\rm em}). \qquad (34)$$
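A minimal sketch of Eqs. 33-34: the delta function simply re-bins a double-differential rate into the detected energy E_det = Q E_R + E_em. The toy rate below is a placeholder, not the physical Migdal rate.

```python
import numpy as np

Q = 0.15                                  # fixed quenching factor [36]

def d2R_toy(E_R, E_em):
    """Placeholder double-differential rate (arbitrary units)."""
    return np.exp(-E_R / 1.0) * np.exp(-E_em / 0.5)

E_R = np.linspace(0.0, 10.0, 400)         # nuclear recoil energy, keV
E_em = np.linspace(0.0, 5.0, 200)         # electromagnetic energy, keV_ee
dER, dEem = E_R[1] - E_R[0], E_em[1] - E_em[0]

E_det_bins = np.linspace(0.0, 5.0, 51)    # detected energy, keV_ee
spectrum = np.zeros(len(E_det_bins) - 1)

# The delta function of Eq. (34) becomes a histogram fill over E_det.
for er in E_R:
    e_det = Q * er + E_em                 # vectorized over E_em
    w = d2R_toy(er, E_em) * dER * dEem
    spectrum += np.histogram(e_det, bins=E_det_bins, weights=w)[0]

print(spectrum[:5])                       # rate in the first few E_det bins
```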
B. Numerical Results and Discussions
To facilitate comparison with other results, we define the DM-nucleon cross section at transfer momentum q = 0 as
$$\bar\sigma_{\chi n}(q = 0) \equiv \frac{\left(c^n_i\right)^2 \mu^2_{\chi n}}{\pi}. \qquad (35)$$
To compensate for the dimension of the coefficient c^n_i ∼ (Energy)⁻², we maintain the convention of normalizing it to the inverse square of the electroweak interaction strength, (m_v = 246.2 GeV)⁻², in our calculations. In Fig. 2, we calculate the nuclear recoil spectrum of inelastic dark matter-nucleus scattering to indicate to what extent the kinematics of the iDM-nucleus system affects the event rate. For simplicity, we consider dark matter with mass splitting δ and mass m_χ = 1 GeV coupled to protons only. We show the two cases of inelastic scattering on a xenon target for the individual spin-dependent operator O_4: the left panel is endothermic scattering, and the right panel is exothermic scattering. We scale the strength of the SD interaction to c^p_i = 10⁴, corresponding to the reference cross section σ_χp ∼ 10⁻³⁰ cm² for the SD interaction at m_χ = 1 GeV, and the values of the nuclear recoil energy E_R^max (E_R^min) correspond to θ = π (θ = 0) in Eq. 12. For endothermic scattering, based on Eq. 8, it can be seen that with increasing δ, E_R^max decreases and E_R^min increases, and the peak nuclear recoil rate is reduced accordingly. This indicates that kinematically elastic scattering is more favourable than endothermic scattering, and this becomes more pronounced as δ grows. For m_χ = 1 GeV, the maximum available initial kinetic energy is about 3.2 keV; when δ is larger than this value, no rate can be generated, as shown in the left panel of the figure. On the other hand, in exothermic scattering with δ < 0, because the maximum incoming velocity of the DM is fixed, the peak of the recoil spectrum does not drop, and both E_R^min and E_R^max increase with increasing |δ|. This illustrates that for larger |δ|, the scattering is more (less) kinematically favored at sufficiently small (large) energies.
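These kinematic statements can be checked directly from Eq. 12. In the sketch below, the choice v_max = v_esc + v_E for the maximal lab-frame speed is our own assumption, so the printed maximal kinetic energy (≈3.5 keV) only brackets the ~3.2 keV quoted above:

```python
import numpy as np

C_KMS, AMU = 2.998e5, 0.9315
m_chi, A = 1.0, 131                       # GeV; xenon target
m_N = A * AMU
mu = m_chi * m_N / (m_chi + m_N)
v_max = (544.0 + 250.0) / C_KMS           # assumed maximal lab-frame speed

def ER_endpoints_keV(delta_keV, v=v_max):
    """Recoil endpoints of Eq. (12): theta = pi gives E_R^max, 0 gives E_R^min.
    Returns None when the channel is kinematically closed."""
    d = delta_keV * 1e-6                  # keV -> GeV
    disc = 1.0 - 2.0 * d / (mu * v**2)
    if disc < 0.0:
        return None
    root = np.sqrt(disc)
    pref = mu**2 * v**2 / m_N
    E_R = lambda ct: (pref * (1.0 - ct * root) - mu * d / m_N) * 1e6  # keV
    return E_R(-1.0), E_R(+1.0)

print("E_kin^max ~", round(0.5 * mu * v_max**2 * 1e6, 2), "keV")
for d in (0.0, 1.0, 3.0, -5.0, -20.0):
    print(f"delta = {d:+.0f} keV ->", ER_endpoints_keV(d))
```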
In xenon-based detectors, the ionized electrons produced by the Migdal effect can be detected, so in Fig. 3 we depict the differential event rates induced by the SD (O_4) interaction with xenon as a function of the detected energy (in units of keV electron-equivalent, keV_ee), for an incoming DM particle of m_χ = 1 GeV. Note that, to facilitate comparisons, we scale the coupling strength to 10⁴, as for the nuclear recoil spectra above. To illustrate the characteristics, the solid black line represents the nuclear recoil spectrum, the coloured solid lines represent the Migdal scattering rates from the electron energy levels labelled by n, and the dashed lines show the effect of different mass splittings δ on the same electron shell. The Migdal scattering rate depends on the electron energy level n, because electrons in the outer shells are more easily excited or ionized; in contrast, electrons in the inner shell n = 3 require higher energies, about 0.7 keV_ee as shown in Fig. 3, to be excited or ionized. Secondly, comparing the Migdal rates for the same electron energy level, we see that for endothermic scattering there is an overall decrease of the recoil spectrum, and the opposite for exothermic scattering, but in neither case is there an abrupt change in the shape of the spectrum. The peak of the Migdal rate is set by the binding energies of the different levels n; consequently it is determined by n and is largely independent of the dark matter parameters.
Although we include the nuclear recoil spectrum in Fig. 3, it serves only as a comparison for the Migdal spectrum (E_det = Q E_R for elastic nuclear recoil). For m_χ = 1 GeV dark matter, the Migdal rate becomes the dominant rate for E_det above 10⁰ keV_ee, implying that at lower detector thresholds the Migdal effect provides an effective window for exploring low-mass dark matter.
In direct detection experiments, the number of events is closely related to the dark matter scattering cross section. Therefore, having calculated the rates, we use the results for the Migdal effect after endothermic (exothermic) scattering to place new limits on the spin EFT operator O_4 for low-mass dark matter, based on the data provided by the XENON1T experiment. This experiment records two main signals: primary scintillation light (S1), which is generated by the recoil and detected directly, and delayed proportional scintillation (S2), which is measured as a proportional signal when a drifting electron is extracted into the gas phase. The signals S1 and S2 together allow discrimination between nuclear and electronic recoils, with electron recoils producing events with larger S2/S1 than nuclear recoils.
Here we use the single-ionisation-channel S2-only data set from XENON1T [2] for the analysis. The S2-only case reduces the background discrimination and lifetime, but it allows a lower threshold to enter the analysis; it does not distinguish between nuclear and electronic recoils, and thus establishes cross-section bounds for the different low-mass DM scenarios. Based on the experimental thresholds of XENON1T, it is reasonable to integrate the event rate over the range E_det = 0.186-3.8 keV_ee in a single-bin analysis (no signal is seen below 0.186 keV) and to take into account the energy-dependent efficiency. At an exposure of 22 tonne·day, the expected number of background events was n_exp = 23.4, while the total number of observed events was n_obs = 61. From these data, the profile likelihood ratio [117] gives an upper limit of 48.9 on the number of expected dark matter events at 90% C.L. The latest liquid xenon (LXe) detector, LUX-ZEPLIN (LZ), has a higher sensitivity to nuclear recoil energies at the O(keV) level [118,119]; we therefore also project the sensitivity of the LZ experiment assuming an exposure of 5.6 × 1000 tonne·day. We use the energy-dependent efficiency of XENON1T, integrate over the range E_det = 0.5-4 keV_ee [24,118,119], assume an expected background rate of 2.5 × 10⁻⁵ /kg/day/keV from ²²⁰Rn with an uncertainty of 15% [119], and finally obtain an upper limit on the expected number of events of 79.6.
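The quoted upper limit can be roughly reproduced with a single-bin counting version of the profile likelihood ratio [117]. The sketch below neglects the background uncertainty, so it lands slightly below the quoted 48.9 events:

```python
import numpy as np
from scipy.optimize import brentq

n_obs, b = 61.0, 23.4                     # XENON1T S2-only counts

def lnL(s):
    """Poisson log-likelihood for signal s on background b (constants dropped)."""
    lam = s + b
    return n_obs * np.log(lam) - lam

s_hat = max(0.0, n_obs - b)               # unconditional best fit

def q_mu(s):
    """One-sided test statistic q_mu = -2 ln[L(s)/L(s_hat)]."""
    return 2.0 * (lnL(s_hat) - lnL(s))

z90 = 1.2816                              # one-sided 90% C.L. quantile
s_up = brentq(lambda s: q_mu(s) - z90**2, s_hat, 300.0)
print("90% C.L. upper limit on signal events:", round(s_up, 1))  # ~48
```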
In Fig. 4, using the S2-only data from XENON1T, we compare the effects of the three velocity distribution models on the iDM-nucleus Migdal scattering cross section for the spin-dependent interaction. We mark the SHM with solid lines, the Empirical model with dashed lines and the Tsallis model with dash-dotted lines. As an overall trend, the bounds of the Tsallis model are weaker than those of the SHM and the Empirical model for all three cases: endothermic (δ = 5 keV), elastic (δ = 0 keV), and exothermic (δ = −5 keV), the difference being more apparent in the elastic and endothermic cases, where it can reach an order of magnitude. The Empirical model is only slightly stronger than the SHM limit above a DM mass of 0.1 GeV (for δ = 0 keV). These features can be traced back to Fig. 1, where the η(v_min) of the Tsallis model falls more rapidly above about 300 km/s: for larger v_min (see Eq. 13), the smaller the value of η(v_min), the weaker the resulting bounds. Returning to Eq. 13, exothermic scattering makes it easier for DM with masses in the 10⁻³-1 GeV region to fall in the low-velocity region; the three models almost overlap for DM masses below 1 GeV, indicating that lighter DM retains more flexibility in the choice of VDF. Compared with the spin-independent results of Ref. [24], our spin-dependent Migdal scattering cross sections are much weaker but have a similar slope. In Fig. 5, for the spin-dependent operator O_4, after accounting for elastic, exothermic and endothermic interactions, we show the constraints on the cross section from XENON1T and LZ for couplings to protons and to neutrons alone, at 90% C.L. Even for different couplings, the shapes of the bounds are quite similar, and elastic scattering provides a good reference: the Migdal effect gives stronger limits than elastic nuclear recoil below m_χ ∼ 3 GeV, beneath which the bounds all originate from the Migdal effect and extend to lower dark matter masses. In fact, the crossover point between the Migdal and nuclear recoil bounds is determined by the threshold reached by the detector. Under the spin-dependent operator O_4, the degree of constraint on the proton and neutron cross sections differs: the proton cross section is more weakly constrained than the neutron one, since xenon has an even number of protons (Z = 54), so the proton spin expectation value is small after the intrinsic spin magnetic moments cancel one another. Our work discusses the SD interactions of the O_4 operator and uses the ¹³¹Xe nuclear form factor from Ref. [102]. In the formalism of Haxton et al., the nuclear physics is encoded in the nuclear response functions. It is worth noting that the nuclear form factor we use includes the effect of the one-body current, and a truncation of the valence space is made in the calculation. Recently, in Ref. [120], Klos et al. applied large-scale shell-model calculations to evaluate the nuclear form factors for DM-nucleon SD interactions with chiral one- and two-body currents (1BCs and 2BCs), and B. S. Hu et al. [121] used the valence-space formulation of the in-medium similarity renormalization group to calculate ab initio spin-dependent form factors for all nuclei currently used in direct detection searches. For ¹³¹Xe, the form factors obtained in Ref. [121] are consistent with the results of Ref. [120] at the 2BC level. By comparison with Ref. [120], we find that the ¹³¹Xe neutron form factors we use are slightly larger at momenta q ∼ 100 MeV, but the overall difference between the curves is not significant. There is, however, a large difference in the proton form factor, for which the chiral 2BCs lead to a rather significant enhancement. With the form factor of Ref. [121], the LZ collaboration reported their exclusion limits for SD interactions [122]. Since we use the nuclear form factor of Ref. [102], the difference between our "proton-only" and "neutron-only" results is nearly 10³, instead of the factor of 30 in Ref. [122].
On the one hand, we notice that the bound for endothermic scattering lies very close to that for elastic scattering. According to Eq. 22, as the mass splitting δ increases in endothermic scattering, the approximate maximum energy available to the Migdal electron decreases, so the bound loses sensitivity to low-mass dark matter more rapidly. On the other hand, Migdal electrons in exothermic scattering can acquire more energy, so they retain above-threshold sensitivity at lower DM masses. Essentially, Migdal electrons are easily excited above the threshold because for DM with masses below a GeV, ½ m_χ v² ∼ O(keV); with |δ| ∼ O(keV), there is then a significant enhancement effect. For DM masses below m_χ ∼ 7 MeV, the limiting boundary of LZ is weaker than that of XENON1T, since we assume a higher threshold for LZ (E_det ≥ 0.5 keV_ee) than for the S2-only analysis of XENON1T. We also note that an S2-only analysis of the LZ experiment could improve the sensitivity to lighter dark matter masses; however, this would require knowing more about the achievable thresholds, backgrounds, and exposures of such an analysis.
In addition, we set a cut-off value for E_R in the Migdal process, as mentioned in Ref. [123]. In calculating the ionisation function, for the impulse approximation to hold, it is necessary that the collision time t_collision ∼ 1/E_R be shorter than the time t_traverse ∼ 1/ω_ph (ω_ph is the phonon frequency) for the atom to traverse its potential well. For sufficiently small DM masses, the recoil energy falls below this cut-off and the Migdal rate is no longer valid, so this value has a relatively large impact on the detector threshold as well as on low dark matter masses. Referring to the method of Ref. [24], we use the time t ∼ O(10⁻¹²) s required for a xenon atom to traverse the average interatomic distance at 170 K at the speed of sound as the cut-off time, and conservatively set E_R^cut ≥ 50 meV. Thus, we can place a lower limit on the dark matter mass probed: elastic scattering corresponds to 0.02 GeV, while exothermic (endothermic) scattering depends on the mass splitting δ = −5 keV (5 keV), giving 0.001 GeV (0.36 GeV).
IV. INELASTIC DARK MATTER-ELECTRON SCATTERING
This section will investigate inelastic dark matter-electron scattering [30,57-59,124] in a non-relativistic effective field theory. We will briefly discuss the relevant kinematics and derive the formulae for our calculations.
A. Calculations
According to our previous description, inelastic dark matter-electron scattering closely parallels the inelastic dark matter-nucleus scattering process described above. The Migdal electron spectrum is evaluated at the effective transfer momentum q_e ≡ (m_e/m_N) q, which is the most significant difference from direct electron scattering. For energy conservation in the iDM-electron scattering process, it is simple to rewrite Eq. 21 as E_em + δ = q·v − q²/(2μ_χe), with μ_N → μ_χe (μ_χe being the reduced mass of the DM-electron system). Furthermore, when the maximum incoming velocity v_max of the DM is fixed, we can determine the range of allowed momentum transfers. The minimum and maximum momentum transfer are
$$q_{\min} = \mathrm{sign}(E_{\rm em} + \delta)\; m_\chi v_{\max}\left[1 - \sqrt{1 - \frac{2\,(E_{\rm em}+\delta)}{m_\chi v^2_{\max}}}\,\right],$$
$$q_{\max} = m_\chi v_{\max}\left[1 + \sqrt{1 - \frac{2\,(E_{\rm em}+\delta)}{m_\chi v^2_{\max}}}\,\right]. \qquad (36)$$
In the limit δ → 0 Eq. 36 reduces to elastic scattering.
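A small sketch of Eq. 36 (again taking v_max = v_esc + v_E as our own assumption) shows how the momentum-transfer window opens maximally at E_em = |δ| for exothermic scattering:

```python
import numpy as np

C_KMS = 2.998e5
v_max = (544.0 + 250.0) / C_KMS           # assumed maximal lab-frame speed

def q_window_keV(E_em_keV, delta_keV, m_chi_MeV):
    """Allowed momentum-transfer range (q_min, q_max) of Eq. (36), in keV.
    Returns None when the process is kinematically closed."""
    m_chi = m_chi_MeV * 1e3               # MeV -> keV
    etot = E_em_keV + delta_keV
    disc = 1.0 - 2.0 * etot / (m_chi * v_max**2)
    if disc < 0.0:
        return None
    root = np.sqrt(disc)
    q_min = np.sign(etot) * m_chi * v_max * (1.0 - root)
    q_max = m_chi * v_max * (1.0 + root)
    return q_min, q_max

# Exothermic kinematics: the window is widest (q_min = 0) at E_em = |delta|.
for E_em in (0.5, 1.0, 2.0):
    print("E_em =", E_em, "keV:", q_window_keV(E_em, -1.0, 1000.0))
```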
Similarly to the nuclear case, we introduce an effective field theory for iDM-electron scattering following the work of Catena et al. [42]. In this formalism, the active degrees of freedom are the DM particles and electrons. The symmetry governing non-relativistic DM-electron scattering is Galilean invariance, replacing the Lorentz invariance of relativistic boosts. The invariant amplitude of DM-electron scattering can thus still be represented by a series of operators built from Galilean invariants.
In this EFT there are again four three-momentum Galilean invariants: q, S_χ, S_e, and v^⊥_inel. For the inelastic case, v^⊥_inel is defined as
$$\vec v^{\,\perp}_{\rm inel} \equiv \vec v + \frac{\vec q}{2\mu_{\chi e}} + \frac{\Delta}{|\vec q\,|^2}\,\vec q. \qquad (37)$$
By energy conservation in the iDM-electron scattering process, v^⊥_inel · q = 0. Compared with the definition in Eq. 14, the inelastic electron-scattering case simply replaces μ_N → μ_χe. Again, this modification appears only in the DM particle response functions R^{nl}_i. The operator O_4 = S_e · S_χ does not involve v^⊥_inel, so we can still use the results of the elastic calculation in Ref. [42].
It is worth noting that in this EFT the invariant scattering amplitude M(q, v^⊥_inel) of the DM-electron system does not depend explicitly on the characteristics of a specific mediator particle. The formalism applies when the mediator mass is much larger than the transfer momentum, m²_med ≫ q² (contact interaction), or much smaller, m²_med ≪ q² (long-range interaction) [125]. To summarise, the free amplitude of non-relativistic iDM-electron scattering is expressed as
$$\mathcal M(\vec q, \vec v^{\,\perp}_{\rm inel}) = \sum_i \left(c^s_i + c^l_i\, \frac{q^2_{\rm ref}}{|\vec q\,|^2}\right)\mathcal O_i, \qquad (38)$$
where the reference momentum is q_ref ≡ α m_e with α = 1/137, and the coefficient c^s_i (c^l_i) parametrizes the contact (long-range) interaction of the DM particle with the electron.
To obtain the total event rate of interest, we first write the DM-induced transition rate for an electron from initial state |e_1⟩ to final state |e_2⟩,
$$R_{1\to 2} = \frac{n_\chi}{16\, m^2_\chi m^2_e} \int \frac{d^3 q}{(2\pi)^3} \int d^3 v\, f(v)\, (2\pi)\, \delta(E_f - E_i)\, |\mathcal M_{1\to2}|^2, \qquad (39)$$
where n_χ = ρ_χ/m_χ is the local DM number density, E_f (E_i) is the final (initial) state energy of the system, and the δ function ensures energy conservation in this process. |M_{1→2}|² is defined as the squared electron transition amplitude [42],
$$|\mathcal M_{1\to2}|^2 = \left|\int \frac{d^3 k}{(2\pi)^3}\; \psi^*_2(\vec k + \vec q)\; \mathcal M(\vec q, \vec v^{\,\perp}_{\rm inel})\; \psi_1(\vec k)\right|^2. \qquad (40)$$
Here ψ_1 and ψ_2 represent the electron initial- and final-state wave functions, respectively, and this equation has been averaged (summed) over the initial (final) spin states. Then we can write down the iDM-electron scattering differential event rate including the full atomic orbitals,
$$\frac{dR}{d\ln E_e} = N_T \sum_{m=-l}^{l}\sum_{l'=0}^{\infty}\sum_{m'=-l'}^{l'} \frac{V k'^3}{(2\pi)^3}\, R_{1\to2} = \frac{N_T\, n_\chi}{128\pi\, m^2_\chi m^2_e} \int dq\; q \int d^3 v\; \frac{f(v)}{v}\; \Theta(v - v_{\min})\; |\mathcal M^{nl}_{\rm ion}|^2, \qquad (41)$$
where N_T is the number of target atoms, V = (2π)³ δ³(0) is the normalization phase-space factor [126], and Θ is a step function ensuring that the incoming DM speed is large enough to cause the electron recoil. |M^{nl}_{ion}|² is the electron ionisation amplitude squared, defined as
$$|\mathcal M^{nl}_{\rm ion}|^2 \equiv \frac{4 k'^3 V}{(2\pi)^3} \sum_{m=-l}^{l}\sum_{l'=0}^{\infty}\sum_{m'=-l'}^{l'} |\mathcal M_{1\to2}|^2 = \sum_{i=1}^{4} R^{nl}_i\!\left(\vec v^{\,\perp}_{\rm inel}, \frac{\vec q}{m_e}\right) W^{nl}_i(k', q), \qquad (42)$$
where the step from the first expression to the second is a Taylor expansion of M(q, v^⊥_inel) around k = 0, after which the result is expressed as a product of DM particle response functions R^{nl}_i and associated atomic response functions W^{nl}_i. This approach allows a more transparent examination of the DM-electron scattering process.
In fact, four atomic response functions can be derived in the formalism of Ref. [42]. In our work, only W^{nl}_1 is needed,
$$W^{nl}_1(k', q) \equiv \frac{4 k'^3 V}{(2\pi)^3} \sum_{m=-l}^{l}\sum_{l'=0}^{\infty}\sum_{m'=-l'}^{l'} |f_{1\to2}(\vec q)|^2. \qquad (43)$$
W^{nl}_1 is in fact the ionization form factor commonly used in the light-dark-matter detection literature. The f_{1→2}(q) appearing above is the scalar atomic form factor
$$f_{1\to2}(\vec q) = \int \frac{d^3 k}{(2\pi)^3}\; \psi^*_2(\vec k + \vec q)\; \psi_1(\vec k). \qquad (44)$$
Corresponding to our calculation, the DM response function is R^{nl}_1 ≡ [j_χ(j_χ+1)/12] · 3c_4².
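To show how Eqs. 41-42 are assembled in practice for the single operator O_4, here is a schematic quadrature. The atomic response W_1^{nl} is replaced by a toy placeholder shape (the physical tables come from the formalism of Ref. [42]), the level binding energy is ignored, and the overall normalization is arbitrary, so only the qualitative behaviour is meaningful:

```python
import numpy as np

C_KMS = 2.998e5
v0, vesc = 238.0 / C_KMS, 544.0 / C_KMS       # SHM parameters, units of c
M_E = 511.0                                   # electron mass, keV

def eta_shm(vmin):
    """Mean inverse speed of a truncated Maxwellian (Galactic rest frame)."""
    v = np.linspace(1e-6, vesc, 2000)
    f = np.exp(-(v / v0)**2)
    norm = np.trapz(4 * np.pi * v**2 * f, v)
    return np.trapz(np.where(v > vmin, 4 * np.pi * v * f, 0.0), v) / norm

eta_vec = np.vectorize(eta_shm)

def W1_toy(q_keV):
    return np.exp(-q_keV / 5.0)               # placeholder, dimensionless

def dR_dlnEe(E_e_keV, m_chi_keV, delta_keV, c4=1e-5, jchi=0.5):
    """Shape of the ionization spectrum, Eq. (41), up to normalization.
    The level binding energy is ignored here (E_em ~ E_e)."""
    R1 = jchi * (jchi + 1.0) / 12.0 * 3.0 * c4**2
    mu = m_chi_keV * M_E / (m_chi_keV + M_E)  # DM-electron reduced mass
    q = np.linspace(0.1, 200.0, 400)          # momentum transfer, keV
    vmin = q / (2.0 * mu) + (E_e_keV + delta_keV) / q
    return np.trapz(q * eta_vec(vmin) * R1 * W1_toy(q), q)

# Exothermic scattering (delta < 0) opens the window down to q_min ~ 0.
print(dR_dlnEe(1.0, m_chi_keV=1.0e6, delta_keV=-1.0))
```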
B. Numerical Results and Discussions
The non-relativistic effective theory of iDM-electron interactions described in the previous subsection culminates in a general expression for the electron ionization energy spectrum of isolated atoms, Eq. 41. This almost model-independent framework, together with the general scattering amplitude built from effective operators in Eq. 42, allows us to make predictions for direct searches for sub-GeV DM particles. For comparison purposes, we follow the formalism of Ref. [42] and define a reference cross section for the electron,
$$\bar\sigma_e \equiv \frac{\mu^2_{\chi e}\, c^2_i}{16\pi\, m^2_\chi m^2_e}. \qquad (45)$$
This definition differs from the nucleon reference cross section in that c_i needs no additional dimensional compensation. The contact and long-range interactions can then be identified through the individual EFT operators and the EFT coefficients in Eq. 38. In particular, we account for the effects of inelasticity when comparing the electron ionization events induced within the detector threshold.
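As a convention check of Eq. 45 (a sketch with our own unit bookkeeping), the benchmark c_4 = 10⁻⁵ used below for Fig. 6 indeed lands near σ̄_e ∼ 10⁻⁴⁰ cm² for m_χ = 1 GeV:

```python
import numpy as np

HBARC_CM_KEV = 1.9733e-8                 # hbar*c = 197.33 MeV*fm in cm*keV

def sigma_e_cm2(c4, m_chi_keV, m_e_keV=511.0):
    """Reference cross section of Eq. (45), converted from 1/keV^2 to cm^2."""
    mu = m_chi_keV * m_e_keV / (m_chi_keV + m_e_keV)
    sigma = mu**2 * c4**2 / (16.0 * np.pi * m_chi_keV**2 * m_e_keV**2)
    return sigma * HBARC_CM_KEV**2

# c_4 = 1e-5 at m_chi = 1 GeV gives ~8e-40 cm^2, i.e. O(1e-40) cm^2,
# matching the benchmark quoted for Fig. 6.
print(sigma_e_cm2(1e-5, 1.0e6))
```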
In Fig. 6, we use δ = 0, −1, 0.5 keV as fiducial parameters to show the differential event rates of exothermic, elastic, and endothermic scattering for different DM masses scattering off electrons through contact and long-range interactions under the individual operator O_4. Here we set the coefficient c_4 = 10⁻⁵ for O_4, corresponding to the spin cross section σ^SD_e ∼ O(10⁻⁴⁰) cm², and assume that the DM particles obey the SHM velocity distribution. In the bottom panel of Fig. 6, only the results of exothermic scattering are shown, because DM with mass m_χ ≤ 1 MeV cannot produce enough recoil energy to yield detectable electrons in elastic and endothermic scattering. For exothermic scattering, the event spectrum in the top panel of Fig. 6 shows a sharp peak at E_em = |δ|. This can be understood from Eq. 36: when DM with mass m_χ = 1 GeV produces an electron recoil energy E_em = |δ|, the lower limit of the transfer momentum is q_min = 0 and the upper limit is q_max = 2m_χ v_max, giving the maximal integration interval. This results in a significant enhancement of the scattering rate through the integration of the ionization function over q; this enhancement is a characteristic feature of exothermic scattering. Furthermore, in iDM-electron scattering the ionization event rate for endothermic scattering is severely suppressed compared to elastic scattering, more so for long-range interactions, because elastic and endothermic scattering produce typical recoil energies μ_χe v²_max ∼ O(eV); endothermic scattering therefore has no better sensitivity than elastic scattering for typical xenon-type detectors. It should be noted that the factor of 3 in the DM response function R^{nl}_1 ≡ [j_χ(j_χ+1)/12] · 3c_4² for the operator O_4 is based on the assumption of the non-relativistic and independent-particle approximations [42]. Including many-body effects and relativistic corrections, this factor changes mildly with the electron energy [127]. For simplicity, we use a constant factor of 3 in the calculations.
Finally, analyses similar to those above are used to derive electron scattering cross sections consistent with the XENON1T S2-only data and to project limits for the future LZ experiment (90% C.L.). The three velocity distribution models are again taken into account, and we keep the exothermic (endothermic) scattering parameter δ = −1 keV (0.5 keV) to demonstrate the inelastic effect on the individual effective spin-dependent operator O_4 in the contact/long-range interaction, as shown in Fig. 7.
The iDM-electron scattering bounds resemble the Migdal behaviour in Fig. 5: exothermic scattering retains more sensitivity to low-mass dark matter, while endothermic scattering shows the opposite trend. This reflects the similarity between Migdal and electron scattering discussed previously, the transfer momentum q being the crucial difference between them (reflected in the different regions of the ionization function probed). As mentioned previously, the effect of the velocity distribution remains slight, and only the Tsallis model differs from the other two at the high-velocity tail. Besides, we emphasize that the contact and long-range interactions differ by a factor (αm_e/q)⁴, which makes the difference between the two sets of results quite significant. For exothermic scattering in the long-range interaction, heavier DM masses (m_χ ≳ 0.5 MeV) and larger |δ| lead to larger transfer momenta q, resulting in a significant relative suppression; conversely, for m_χ ≲ 0.5 MeV this effect is less pronounced.
V. CONCLUSION
Although experimental work in direct dark matter detection has yielded impressive results in exploring the DM parameter space, the detection of sub-GeV dark matter remains a significant challenge. For sub-GeV dark matter, the electron spectrum induced by the Migdal effect and by DM-electron scattering provides a detectable window for direct detection experiments near their low thresholds. However, the features of inelastic dark matter and the velocity distribution functions of different dark matter halos can critically affect this electron spectrum. In this paper, we consider inelastic dark matter characterised by a mass splitting δ and adopt the Tsallis, Empirical, and Standard Halo models for the velocity distribution function. We use a concise non-relativistic effective field theory to study the Migdal effect and electron scattering induced by inelastic dark matter through the spin-dependent interaction. With data from XENON1T, we derive limits on the inelastic dark matter-nucleus Migdal and electron scattering cross sections. In the analysis of the Migdal effect, we adopted an oversimplified nuclear form factor, which makes the "proton-only" and "neutron-only" cross sections differ by a factor of about 1000. For our choice of astronomical parameters, the Tsallis model can yield limits on the cross section that differ by up to an order of magnitude from those of the other two models.
We selected some currently proposed astrophysical parameters as benchmark values [82] to compare the effects of DM halo models in different scattering processes and obtained conservative results. These results will become sharper with the inflow of data from ongoing and upcoming DM direct detection experiments; one will then be able to compare our results with the new data to constrain the astrophysical parameters of DM particles and to discriminate among potential DM halo models.
Finally, our work considers the single spin operator O_4; more complete interaction models should be discussed. Such models may also induce spin operators with velocity dependence. In future work, after accounting for the changes iDM brings to the velocity operator v^⊥_inel, velocity-dependent operators such as O_7, O_12 and O_14 [42,106] may impose further constraints on the parameter space of sub-GeV DM.
VI. ACKNOWLEDGEMENTS
We thank Jayden L. Newstead and James Blackman Dent for the code and for helpful discussions on inelastic dark matter scattering in the EFT context. This work is supported by the National Natural Science Foundation of China (NNSFC) under grant No. 12275134 and by the Natural Science Foundation of Shandong Province under grant ZR2018QA007.
FIG. 1 .
The η(v_min) derived after integration over the velocity distribution, which varies with the parameters of the three models. For fixed astrophysical parameters v_0 = 238 km/s, v_E = 250 km/s and v_esc = 544 km/s, the orange line represents the Empirical model (p = 1.5), the green the Standard Halo Model, and the purple the Tsallis model (q_s = 0.809).
FIG. 2 .
The nuclear recoil spectrum for inelastic dark matter scattering off xenon atoms in the Standard Halo Model through the spin-dependent interaction. Assuming m_χ = 1 GeV, the left panel shows endothermic iDM-nucleus scattering, with solid lines of different colours depicting δ = 0 (blue), 1 (orange), 2 (green), and 3 keV (red). The right panel shows exothermic scattering, with δ = −2 (blue), −5 (orange), −20 (green), and −50 keV (red).
FIG. 3 .
The rate of Migdal events induced by a dark matter particle of mass m_χ = 1 GeV scattering off the nucleus through the spin-dependent interaction for xenon targets in the SHM. Coloured solid lines depict the contributions of the atomic energy levels labelled by n. For n = 4, we depict endothermic (exothermic) Migdal scattering with a blue (green) dashed line, where δ = 2 keV (δ = −5 keV). The solid black line depicts the elastic nuclear recoil.
FIG. 4 .
Constraints (90% C.L.) from the three velocity distribution functions on the iDM-nucleus Migdal effect for the spin-dependent interaction. We depict the bounds of the Tsallis, Empirical and Standard Halo Models with dash-dotted, dashed and solid lines, respectively. The coloured lines depict the impact of different DM mass splittings on the bounds for each velocity distribution, with δ = −5, 0, 5 keV. The left panel represents the interaction of DM coupling to protons only, and the right panel coupling to neutrons only.
FIG. 5 .
The 90% C.L. limits on the iDM-nucleus scattering cross section of the spin-dependent interaction from XENON1T on nuclear recoils (dashed), XENON1T with the Migdal effect (solid) and the projected LZ sensitivity with the Migdal effect (dash-dotted). The coloured lines depict the different scattering processes: δ = 5 keV for endothermic (purple), δ = −5 keV for exothermic (green), and δ = 0 keV for elastic scattering (orange). We plot the scattering cross sections of the iDM coupling to protons and to neutrons in the left and right panels, respectively.
FIG. 6 .
The differential event rate of electron scattering for DM of different masses under the spin-dependent operator O_4 via the contact (left) and long-range (right) interaction, respectively. In the top panel, endothermic (red), elastic (green), and exothermic (orange) scattering are depicted with solid lines corresponding to δ = 0.5, 0, −1 keV, respectively. The bottom panel shows the differential event rate of exothermic (δ = −1 keV) scattering for DM masses m_χ = 0.1 MeV (blue) and m_χ = 1 MeV (purple).
FIG. 7 .
The 90% C.L. constraints on the iDM-electron cross section σ̄_e versus the DM mass m_χ from the spin-dependent contact (top) and long-range (bottom) interaction for different mass splittings δ and the three velocity distribution models. We use the S2-only data from XENON1T for the analysis and project the bounds of the LZ experiment. The processes are exothermic with δ = −1 keV (green), elastic with δ = 0 keV (orange), and endothermic with δ = 0.5 keV (purple) scattering, for the Standard Halo Model (left panel), Tsallis Model (middle panel), and Empirical Model (right panel).
. E Aprile, XENON10.1103/PhysRevLett.121.111302arXiv:1805.12562Phys. Rev. Lett. 121111302astroph.COE. Aprile et al. (XENON), Phys. Rev. Lett. 121, 111302 (2018), arXiv:1805.12562 [astro- ph.CO].
. E Aprile, XENON10.1103/PhysRevLett.123.251801arXiv:1907.11485Phys. Rev. Lett. 123251801hep-exE. Aprile et al. (XENON), Phys. Rev. Lett. 123, 251801 (2019), arXiv:1907.11485 [hep-ex].
. D Akimov, COHERENT10.1126/science.aao0990arXiv:1708.01294Science. 3571123nucl-exD. Akimov et al. (COHERENT), Science 357, 1123 (2017), arXiv:1708.01294 [nucl-ex].
. E Aprile, XENON10.1103/PhysRevD.102.072004arXiv:2006.09721Phys. Rev. D. 10272004hep-exE. Aprile et al. (XENON), Phys. Rev. D 102, 072004 (2020), arXiv:2006.09721 [hep-ex].
. R Agnese, SuperCDMS10.1103/PhysRevLett.121.051301arXiv:1804.10697Phys. Rev. Lett. 12169901Phys.Rev.Lett.. hep-exR. Agnese et al. (SuperCDMS), Phys. Rev. Lett. 121, 051301 (2018), [Erratum: Phys.Rev.Lett. 122, 069901 (2019)], arXiv:1804.10697 [hep-ex].
. M Crisler, R Essig, J Estrada, G Fernandez, J Tiffenberg, M Sofo Haro, T Volansky, T.-T Yu, 10.1103/PhysRevLett.121.061803arXiv:1804.00088Phys. Rev. Lett. 12161803hep-exM. Crisler, R. Essig, J. Estrada, G. Fernandez, J. Tiffenberg, M. Sofo haro, T. Volansky, and T.-T. Yu (SENSEI), Phys. Rev. Lett. 121, 061803 (2018), arXiv:1804.00088 [hep-ex].
. X Ren, PandaX-II10.1103/PhysRevLett.121.021304arXiv:1802.06912Phys. Rev. Lett. 12121304hep-phX. Ren et al. (PandaX-II), Phys. Rev. Lett. 121, 021304 (2018), arXiv:1802.06912 [hep-ph].
. P Agnes, DarkSide10.1103/PhysRevLett.121.081307arXiv:1802.06994Phys. Rev. Lett. 12181307astroph.HEP. Agnes et al. (DarkSide), Phys. Rev. Lett. 121, 081307 (2018), arXiv:1802.06994 [astro- ph.HE].
. D S Akerib, LUX10.1103/PhysRevLett.122.131301arXiv:1811.11241Phys. Rev. Lett. 122131301astroph.COD. S. Akerib et al. (LUX), Phys. Rev. Lett. 122, 131301 (2019), arXiv:1811.11241 [astro- ph.CO].
. L Barak, SENSEI10.1103/PhysRevLett.125.171802arXiv:2004.11378Phys. Rev. Lett. 125171802astro-ph.CO L. Barak et al. (SENSEI), Phys. Rev. Lett. 125, 171802 (2020), arXiv:2004.11378 [astro-ph.CO].
. Q Arnaud, EDELWEISS10.1103/PhysRevLett.125.141301arXiv:2003.01046Phys. Rev. Lett. 125141301astro-ph.GAQ. Arnaud et al. (EDELWEISS), Phys. Rev. Lett. 125, 141301 (2020), arXiv:2003.01046 [astro-ph.GA].
. E Armengaud, EDELWEISS10.1103/PhysRevD.99.082003arXiv:1901.03588Phys. Rev. D. 9982003astro-ph.GAE. Armengaud et al. (EDELWEISS), Phys. Rev. D 99, 082003 (2019), arXiv:1901.03588 [astro-ph.GA].
. D S Akerib, LUX10.1103/PhysRevLett.118.021303arXiv:1608.07648Phys. Rev. Lett. 11821303astroph.COD. S. Akerib et al. (LUX), Phys. Rev. Lett. 118, 021303 (2017), arXiv:1608.07648 [astro- ph.CO].
. D Zhang, PandaX10.1103/PhysRevLett.129.161804arXiv:2206.02339Phys. Rev. Lett. 129161804hep-exD. Zhang et al. (PandaX), Phys. Rev. Lett. 129, 161804 (2022), arXiv:2206.02339 [hep-ex].
. W Wang, L Wu, W.-N Yang, B Zhu, arXiv:2111.04000hep-phW. Wang, L. Wu, W.-N. Yang, and B. Zhu, (2021), arXiv:2111.04000 [hep-ph].
. L Su, W Wang, L Wu, J M Yang, B Zhu, 10.1103/PhysRevD.102.115028arXiv:2006.11837Phys. Rev. D. 102115028hep-phL. Su, W. Wang, L. Wu, J. M. Yang, and B. Zhu, Phys. Rev. D 102, 115028 (2020), arXiv:2006.11837 [hep-ph].
. W Wang, L Wu, J M Yang, H Zhou, B Zhu, 10.1007/JHEP12(2020)072arXiv:1912.09904JHEP. 1272Erratum: JHEP 02, 052 (2021). hep-phW. Wang, L. Wu, J. M. Yang, H. Zhou, and B. Zhu, JHEP 12, 072 (2020), [Erratum: JHEP 02, 052 (2021)], arXiv:1912.09904 [hep-ph].
. A Aguilar-Arevalo, DAMIC10.1103/PhysRevLett.123.181802arXiv:1907.12628Phys. Rev. Lett. 123181802astro-ph.COA. Aguilar-Arevalo et al. (DAMIC), Phys. Rev. Lett. 123, 181802 (2019), arXiv:1907.12628 [astro-ph.CO].
. R Essig, J Mardon, T Volansky, 10.1103/PhysRevD.85.076007arXiv:1108.5383Phys. Rev. D. 8576007hep-phR. Essig, J. Mardon, and T. Volansky, Phys. Rev. D 85, 076007 (2012), arXiv:1108.5383 [hep-ph].
. M J Dolan, F Kahlhoefer, C Mccabe, 10.1103/PhysRevLett.121.101801arXiv:1711.09906Phys. Rev. Lett. 121101801hep-phM. J. Dolan, F. Kahlhoefer, and C. McCabe, Phys. Rev. Lett. 121, 101801 (2018), arXiv:1711.09906 [hep-ph].
. J D Vergados, H Ejiri, 10.1016/j.physletb.2004.11.085arXiv:hep-ph/0401151Phys. Lett. B. 606313J. D. Vergados and H. Ejiri, Phys. Lett. B 606, 313 (2005), arXiv:hep-ph/0401151.
. G Grilli Di Cortona, A Messina, S Piacentini, 10.1007/JHEP11(2020)034arXiv:2006.02453JHEP. 1134hep-phG. Grilli di Cortona, A. Messina, and S. Piacentini, JHEP 11, 034 (2020), arXiv:2006.02453 [hep-ph].
. W Wang, K.-Y Wu, L Wu, B Zhu, 10.1016/j.nuclphysb.2022.115907arXiv:2112.06492Nucl. Phys. B. 983115907hep-phW. Wang, K.-Y. Wu, L. Wu, and B. Zhu, Nucl. Phys. B 983, 115907 (2022), arXiv:2112.06492 [hep-ph].
. N F Bell, J B Dent, B Dutta, S Ghosh, J Kumar, J L Newstead, 10.1103/PhysRevD.104.076013arXiv:2103.05890Phys. Rev. D. 10476013hep-phN. F. Bell, J. B. Dent, B. Dutta, S. Ghosh, J. Kumar, and J. L. Newstead, Phys. Rev. D 104, 076013 (2021), arXiv:2103.05890 [hep-ph].
. N F Bell, J B Dent, J L Newstead, S Sabharwal, T J Weiler, 10.1103/PhysRevD.101.015012arXiv:1905.00046Phys. Rev. D. 10115012hep-phN. F. Bell, J. B. Dent, J. L. Newstead, S. Sabharwal, and T. J. Weiler, Phys. Rev. D 101, 015012 (2020), arXiv:1905.00046 [hep-ph].
. V V Flambaum, L Su, L Wu, B Zhu, arXiv:2012.09751hep-phV. V. Flambaum, L. Su, L. Wu, and B. Zhu, (2020), arXiv:2012.09751 [hep-ph].
. G Guo, Y.-L S Tsai, M.-R Wu, Q Yuan, 10.1103/PhysRevD.102.103004arXiv:2008.12137Phys. Rev. D. 102103004astro-ph.HEG. Guo, Y.-L. S. Tsai, M.-R. Wu, and Q. Yuan, Phys. Rev. D 102, 103004 (2020), arXiv:2008.12137 [astro-ph.HE].
. Z Y Zhang, CDEXarXiv:2206.04128hep-exZ. Y. Zhang et al. (CDEX), (2022), arXiv:2206.04128 [hep-ex].
. H An, D Yang, 10.1016/j.physletb.2021.136408arXiv:2006.15672Phys. Lett. B. 818136408hep-phH. An and D. Yang, Phys. Lett. B 818, 136408 (2021), arXiv:2006.15672 [hep-ph].
. W Chao, Y Gao, M J Jin, arXiv:2006.16145hep-phW. Chao, Y. Gao, and M. j. Jin, (2020), arXiv:2006.16145 [hep-ph].
. Z.-L Liang, L Zhang, P Zhang, F Zheng, 10.1007/JHEP01(2019)149arXiv:1810.13394JHEP. 01149cond-mat.mtrl-sciZ.-L. Liang, L. Zhang, P. Zhang, and F. Zheng, JHEP 01, 149 (2019), arXiv:1810.13394 [cond-mat.mtrl-sci].
. S.-F Ge, X.-G He, X.-D Ma, J Sheng, 10.1007/JHEP05(2022)191arXiv:2201.11497JHEP. 05191hepphS.-F. Ge, X.-G. He, X.-D. Ma, and J. Sheng, JHEP 05, 191 (2022), arXiv:2201.11497 [hep- ph].
. C Xia, Y.-H Xu, Y.-F Zhou, 10.1088/1475-7516/2022/02/028arXiv:2111.05559JCAP. 0228hep-phC. Xia, Y.-H. Xu, and Y.-F. Zhou, JCAP 02, 028 (2022), arXiv:2111.05559 [hep-ph].
[
"Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning",
"Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning",
"Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning",
"Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning"
] | [
"Omar Maddouri ",
"Xiaoning Qian ",
"Francis J Alexander ",
"Edward R Dougherty ",
"Byung-Jun Yoon Correspondence ",
"\nDepartment of Electrical and Computer Engineering\nTexas A&M University\n77843College StationTXUSA\n",
"\nComputational Science Initiative\nBrookhaven National Laboratory\nUpton11973NYUSA\n",
"Omar Maddouri ",
"Xiaoning Qian ",
"Francis J Alexander ",
"Edward R Dougherty ",
"Byung-Jun Yoon Correspondence ",
"\nDepartment of Electrical and Computer Engineering\nTexas A&M University\n77843College StationTXUSA\n",
"\nComputational Science Initiative\nBrookhaven National Laboratory\nUpton11973NYUSA\n"
] | [
"Department of Electrical and Computer Engineering\nTexas A&M University\n77843College StationTXUSA",
"Computational Science Initiative\nBrookhaven National Laboratory\nUpton11973NYUSA",
"Department of Electrical and Computer Engineering\nTexas A&M University\n77843College StationTXUSA",
"Computational Science Initiative\nBrookhaven National Laboratory\nUpton11973NYUSA"
] | [] | Graphical abstract highlights: A transfer learning (TL) framework for Bayesian error estimation (BEE) is proposed; relatedness between domains is modeled by a joint prior in a Bayesian paradigm; TL-based BEE can leverage data from other relevant domains to improve accuracy; data from domains with moderate to high relatedness can improve BEE outcomes. SUMMARY: Classification has been a major task for building intelligent systems because it enables decision-making under uncertainty. Classifier design aims at building models from training data for representing feature-label distributions, either explicitly or implicitly. In many scientific or clinical settings, training data are typically limited, which impedes the design and evaluation of accurate classifiers. Although transfer learning can improve the learning in target domains by incorporating data from relevant source domains, it has received little attention for performance assessment, notably in error estimation. Here, we investigate knowledge transferability in the context of classification error estimation within a Bayesian paradigm. We introduce a class of Bayesian minimum mean-square error estimators for optimal Bayesian transfer learning, which enables rigorous evaluation of classification error under uncertainty in small-sample settings. Using Monte Carlo importance sampling, we illustrate the outstanding performance of the proposed estimator for a broad family of classifiers that span diverse learning capabilities. | 10.1016/j.patter.2021.100428 | [
"https://arxiv.org/pdf/2109.02150v1.pdf"
] | 237,420,483 | 2109.02150 | e3aa00d272f106d6201f85c0ca9fb91c35b7104f |
Robust importance sampling for error estimation in the context of optimal Bayesian transfer learning
Omar Maddouri
Xiaoning Qian
Francis J Alexander
Edward R Dougherty
Byung-Jun Yoon
Department of Electrical and Computer Engineering
Texas A&M University
College Station, TX 77843, USA
Computational Science Initiative
Brookhaven National Laboratory
Upton, NY 11973, USA
10.1016/j.patter.2021.100428
Correspondence [email protected]
In brief
Accurate estimation of classification error is challenging in scientific domains, where available data are limited. Although transfer of data and knowledge from relevant domains can alleviate this issue, previous studies on transfer learning have mostly focused on improving the learned models rather than enhancing the performance analysis. In this paper, we propose a transfer learning scheme for Bayesian error estimation that can leverage data from relevant domains to enhance the estimation of classification error in the domain of interest.
INTRODUCTION
Transfer learning (TL) provides promising means to repurpose the data and/or scientific knowledge available in other relevant domains for new applications in a given domain. The ability to transfer relevant data/knowledge across different domains practically enables learning effective models in target domains with limited data. Classifier design can take advantage of TL to address small-sample challenges we often face in various scientific applications. However, rigorous error estimators that can leverage such transferred data/knowledge for better estimation of classification error have been missing to date, which makes the design framework epistemologically incomplete. 1 Generally, the scientific validity of any predictive model is assessed by the ability to generalize outside the observed training sample. However, the available sample is often too small in many scientific applications (e.g., bio-marker discovery) to hold out sufficient data just for testing purposes, which makes the reuse of training data for both classifier design and error estimation inevitable. While various error estimation schemes exist to date, their accuracy and reliability in a small-sample setting are often questioned. 2 For instance, in Dalton and Dougherty 3 many classification studies of cancer gene expression data have been listed where the performance was assessed by cross-validation (CV) based on small-size training datasets. Analyses in Braga-Neto and Dougherty 4 have shown that CV error estimators derived based on small-size samples show large variance, which explains the controversy across many biological studies that relied on data-driven CV. 5 Model-based error estimation also faces practical challenges as non-informative modeling assumptions may mislead the error estimators in case of model mismatch.
THE BIGGER PICTURE In scientific domains with limited data availability, accurate classification error estimation is practically challenging. Although transfer learning (TL) may provide a promising solution under such circumstances by learning from data available in other relevant domains, it has not been explored for enhancing error estimation. Here, we place the problem of estimating the classification error in a Bayesian paradigm and introduce a TL-based error estimator that can significantly enhance the accuracy and robustness of error estimates under data scarcity. We demonstrate that our proposed TL-based Bayesian error estimation framework effectively models and exploits the relatedness between different domains to improve error estimation. Experimental results based on both synthetic data as well as real-world data show that our proposed error estimator clearly outperforms existing error estimators, especially in a small-sample setting, by tapping into the data from other relevant domains.
The ability for accurate error estimation based on small samples is also critical in other contexts, an example being continual learning, 6 where a series of labeled datasets are sequentially fed to the learner as in realistic learning scenarios. In recent years, continual learning has regained attention as a promising strategy for avoiding ''catastrophic forgetting'' that may arise when the training data are split for a series of small learning operations called tasks. 7 Such a continual learning setting is becoming prevalent these days, where retaining the observed training data is either undesirable (confidentiality) or intractable (high-throughput systems), and developing reliable task-specific error estimators is indispensable. For instance, an intuitive approach to continual learning from a Bayesian perspective is to leverage the posterior of the current task to update the prior of the next task. 8 However, analysis in Farquhar and Gal 9 has shown that evaluation approaches for this prior-focused setup suffer from severe bias in realistic scenarios, particularly for finely partitioned data. Recent work in Goodfellow et al. 10 provided a solution for test data scarcity by reusing the same test set in the context of a continuously evolving classification problem. To avoid overfitting the test data, the authors employed a reusable holdout mechanism based on the area under the receiver operating characteristic curve metric. Nevertheless, this approach remains contingent on the availability of an independent test set. For these reasons, there is a pressing need to develop novel error estimators that can effectively overcome data scarcity limitations. For assessing different classification models in the context of small-size training datasets, having an accurate error estimator with TL capabilities that can take advantage of relevant datasets in other domains would be highly beneficial. Such an estimator would be readily applicable to continual learning as cross-task datasets can be seen as related source-target samples.
In the next sections, we provide a brief review of the standard error estimation techniques along with prevalent TL scenarios. A more comprehensive review can be found in the supplemental information, sections 3 and 5.
For unknown feature-label distributions, the classification error of a given classifier is typically estimated by leveraging a large sample collected from the true distribution. However, limiting factors, such as the excessive cost of large-scale data acquisition, make it often infeasible to collect and hold out large test sets. Consequently, the available small-size sample may have to be used for both training and evaluating the classifier, and researchers have strived to devise practical methods for accurate error estimation. Existing error estimation schemes can be broadly categorized into parametric and non-parametric methods. Non-parametric estimators compute the error rate by counting the misclassified points, where widely used estimators include the resubstitution, CV, and bootstrap estimators. Parametric methods include the popular plug-in estimator that naively estimates the true error from an empirical model. The Bayesian minimum mean-square error estimator (BEE) proposed in Dalton and co-workers 3,11 is another benchmark parametric estimator that significantly enhances the robustness by computing the expected true error with respect to the posterior of the model parameters. The BEE has shown notable improvements over standard estimators as it effectively handles the uncertainty about the underlying feature-label distribution. 3,11
Recently, TL has emerged as an alternative to provide remedies for pitfalls caused by training data scarcity in a target domain by utilizing available data from different yet relevant source domains. 12 Based on the properties of source and target domains, two scenarios of TL may arise. The first one, commonly known as ''homogeneous TL,'' occurs when the source and target domains share the same feature space. The second scenario is called ''heterogeneous TL'' and is considered when differences exist between domains in terms of their feature space or data dimensionality. In practice, the most common setting for TL, known also as domain adaptation, assumes similar families of feature-label distributions across domains.
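To make the counting-based, non-parametric schemes reviewed above concrete, the sketch below implements resubstitution and k-fold CV for any classifier exposing a scikit-learn-style fit/predict interface. This is an illustrative sketch of the standard estimators, not code from the present study.

```python
import numpy as np

def resubstitution_error(clf, X, y):
    # Resubstitution: train and count misclassified points on the same data.
    clf.fit(X, y)
    return float(np.mean(clf.predict(X) != y))

def cv_error(clf, X, y, k=5, seed=0):
    # k-fold CV: average the held-out misclassification rate over the folds.
    idx = np.random.default_rng(seed).permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        clf.fit(X[train], y[train])
        errs.append(np.mean(clf.predict(X[fold]) != y[fold]))
    return float(np.mean(errs))
```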
In this study, we propose a TL framework for robust estimation of classification error based on a rigorous Bayesian paradigm. To the best of our knowledge, this study is the first work on TL-based BEE, which can significantly enhance our understanding of transferability across domains in the context of error estimation. Building on the Bayesian transfer learning framework proposed in Karbalayghareh et al., 13 we introduce a TL-based BEE estimator that can enhance the error estimation accuracy in the target domain by utilizing the data available in a relevant source domain based on the joint prior of their feature-label distributions. We present a rigorous study of error estimation in the context of Bayesian TL and show that our proposed TL-based BEE effectively represents and exploits the relatedness (or dependency) between different domains to improve error estimation in a challenging small-sample setting, where the number of observed data points from the target domain of interest is in the range of 5-50. For applicability of the proposed TL-based BEE estimator in real-world problems for arbitrary classifiers, we introduce an efficient and robust importance sampling setup with control variates where the importance density and the control variates function are carefully defined to reduce the variance of the estimator while keeping the overall sampling process computationally feasible and scalable. For this purpose, we utilize Laplace approximations for fast evaluation of matrix-variate confluent and Gauss hypergeometric functions. The performance of the TL-based BEE estimator is extensively evaluated using both synthetic datasets as well as real-world biological datasets. As our main focus in this study is the estimation of classification error, we consider a variety of existing classifiers with different levels of learning capabilities to demonstrate the general applicability of our TL-based BEE estimation scheme. We also show the outstanding performance of the proposed estimator with respect to standard error estimation techniques that are commonly used.
RESULTS AND DISCUSSION
Overview of the proposed Bayesian error estimation via TL
We propose a class of Bayesian minimum mean-square error (MMSE) estimators for TL where the observed sample is a mixture of source and target data. The basic classification setting and a brief review of the standard BEE estimator are presented in the supplemental information, sections 2 and 4. For symbols and notations, see Table S1.
Rooted in signal estimation, the BEE has been motivated by optimal filtering for functions of random variables. 3 For a function of two random variables $g(X, Y)$, the optimal estimator $\hat{g}(Y)$ of a filter $g(Y)$ after observing only $Y$ in the mean-square sense is given by

$$\hat{g}(Y) = E_X[g(X, Y) \mid Y]. \quad \text{(Equation 1)}$$
Replacing $X$ with the parameter vector $\theta$ of the feature-label distribution and $Y$ by the sample $S_n$ (of size $n$) leads to the standard BEE that has been introduced in Dalton and Dougherty 3 as

$$\hat{\varepsilon}(S_n) = E_\theta[\varepsilon_n(\theta, S_n) \mid S_n]. \quad \text{(Equation 2)}$$
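Equation 2 is simply a posterior expectation, so whenever the posterior of $\theta$ can be sampled, the BEE can be approximated by a plain Monte Carlo average. A minimal sketch, assuming user-supplied callables for posterior sampling and for the true error of the fixed classifier:

```python
import numpy as np

def bee(sample_posterior, true_error, n_draws=10000):
    # Monte Carlo approximation of Equation 2: average the true error of the
    # fixed classifier over parameter draws theta ~ p(theta | S_n).
    return float(np.mean([true_error(sample_posterior())
                          for _ in range(n_draws)]))
```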
In TL, the sample $S_n$ is a mixture of source and target data such that $S_n = (D_s \cup D_t)_n$, with $n = N_s + N_t$, and the classifier $\psi_n$ is designed either on $D_t$, $D_s$, or $D_s \cup D_t$. We note that $D_s$ and $D_t$ are two labeled datasets from the source and target domains with sizes $N_s$ and $N_t$, respectively (see Bayesian TL framework for binary classification for generation details). This requires close attention as the TL-based BEE is valid only for fixed classifiers given the sample. This assumption carries limitations. For instance, classifiers that are only fixed given $D_t$ but not $D_s$ are not deterministic for every set of parameters estimated based on $D_s \cup D_t$. In this paper, we introduce the TL-based BEE defined as
$$\hat{\varepsilon}\big((D_s \cup D_t)_n\big) = E_\theta\big[\varepsilon_n\big(\theta, (D_s \cup D_t)_n\big) \mid (D_s \cup D_t)_n\big], \quad \text{(Equation 3)}$$
where $\theta = [\theta_t; \theta_s]$ denotes the parameter vector of the joint model formed by the target parameters $\theta_t$ and the source parameters $\theta_s$. For a fixed classifier given $(D_s \cup D_t)_n$, this estimator is optimal on average in the mean-square sense and unbiased when averaged over all parameters and samples. For classification in the target domain, the posterior density $\pi^*(\theta)$ reduces to the posterior of the target parameters after observing the target and source data and takes the form
$$\pi^*(\theta_t) = \pi^*(\theta_t \mid D_s, D_t), \quad \text{(Equation 4)}$$
where $\pi^*(\theta_t \mid D_s, D_t)$ is obtained by marginalizing out the source domain parameters. Ultimately, the BEE for TL takes the form
$$\hat{\varepsilon}\big((D_s \cup D_t)_n\big) = E_{\theta_t}\big[\varepsilon_n\big(\theta_t, (D_s \cup D_t)_n\big) \mid (D_s \cup D_t)_n\big] = E_{\pi^*(\theta_t)}\big[\varepsilon_n\big(\theta_t, (D_s \cup D_t)_n\big)\big]. \quad \text{(Equation 5)}$$
For the sake of simplicity we write
$$\hat{\varepsilon} = E_{\pi^*}[\varepsilon_n], \quad \text{(Equation 6)}$$
where p à = p à ðq t jD t ; D s Þ denotes the posterior of the target parameters after observing the hybrid sample D t WD s .
Experiments and datasets
To evaluate the performance of the proposed error estimator, we consider the mean-square error (MSE) as a performance measure to understand the joint behavior of the classification error $\varepsilon_n$ and its estimate $\hat{\varepsilon}$. For the random vector $(\varepsilon_n, \hat{\varepsilon})$, the MSE is defined as
$$\mathrm{MSE}(\hat{\varepsilon}) = E\big[|\hat{\varepsilon} - \varepsilon_n|^2\big]. \quad \text{(Equation 7)}$$
In what follows, we present an overview of the experimental setup for demonstrating the performance of the proposed TL-based BEE based on three different types of classifiers (see experimental procedures, sections 4.5 and 4.6 for more details) applied to both synthetic data as well as real-world biological datasets.
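In the experiments reported below, Equation 7 is approximated empirically by repeating the full generate-train-estimate cycle and averaging the squared deviations. A schematic of that outer loop, where `run_trial` is a placeholder returning one (estimate, true error) pair:

```python
import numpy as np

def empirical_mse(run_trial, n_rep=1000):
    # Monte Carlo approximation of Equation 7; run_trial() -> (eps_hat, eps_n).
    pairs = [run_trial() for _ in range(n_rep)]
    return float(np.mean([(eh - en) ** 2 for eh, en in pairs]))
```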
Bayesian TL framework for binary classification
We consider a binary classification problem in the context of supervised TL where there are two common classes in each domain. Let $D_s$ and $D_t$ be two labeled datasets from the source and target domains with sizes $N_s$ and $N_t$, respectively. We are interested in the scenario where $N_t \ll N_s$. Let $D_s^y = \{x_{s,1}^y, x_{s,2}^y, \ldots, x_{s,n_s^y}^y\}$, $y \in \{0, 1\}$, where $n_s^y$ denotes the size of source data in class $y$. Likewise, let $D_t^y = \{x_{t,1}^y, x_{t,2}^y, \ldots, x_{t,n_t^y}^y\}$, $y \in \{0, 1\}$, where $n_t^y$ denotes the size of target data in class $y$. We consider a $d$-dimensional homogeneous transfer learning scenario where $D_s$ and $D_t$ are normally distributed and separately sampled from the source and target domains, respectively, with class-conditional models $x_z^y \sim \mathcal{N}\big(\mu_z^y, (\Lambda_z^y)^{-1}\big)$ for $z \in \{s, t\}$, where $\mu_z^y$ and $\Lambda_z^y$ denote the mean vector and precision matrix of class $y$ in domain $z$. The target and source precision matrices are the diagonal blocks of a joint $(2d \times 2d)$ precision matrix

$$\Lambda^y = \begin{pmatrix} \Lambda_t^y & \Lambda_{ts}^y \\ \Lambda_{ts}^{yT} & \Lambda_s^y \end{pmatrix},$$

where $X^T$ denotes the transpose of matrix $X$. This sampling is enabled through a joint prior distribution for $\Lambda_s^y$ and $\Lambda_t^y$ that marginalizes out the off-diagonal block matrix $\Lambda_{ts}^y$. Using a Gaussian-Wishart distribution as the joint prior for mean and precision matrices, the joint model factorizes as

$$p(\mu_s^y, \mu_t^y, \Lambda_s^y, \Lambda_t^y) = p(\mu_s^y, \mu_t^y \mid \Lambda_s^y, \Lambda_t^y)\, p(\Lambda_s^y, \Lambda_t^y). \quad \text{(Equation 11)}$$

For conditionally independent mean vectors given the covariances, the joint prior in (Equation 11) further factorizes into
$$p(\mu_s^y, \mu_t^y, \Lambda_s^y, \Lambda_t^y) = p(\mu_s^y \mid \Lambda_s^y)\, p(\mu_t^y \mid \Lambda_t^y)\, p(\Lambda_s^y, \Lambda_t^y). \quad \text{(Equation 12)}$$
The block diagonal precision matrices $\Lambda_z^y$ for $z \in \{t, s\}$ are obtained after sampling $\Lambda^y$ from a predefined joint Wishart distribution as defined in Karbalayghareh et al. 13 such that $\Lambda^y \sim W_{2d}(M^y, \nu^y)$, where $\nu^y$ is a hyperparameter for the degrees of freedom that satisfies $\nu^y \geq 2d$ and $M^y$ is a $(2d \times 2d)$ positive definite scale matrix of the form

$$M^y = \begin{pmatrix} M_t^y & M_{ts}^y \\ M_{ts}^{yT} & M_s^y \end{pmatrix}.$$

The mean vectors are conditionally distributed as $\mu_z^y \mid \Lambda_z^y \sim \mathcal{N}\big(m_z^y, (\kappa_z^y \Lambda_z^y)^{-1}\big)$ for $z \in \{t, s\}$, where $m_z^y$ is the $(d \times 1)$ mean vector of the mean parameter $\mu_z^y$ and $\kappa_z^y$ is a positive scalar hyperparameter. The joint prior distribution $p(\Lambda_s^y, \Lambda_t^y)$ as derived in Karbalayghareh et al. 13 acts like a channel through which the useful knowledge transfers from the source to the target domain, causing the posterior of the target parameters of the underlying feature-label distribution to be distributed more narrowly around the true values.
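Anticipating the isotropic blocks used in the synthetic experiments below ($M_t^y = k_t I_d$, $M_s^y = k_s I_d$, and $M_{ts}^y = k_{ts} I_d$ with $k_{ts} = \alpha \sqrt{k_t k_s}$), the helper below assembles $M^y$ explicitly and checks positive definiteness. This is a small utility sketch of ours, not code from the original study.

```python
import numpy as np

def scale_matrix(d, k_t=1.0, k_s=1.0, alpha=0.9):
    # M = [[k_t*I, k_ts*I], [k_ts*I, k_s*I]] with k_ts = alpha*sqrt(k_t*k_s);
    # |alpha| < 1 keeps M positive definite.
    k_ts = alpha * np.sqrt(k_t * k_s)
    I = np.eye(d)
    M = np.block([[k_t * I, k_ts * I], [k_ts * I, k_s * I]])
    assert np.all(np.linalg.eigvalsh(M) > 0), "M must be positive definite"
    return M
```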
Synthetic datasets
To simulate and verify the extent of knowledge transferability across domains, we consider a wide range of joint prior densities that model the different levels of relatedness between the source and target domains. The proposed setup is as follows. We consider a binary classification problem in the context of homogeneous TL with dimensions 2, 3, and 5. In the simulated datasets, the number of source data points per class varies between 10 and 500 and between 5 and 50 for target datasets. This mimics realistic settings of small-size sample conditions (especially in the target domain) as reported in the literature. 3 We set up the data distributions as follows.
We take $\nu = \nu^y = d + 20$, $\kappa_t = \kappa_t^y = 100$, $\kappa_s = \kappa_s^y = 100$, $m_t^0 = \mathbf{0}_d$, $m_t^1 = w \times \mathbf{1}_d$, $m_s^0 = m_t^0 + 10 \times \mathbf{1}_d$, and $m_s^1 = m_t^1 + 10 \times \mathbf{1}_d$. To keep $M^y$ positive definite for all $y \in \{0, 1\}$, we set $k_{ts} = \alpha \sqrt{k_t k_s}$
with $k_t > 0$, $k_s > 0$, and $|\alpha| < 1$. As in Karbalayghareh et al., 13 the value of $|\alpha|$ controls the amount of relatedness between the source and target domains (see experimental procedures, section 4.6, for more details). To control the level of relatedness by adjusting only $|\alpha|$ without involving other confounding factors, we set $k_t = k_s = 1$ such that $M_{ts}^y = \alpha I_d$. In this setting, the correlation between the features across source and target domains is governed by $|\alpha|$, where small values of $|\alpha|$ correspond to poor relatedness between source and target domains while larger values imply stronger relatedness. To sample from the joint prior, we first sample from a non-singular Wishart distribution $W_{2d}(M^y, \nu)$ to get a block partitioned sample of the form

$$\Lambda^y = \begin{pmatrix} \Lambda_t^y & \Lambda_{ts}^y \\ \Lambda_{ts}^{yT} & \Lambda_s^y \end{pmatrix},$$

from which we extract $(\Lambda_t^y, \Lambda_s^y)$. Afterward, we sample $\mu_z^y \sim \mathcal{N}\big(m_z^y, (\kappa_z^y \Lambda_z^y)^{-1}\big)$ for $z \in \{s, t\}$ and $y \in \{0, 1\}$. In our simulations we use two types of datasets: training datasets that contain samples from both domains and testing datasets that contain only samples from the target domain. In all the simulations we consider testing datasets of 1,000 data points per class and we assume equal prior probabilities for the classes.
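One way to realize this generative scheme per class $y$ is sketched below; scipy's Wishart parameterization, $W(\mathrm{df}=\nu, \mathrm{scale}=M)$, matches the text, and the hyperparameter choices mirror the settings above. The function name and interface are ours.

```python
import numpy as np
from scipy.stats import wishart

def sample_class_data(d, m_t, m_s, nu, kappa_t, kappa_s, alpha, n_t, n_s, seed=0):
    # Draw one class model from the joint prior, then generate class data.
    rng = np.random.default_rng(seed)
    I = np.eye(d)
    M = np.block([[I, alpha * I], [alpha * I, I]])   # k_t = k_s = 1, M_ts = alpha*I
    Lam = wishart.rvs(df=nu, scale=M, random_state=rng)   # requires nu >= 2d
    Lam_t, Lam_s = Lam[:d, :d], Lam[d:, d:]          # off-diagonal block discarded
    mu_t = rng.multivariate_normal(m_t, np.linalg.inv(kappa_t * Lam_t))
    mu_s = rng.multivariate_normal(m_s, np.linalg.inv(kappa_s * Lam_s))
    D_t = rng.multivariate_normal(mu_t, np.linalg.inv(Lam_t), n_t)   # target class data
    D_s = rng.multivariate_normal(mu_s, np.linalg.inv(Lam_s), n_s)   # source class data
    return D_t, D_s
```

For instance, with d = 2 and the settings above, one class-0 draw would be `sample_class_data(2, np.zeros(2), 10 * np.ones(2), 22, 100, 100, 0.9, 20, 200)`.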
RNA sequencing datasets
To evaluate the performance of the TL-based BEE on real-world data, we consider classifying patients diagnosed with schizophrenia using transcriptomic profiles collected from psychiatric disorder studies. 14 Based on two RNA sequencing (RNA-seq) datasets listed in Table 1, we selected the transcriptomic profiles of three genes, based on a stringent feature selection procedure comprising the analysis of differential gene expression, clustering of gene-gene interactions, and statistical testing for multivariate normality. More specifically, we focus on analyzing the astrocyte-related cluster of differentiation 4, found to be significantly upregulated in subjects with schizophrenia. 14 We select the top three hub genes that collectively satisfy Royston's multivariate normality test applied to the full datasets for both classes at a significance level of 99%. The identified genes satisfying all the aforementioned criteria include SOX9, AHCYL1, and CLDN10, with an average module centrality of 0.86 measured by genes' module membership (kME). 14 In addition to normalization and quality control performed in Gandal et al., 14 the selected features in both datasets have been further standardized to zero means and unit variances across both classes as in Karbalayghareh and co-workers. 13,15 We consider the dataset syn2759792, sampled from the brain dorsolateral prefrontal cortex (DLPFC) area, as a target dataset and syn4590909, sampled from the frontal cortex (FC) region, as a source dataset. Among 555 postmortem brain samples in syn2759792, we randomly draw 5 samples per class as training data and we use the remaining samples to evaluate the classification error. This process is repeated 10,000 times to estimate the average MSE deviation of the TL-based BEE from the true error. To determine the model hyperparameters, we assume shared values for case and control samples in source and target domains and we set $\nu = 10 \times d = 30$ and $n_t = 5$. As $|\alpha|$ represents a cross-domain property, we employ the TL-based BEE to conduct an exhaustive greedy search for $|\alpha|$, estimating the classification error by leveraging data points from a source domain dataset. In our hyperparameter tuning experiments, we consider source datasets of different sizes ($n_s \in \{10, 30, 50\}$) and we retain the value of $|\alpha|$ that leads to the smallest MSE deviation from the true error across all the experiments. At each iteration, we randomly permute the source samples for statistical significance. The remaining parameters are set as follows: $\kappa_t = n_t$, $\kappa_s = n_s$, and $k_t = k_s = 1/\nu$, such that the mean of the Wishart precision matrices will be equal to the identity matrix, which matches the normal standardization. For mean vectors $m_t$ and $m_s$, we pool all case and control samples in each domain and consider their means, respectively.
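The $|\alpha|$ search just described amounts to sweeping a grid of candidate values and keeping the one with the smallest average squared deviation between the TL-based BEE and the hold-out true error. A schematic of that loop, with `run_trial` standing in for one random split/permutation of the real datasets (both names are placeholders):

```python
import numpy as np

def select_alpha(alphas, run_trial, n_rep=100):
    # run_trial(alpha) -> (eps_hat, eps_true) for one random split/permutation.
    mse = {a: float(np.mean([(eh - et) ** 2
                             for eh, et in (run_trial(a) for _ in range(n_rep))]))
           for a in alphas}
    best = min(mse, key=mse.get)
    return best, mse

# e.g., best_alpha, curve = select_alpha(np.linspace(0.01, 0.99, 50), run_trial)
```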
Performance on synthetic datasets
We start by evaluating the performance of the proposed TL-based BEE in estimating the Bayes error, which corresponds to the true error of the quadratic discriminant analysis (QDA) classifier (see experimental procedures, section 4.5) in the target domain, for different levels of $|\alpha|$ and different size combinations of the utilized source and target datasets.
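For reference, with Gaussian class-conditional densities and equal priors, the Bayes classifier coincides with the QDA rule built from the true parameters, so the Bayes error can be approximated on a large synthetic test set. The sketch below spells this out; it is a textbook QDA discriminant written here for illustration, not the paper's implementation.

```python
import numpy as np

def qda_rule(mu0, cov0, mu1, cov1):
    # Bayes/QDA rule for two Gaussians with equal priors: pick the class with
    # the smaller Mahalanobis distance plus log-determinant penalty.
    P0, P1 = np.linalg.inv(cov0), np.linalg.inv(cov1)
    c0, c1 = np.linalg.slogdet(cov0)[1], np.linalg.slogdet(cov1)[1]
    def predict(X):
        d0 = np.einsum('ij,jk,ik->i', X - mu0, P0, X - mu0) + c0
        d1 = np.einsum('ij,jk,ik->i', X - mu1, P1, X - mu1) + c1
        return (d1 < d0).astype(int)
    return predict

def bayes_error(mu0, cov0, mu1, cov1, n=100000, seed=0):
    # Monte Carlo estimate of the Bayes error on a large synthetic test set.
    rng = np.random.default_rng(seed)
    f = qda_rule(mu0, cov0, mu1, cov1)
    e0 = np.mean(f(rng.multivariate_normal(mu0, cov0, n)) != 0)
    e1 = np.mean(f(rng.multivariate_normal(mu1, cov1, n)) != 1)
    return 0.5 * (e0 + e1)
```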
In Figure 1, we investigate the behavior of the TL-based BEE when the target data are fixed while we vary the size of the source data. We show the results for d = 2 in the first column, the results for d = 3 in the second column, and the results for d = 5 in the last column. The rows correspond to the results for target datasets with different sizes: $n_t = 20$ on the top and $n_t = 50$ on the bottom. The MSE curves show similar trends for all three values of d, where we can see that the deviation of the error estimate from the true error significantly decreases when highly related source data are employed. This behavior diminishes as the relatedness between the two domains decreases. Notably, using large source datasets ($n_s \geq 200$) of moderate to small relatedness values ($|\alpha| \leq 0.7$) does not negatively impact the performance of the estimator for low dimensions ($d \in \{2, 3\}$) as shown in the first and second columns of Figure 1. As the dimensionality further increases (d = 5), relying on large source datasets with moderate or poor relatedness to the target domain slightly increases the deviation of the estimated error from the true error (i.e., $|\alpha| = 0.7$ in the third column). This tiny asymptotic deviation is explained by potential undesirable effects of relying on large source datasets of modest relatedness. However, it is important to note that the proposed TL-based BEE in the context of the given Bayesian TL framework suppresses this behavior, as it does not directly depend on the source data but the information transfer occurs through the joint prior. The joint prior acts like a bridge through which the useful knowledge passes from the source to the target domain. Effects of using source data in different TL settings (especially, a non-Bayesian setting) may require further investigation. Moreover, the simulation results in different columns show that the MSE deviation decreases as we rely on larger target datasets. However, the gain in performance as we use additional source data is reduced when target data are more abundant. This is illustrated by the slope of the MSE graphs that flattens as $n_t$ increases. Finally, Figure 1 shows that, for higher dimensions, the MSE deviation tends to increase. This is expected as increasing the dimensionality generally leads to a more difficult error estimation problem.
Next, Figure 2 shows the MSE deviation with respect to the size of the target dataset for dimensions 2, 3, and 5. The first row corresponds to the case of using source datasets of size $n_s = 50$ and the second row shows the results for $n_s = 200$. The performance of the TL-based BEE estimator improves with the increasing availability of target data. We can also clearly see that the MSE deviation from the true error asymptotically converges to comparable values for all relatedness levels. When highly related source data are available, the TL-based estimator yields accurate estimation results even when the target dataset is small. These results consolidate the findings in Figure 1 about the redundancy of source data in the presence of abundant target data. Across all graphs in Figure 2, we can see that a relatedness coefficient $|\alpha| = 0.95$ results in a nearly constant deviation from the true error as a function of target data size, which suggests that highly related source data ($|\alpha| > 0.95$) act almost identically like the target data, regardless of the shift across the domains in terms of their means. Similar to the trends shown in Figure 1, results across different columns of Figure 2 demonstrate that the error estimation difficulty increases with the increase of dimensionality. This is clearly reflected in the MSE deviation from the true error in Figure 2, which shows that, as the dimension increases from d = 2 (first column) to d = 5 (last column), the MSE increases by one order of magnitude.
Now, we aim at investigating the effect of classification complexity on the performance of the proposed TL-based BEE. To this end, we conduct simulations in which we vary the Bayes error through a wide range of possible values and evaluate the TL-based BEE at each given Bayes error for different sizes of target data while using source datasets of a fixed size $n_s = 200$. In binary classification, the Bayes error has an upper bound specified by the true error of random classification, which is 0.5, as every data point can be randomly assigned one of the class labels. Ideally, we would vary the Bayes error across the interval $[0, 0.5]$ as in Dalton and Dougherty. 11 However, in our setup, we do not impose any structure on the covariance matrices, nor do we assume that they are scaled identities. This makes the control of the Bayes error much more difficult. In addition, the joint sampling setup within our Bayesian TL framework inhibits any modification of the randomized parameters. Consequently, the only practical way to adjust the Bayes error is to tune the mean vector parameters $m_t^y$ that specify the means for the class mean vectors $\mu_t^y$ with $y \in \{0, 1\}$. In our experiments, we were able to fully control the Bayes error for d = 2, for which we considered a range of Bayes error values. Figure 3 shows the MSE deviation with respect to the Bayes error for dimensions 2, 3, and 5. Results in the first row are obtained using target datasets of size 20 and those in the second row are obtained using target datasets of size 50. We can see that the Bayesian MMSE estimator performs best when using source data of high relatedness to the target domain as expected. For Bayes error in the range $[0.25, 0.35]$, the MSE deviation from the true error is very high, which makes this range of Bayes error the most challenging setting for error estimation.
For a Bayes error of 0.2, the MSE deviation is moderate across all the experiments, which confirms the validity of our previous assumption in selecting this value to investigate classification problems of moderate difficulty. We note that the TL-based BEE shifts the performance in favor of low and high Bayes error levels. Indeed, the TL-based BEE performs well in these cases because the estimated target parameters are sufficiently accurate, even with a small target sample.
In addition to investigating the effect of different relatedness levels between source and target domains, in Figure 4 we have examined the performance of the TL-based BEE for the case when the source class means are swapped between the two classes, such that they show opposite trends compared with the class means in the target domain. For this purpose, we reproduced the experiments in Figure 1 after flipping the class means of source datasets with respect to the target classes (i.e., $m_s^y = m_t^{1-y}$, for $y \in \{0, 1\}$). In the first row of Figure 4, we use the generated source datasets as observed samples from the source domain. Interestingly, the obtained results match those observed in Figure 1. This suggests that the knowledge transfer across source and target domains in the context of the studied Bayesian TL framework does not depend on the arrangement of the class means in the source and target domains but only rests on the level of relatedness between the two domains. For verification, we have intentionally considered the same source datasets in the previous experiment as target datasets for estimating the TL-based BEE and we plotted the obtained results in the second row of Figure 4. Clearly, the TL-based BEE veers away from the true error as we consider additional source data points. This deviation is worse with poorly related source data ($|\alpha| = 0.1$). These results confirm previous findings in Karbalayghareh et al. 13 that the joint prior model in the utilized Bayesian TL framework acts like a bridge that distills the useful knowledge from the source domain and effectively transfers it to the target domain.
Results from the second set of experiments that use a linear discriminant analysis (LDA) classifier (see experimental procedures, section 4.5) were similar to the ones obtained using the QDA classifier except for some differences in the performance of the TL-based BEE with respect to the Bayes error that we report in Figure 5 (see supplemental information, section 8, for additional results). The TL-based BEE performance has similar trends with respect to small and moderate Bayes errors when compared with the presented results obtained using the QDA classifier. A notable difference here is observed for large values of Bayes error where the TL-based BEE shows decreased performance in terms of MSE deviation from the true error, which is due to the fact that the employed LDA classifier is sub-optimal compared with the Bayes classifier. This is expected as linear decision boundaries tend to be more sensitive to deviations from true model parameters for highly overlapping class-conditional distributions. In our final set of experiments using synthetic datasets, we compare the performance of the proposed TL-based BEE to standard error estimators for different dimensions and various source datasets of relatedness level $|\alpha| = 0.9$ to the target domain for an optimal Bayesian transfer learning (OBTL) classifier (see experimental procedures, section 4.5). In Figure 6, we show the MSE deviation with respect to different target dataset sizes. As clearly shown, our proposed TL-based BEE significantly outperforms all other standard error estimators by a substantial margin. In agreement with previous findings in the literature, the standard error estimators perform comparably for low dimensions (i.e., d = 2), where the bootstrap may show a slight advantage. As the dimensionality increases (i.e., d = 5), the performance shift of the studied estimators becomes more apparent. For example, the resubstitution estimator performs poorly in the small-sample regime while the bootstrap estimator outperforms leave-one-out cross validation (LOO) and CV. Furthermore, we noticed that increasing the size of the source dataset does not lead to any apparent performance improvement for the standard estimators. This is because these estimators do not directly depend on the source data for error estimation (as they are incapable of taking advantage of data from different yet relevant domains). However, providing additional source data to the TL-based BEE considerably reduces the MSE deviation from the true error for all dimensions as shown in Figure 6.
Performance on real-world RNA-seq datasets
To analyze the performance of the TL-based BEE on real-world data, we have trained a QDA classifier on a small target dataset that consists of five sample points per class extracted from syn2759792 in Table 1. Using different source datasets collected from syn4590909, we show in Figure 7A the MSE deviation of the TL-based BEE from the true error with respect to $|\alpha|$.
For all combinations and different sizes of source datasets, the FC brain region showed high relatedness to the DLPFC brain area, where the optimal MSE deviation from the true error was obtained for $|\alpha| = 0.99$. Interestingly, findings in Gandal et al. 14 also confirm that syn4590909 and syn2759792 are highly related, as independent gene expression assays for both brain regions have consistently replicated the gradient of transcriptomic severity observed for three different types of psychiatric disorders, including bipolar disorder and schizophrenia. 14 We note that the significant decrease in the MSE deviation from the true error in Figure 7A corresponds to the boost in performance caused by increasing $|\alpha|$ from 0.01 to larger values. This can be explained by the high relatedness between the two studied domains. Indeed, assuming very poor relatedness (i.e., $|\alpha| = 0.01$) between the domains, deviating from the ground truth of high relatedness results in a very large MSE. We show in Figure 7B the increasing gain in accuracy of the TL-based BEE in estimating the classification error after using additional labeled observations from the source domain. These results again confirm the efficacy and advantages of our TL-based error estimation scheme, compared with other standard error estimation methods, when additional data are available from different source domains that are nevertheless relevant to the target domain. From a practical perspective, our proposed TL-based BEE has the potential to facilitate the analysis of real-world datasets in the context of small-sample classification. Challenges of designing and evaluating classifiers (e.g., for clinical diagnosis or prognosis) in a small-sample setting are prevalent in scientific studies in life sciences and physical sciences due to the formidable cost, time, and effort required for data acquisition. This is certainly the case for the example that we consider in this section, where invasive brain biopsies would be needed to get the data.
Insights gained
In this section, we summarize the insights gained from our analyses, which demonstrate the potential advantages of applying TL to the estimation of classification errors. Our results have shown that incorporating data and knowledge from relevant source domains is helpful to significantly enhance the classification error estimation accuracy. When an appropriate source domain is identified, the efficiency of the knowledge transfer process depends on the correlation of the features across domains, rather than the class-conditional mean values of the features, with our problem setups. From an error estimation perspective, our investigation has revealed that, unlike classifier design, the most challenging setting for error estimation arises in classification problems of moderate complexity in terms of Bayes error. When source datasets that are at least modestly relevant to the target domain of interest are available, knowledge transfer to the target domain by appropriate modeling of the joint prior could enhance both the accuracy and the reliability of the error estimation. This was validated in our current study, where the joint prior acts like a ''channel'' as well as a ''filter,'' through which useful relevant knowledge is passed from the source domain to the target domain. Our results have shown that using at least 200 data points from a relevant source domain, whose relatedness level is above 0.7, enables an accurate error estimation even with small target data (less than 50 sample points). Using real-world biological data (RNA-seq data), we have shown that the relatedness level can be empirically determined by exploring the range of possible values.
Limitations of the study
This section discusses the limitations of our current work in modeling assumptions, computational cost, and scalability to higher dimensions. Despite the precise mathematical definition of our error estimator, accurate estimation of the classification error is contingent on whether predictive posterior densities are available in closed forms or can be approximated in an effective manner. While such densities are available for Gaussian models (e.g., assuming joint Wishart priors), one may need to derive them for different priors for non-Gaussian distributions.
The computational complexity to accurately estimate the proposed TL-based BEE through direct sampling methods can be excessive and may scale poorly for higher dimensions. However, we efficiently overcame this limitation by developing a robust importance sampling setup that has shifted all the computational overhead related to the TL process from Monte Carlo sampling to the numerical evaluation of the importance likelihood. Developing similar statistical methods for TL-based BEE would be needed for different modeling assumptions. While the definition of the TL-based BEE and the proposed robust importance sampling scheme are general and applicable to higher dimensions, controlling the Bayes error for synthetic datasets for dimensions higher than 5 can be challenging, which was the main reason for choosing the dimensions $d = 2, \ldots, 5$ in this study. However, this is not an issue in practice, as the classification complexity in real-world applications (reflected by the Bayes error) is an inherent property of a given classification problem governed by the underlying feature-label distribution, and not a design choice. Technically, the proposed TL-based BEE can be applied to classification problems based on high-dimensional features as long as the required computational resources are available.
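The flavor of that importance-sampling construction can be conveyed generically: draw parameters from a tractable importance density q, reweight by the ratio of the posterior to q, and subtract a control variate with known mean under the posterior to cut the variance. The sketch below is a generic self-normalized template under these assumptions, not the paper's exact estimator (whose importance density and control-variate function are tailored to the posterior at hand); all callables are user-supplied placeholders.

```python
import numpy as np

def is_cv_estimate(draw_q, log_pi_star, log_q, eps_n, h, h_mean, n=5000, seed=0):
    # Self-normalized importance sampling with a control variate:
    # E_{pi*}[eps_n] ~ sum_i w_i * (eps_n(th_i) - c * (h(th_i) - h_mean)),
    # where h has known mean h_mean under pi* and c minimizes the variance.
    rng = np.random.default_rng(seed)
    th = [draw_q(rng) for _ in range(n)]
    logw = np.array([log_pi_star(t) - log_q(t) for t in th])
    w = np.exp(logw - logw.max())
    w /= w.sum()                           # self-normalized importance weights
    f = np.array([eps_n(t) for t in th])
    g = np.array([h(t) for t in th]) - h_mean
    C = np.cov(f, g, aweights=w)           # weighted 2x2 covariance of (f, g)
    c = C[0, 1] / max(C[1, 1], 1e-12)      # near-optimal control-variate coefficient
    return float(np.sum(w * (f - c * g)))
```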
Furthermore, we can also consider classifier design and error estimation based on a lower-dimensional representation of the original feature space-e.g., using principal-component analysis or auto-encoders-to make the computational cost manageable.
Conclusions
In this study, we have introduced a Bayesian MMSE estimator that draws from concepts and theories in TL to enable accurate estimation of classification error in the (target) domain of interest by utilizing samples from other closely related (source) domains. We have developed an efficient and robust importance sampling setup that can be used for accurate error estimation in small-sample scenarios that often arise in many real-world scientific problems. Extensive performance analysis based on both synthetic and real-world biological data demonstrates the outstanding performance of the proposed TL-based BEE, which clearly outperforms conventional estimators. In our proposed framework, Laplace approximations were used to alleviate the complexity associated with the exact evaluation of the generalized hypergeometric functions that appear in the posterior distribution of the target parameters. Beyond the Gaussian model assumed in the validation experiments, we also provide a general mathematical definition for the TL-based BEE that can directly be extended to applications with non-Gaussian distributions, where the model parameters can be inferred through Markov chain Monte Carlo (MCMC) methods. In this study, target and source domains were related through the joint prior of the model parameters, which transfers useful knowledge across domains. A key property of the proposed TL-based BEE is its elegant ability to handle the uncertainty about the model parameters by integrating this prior with data, deducing robust estimates by accounting for all possible parameter values. Paramount practical challenges for the TL-based BEE include the identification of suitable source domains that share similar families of distributions with the target domain of interest. This is crucial as the relatedness across domains is mathematically modeled assuming the similarity of the feature-label distributions across domains. Furthermore, learning the joint prior for the distributions and modeling the relatedness between different domains may also present an engineering challenge. While techniques for knowledge-driven prior construction have been developed,17,18 such techniques have yet to be developed for joint prior construction for relevant domains, which is an important future research direction.
An important aspect enabled by the proposed TL-based BEE is optimal data acquisition from multiple domains that aims at maximally enhancing the error estimation capability based on a finite budget for data acquisition. For example, if one has a fixed budget to acquire additional data from either the source or target domain, what would be the most cost-effective strategy for data acquisition? In typical TL scenarios, data acquisition cost may be relatively cheaper in the source domain than in the target domain, although the data acquired in the target domain might be more impactful. A natural question is how one can maximize the "return on investment" for data acquisition given the available budget. Such strategies for optimal experimental design19-24 and active learning25-27 have been actively studied in a Bayesian paradigm that enables objective-based uncertainty quantification via the mean objective cost of uncertainty.28,29 While this is beyond the scope of this current study, it opens up interesting directions for future research.
EXPERIMENTAL PROCEDURES
Resource availability
Lead contact
Dr. Byung-Jun Yoon is the lead contact for this study and can be reached at [email protected].
Materials availability
This study did not generate any physical materials.
Data and code availability
All RNA-seq datasets utilized in this study are publicly available. All original code has been deposited at https://github.com/omarmaddouri/TL_BEE, archived in Zenodo under https://doi.org/10.5281/zenodo.5594476, and is publicly available as of the date of publication. In addition to the proposed importance sampling estimate, we also provide an implementation of the direct evaluation using the predictive posterior density of the target parameters.
Bayesian TL for error estimation
The advantage of the mathematical formulation that underlies the proposed TL-based BEE (and also the original TL Bayesian framework in Karbalayghareh et al.13) is that it articulates a unified Bayesian inference model that assumes a specified prior distribution governing the parameter vector \theta_t and acting like a bridge to help update \pi^*(\theta_t) after observing D_t and D_s. From this standpoint, the derivation of the TL-based BEE depends on determining \pi^*(\theta_t). To determine the TL-based BEE in the context of the presented Bayesian transfer learning framework, we invoke the following theorem.
Theorem 1:13 given the target data D_t and source data D_s, the posterior distribution of the target mean \mu_t^y and the target precision matrix \Lambda_t^y for the classes y \in \{0,1\} has a Gaussian-hypergeometric-function distribution given by

p(\mu_t^y, \Lambda_t^y \mid D_t^y, D_s^y) = A^y \, |\Lambda_t^y|^{1/2} \exp\!\big(-\tfrac{\kappa_{t,n}^y}{2}(\mu_t^y - m_{t,n}^y)^T \Lambda_t^y (\mu_t^y - m_{t,n}^y)\big) \times |\Lambda_t^y|^{(\nu^y + n_t^y - d - 1)/2} \operatorname{etr}\!\big(-\tfrac{1}{2}(T_t^y)^{-1}\Lambda_t^y\big) \, {}_1F_1\!\big(\tfrac{\nu^y + n_s^y}{2}; \tfrac{\nu^y}{2}; \tfrac{1}{2} F^y \Lambda_t^y F^{yT} T_s^y\big),   (Equation 15)

where A^y is a constant of proportionality given by

(A^y)^{-1} = \big(\tfrac{2\pi}{\kappa_{t,n}^y}\big)^{d/2} \, 2^{d(\nu^y + n_t^y)/2} \, \Gamma_d\!\big(\tfrac{\nu^y + n_t^y}{2}\big) \, |T_t^y|^{(\nu^y + n_t^y)/2} \, {}_2F_1\!\big(\tfrac{\nu^y + n_s^y}{2}, \tfrac{\nu^y + n_t^y}{2}; \tfrac{\nu^y}{2}; T_s^y F^y T_t^y F^{yT}\big)   (Equation 16)

and

\kappa_{t,n}^y = \kappa_t^y + n_t^y,
m_{t,n}^y = \tfrac{\kappa_t^y m_t^y + n_t^y \bar{x}_t^y}{\kappa_t^y + n_t^y},
(T_t^y)^{-1} = (M_t^y)^{-1} + F^{yT} C^y F^y + S_t^y + \tfrac{\kappa_t^y n_t^y}{\kappa_t^y + n_t^y}(m_t^y - \bar{x}_t^y)(m_t^y - \bar{x}_t^y)^T,
(T_s^y)^{-1} = (C^y)^{-1} + S_s^y + \tfrac{\kappa_s^y n_s^y}{\kappa_s^y + n_s^y}(m_s^y - \bar{x}_s^y)(m_s^y - \bar{x}_s^y)^T,   (Equation 17)

with C^y and F^y given by

C^y = M_s^y - M_{ts}^{yT} (M_t^y)^{-1} M_{ts}^y,   F^y = (C^y)^{-1} M_{ts}^{yT} (M_t^y)^{-1},

and sample means and covariances for z \in \{s, t\} given by

\bar{x}_z^y = \tfrac{1}{n_z^y} \sum_{i=1}^{n_z^y} x_{z,i}^y,   S_z^y = \sum_{i=1}^{n_z^y} (x_{z,i}^y - \bar{x}_z^y)(x_{z,i}^y - \bar{x}_z^y)^T.

Here {}_1F_1(a; b; X) and {}_2F_1(a, b; c; X) are, respectively, the confluent and Gauss matrix-variate hypergeometric functions reviewed in the supplemental information, section 6. Now, using Theorem 1 and assuming that the class-0 prior probability c, \theta_t^0, and \theta_t^1 are independent prior to observing D_t and D_s, the BEE for TL is given by

\hat{\varepsilon} = E_{\pi^*}[c] \, E_{\pi^*}[\varepsilon_n^0] + (1 - E_{\pi^*}[c]) \, E_{\pi^*}[\varepsilon_n^1],   (Equation 18)

where

E_{\pi^*}[\varepsilon_n^y] = \int_{\Theta_t^y} \varepsilon_n^y(\theta_t^y) \, \pi^*(\theta_t^y) \, d\theta_t^y,   (Equation 19)

with \Theta_t^y being the parameter space that contains all possible values for \theta_t^y.
Computing the TL-based BEE for arbitrary classifiers
Computing the TL-based BEE for an arbitrary classifier \psi_n involves the evaluation of the integral in (Equation 19). Even when we have an analytic expression for the true error of the studied classifier, the closed-form expression for the TL-based BEE cannot be easily derived due to the complex expression of the target posterior in the presence of the matrix-variate hypergeometric functions. With non-linear classifiers, this becomes practically impossible as no closed-form expression exists for the true error itself. The standard way to approximate the true error in this case is to consider the test error. For a specified parameter \theta_t, a large test set is generated from f_{\theta_t}(x, y), and the performance of \psi_n is evaluated on that test set. This requires sampling from \pi^*(\theta_t^y) so that the integral in (Equation 19) can be approximated by a finite sum. Suppose we have N posterior sample points \theta_{t,i}^y \sim \pi^*(\theta_t^y), i = 1, \dots, N. Then the approximation is given by

E_{\pi^*}[\varepsilon_n^y] \approx \tfrac{1}{N} \sum_{i=1}^{N} \varepsilon_n^y(\theta_{t,i}^y).   (Equation 20)
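The plug-in Monte Carlo approximation of Equation 20 is simple once posterior draws are available; the hard part is obtaining the draws, as discussed next. The following minimal Python sketch (not from the paper; the classifier interface and helper names are ours) approximates the class-conditional true error by sampling a large synthetic test set from the Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(0)

def classwise_test_error(psi, mu, Lam, y, n_test=100_000):
    """Approximate eps_n^y(theta) of a classifier psi on class y by the
    test error over a large sample drawn from N(mu, Lam^{-1})."""
    x = rng.multivariate_normal(mu, np.linalg.inv(Lam), size=n_test)
    y_hat = np.asarray([psi(xi) for xi in x])
    return float(np.mean(y_hat != y))

def mc_posterior_error(psi, posterior_samples, y):
    """Plain Monte Carlo approximation of E_{pi*}[eps_n^y] (Equation 20):
    average the class-y test error over posterior draws (mu_i, Lam_i)."""
    errs = [classwise_test_error(psi, mu, Lam, y) for mu, Lam in posterior_samples]
    return float(np.mean(errs))
```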
Because of the generalized confluent and Gauss hypergeometric functions in the expression of \pi^*, sampling directly from the posterior is very laborious, and the computational cost of applying MCMC methods is exorbitant as the execution may take several weeks even on high-performance computing clusters. To address this issue, in the next section we propose an efficient self-normalized importance sampling setup with control variates that provides accurate estimates for the TL-based BEE and significantly reduces the computation time to make the proposed TL-based BEE feasible.
Self-normalized importance sampling with control variates
Importance sampling
Importance sampling (IS) is a variance reduction technique that provides a remedy to sampling from complex distributions.30 To estimate E_{\pi^*}[\varepsilon_n^y], IS makes a multiplicative adjustment to \varepsilon_n^y to compensate for sampling from an alternative importance distribution \Phi^* instead of \pi^*. If \Phi^* is a positive probability density function on \Theta_t^y, we can write

E_{\pi^*}[\varepsilon_n^y] = \int_{\Theta_t^y} \varepsilon_n^y(\theta_t^y) \, \pi^*(\theta_t^y) \, d\theta_t^y = \int_{\Theta_t^y} \varepsilon_n^y(\theta_t^y) \, \tfrac{\pi^*(\theta_t^y)}{\Phi^*(\theta_t^y)} \, \Phi^*(\theta_t^y) \, d\theta_t^y = E_{\Phi^*}\!\Big[\varepsilon_n^y(\theta_t^y) \, \tfrac{\pi^*(\theta_t^y)}{\Phi^*(\theta_t^y)}\Big].   (Equation 21)

Achieving an accurate IS estimation is contingent on selecting an appropriate importance density that is nearly proportional to \varepsilon_n^y(\theta_t^y) \pi^*(\theta_t^y). By analogy to Gordon and co-workers,31,32 a plausible and cogent candidate for \Phi^* emanates as the posterior of the target parameters upon observation of target-only data. Obviously, both distributions are tracking the same model parameters in the target domain upon observation of data. To determine \Phi^*(\theta_t^y) = p(\mu_t^y, \Lambda_t^y \mid D_t^y), we require the following lemma.

Lemma 1:33 if D = \{x_1, \dots, x_n\}, where x_i is a d \times 1 vector and x_i \sim N(\mu, \Lambda^{-1}) for i = 1, \dots, n, and (\mu, \Lambda) has a Gaussian-Wishart prior such that \mu \mid \Lambda \sim N(m, (\kappa\Lambda)^{-1}) and \Lambda \sim W_d(M, \nu), then the posterior of (\mu, \Lambda) upon observing D is also a Gaussian-Wishart distribution such that

\mu \mid \Lambda, D \sim N(m_n, (\kappa_n \Lambda)^{-1}),   \Lambda \mid D \sim W_d(M_n, \nu_n),   (Equation 22)

where

\kappa_n = \kappa + n,   \nu_n = \nu + n,   m_n = \tfrac{\kappa m + n \bar{x}}{\kappa + n},   M_n^{-1} = M^{-1} + S + \tfrac{\kappa n}{\kappa + n}(m - \bar{x})(m - \bar{x})^T,   (Equation 23)

depending on the sample mean and covariance matrix

\bar{x} = \tfrac{1}{n} \sum_{i=1}^{n} x_i,   S = \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T.   (Equation 24)
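The conjugate updates of Lemma 1 are straightforward to implement. A minimal sketch, assuming the scale-matrix Wishart convention used above (the helper name is ours, not the paper's):

```python
import numpy as np

def gaussian_wishart_posterior(X, m, kappa, M, nu):
    """Posterior hyperparameters of a Gaussian-Wishart prior after
    observing the rows of X (n x d), following Equations 23-24."""
    X = np.atleast_2d(X)
    n, _ = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)                 # scatter matrix (Equation 24)
    kappa_n = kappa + n
    nu_n = nu + n
    m_n = (kappa * m + n * xbar) / kappa_n
    diff = (m - xbar).reshape(-1, 1)
    M_n_inv = np.linalg.inv(M) + S + (kappa * n / kappa_n) * (diff @ diff.T)
    return m_n, kappa_n, np.linalg.inv(M_n_inv), nu_n
```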
Using Lemma 1 we now get the expression of the importance density \Phi^*, given by

\Phi^*(\theta_t^y) = p(\mu_t^y, \Lambda_t^y \mid D_t^y) = \big(\tfrac{2\pi}{\kappa_{t,n}^y}\big)^{-d/2} \, 2^{-d(\nu^y + n_t^y)/2} \, \Gamma_d^{-1}\!\big(\tfrac{\nu^y + n_t^y}{2}\big) \, |M_{t,n}^y|^{-(\nu^y + n_t^y)/2} \times |\Lambda_t^y|^{1/2} \exp\!\big(-\tfrac{\kappa_{t,n}^y}{2}(\mu_t^y - m_{t,n}^y)^T \Lambda_t^y (\mu_t^y - m_{t,n}^y)\big) \, |\Lambda_t^y|^{(\nu^y + n_t^y - d - 1)/2} \operatorname{etr}\!\big(-\tfrac{1}{2}(M_{t,n}^y)^{-1}\Lambda_t^y\big),   (Equation 25)

where

\kappa_{t,n}^y = \kappa_t^y + n_t^y,   m_{t,n}^y = \tfrac{\kappa_t^y m_t^y + n_t^y \bar{x}_t^y}{\kappa_t^y + n_t^y},   (M_{t,n}^y)^{-1} = (M_t^y)^{-1} + S_t^y + \tfrac{\kappa_t^y n_t^y}{\kappa_t^y + n_t^y}(m_t^y - \bar{x}_t^y)(m_t^y - \bar{x}_t^y)^T,   (Equation 26)

with sample mean and covariance given by

\bar{x}_t^y = \tfrac{1}{n_t^y} \sum_{i=1}^{n_t^y} x_{t,i}^y,   S_t^y = \sum_{i=1}^{n_t^y} (x_{t,i}^y - \bar{x}_t^y)(x_{t,i}^y - \bar{x}_t^y)^T.
After simplifications, the expression of the TL-based BEE in (Equation 21) takes the form

E_{\pi^*}[\varepsilon_n^y] = E_{\Phi^*}[\varepsilon_n^y(\theta_t^y) \, L(\theta_t^y)],   (Equation 27)

where \theta_t^y = (\mu_t^y, \Lambda_t^y) and L(\theta_t^y) = \pi^*(\theta_t^y)/\Phi^*(\theta_t^y) is the likelihood ratio (Equation 28). Although the likelihood ratio has a simplified expression, computing the hypergeometric functions involves the computation of series of zonal polynomials, which is computationally expensive and not scalable to high dimensions. To mitigate this limitation, we use the Laplace approximations of these functions (see Figure S1 and supplemental information, section 6). To rectify possible disproportionalities in likelihood ratios due to approximations, we consider the self-normalized IS estimate given by

\hat{E}_{\Phi^*}[\varepsilon_n^y] \approx \frac{\sum_{i=1}^{N} \varepsilon_n^y(\theta_{t,i}^y) \, L(\theta_{t,i}^y)}{\sum_{i=1}^{N} L(\theta_{t,i}^y)},   (Equation 29)

with \theta_{t,i}^y \sim \Phi^*(\theta_t^y), i = 1, \dots, N.
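A hedged sketch of the self-normalized estimate of Equation 29 follows. Because the weights are normalized, unnormalized likelihood ratios (e.g., with Laplace-approximated hypergeometric factors) suffice, and working in log space avoids overflow; the function name is illustrative:

```python
import numpy as np

def snis_estimate(eps, log_L):
    """Self-normalized importance sampling estimate (Equation 29).
    eps[i]  : test error eps_n^y(theta_i) at the i-th draw from Phi*
    log_L[i]: log likelihood ratio log pi*(theta_i) - log Phi*(theta_i),
              possibly known only up to an additive constant.
    Log-weights are shifted before exponentiation for numerical stability."""
    eps, log_L = np.asarray(eps), np.asarray(log_L)
    w = np.exp(log_L - np.max(log_L))
    return float(np.sum(w * eps) / np.sum(w))
```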
Control variates
For more stable and efficient estimates, we further combine IS with control variates. Using control variates in conjunction with IS is a variance reduction technique, in particular when a significant portion of a model for estimating the expectation can be solved explicitly. In our case, a useful control variates function (CVF) V(\theta_t^y) satisfies
E_{\Phi^*}[V(\theta_t^y)] = \int_{\Theta_t^y} V(\theta_t^y) \, \Phi^*(\theta_t^y) \, d\theta_t^y = \delta,   (Equation 30)

where \delta is a constant. Under such circumstances, a more stable estimate for the TL-based BEE can be derived as

\tilde{E}_{\Phi^*}[\varepsilon_n^y] \approx \frac{\sum_{i=1}^{N} \varepsilon_n^y(\theta_{t,i}^y) \, L(\theta_{t,i}^y)}{\sum_{i=1}^{N} L(\theta_{t,i}^y)} - \beta\Big(\tfrac{1}{N}\sum_{i=1}^{N} V(\theta_{t,i}^y) - \delta\Big),   (Equation 31)

where \theta_{t,i}^y \sim \Phi^*(\theta_t^y), i = 1, \dots, N, and \beta is a weighting coefficient tuned to reduce the variance of the estimate. The optimal value of \beta is given by

\beta_{opt} = \frac{\operatorname{cov}[z_n^y(\theta_t^y), V(\theta_t^y)]}{\operatorname{var}[V(\theta_t^y)]},   (Equation 32)

with

z_n^y(\theta_t^y) = \frac{\varepsilon_n^y(\theta_t^y) \, L(\theta_t^y)}{\tfrac{1}{N}\sum_{i=1}^{N} L(\theta_{t,i}^y)},   (Equation 33)

and \operatorname{cov}[\cdot,\cdot] and \operatorname{var}[\cdot] denote covariance and variance, respectively (see supplemental information, section 7.3, for more details). In practice, it is not likely that we know \beta_{opt} beforehand, but it is estimated from the Monte Carlo sample. It turns out that \tilde{E}_{\Phi^*} has lower variance than \hat{E}_{\Phi^*} by a factor of (1 - \operatorname{corr}^2[z_n^y(\theta_t^y), V(\theta_t^y)]), where \operatorname{corr}[a, b] denotes the correlation coefficient between a and b, given by

\operatorname{corr}[a, b] = \frac{\operatorname{cov}[a, b]}{\sqrt{\operatorname{var}[a]}\sqrt{\operatorname{var}[b]}}.   (Equation 34)
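Assuming the standard control-variate form reconstructed in Equation 31, the correction can be bolted onto the self-normalized estimate as below, with beta estimated from the same Monte Carlo sample as the text suggests (a sketch with illustrative names, not the paper's implementation):

```python
import numpy as np

def snis_with_control_variate(eps, log_L, V, delta):
    """Control-variate corrected self-normalized IS (Equations 31-33).
    V[i] = V(theta_i) is the CVF at each draw from Phi*, with known mean
    delta = E_{Phi*}[V]; beta approximates beta_opt of Equation 32."""
    eps, log_L, V = map(np.asarray, (eps, log_L, V))
    w = np.exp(log_L - np.max(log_L))
    z = eps * w / np.mean(w)                    # z_n^y(theta_i), Equation 33
    beta = np.cov(z, V)[0, 1] / np.var(V)       # sample estimate of beta_opt
    snis = np.sum(w * eps) / np.sum(w)
    return float(snis - beta * (np.mean(V) - delta))
```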
To select an appropriate CVF we need to consider two criteria. First, its expectation with respect to \Phi^* should have an exact evaluation. Second, it has to be correlated with the estimated error. A favorable candidate is the analytic true error of linear classifiers. In this study, we consider a CVF given by the true error of an LDA classifier defined by g_{N_t}(x) = a_{N_t}^T x + b_{N_t}, where a_{N_t} = \hat{\Sigma}_t^{-1}(\bar{x}_t^1 - \bar{x}_t^0), b_{N_t} = -\tfrac{1}{2} a_{N_t}^T (\bar{x}_t^1 + \bar{x}_t^0) + \ln\tfrac{n_t^1}{n_t^0}, and the pooled covariance \hat{\Sigma}_t is given by

\hat{\Sigma}_t = \frac{(n_t^0 - 1) S_t^0 + (n_t^1 - 1) S_t^1}{N_t - 2}.   (Equation 35)

\bar{x}_t^y and S_t^y are the empirical estimates utilized in (Equation 26). Thus, the CVF is given by

V(\mu_t^y, \Lambda_t^y) = \Phi\!\Big(\frac{(-1)^y g_{N_t}(\mu_t^y)}{\sqrt{a_{N_t}^T (\Lambda_t^y)^{-1} a_{N_t}}}\Big),   (Equation 36)

with \Phi denoting the standard normal Gaussian cumulative distribution function. Now it remains only to determine E_{\Phi^*}[V(\mu_t^y, \Lambda_t^y)] in closed form to fully define the estimation setup. We can show after simplifications and using results from ref. 11 that

E_{\Phi^*}[V(\mu_t^y, \Lambda_t^y)] = \frac{1}{2} + \frac{\operatorname{sgn}(A)}{2} \, I\!\Big(\frac{A^2}{A^2 + a_{N_t}^T (M_{t,n}^y)^{-1} a_{N_t}}; \frac{1}{2}; \frac{\nu^y + n_t^y - d + 1}{2}\Big),   (Equation 37)

where \operatorname{sgn}(\cdot) is the sign function,

A = (-1)^y g_{N_t}(m_{t,n}^y) \sqrt{\frac{\kappa_{t,n}^y}{1 + \kappa_{t,n}^y}},   (Equation 38)

and I(\cdot; \cdot, \cdot) denotes the regularized incomplete beta function given by

I(x; a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} \int_0^x t^{a-1}(1 - t)^{b-1} \, dt,   (Equation 39)

with \Gamma(\cdot) being the regular univariate gamma function. Details for simplifying E_{\Phi^*}[V(\mu_t^y, \Lambda_t^y)] are covered in supplemental information, section 7.4. The complete specification of the CVF concludes our IS setup. We enumerate some advantages of the proposed setup over direct sampling methods. First, the importance density \Phi^* is much simpler than the nominal density \pi^*, which involves matrix-variate hypergeometric functions. Second, our setup successfully combines two variance reduction techniques that enable accurate estimation. Last, and most importantly, the independence of the generated Monte Carlo samples w.r.t. source data permits the reuse of the sampled parameters with various source datasets for fixed models. This reusability significantly reduces the computational cost of sampling from \Phi^* and makes the utilization of advanced MCMC methods amenable, as the whole process can be accelerated by a factor of 10-20, which also grows with the dimensionality and the number of used source datasets (see supplemental information, sections 7.5 and 7.6, for more details). For efficient sampling from \Phi^*, we use Hamiltonian Monte Carlo (HMC), proven to have superior performance to standard MCMC samplers.34 For this purpose, we utilize the STAN software, which offers a full Bayesian statistical inference framework with HMC.34
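Because \Phi^* is Gaussian-Wishart, it also admits direct sampling (the paper samples it with HMC via STAN, which generalizes beyond this conjugate case). The sketch below, with illustrative helper names, additionally evaluates the CVF of Equation 36 and its closed-form mean of Equations 37-39 using SciPy's regularized incomplete beta function; it assumes SciPy's Wishart scale-matrix parametrization matches the convention used above:

```python
import numpy as np
from scipy.stats import norm, wishart
from scipy.special import betainc

def sample_phi_star(m_n, kappa_n, M_n, nu_n, rng):
    """Draw (mu, Lambda) from the Gaussian-Wishart density Phi* (Equation 25):
    Lambda ~ W_d(M_n, nu_n), then mu | Lambda ~ N(m_n, (kappa_n Lambda)^{-1})."""
    Lam = wishart.rvs(df=nu_n, scale=M_n, random_state=rng)
    mu = rng.multivariate_normal(m_n, np.linalg.inv(kappa_n * Lam))
    return mu, Lam

def cvf_value(mu, Lam, a, b, y):
    """CVF of Equation 36: class-y true error of the fixed linear
    discriminant g(x) = a.x + b under the model N(mu, Lambda^{-1})."""
    g = a @ mu + b
    sd = np.sqrt(a @ np.linalg.solve(Lam, a))
    return norm.cdf(((-1) ** y) * g / sd)

def cvf_mean(a, b, m_n, kappa_n, M_n, nu, n_t, d, y):
    """Closed-form E_{Phi*}[V] of Equations 37-38, using the regularized
    incomplete beta function I(x; a, b) of Equation 39."""
    A = ((-1) ** y) * (a @ m_n + b) * np.sqrt(kappa_n / (1.0 + kappa_n))
    quad = a @ np.linalg.solve(M_n, a)            # a^T (M_{t,n}^y)^{-1} a
    x = A ** 2 / (A ** 2 + quad)
    return 0.5 + 0.5 * np.sign(A) * betainc(0.5, (nu + n_t - d + 1) / 2.0, x)
```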
Classifier design
For a comprehensive evaluation of our TL-based error estimator, we design and perform a set of experiments. The proposed TL-based estimator is applied to a collection of classifiers with different levels of learning capacity and tested under various scenarios. To separate error estimation from classifier design, we start by analyzing the performance of the TL-based BEE estimator for fixed classifiers that do not depend on training data. This setup distinctly reveals the major characteristics of the TL-based BEE, excluding any confounding factors that may stem from classifier design and the performance of the resulting classifier.
Next, we also conduct a comparative study of the TL-based BEE performance with respect to other widely used error estimators, which include the resubstitution, CV, LOO, and 0.632-bootstrap estimators. As these popular data-driven estimators involve classifier design on the training data, we will also consider a TL-based classifier designed on target and source data that operates in the target domain for comparison. For this, we employ the OBTL classifier introduced in Karbalayghareh et al.,13 which shares the same Bayesian framework on which our TL-based BEE is developed. In what follows, we recall the definition of each classifier considered in our evaluations and also present the details of the evaluation experiments performed in this study.
In the first set of experiments, we employ a fixed quadratic classifier, assuming we know beforehand the true target parameters. For normally distributed data, this quadratic classifier also corresponds to the Bayes classifier that is optimal for the given feature-label distributions. Using QDA, we define \psi_{QDA}(x) = x^T A x + b^T x + c, where

A = -\tfrac{1}{2}(\Lambda_t^1 - \Lambda_t^0),   b = \Lambda_t^1 \mu_t^1 - \Lambda_t^0 \mu_t^0,   c = -\tfrac{1}{2}\big(\mu_t^{1T} \Lambda_t^1 \mu_t^1 - \mu_t^{0T} \Lambda_t^0 \mu_t^0\big) - \tfrac{1}{2}\ln\frac{|\Lambda_t^0|}{|\Lambda_t^1|}.   (Equation 40)

The error estimation problem turns out to be an estimation of the Bayes error, which coincides here with the true error of the designed QDA. Obviously, this classifier is independent from any observed sample as it is fixed assuming known true model parameters. Without loss of generality, we apply the TL-based BEE using labeled observations from a compound dataset compiled from target and source domains.
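A minimal sketch of the fixed quadratic discriminant of Equation 40, assuming the usual convention that class 1 is returned when the discriminant is positive (the threshold rule is our assumption, not stated in the text):

```python
import numpy as np

def qda_classifier(mu0, Lam0, mu1, Lam1):
    """Fixed quadratic classifier of Equation 40, built from the true
    target parameters; returns a function x -> predicted label."""
    A = -0.5 * (Lam1 - Lam0)
    b = Lam1 @ mu1 - Lam0 @ mu0
    c = (-0.5 * (mu1 @ Lam1 @ mu1 - mu0 @ Lam0 @ mu0)
         - 0.5 * np.log(np.linalg.det(Lam0) / np.linalg.det(Lam1)))
    return lambda x: int(x @ A @ x + b @ x + c > 0)  # class 1 if positive
```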
In the second set of experiments, we investigate the behavior of the TL-based BEE within the class of sub-optimal classifiers. To this end, we consider a linear classifier derived through LDA and we define \psi_{LDA}(x) = a^T x + b, where a = \bar{\Sigma}_t^{-1}(\mu_t^1 - \mu_t^0), b = -\tfrac{1}{2} a^T (\mu_t^1 + \mu_t^0), and the average covariance \bar{\Sigma}_t is given by

\bar{\Sigma}_t = \frac{(\Lambda_t^0)^{-1} + (\Lambda_t^1)^{-1}}{2}.   (Equation 41)

Our goal is then to approximate the true error of this sub-optimal classifier using TL.
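The corresponding fixed linear classifier of Equation 41, again as an illustrative sketch built from the true parameters:

```python
import numpy as np

def lda_classifier(mu0, Lam0, mu1, Lam1):
    """Fixed linear classifier of Equation 41, with the average covariance
    Sigma_t = (Lam0^{-1} + Lam1^{-1}) / 2; returns x -> predicted label."""
    Sigma = 0.5 * (np.linalg.inv(Lam0) + np.linalg.inv(Lam1))
    a = np.linalg.solve(Sigma, mu1 - mu0)
    b = -0.5 * a @ (mu1 + mu0)
    return lambda x: int(a @ x + b > 0)
```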
Next, we evaluate the performance of the TL-based BEE for the OBTL classifier, which can take advantage of both source and target domain data. The OBTL classifier is defined by

\psi_{OBTL}(x) = \arg\max_{y \in \{0,1\}} O_{OBTL}(x \mid y),   (Equation 42)

where the objective function O_{OBTL}(x \mid y) denotes the effective class-conditional density p(x \mid y) given by the following theorem.

Theorem 2:13 the effective class-conditional density, denoted by p(x \mid y) = O_{OBTL}(x \mid y), in the target domain is given by

O_{OBTL}(x \mid y) = \pi^{-d/2} \Big(\frac{\kappa_{t,n}^y}{\kappa_x^y}\Big)^{d/2} \frac{\Gamma_d\big(\tfrac{\nu^y + n_t^y + 1}{2}\big)}{\Gamma_d\big(\tfrac{\nu^y + n_t^y}{2}\big)} \frac{|T_x^y|^{(\nu^y + n_t^y + 1)/2}}{|T_t^y|^{(\nu^y + n_t^y)/2}} \times \frac{{}_2F_1\big(\tfrac{\nu^y + n_s^y}{2}, \tfrac{\nu^y + n_t^y + 1}{2}; \tfrac{\nu^y}{2}; T_s^y F^y T_x^y F^{yT}\big)}{{}_2F_1\big(\tfrac{\nu^y + n_s^y}{2}, \tfrac{\nu^y + n_t^y}{2}; \tfrac{\nu^y}{2}; T_s^y F^y T_t^y F^{yT}\big)},   (Equation 43)

where

\kappa_x^y = \kappa_{t,n}^y + 1 = \kappa_t^y + n_t^y + 1,   (T_x^y)^{-1} = (T_t^y)^{-1} + \frac{\kappa_{t,n}^y}{\kappa_{t,n}^y + 1}(m_{t,n}^y - x)(m_{t,n}^y - x)^T.   (Equation 44)
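A schematic sketch of the OBTL decision rule of Equations 42-44 follows. Only the closed-form factors are implemented; the ratio of matrix-variate {}_2F_1 terms in Equation 43 must be supplied externally (e.g., from the Laplace approximation mentioned earlier), since its exact evaluation requires zonal-polynomial series. Function and argument names are ours:

```python
import numpy as np
from scipy.special import multigammaln

def log_obtl_score(x, m_n, kappa_n, T_t, nu, n_t, log_2F1_ratio=0.0):
    """Schematic log effective class-conditional density (Equation 43),
    with the T_x update of Equation 44; log_2F1_ratio stands in for the
    log-ratio of the two matrix-variate 2F1 factors."""
    d = x.size
    kappa_x = kappa_n + 1.0
    diff = (m_n - x).reshape(-1, 1)
    T_x_inv = np.linalg.inv(T_t) + (kappa_n / kappa_x) * (diff @ diff.T)
    logdet_T_x = -np.linalg.slogdet(T_x_inv)[1]
    logdet_T_t = np.linalg.slogdet(T_t)[1]
    return (-0.5 * d * np.log(np.pi)
            + 0.5 * d * np.log(kappa_n / kappa_x)
            + multigammaln((nu + n_t + 1) / 2.0, d)
            - multigammaln((nu + n_t) / 2.0, d)
            + 0.5 * (nu + n_t + 1) * logdet_T_x
            - 0.5 * (nu + n_t) * logdet_T_t
            + log_2F1_ratio)

def obtl_predict(x, class_params):
    """Equation 42: pick the class with the largest effective density."""
    scores = [log_obtl_score(x, **class_params[y]) for y in (0, 1)]
    return int(np.argmax(scores))
```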
Figure 8 provides a combined illustration of the simulation setup for all three classifiers. For rigorous evaluation of the performance of the proposed TL-based BEE, we primarily focus our experiments on assessing the impact of using different types and amounts of source data. This is enabled by the joint prior imposed over the model parameters and controlled by the relatedness coefficient |a|, which dictates the extent of interaction between the features in the two domains. For this purpose, we repeatedly conducted experiments following the flow chart in Figure 8 with different relatedness values (|a| = 0.1, 0.3, 0.5, 0.7, 0.9, 0.95), where |a| = 0.1 corresponds to the lowest relatedness between the two domains and |a| = 0.95 reflects the highest relatedness within the range of studied values.
Simulation setup
In the first set of experiments, we start by drawing a joint sample (\Lambda_t^y, \Lambda_s^y) for each class y \in \{0, 1\}, as described previously. Next, we iterate over the values of the hyperparameter w to control m_t(w) through a dichotomic search to get a desired value of the Bayes error. This is achieved by drawing a sample \mu_t^y \sim N(m_t(w), (\kappa_t^y \Lambda_t^y)^{-1}) and then generating a test set based on the joint sample (\mu_t^y, \Lambda_t^y). Using this test set, we determine the true error of the optimal QDA derived from (\mu_t^y, \Lambda_t^y). If the desired Bayes error (true error of the designed QDA) is attained, the iteration stops; otherwise we update w and reiterate. In our experiments, unless otherwise specified, we set the desired Bayes error to 0.2 to mimic a moderate level of classification complexity. This step is indeed crucial as it maintains the same level of complexity across the experiments and guarantees a fair comparison across different levels of relatedness. We note that this procedure is valid for general covariances as it acts only on updating the value of the mean parameter without altering the structure of the covariances nor the random mean vectors. Obviously, this approach to specifying the Bayes error maintains the Bayesian TL framework intact. However, it is not guaranteed to find values of target parameters that correspond to the desired Bayes error, especially for high dimensions and complex classification (large Bayes error), as we discuss in Performance on synthetic datasets. Once the problem complexity is set and the classifier is fixed, we generate N_d = 10,000 training datasets that we use to evaluate the MSE of the TL-based BEE as depicted in Figure 8. To estimate the TL-based BEE, we employ the IS setup described previously and we draw 1,000 MC samples from the importance density using the HMC sampler.
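The dichotomic search over w can be sketched as below. The two callbacks are placeholders for the sampling and QDA-test-error steps described above, and we assume the Bayes error decreases monotonically as w separates the class means (an assumption of this sketch, not stated by the text):

```python
def tune_w_for_bayes_error(sample_target_model, bayes_error_of, target=0.2,
                           w_lo=0.0, w_hi=10.0, tol=5e-3, max_iter=50):
    """Bisection over the hyperparameter w so that the Bayes error (true
    error of the QDA built from the drawn target parameters) hits the
    desired level. `sample_target_model(w)` draws (mu_t^y, Lambda_t^y) with
    mean hyperparameter m_t(w); `bayes_error_of(params)` evaluates the QDA
    error on a large synthetic test set. Both callbacks are hypothetical."""
    w = 0.5 * (w_lo + w_hi)
    for _ in range(max_iter):
        w = 0.5 * (w_lo + w_hi)
        err = bayes_error_of(sample_target_model(w))
        if abs(err - target) < tol:
            break
        if err > target:      # classes too close: increase separation
            w_lo = w
        else:                 # classes too far apart: decrease separation
            w_hi = w
    return w
```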
In the second set of experiments, we follow a similar setup using an LDA classifier designed based on the true model parameters. As before, we employ QDA to determine the Bayes error to maintain the same complexity level across different experiments. As in the first set of experiments, we use the TL-based BEE to estimate the true error of the designed LDA classifier.
In the last set of experiments on synthetic datasets, we conduct a comparative analysis study using an OBTL classifier designed using training datasets generated from the model parameters specified by the Bayes error. The error estimation task, in this scenario, aims at approximating the true error of the designed OBTL classifier determined using a large test set generated from the true feature-label distributions. As illustrated in Figure 8, the QDA and LDA classifiers are fixed and derived from the true model parameters, while the OBTL classifier is designed based on training datasets collected from the underlying feature-label distributions that correspond to the specified Bayes error. In all simulations, the designed classifiers are fixed given the observed samples and the TL-based BEE estimator is safely applied. Finally, regarding synthetic datasets, we note that the flow chart in Figure 8 is valid for all classifiers (QDA, LDA, and OBTL) and the notation \psi designates the classifier of interest in the corresponding set of experiments. For instance, in the second set of experiments, \psi refers to \psi_{LDA}.
In addition to this in-depth analysis of the performance, behavior, and characteristics of our proposed TL-based BEE based on synthetic datasets, we also performed additional validation based on real-world biological datasets. Using RNA-seq datasets syn2759792 and syn4590909, taken from different brain regions for studying brain disorders, we train a QDA classifier using the target data from the RNA-seq dataset syn2759792, and we leverage the source data from syn4590909 to evaluate the performance of the proposed TL-based BEE.
SUPPLEMENTAL INFORMATION
Supplemental information can be found online at https://doi.org/10.1016/j.patter.2021.100428.
For z \in \{s, t\}, \mu_z^y is a (d \times 1) mean vector in domain z for class y, and \Lambda_z^y is a (d \times d) precision matrix (inverse of covariance) in domain z for label y. An augmented feature vector x^y \sim N(\mu^y, (\Lambda^y)^{-1}), y \in \{0, 1\}, collects a sample point from the two related source and target domains. The joint prior factorizes as

p(\mu_s^y, \mu_t^y, \Lambda_s^y, \Lambda_t^y) = p(\mu_s^y, \mu_t^y \mid \Lambda_s^y, \Lambda_t^y) \, p(\Lambda_s^y, \Lambda_t^y).   (Equation 11)

M_t^y and M_s^y are positive definite scale matrices and M_{ts}^y denotes the off-diagonal component that models the interaction between source and target domains. Given \Lambda_z^y, and assuming normally distributed mean vectors, we obtain the conditional mean priors for z \in \{s, t\} and y \in \{0, 1\} (Equation 14), where w is an adjustable scalar used to control the Bayes error in the target domain, and 0_d and 1_d are d \times 1 all-zero and all-one vectors, respectively. For the scale matrices of the Wishart distributions we set M_t^y = k_t I_d, M_s^y = k_s I_d, and M_{ts}^y = k_{ts} I_d, where I_d is the identity matrix of rank d.
Figure 1. Effect of source data on the performance of the TL-based BEE for quadratic classifiers. MSE deviation from true error for Gaussian distributions with respect to source sample size. The Bayes error is fixed at 0.2 in all subfigures. For direct evaluation and higher dimensions, see Figures S2 and S3.

Covering the same range of values for d = 3 and d = 5 was more challenging, and our implemented heuristic did not converge for high values of Bayes error, as setting m_t^0 = m_t^1 did not help in increasing the Bayes error. However, we were able to vary the Bayes error for d = 3 within the range [0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45], and for d = 5 within [0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4], sufficient for observing the trends.

Figure 2. Effect of target data on the performance of the TL-based BEE for quadratic classifiers. MSE deviation from true error for Gaussian distributions with respect to target sample size. The Bayes error is fixed at 0.2 in all subfigures.

Figure 3. Effect of the classification complexity on the performance of the TL-based BEE for quadratic classifiers. MSE deviation from QDA true error with respect to Bayes error. Source sample size was set to n_s = 200 in all subfigures.

Figure 4. Effect of the arrangement of the class means in the source and target domains on the performance of the TL-based BEE. MSE deviation from true error with respect to source sample size. The source class means are flipped with respect to the target classes (m_s^y = m_t^{1-y}, for y \in \{0, 1\}). In the first row, the source datasets are correctly considered as source samples. In the second row, the source datasets are intentionally considered as target samples. The Bayes error is fixed at 0.2 and d = 5.

Figure 5. Effect of the classification complexity on the performance of the TL-based BEE for linear classifiers. MSE deviation from LDA true error for Gaussian distributions with respect to Bayes error. Source sample size is set to n_s = 200 in all subfigures. See also Figures S4 and S5.

Figure 6. Comparative analysis of the performance of the TL-based BEE with respect to standard error estimators. MSE deviation from true error with respect to target data size. The proposed TL-based BEE is compared with other widely used estimators. In all subfigures, the Bayes error is fixed at 0.2.

Figure 7. Performance of the TL-based BEE on real-world RNA-seq datasets. MSE deviation from QDA true error for normally distributed brain gene expression data with respect to |a| and n_s. (A) Gene features from the FC brain region demonstrate high relatedness with those from the DLPFC area (|a| = 0.99). (B) Utilizing the data from the source domain significantly reduces the MSE of the TL-based BEE in the target domain.
Figure 8. Simulation diagram using synthetic data. Flow chart illustrating the simulation setup based on synthetic datasets.
Table 1. Independent schizophrenia RNA-seq datasets sampled from two different brain tissues

Disease         Brain region     No. of samples (case)   No. of samples (control)   Dataset
Schizophrenia   frontal cortex   53                      53                         syn4590909 (ref. 14)
Schizophrenia   DLPFC            262                     293                        syn2759792 (ref. 16)
Total                            315                     346
REFERENCES
1. Dougherty, E.R., and Braga-Neto, U.M. (2006). Epistemology of computational biology: mathematical models and experimental prediction as the basis of their validity. Biol. Syst. 14, 65-90. https://doi.org/10.1142/S0218339006001726.
2. Diamandis, E.P. (2010). Cancer biomarkers: can we turn recent failures into success? J. Natl. Cancer Inst. 102, 1462-1467. https://doi.org/10.1093/jnci/djq306.
3. Dalton, L.A., and Dougherty, E.R. (2011). Minimum mean-square error estimation for classification error-Part I: definition and the Bayesian MMSE error estimator for discrete classification. IEEE Trans. Signal Process. 59, 115-129. https://doi.org/10.1109/TSP.2010.2084572.
4. Braga-Neto, U.M., and Dougherty, E.R. (2004). Is cross-validation valid for small-sample microarray classification? Bioinformatics 20, 374-380. https://doi.org/10.1093/bioinformatics/btg419.
5. Song, P.P., Xia, J.F., Inagaki, Y., Hasegawa, K., Sakamoto, Y., Kokudo, N., and Tang, W. (2016). Controversies regarding and perspectives on clinical utility of biomarkers in hepatocellular carcinoma. World J. Gastroenterol. 22, 262-274. https://doi.org/10.3748/wjg.v22.i1.262.
6. Schlimmer, J.C., and Fisher, D. (1986). A case study of incremental concept induction. In Proceedings of the Fifth AAAI National Conference on Artificial Intelligence, pp. 496-501.
7. Goodfellow, I.J., Mirza, M., Xiao, D., Courville, A., and Bengio, Y. (2014). An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv, 1312.6211.
8. Farquhar, S., and Gal, Y. (2018). A unifying Bayesian view of continual learning. arXiv, 1902.06494.
9. Farquhar, S., and Gal, Y. (2018). Towards robust evaluations of continual learning. arXiv, 1805.09733.
10. Gossmann, A., Pezeshk, A., Wang, Y.-P., and Sahiner, B. (2021). Test data reuse for the evaluation of continuously evolving classification algorithms using the area under the receiver operating characteristic curve. SIAM J. Mathematics Data Sci. 3, 692-714. https://doi.org/10.1137/20M1333110.
11. Dalton, L.A., and Dougherty, E.R. (2011). Bayesian minimum mean-square error estimation for classification error-Part II: linear classification of Gaussian models. IEEE Trans. Signal Process. 59, 130-144. https://doi.org/10.1109/TSP.2010.2084573.
12. Pan, S.J., and Yang, Q. (2010). A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345-1359. https://doi.org/10.1109/TKDE.2009.191.
13. Karbalayghareh, A., Qian, X., and Dougherty, E.R. (2018). Optimal Bayesian transfer learning. IEEE Trans. Signal Process. 66, 3724-3739. https://doi.org/10.1109/TSP.2018.2839583.
14. Gandal, M.J., Haney, J.R., Parikshak, N.N., Leppa, V., Ramaswami, G., Hartl, C., Schork, A.J., Appadurai, V., Buil, A., Werge, T.M., et al. (2018). Shared molecular neuropathology across major psychiatric disorders parallels polygenic overlap. Science 359, 693-697. https://doi.org/10.1126/science.aad6469.
15. Hoffman, J., Rodner, E., Darrell, T., Donahue, J., and Saenko, K. (2013). Efficient learning of domain-invariant image representations. arXiv, 1301.3224.
16. Fromer, M., Roussos, P., Sieberts, S.K., Johnson, J.S., Kavanagh, D.H., Perumal, T.M., Ruderfer, D.M., Oh, E.C., Topol, A., Shah, H.R., Klei, L.L., et al. (2016). Gene expression elucidates functional impact of polygenic risk for schizophrenia. Nat. Neurosci. 19, 1442-1453. https://doi.org/10.1038/nn.4399.
17. Boluki, S., Esfahani, M.S., Qian, X., and Dougherty, E.R. (2017). Constructing pathway-based priors within a Gaussian mixture model for Bayesian regression and classification. IEEE/ACM Trans. Comput. Biol. Bioinform. 16, 524-537. https://doi.org/10.1109/TCBB.2017.2778715.
18. Boluki, S., Esfahani, M.S., Qian, X., and Dougherty, E.R. (2017). Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors. BMC Bioinformatics 18, 552. https://doi.org/10.1186/s12859-017-1893-4.
19. Dehghannasiri, R., Yoon, B.-J., and Dougherty, E.R. (2014). Optimal experimental design for gene regulatory networks in the presence of uncertainty. IEEE/ACM Trans. Comput. Biol. Bioinformatics 12, 938-950. https://doi.org/10.1109/TCBB.2014.2377733.
20. Broumand, A., Esfahani, M.S., Yoon, B.-J., and Dougherty, E.R. (2015). Discrete optimal Bayesian classification with error-conditioned sequential sampling. Pattern Recogn. 48, 3766-3782. https://doi.org/10.1016/j.patcog.2015.03.023.
21. Dehghannasiri, R., Yoon, B.-J., and Dougherty, E.R. (2015). Efficient experimental design for uncertainty reduction in gene regulatory networks. BMC Bioinformatics 16, 1-18. https://doi.org/10.1186/1471-2105-16-S13-S2.
22. Zhao, G., Qian, X., Yoon, B.-J., Alexander, F.J., and Dougherty, E.R. (2020). Model-based robust filtering and experimental design for stochastic differential equation systems. IEEE Trans. Signal Process. 68, 3849-3859. https://doi.org/10.1109/TSP.2020.3001384.
23. Hong, Y., Kwon, B., and Yoon, B.-J. (2021). Optimal experimental design for uncertain systems based on coupled differential equations. IEEE Access 9, 53804-53810. https://doi.org/10.1109/ACCESS.2021.3071038.
24. Woo, H.-M., Hong, Y., Kwon, B., and Yoon, B.-J. (2021). Accelerating optimal experimental design for robust synchronization of uncertain Kuramoto oscillator model using machine learning. IEEE Trans. Signal Process. https://doi.org/10.1109/TSP.2021.3130967.
25. Zhao, G., Dougherty, E.R., Yoon, B.-J., Alexander, F.J., and Qian, X. (2020). Uncertainty-aware active learning for optimal Bayesian classifier. In International Conference on Learning Representations (ICLR).
26. Zhao, G., Dougherty, E.R., Yoon, B.-J., Alexander, F.J., and Qian, X. (2021). Bayesian active learning by soft mean objective cost of uncertainty. International Conference on Artificial Intelligence and Statistics (AISTATS) 130, 3970-3978.
27. Zhao, G., Dougherty, E.R., Yoon, B.-J., Alexander, F.J., and Qian, X. (2021). Efficient active learning for Gaussian process classification by error reduction. 35th Conference on Neural Information Processing Systems (NeurIPS).
28. Yoon, B.-J., Qian, X., and Dougherty, E.R. (2013). Quantifying the objective cost of uncertainty in complex dynamical systems. IEEE Trans. Signal Process. 61, 2256-2266. https://doi.org/10.1109/TSP.2013.2251336.
29. Yoon, B.-J., Qian, X., and Dougherty, E.R. (2021). Quantifying the multi-objective cost of uncertainty. IEEE Access 9, 80351-80359. https://doi.org/10.1109/ACCESS.2021.3085486.
30. Robert, C., and Casella, G. (2004). Monte Carlo Statistical Methods (Springer).
31. Gordon, N., Salmond, J., and Smith, A. (1993). A novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. 140, 107-113. https://doi.org/10.1049/ip-f-2.1993.0015.
32. Ackerberg, D.A. (2009). A new use of importance sampling to reduce computational burden in simulation estimation. Quant Mark Econ. 7, 343-376. https://doi.org/10.1007/s11129-009-9074-z.
33. Muirhead, R.J. (2009). Aspects of Multivariate Statistical Theory (Wiley).
34. Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. (2017). Stan: a probabilistic programming language. J. Stat. Softw. 76, 1-32. https://doi.org/10.18637/jss.v076.i01.
Negative oxygen vacancies in HfO2 as charge traps in high-k stacks

J. L. Gavartin, D. Muñoz Ramo, and A. L. Shluger
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK

G. Bersuker and B. H. Lee
SEMATECH, 2706 Metropolis Dr, Austin, TX 78741, USA

(arXiv:cond-mat/0605593, 24 May 2006; doi: 10.1063/1.2236466)

We calculated the optical excitation and thermal ionization energies of oxygen vacancies in m-HfO2 using atomic basis sets, a non-local density functional, and a periodic supercell. The thermal ionization energies of the negatively charged V^- and V^2- centres are consistent with values obtained by electrical measurements. The results suggest that negative oxygen vacancies are the likely candidates for intrinsic electron traps in hafnium-based gate stack devices.
Hafnium-based oxides are currently considered as a practical solution satisfying stringent criteria for the integration of high-k materials in the devices of future technology nodes. However, high-k transistor performance is often affected by a high and unstable threshold potential,1 V_t, and low carrier mobility.2 These effects are usually attributed to a high concentration of charge traps and scattering centers in the bulk of the dielectric and/or at its interface with the silicon channel. Although the reported trap densities vary greatly with fabrication technique, the majority of data point to the existence of a specific intrinsic shallow electron centre common to all HfO2-based stacks, while some extrinsic defects, such as Zr substitution in HfO2, have also been considered.3
Oxygen vacancies are the dominant intrinsic defects in the bulk of many transition metal oxides, including HfO2 and ZrO2, and are thought to be present in high concentrations in thin films as well. However, in spite of numerous experimental studies, evidence relating oxygen vacancies to measured characteristics of interface traps in high-k stacks is still mostly circumstantial. Therefore, accurate theoretical characterization of these defects is highly desirable.
Previous theoretical calculations of the oxygen vacancy in HfO2 and ZrO2 reported ground state properties obtained within local or semilocal approximations to density functional theory (DFT) methods (see refs. 4,5 for a review). This approach, however, significantly underestimates band gaps, which hampers determining the energies of defect levels with respect to the band edges and precludes identifying shallow defect states.5,6,7 As a result, most of the early local DFT calculations (except, perhaps, ref. 8) failed to predict unambiguously the negative charge states of the oxygen vacancy in HfO2. Significant improvement was achieved by Robertson et al.,4,9,10 who used the screened exchange approximation to predict vacancy energy levels, including the V^- charge state. However, these calculations were performed using a small periodic supercell and therefore corresponded to extremely high vacancy concentrations. The quality of the functional used is also largely unknown and needs independent verification.
In this work we used much bigger supercells and a non-local functional to calculate the optical excitation and thermal ionization energies of oxygen vacancies in five charge states. To relate these energies to experimental data we distinguish optical absorption/reflection type measurements, involving Frank-Condon type (vertical) excitations, and electrical thermal de-trapping measurements, where phonon-assisted electron excitations are accompanied by strong lattice relaxation. We focus on the results of so-called de-trapping electrical measurements,11,12,13 which are interpreted in terms of thermal ionization of shallow electron traps, and demonstrate that this interpretation is consistent with thermal ionization of the negative V^- and V^2- vacancies.
Our periodic non-local density functional calculations were carried out using an atomic basis set and a B3LYP hybrid density functional14 implemented in the CRYSTAL03 code.15 This method reproduces the band gaps of a range of transition metal oxides16 and allows us to carry out geometry optimization of defect structures and calculate defect electronic properties and excitation energies within the same method. We used an all-electron basis set for oxygen atoms17 and an s,p,d valence basis set for Hf in conjunction with the relativistic small-core effective potential due to Stevens et al.18 All calculations were carried out in a 96-atom supercell of monoclinic (m)-HfO2 with a mesh of 36 k points in the irreducible Brillouin zone. The defect structures were optimized to atomic forces below 0.03 eV/Å. The compensating uniform background potential method was used for charged defect calculations.6,19 The positions of occupied and unoccupied single-electron defect levels in the gap of m-HfO2 predicted by the periodic B3LYP calculations are presented in Figure 1. The results of the calculations can be summarized as follows:
1. The calculated single-electron band gap in m-HfO2 is 6.1 eV. We note that the single-electron estimate neglects excitonic effects and the effects of phonon broadening, which reduce the measured optical band gap of HfO2 by 0.8 eV.20 Taking this into account, the predicted E_g is in good agreement with the experimental optical gap of 5.6-5.9 eV.21
2. The oxygen vacancy in monoclinic HfO2 may exist in five charge states +2, +1, 0, -1, -2, corresponding to up to four extra electrons in the vicinity of the vacant O^2- site. The ground state of the V^2- is a spin singlet, which is 0.2 eV lower than the spin-triplet configuration (cf. Fig. 1).
3. The preferential site for oxygen vacancy formation in m-HfO2 depends on the charge state: V^2+ and V^+ are more stable in the 3-fold coordinated sites, whereas V^0, V^-, and V^2- are energetically more favorable at the 4-fold coordinated sites (see also refs. 5,22). The difference in formation energies between the 3- and 4-coordinated sites of negatively charged vacancies is about 0.2 eV.
4. The density of electrons in the +1 and neutral vacancy is peaked at the vacant site with a significant admixture of d-states of the nearest neighbor hafnium ions, similar to plane-wave DFT calculations.5,22 The single-electron states of these electrons are located roughly in the middle of the gap (Fig. 1), significantly lower than those reported by Robertson.4,9,10
5. Both one and two extra electrons added to a neutral vacancy form more diffuse asymmetric states localized mainly on 3 (out of 4) Hf ions surrounding the vacancy.
6. The change of the vacancy charge state is accompanied by significant displacements of the nearest neighbor (NN) Hf atoms and the next shell of oxygen atoms (NNN) (see also ref. 22). The NN Hf ions displace from their perfect lattice positions approximately symmetrically. This displacement is away from the V^2+ and V^+ by 11% and 5% of a typical Hf-O distance, but towards the V^- and V^2- by 4% and 8%, respectively. The displacements of the NNN oxygen ions are substantially smaller and directed away from the negative vacancies.
7. The character of the electron density distribution, the strong lattice relaxation, and the relatively shallow single-electron levels (Fig. 1) suggest that trapping of extra electrons in the negative V^- and V^2- vacancies is essentially polaronic in nature. The third and fourth electrons induce strong lattice polarisation, which in turn creates the potential well for these electrons.
Now we discuss the relation between the single-electron spectrum of Fig. 1 and experimental trap ionization measurements. For optical type measurements, such as optical absorption and reflection ellipsometry,23 using energy differences between single-electron levels can provide reasonable accuracy.5,22 The situation is different in electrical measurements, which include thermal excitation of trapped electrons into the conduction band of HfO211 (see inset of Fig. 2). The difference between the optical and thermal ionization can be qualitatively explained by the potential energy surface diagram depicted in Fig. 2, discussed earlier for other systems (e.g., in ref. 7).
Fast optical ionization of a trap of charge q (state A) into the bottom of the conduction band (state B*) corresponds to a Frank-Condon type transition, E_opt, when no lattice relaxation associated with the removed charge occurs. In contrast, the much slower thermal ionization of a trap, E_th, is a phonon-assisted process, during which the system transfers into a fully relaxed trap state q+1 with the electron delocalized at the bottom of the conduction band (state B). The ionized trap induces lattice polarization, which in the case of the oxygen vacancy is mainly associated with strong displacements of the NN Hf ions. This generalized relaxation coordinate is denoted as Q in the diagram in Fig. 2. E_th is approximately given by

E_th = E_opt - E_rel,   (1)

where E_rel is the lattice relaxation energy. Optical and thermal ionization energies can be calculated using the total energies of systems in different charge states as follows:

E_opt(V^q) = E_q(V^{q+1}) - E_q(V^q) + E^- - E^0,   (2)
E_rel(V^{q+1}) = E_q(V^{q+1}) - E_{q+1}(V^{q+1}).   (3)

The notations in Eqs. 2-3 have the following meaning: E^0 is the total energy of the perfect HfO2 crystal; E^- is the total energy of the perfect HfO2 crystal with an electron at the bottom of the conduction band; E_q(V^q) is the total energy of HfO2 with the vacancy in a charge state q (q = +2, +1, 0, -1, -2) in the optimized geometry; E_q(V^{q+1}) is the total energy of the vacancy in the charge state q+1 but at the equilibrium geometry corresponding to the vacancy in the charge state q. We assume that the vacancy is well separated from the interface, and the optical and relaxation energies discussed here depend only on the bulk properties of HfO2 and not on the Si or metal band alignment. The calculated values of E_opt, E_rel, and E_th are summarized in Table I. We note that the optical ionization energies E_opt calculated as total energy differences in Eq. 2 are close to the single-electron energy differences (Fig. 1). This is to be expected, since the total energy differences for systems with N and N±1 electrons in DFT calculations are related to the single-electron energies of the highest occupied and lowest unoccupied states.24
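For illustration, Equations 1-3 amount to simple differences of total energies; a minimal sketch (input energies are placeholders, not values from the calculations):

```python
def ionization_energies(E_q_q, E_q_qp1, E_qp1_qp1, E_minus, E_0):
    """Optical, relaxation, and thermal ionization energies of a vacancy
    V^q following Equations 1-3. E_a_b denotes the total energy of charge
    state b at the optimized geometry of charge state a; E_minus and E_0
    are the perfect-crystal energies with and without an extra
    conduction-band electron. All quantities in eV."""
    E_opt = E_q_qp1 - E_q_q + E_minus - E_0      # Equation 2
    E_rel = E_q_qp1 - E_qp1_qp1                  # Equation 3
    return E_opt, E_rel, E_opt - E_rel           # Equation 1

# Consistency check against Table I (values in eV): for V^- one has
# E_opt = 1.24 and E_rel = 0.48, giving E_th = 1.24 - 0.48 = 0.76.
```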
The thermal ionization energies E_th are, however, 0.5 to 1.0 eV smaller than the optical energies due to the large lattice relaxation associated with the change of the charge state of the vacancy. Although the single-particle energies of V^- and V^2- are very similar (Fig. 1), the thermal ionization energies of these defects differ by 0.2 eV. These values are consistent with the 0.35 and 0.5 eV activation energies extracted from thermal de-trapping kinetics measurements.11,12,13 On the other hand, the very large de-trapping energies for the V^+ and V^0 vacancies rule out these species as possible shallow electron traps. We should note that accounting for the thermal broadening of the defect levels and band tails, which will be discussed elsewhere, would improve the agreement between the calculated thermal ionization energies and the experimental energies even further.
To summarize, we have calculated the positions of single-electron levels for five charge states of the oxygen vacancy in m-HfO2 and related them to the existing experimental data. The results of trap discharging measurements11,12,13 are consistent with thermal ionization of the negatively charged V^- and V^2- oxygen vacancies. These results further support the common assumption that oxygen vacancies are likely candidates for intrinsic electron traps in these devices and suggest that negative oxygen vacancies can be responsible for V_t instability.
FIG. 2: Schematic potential energy surface (PES) for optical and thermal detrapping processes. The generalized displacement of the Hf atoms, Q, and the energy scale are chosen to represent ionization of the V^2- vacancy. The band diagram schematic illustrates optical and thermal ionization processes, respectively without (B*) and with (B) lattice relaxation.
TABLE I: Optical excitation energies E_opt, relaxation energies E_rel, and thermal activation energies E_th (in eV) for oxygen vacancies in m-HfO2, calculated according to Eqs. 1-3.

q       E_opt   E_rel   E_th
V^+     3.33    1.01    2.32
V^0     3.13    0.80    2.33
V^-     1.24    0.48    0.76
V^2-    0.99    0.43    0.56
C. D. Young, G. Bersuker, G. A. Brown, P. Lysaght, P. Zeitzoff, R. W. Murto, and H. R. Huff, in IEEE International Reliability Physics Symposium (Phoenix, AZ, 2004), pp. 597-598.
R. J. Carter, E. Cartier, A. Kerber, L. Pantisano, T. Schram, S. De Gendt, and M. Heyns, Appl. Phys. Lett. 83, 533 (2003).
G. Bersuker, B. Lee, H. Huff, J. Gavartin, and A. Shluger, in Defects in Advanced High-k Dielectric Nano-Electronic Semiconductor Devices, edited by E. Gusev (Springer, 2006), pp. 227-236.
J. Robertson, Rep. Prog. Phys. 69, 327 (2006).
A. L. Shluger, A. S. Foster, J. L. Gavartin, and P. V. Sushko, in Nano and Giga Challenges in Microelectronics, edited by J. Greer, A. Korkin, and J. Labanowski (Elsevier, 2003), pp. 151-222.
J. Gavartin, A. S. Foster, G. I. Bersuker, and A. L. Shluger, J. Appl. Phys. 97, 053704 (2005).
C. G. Van de Walle, J. Appl. Phys. 95, 3851 (2004).
C. Shen, M. F. Li, X. P. Wang, H. Y. Yu, Y. P. Feng, A. T. L. Lim, Y. C. Yeo, D. S. H. Chan, and D. L. Kwong, in IEEE International Electron Devices Meeting (2004), p. 733.
K. Xiong and J. Robertson, Microelectron. Eng. 80, 408 (2005).
K. Xiong, J. Robertson, M. C. Gibson, and S. G. Clark, Appl. Phys. Lett. 87, 183505 (2005).
G. Bersuker, J. Sim, C. S. Park, C. Young, S. Nadkarni, R. Choi, and B. H. Lee, in IEEE International Reliability Physics Symposium (2006), pp. 179-183.
G. Ribes, J. Mitard, M. Denais, S. Bruyere, F. Monsieur, C. Parthasarathy, E. Vincent, and G. Ghibaudo, IEEE Trans. Device Mater. Reliab. 5, 5 (2005).
G. Bersuker, J. H. Sim, C. D. Young, C. S. Park, R. Choi, P. M. Zeitzoff, G. A. Brown, B. H. Lee, and R. Murto, Microelectron. Reliab. 44, 1509 (2004).
A. D. Becke, J. Chem. Phys. 98, 1372 (1993).
V. Saunders, R. Dovesi, C. Roetti, R. Orlando, C. M. Zicovich-Wilson, N. M. Harrison, K. Doll, B. Civalleri, B. Bush, and Ph. D'Arco, CRYSTAL 2003 User's Manual (University of Torino, 2003).
F. Cora, M. Alfredsson, G. Mallia, D. S. Middlemiss, W. C. Mackrodt, R. Dovesi, and R. Orlando, Structure and Bonding 113, 171 (2004).
W. J. Stevens, H. Basch, and P. Jasien, Can. J. Chem. 70, 612 (1992).
M. Leslie and M. J. Gillan, J. Phys. C: Solid State Phys. 18, 973 (1985).
S. Sayan, T. Emge, E. Garfunkel, X. Zhao, L. Wielunski, D. Vanderbilt, J. S. Suehle, S. Suzer, and M. Banaszak-Holl, J. Appl. Phys. 96, 7485 (2004).
V V , A Stesmans, High-k Dielectrics. M. HoussaIOP PublishingV. V. Afanas'ev and A. Stesmans, in High-k Dielectrics, edited by M. Houssa (IOP Publishing, Bristol and Philadelphia, 2004), pp. 217-250.
. A S Foster, F Lopez Gejo, A L Shluger, R M Nieminen, Phys. Rev B. 65174117A. S. Foster, F. Lopez Gejo, A. L. Shluger, and R. M. Nieminen, Phys. Rev B 65, 174117 (2002).
. H Takeuchi, D Ha, T.-K King, J. Vac. Sci Technol. A. 221337H. Takeuchi, D. Ha, and T.-K. King, J. Vac. Sci Technol. A 22, 1337 (2004).
Quantum theory of solid state: An introduction. L Kantorovich, KluwerL. Kantorovich, Quantum theory of solid state: An intro- duction (Kluwer, 2004).
| [] |
[
"The Activity of the Soft Gamma Repeater SGR 1900+14 in 1998 from Konus-Wind Observations: 1. Short Recurrent Bursts",
"The Activity of the Soft Gamma Repeater SGR 1900+14 in 1998 from Konus-Wind Observations: 1. Short Recurrent Bursts"
] | [
"E P Mazets \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n",
"T L Cline \nGoddard Space Flight Center\n20771GreenbeltMDUSA\n",
"R L Aptekar \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n",
"P Butterworth \nGoddard Space Flight Center\n20771GreenbeltMDUSA\n",
"D D Frederiks \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n",
"S V Golenetskii \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n",
"V N Il'inskii \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n",
"V D Pal'shin \nIoffe Physical-Technical Institute, St.Petersburg\n194021\n"
] | [
"Ioffe Physical-Technical Institute, St.Petersburg\n194021",
"Goddard Space Flight Center\n20771GreenbeltMDUSA",
"Ioffe Physical-Technical Institute, St.Petersburg\n194021",
"Goddard Space Flight Center\n20771GreenbeltMDUSA",
"Ioffe Physical-Technical Institute, St.Petersburg\n194021",
"Ioffe Physical-Technical Institute, St.Petersburg\n194021",
"Ioffe Physical-Technical Institute, St.Petersburg\n194021",
"Ioffe Physical-Technical Institute, St.Petersburg\n194021"
] | [] | Results are presented of the observations of the soft gamma repeater SGR 1900+14 made on the Wind spacecraft during the source reactivation period from May 1998 to January 1999. Individual characteristics of recurrent bursts, such as their time histories, energy spectra, and maximum and integrated energy fluxes, are considered. Some statistical distributions and relationships are also presented. The close similarity of these events to the recurrent bursts observed from other SGRs argues for a common emission mechanism. | null | [
"https://export.arxiv.org/pdf/astro-ph/9905195v2.pdf"
] | 119,440,223 | astro-ph/9905195 | 6ee27b65ede654dd9efd439d1bee9d3f7b450c91 |
The Activity of the Soft Gamma Repeater SGR 1900+14 in 1998 from Konus-Wind Observations: 1. Short Recurrent Bursts
May 1999
E P Mazets
Ioffe Physical-Technical Institute, St.Petersburg
194021
T L Cline
Goddard Space Flight Center
20771GreenbeltMDUSA
R L Aptekar
Ioffe Physical-Technical Institute, St.Petersburg
194021
P Butterworth
Goddard Space Flight Center
20771GreenbeltMDUSA
D D Frederiks
Ioffe Physical-Technical Institute, St.Petersburg
194021
S V Golenetskii
Ioffe Physical-Technical Institute, St.Petersburg
194021
V N Il'inskii
Ioffe Physical-Technical Institute, St.Petersburg
194021
V D Pal'shin
Ioffe Physical-Technical Institute, St.Petersburg
194021
The Activity of the Soft Gamma Repeater SGR 1900+14 in 1998 from Konus-Wind Observations: 1. Short Recurrent Bursts
May 1999Submitted to The Astronomy Letters.arXiv:astro-ph/9905195v2 21
Results are presented of the observations of the soft gamma repeater SGR 1900+14 made on the Wind spacecraft during the source reactivation period from May 1998 to January 1999. Individual characteristics of recurrent bursts, such as their time histories, energy spectra, and maximum and integrated energy fluxes, are considered. Some statistical distributions and relationships are also presented. The close similarity of these events to the recurrent bursts observed from other SGRs argues for a common emission mechanism.
INTRODUCTION
Recurrent short gamma-ray bursts with soft spectra have been known for over 20 years. The first two sources of such bursts were discovered and localized in March 1979 by the Konus experiment on Venera 11 and 12 . The extraordinary superintense gamma-ray outburst on March 5, 1979 (Mazets et al., 1979a) was followed by a series of 16 weaker short bursts from the FXP 0526-66 source, which were observed during the next few years (Golenetskii et al., 1984). Also in March 1979, three short soft bursts arriving from the source B1900+14 were detected (Mazets et al., 1979b). In 1983, Prognoz-9 and ICE observed a series of soft recurrent bursts from a third source, 1806-20 (Atteia et al., 1987, Laros et al., 1987. The sources of recurrent soft bursts were given the name soft gamma repeaters, SGRs. Interestingly, a retrospective analysis of Venera 11 and Prognoz-7 data shows that the short gamma-ray burst of 07.01.1979 also belonged to SGR 1806-20 (Atteia et al., 1987). Thus bursts from the first three soft gamma repeaters were detected within a three month period. This is remarkable, because the fourth soft gamma repeater, the SGR 1627-41, was detected and localized only 19 years later, in 1998 (Hurley et al., 1999a, Woods et al., 1999. A fifth SGR has also been observed (Hurley et al., 1997), but it still awaits a good localization.
Important new results came from studies aimed at association of the recurrent bursters with astrophysical objects visible in other wavelengths. The giant narrow initial pulse of the 1979 March 5 event was detected on a dozen different spacecraft. Triangulation yielded a very small source-localization box, about 0.1 square-arcmin, which projected on the outer edge of the N49 supernova remnant in the Large Magellanic Cloud (Cline et al., 1982). Later, ROSAT found a persistent X-ray source in this region (Rothschild et al., 1994).
The association of the SGR 0526-66 with N49 was sometimes questioned because of energy considerations (Mazets and Golentskii, 1981). For a distance of 55 kpc to the N49, the energy release in the March 5 event is 5 × 10 44 erg, and in the recurrent bursts, up to 8 × 10 42 erg, giving luminosities which exceed the Eddington limit for a neutron star by a factor of 10 4 − 10 6 (Mazets and Golenetskii, 1981). However arguments for large distances to the SGRs and, accordingly, for large energy releases continued to accumulate. Kulkarni and Frail (1993) established an association of the SGR 1806-20 with the supernova remnant G10.0-0.3, about 14 kpc distant (Corbel et al., 1997). Murakami et al. (1994) used ASCA to localize one of the events from SGR 1806-20 and discovered a soft X-ray source coinciding with its position. Subsequently, the observations of Kouveliotou et al. (1998) from RXTE revealed regular pulsations in the emission of this source with a period P = 7.47 s. A parallel analysis of ASCA archived data for 1992 confirmed this period and permitted determination of its derivativeṖ = 2.6 × 10 −3 s yr −1 . SGR 1900+14 is located close to the G48.2+0.6 supernova remnant and is believed to be associated with it (Kouveliotou et al., 1994). Coinciding with this repeater in position is a soft X-ray source reliably localized from ROSAT (Hurley et al., 1996). ASCA observations of this source made in April 1998 revealed a 5.16-s periodicity of the emission (Hurley et al., 1999b). When the SGR 1900+14 resumed its activity in June and August 1998 (Hurley et al., 1999c), RXTE observations confirmed this period and established the spin-down rate of the neutron starṖ = 3.5 × 10 −3 s yr −1 (Kouveliotou et al., 1999).
RXTE observations also yielded some evidence for a possible 6.7-s periodicity of the new SGR 1627-41 (Dieters et al., 1998). Thus the known soft gamma repeaters exhibit an association with young (< 10 4 years) supernova remnants, a periodicity of 5-8 s, and a secular spin-down by a few ms per year. Duncan (1995, 1996) suggested that the soft gamma repeaters are young neutron stars with superstrong, up to 10 15 G, magnetic fields and high spin-down rates because of high losses due to magnetic dipole radiation -the so-called magnetars. The fractures produced by magnetic stress in a neutron star's crust give rise to the release and transformation of magnetic energy into the energy carried away by particles and hard photons.
In this paper, we present the results of observations of recurrent bursts from SGR 1900+14 made in 1998 with a gamma-ray burst spectrometer onboard the Wind spacecraft (Aptekar et al., 1995).
OBSERVATIONS
Until 1998, recurrent bursts from the SGR 1900+14 were observed during two intervals: three events in 1979 (Mazets et al., 1979b) and another three in 1992 (Kouveliotou et al., 1993). SGR 1900+14 resumed burst emission in May 1998 (Hurley et al., 1998;Hurley et al., 1999d), which continued up to January 1999.
This time the frequency of recurrent bursts was found to be high and very irregular. Figure 1 shows the distribution within this time interval of the recurrent bursts with measured fluences S. Three subintervals with a distinctly higher source activity stand out. On August 27, 1998, SGR 1900+14 emitted a superintense outburst with a complex and spectacular time structure (Cline et al., 1998;Hurley et al., 1999d). This event is not shown in Fig. 1 because it will be considered in a separate paper (Mazets et al., 1999a). On May 30, 1998, an intense train of recurrent bursts occurred. Several tens of bursts varying in duration from 0.05 to 0.7 s arrived during as short a time as three minutes. The intervals between the bursts decreased at times to such an extent as to become comparable to the duration of the bursts themselves, and the radiation intensity between them did not drop down to the background level. Figure 2 displays the most crowded part of the train. In Fig. 1 it is represented by the feature with total flux S = 5.8 × 10 −5 erg cm −2 . The high burst occurrence frequency may cause losses in the information obtained. Readout of the information on a trigger event takes up about one hour. If other events arrive within this interval, only a very limited amount of the relevant information will be recorded in the housekeeping channel. Such cases did occur, and quite possibly they comprised two or three weaker trains of recurrent bursts, in particular, on September 1, 1998, 61232.-61585 s UT, with a total flux S ∼ 2 × 10 −5 erg cm −2 , and on October 24, 1998, 4921-5348 s UT, with S ∼ 10 −5 erg cm −2 .
All recurrent bursts are short events with a fairly complex time structure and soft energy spectra, which, when fitted with a dN/dE ∝ E −1 exp(−E/kT ) relation, are characterized by kT ≃ 20 − 30 keV. Figure 3 presents time histories of several events recorded before August 27. Their energy spectra are very similar. Figure 4 shows the spectrum of a burst on June 7. After the August 27 event, the second interval of increased activity began (see Fig. 1), but most of the bursts did not change their characteristics. Shown in Fig. 5 are time structures of a few events, and Fig. 6 displays a typical energy spectrum. The only pronounced difference in the period after August 27 was the onset of several long, up to 4 s, bursts with a correspondingly high total energy flux, up to 5 × 10 −5 erg cm −2 . Figure 7 presents time histories of two such bursts, with a typical energy spectrum shown in Fig. 8. Such long recurrent bursts were observed to be produced by other SGRs as well (Golenetskii et al., 1984).
As can be seen from these data, recurrent bursts exhibit a complex time structure, which cannot be described by a model of a single pulse with standard characteristic rise and decay times. The burst intensity rises in 15-20 ms. By contrast, long bursts take a substantially longer time to rise, up to ∼ 150 ms (Fig. 7). In many cases the main rise is preceded by an interval with a weaker growth in intensity or even by a single weak pulse (Figs. 3 and 7). The intensity decay extends practically through the whole event. At the end of a burst one frequently observes a strong steepening of the falloff (Figs. 3 and 7). Large-scale details in the time structure may indicate that the bursts consist of several structurally simpler but closely related events (Figs. 3, 5, and 7).
The value of kT for the photon spectra of different bursts lies in the 18-30 keV region. There is practically no spectral evolution within any one event, which is readily seen from Fig. 7. The maximum fluxes in a burst vary from 2 × 10 −6 to 5 × 10 −5 erg cm −2 s −1 . However for 80% of events they lie within a narrow region of (1 − 3) × 10 −5 erg cm −2 s −1 . Fluences vary within broader ranges, from 10 −7 to 5 × 10 −5 erg cm −2 . This implies that the energy release is partially determined by the duration of the emission process in the source. Figure 9 presents a fluence vs duration distribution of bursts (lg S vs lg ∆ T0.25 ). The measure of burst duration ∆ T0.25 is the time interval within which the radiation intensity is in excess of the 25% level of the maximum flux Fmax. The graph demonstrates a strong correlation between these quantities (ρ = 0.8).
As follows from the data presented here, for a 10 kpc distance of the SGR 1900+14 source (Case and Bhattacharya, 1998;Vasisht et al., 1994), and assuming the emission to be isotropic, the maximum source luminosity in recurrent bursts lies within the range of (1 − 4) × 10 41 erg s −1 , and the energy liberated in a recurrent burst is 2 × 10 39 to 8 × 10 41 erg.
CONCLUSION
The observations of this period of high activity of SGR 1900+14 have substantially broadened our ideas concerning such sources. The giant outburst on August 27 has come as a real surprise. Among other remarkable events is the intense train of bursts on May 30, 1998, when the frequency of recurrent bursts increased within a few minutes by at least a factor of 10 4 compared to that usually observed during the reactivation periods of known SGRs. On the other hand, the characteristics of the bursts themselves, their time histories, spectra, and intensity do not suggest radical differences from those of recurrent events observed in other soft gamma repeaters (Kouveliotou et al., 1987;Frederiks et al., 1997;Mazets et al., 1999b), which argues for the fundamental similarity between the emission processes occurring in different sources. It appears significant that the giant outburst with an energy release thousands of times larger than that typical of a single recurrent event did not noticeably affect the behavior and individual characteristics of recurrent bursts.
Partial support of the RSA and RFBR (Grant 99-02-17031) is gratefully acknowledged.
Fig. 1 .
1-Distribution of the recurrent-burst occurrence and fluences during the source reactivation period in 1998.
Fig. 2 .
2-The interval with the highest occurrence frequency in the recurrent-burst train on May 30, 1998. G1 and G2 display 15-50 keV and 50-250 keV backgroud-substructed count rates, respectively.
Fig. 3 .
3-Time structure of several bursts recorded before the August 27 giant burst.
Fig. 4 .
4-A typical energy spectrum of one of the bursts displayed inFig. 3 (980607a).
Fig. 5 .
5-Time structures of bursts observed after the August 27 event.
Fig. 6 .
6-A typical energy spectrum of one of the bursts(Fig. 5, 981031a).
Fig. 7 .
7-Time histories of two long recurrent bursts recorded in the 15-50 and 50-250-keV energy windows. The behavior of the count-rate ratio of these windows, which characterizes the spectral rigidity, does not practically reveal any spectral evolution in the SGR 1900+14 bursts.
Fig. 8 .
8-The spectrum of the 981028b burst.
Fig. 9 .
9-The strong correlation between the fluence and duration of a burst implies that the energy release in a source is proportional to the duration of emission for a small luminosity scatter.
. R L Aptekar, Space Science Rev. 71265Aptekar, R.L., et al. 1995, Space Science Rev., 71, 265
. J.-L Atteia, ApJ. 320105Atteia, J.-L., et al. 1987, ApJ, 320, L105
. T L Cline, ApJ. 25545Cline, T.L., et al. 1982, ApJ, 255, L45
. T L Cline, E P Mazets, S V Golenetskii, IAU Circ. 7002Cline, T.L., Mazets, E.P., & Golenetskii, S.V. 1998, IAU Circ. 7002
. S Corbel, ApJ. 478624Corbel, S., et al. 1997, ApJ, 478, 624
. S Dieters, IAU Circ. 6962Dieters, S., et al. 1998, IAU Circ. 6962.
D D Frederiks, AIP Conf. Proc., v. 428, 4th Huntsville Symposium, Gamma-Ray Bursts. Ch. Meegan, R. Preece, T. KoshutNew YorkAIP921Frederiks, D.D., et al. 1997, in AIP Conf. Proc., v. 428, 4th Huntsville Symposium, Gamma-Ray Bursts, ed. Ch. Meegan, R. Preece, T. Koshut (New York, AIP), p. 921
. S V Golenetskii, V N Ilyinskii, E P Mazets, Nature. 30741Golenetskii, S.V., Ilyinskii, V.N., & Mazets, E.P. 1984, Nature, 307, 41
. K Hurley, ApJ. 46313Hurley, K., et al. 1996, ApJ, 463, L13
. K Hurley, 6743IAU CircHurley, K., et al. 1997, IAU Circ. 6743
. K Hurley, 6929IAU CircHurley, K., et al. 1998, IAU Circ. 6929
. K Hurley, ApJ. in pressHurley, K., et al. 1999a, ApJ, in press
. K Hurley, ApJ. 510111Hurley, K., et al. 1999b, ApJ, 510, L111
. K Hurley, ApJ. 510107Hurley, K., et al. 1999c, ApJ, 510, L107
. K Hurley, Nature. 39741Hurley, K., et al. 1999d, Nature, 397, 41
. C Kouveliotou, ApJ. 32221Kouveliotou, C., et al. 1987, ApJ, 322, L21
. C Kouveliotou, Nature. 362728Kouveliotou, C., et al. 1993, Nature, 362, 728
. C Kouveliotou, Nature. 368125Kouveliotou, C., et al. 1994, Nature, 368, 125
. C Kouveliotou, Nature. 393235Kouveliotou, C., et al. 1998, Nature, 393, 235
. C Kouveliotou, ApJ. 510115Kouveliotou, C., et al. 1999, ApJ, 510, L115
. S Kulkarni, D A Frail, Nature. 36533Kulkarni, S., & Frail, D.A. 1993, Nature, 365, 33
. J G Laros, ApJ. 320111Laros, J.G., et al. 1987, ApJ, 320, L111
. E P Mazets, Nature. 282587Mazets, E.P., et al. 1979a, Nature, 282, 587
. E P Mazets, S V Golenetskii, Yu A Guryan, Soviet, Astron. Lett. 5No6343Mazets, E.P., Golenetskii, S.V., & Guryan, Yu.A. Soviet. Astron. Lett., 1979c, 5(No6), 343
. E P Mazets, Ap&SS. 803Mazets, E.P., et al. 1981, Ap&SS, 80, 3
. E P Mazets, S V Golenetskii, Ap&SS. 7547Mazets, E.P., & Golenetskii, S.V. 1981, Ap&SS, 75, 47
. E P Mazets, Astronomy Letters. 368127NatureMazets, E.P., et al. 1999a, Astronomy Letters, in press Mazets, E.P., et al. 1999b, ApJ, in press Murakami, T., et al. 1994, Nature, 368, 127
. R Rothschild, S Kulkarni, R Lingenfelter, Nature. 368432Rothschild, R., Kulkarni, S., & Lingenfelter R. 1994, Nature, 368, 432
. C Thompson, R C Duncan, MNRAS. 275255Thompson, C. & Duncan, R.C. 1995, MNRAS, 275, 255
. C Thompson, R C Duncan, ApJ. 473322Thompson, C. & Duncan, R.C. 1996, ApJ, 473, 322
. G Vasisht, ApJ. 43135Vasisht, G., et al. 1994, ApJ, 431, L35
. P M Woods, ApJ. in press This preprint was prepared with the AAS L A T E X macros v4.0Woods, P.M., et al. 1999, ApJ, in press This preprint was prepared with the AAS L A T E X macros v4.0.
| [] |
[
"Relativistic Coulomb problem for Z larger than 137",
"Relativistic Coulomb problem for Z larger than 137"
] | [
"A D Alhaidari \nSaudi Center for Theoretical Physics\nDepartment of Physics\nKing Fahd University of Petroleum & Minerals\n31261Jeddah, DhahranSaudi ArabiaSaudi Arabia\n"
] | [
"Saudi Center for Theoretical Physics\nDepartment of Physics\nKing Fahd University of Petroleum & Minerals\n31261Jeddah, DhahranSaudi ArabiaSaudi Arabia"
] | [] | We propose a relativistic one-parameter Hermitian theory for the Coulomb problem with an electric charge greater than 137. In the non-relativistic limit, the theory becomes identical to the Schrödinger-Coulomb problem for all Z. Moreover, it agrees with the Dirac-Coulomb problem to order 2 ( ) Z α , where α is the fine structure constant. The vacuum in the theory is stable and does not suffer from the "charged vacuum" problem for all Z. Moreover, transition between positive and negative energy states could be eliminated. The relativistic bound states energy spectrum and corresponding spinor wavefunctions are obtained. | 10.1142/s0217751x10049402 | [
"https://export.arxiv.org/pdf/1005.1414v1.pdf"
] | 118,543,443 | 1005.1414 | 73e34a2ad2f4587638d749a3d369ec14c95fd265 |
Relativistic Coulomb problem for Z larger than 137
A D Alhaidari
Saudi Center for Theoretical Physics
Department of Physics
King Fahd University of Petroleum & Minerals
31261Jeddah, DhahranSaudi ArabiaSaudi Arabia
Relativistic Coulomb problem for Z larger than 137
−1−numbers: 0365Ge0365Pm0365Ca Keywords: Dirac-CoulombHydrogen-like atomsZ greater than 137QED vacuumstable vacuumcharged vacuumrelativistic energy spectrumspinor bound states
We propose a relativistic one-parameter Hermitian theory for the Coulomb problem with an electric charge greater than 137. In the non-relativistic limit, the theory becomes identical to the Schrödinger-Coulomb problem for all Z. Moreover, it agrees with the Dirac-Coulomb problem to order 2 ( ) Z α , where α is the fine structure constant. The vacuum in the theory is stable and does not suffer from the "charged vacuum" problem for all Z. Moreover, transition between positive and negative energy states could be eliminated. The relativistic bound states energy spectrum and corresponding spinor wavefunctions are obtained.
Introduction
The problem of strong electric field in quantum electrodynamics (QED) has been the focus of renewed research interest for a long time. For a review see, for example, [1,2] and references therein. One of the most important physical effects in a strong timedependent electric field is the dynamical electron-positron pair production from vacuum. On the other hand, for sufficiently strong static electric potential electronpositron pairs could, in principle, be created spontaneously. However, the process of static pair creation, which is predicted by QED [3], has yet to be confirmed unequivocally by experiment [4].
The Dirac equation gives a good description of the relativistic electron under the influence of various kinds of potential couplings. In the Dirac-Coulomb problem, the ground state energy of the electron in a hydrogen-like atom decreases as the point nuclear charge Ze − increases [1,5]. The Sommerfeld's fine-structure energy spectrum formula indicates that the ground state of the electron becomes zero for 1 Zα = , where 2 0 4 1 137 e α π = ≈ ε is the fine structure constant [5]. On the other hand, for 1 137 Z α − > ≈ this energy becomes a complex number. That is, the Dirac Hamiltonian operator becomes non-Hermitian. Self-adjoint extension of the Hamiltonian is frequently achieved by taking into account the finite size of the nucleus [1,2,5]. For example, replacing the point nucleus by a uniform charged sphere of total charge Ze − and finite radius of the order of few Fermis. As a result, the ground state energy regains reality for all Z, but decreases below zero as Z increases until it reaches 2 mc − at the critical charge cr Z [1,2]. Increasing Z further forces the ground state to dive into the negative energy (lower) continuum and changes its character from a bound state to a resonant state, called a supercritical resonance [1,6]. An initially vacant supercritical state decays into an electron-positron pair; a free positron and a bound electron. Thus, −2− the vacuum, which was perturbed by the supercritical Coulomb potential, becomes charged. However, if the original bound state that became supercritical was fully occupied before diving, then no pairs are produced. Nonetheless, the charge of the electron gets embedded into the charge density generated by the vacuum polarization charge distribution. The vacuum will thus be carrying a net charge equal to the total charge of the electrons in supercritical resonant states. All remedies to the strong coupling scenario invoke concepts and employ tools outside the framework of oneparticle relativistic quantum mechanics where the Dirac-Coulomb problem is originally formulated. Of course, it is well established that QED in lowest order results in the Dirac-Coulomb theory. However, being a perturbative quantum field theory with Z α as the perturbation parameter, QED might not properly handle the strong coupling region where 1 Z α > and where the Dirac-Coulomb Hamiltonian becomes non-Hermitian.
In this work, we propose an alternative description of the relativistic electron in a strong static field generated by a point charge while avoiding the problem of a "charged vacuum". It is formulated within the theory of one-particle relativistic quantum mechanics and as such gives no direct implication on QED. It could, at best, be representing the lowest order limit of an alternative quantum field theory of electrodynamics at strong coupling (e.g., a non-perturbative version of QED). Specifically, we are proposing a one-parameter theory based on the Dirac equation with coupling to the vector Coulomb potential and a "pseudo Coulomb potential" (to be defined below). We require that this theory agrees with the original Dirac-Coulomb problem to order 2 ( ) Z α . Moreover, in the nonrelativistic limit it must reproduce the Schrödinger-Coulomb problem for all Z. In the following section, we define the problem and present our approach to the solution. This work is an extension to, and/or departure from earlier work on the subject wherein a scalar Coulomb coupling is introduced in addition to the vector Coulomb coupling [7]. We show that there is a physical difference between the "pseudo Coulomb potential" introduced in this work and the scalar Coulomb potential. We give the new relativistic energy spectrum and corresponding spinor wavefunctions.
Dirac-Coulomb problem for Z > 137
The solution of the Dirac-Coulomb problem is defined as the solution of the Dirac equation for an electron (mass m and electric charge e) in the field of a static point charge. That is, the vector potential in the Dirac equation has zero space component, while the time component is the attractive Coulomb potential ( ) V r Z r α = − . Due to spherical symmetry, the equation separates into radial and angular components. The solution of the angular component is standard and is independent of Z [5]. In the conventional relativistic units ( 1 c = = ), the Sommerfeld's fine-structure formula for the relativistic energy spectrum reads as follows [5] 1 2 2
2 2 2 1 n Z m n Z κ α ε κ α − ⎡ ⎤ ⎛ ⎞ ⎢ ⎥ = ± + ⎜ ⎟ ⎢ ⎥ + − ⎝ ⎠ ⎣ ⎦ ; 0,1, 2,.. n = ,(1)
where κ is the spin-orbit quantum number defined as ( ) . "Equivalence" is defined here as a relativistic Dirac theory that agrees with the original Dirac-Coulomb problem (for 137 Z ≤ ) up to order 2 ( ) Z α and whose nonrelativistic limit is identical to the Schrödinger-Coulomb problem for all Z. To do that, we proceed as follows.
In the units 1 c = = and in the standard representation of the Dirac matrices, the two-component radial equation with coupling to spherically symmetric scalar and vector potentials, where the space component of the later vanishes, reads as follows [5,7,8]
( ) ( ) 0 ( ) ( ) d r dr d r dr m V W r m V W r κ κ ε χ ε χ + − ⎛ ⎞ + + − − ⎛ ⎞ ⎜ ⎟ ⎜ ⎟= ⎜ ⎟ ⎜ ⎟ + − + − − ⎝ ⎠ ⎝ ⎠ ,(2)
where W(r) and V(r) are the scalar and vector potentials, respectively. Now, we take V and W to be Coulomb-like; (
θ − ≤ ≤ + , takes Eq. (2) into ( ) (1 1) 0 (1 1) ( ) d r r dr d r r dr r mC mS mS mC r γ αν γ αν φ ε ε φ + ± ± − ± ± ⎛ ⎞ ⎛ ⎞ − − ± − + − ⎜ ⎟ ⎜ ⎟ = ⎜ ⎟ ⎜ ⎟ − + + − − − ⎝ ⎠ ⎝ ⎠ ∓ ,(3)where cos C θ = , sin S θ =
, and ( ) ( )
2 2 2 1 C S α κ γ κ αμ κ μ ν = + = + − ,(4)φ χ χ φ χ χ + + + − − − ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ = = ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ − ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠ .(5)
In the Appendix, we show the calculation details. The parameters are chosen such that the Dirac Hamiltonian in (3) is Hermitian. Now, the same square root that appears in γ is also present in the rest of the parameters, C ± and S ± . Therefore, reality dictates that (1) above; for any value of the vector potential parameter ν we can always choose μ to make this square root real. Equation (3) gives one radial spinor component in terms of the other as follows
2 2 2 ν μ α − − ≤ . Unlike formula( ) 1 ( ) ( ) d r dr r m S r mC γ φ φ ε ± ± ± = − + ± ± ∓ ,(6)
Using this back in Eq. (3) results is the second order radial differential equation
( ) 2 2 2 2 2 ( 1) 2 ( ) 0 d m m r dr r r γ γ εν μ α ε φ ± ⎡ ⎤ ± + − + − − − = ⎢ ⎥ ⎣ ⎦ ,(7)
which is Schrödinger-like. Equation (6) is referred to as the "kinetic balance relation", which is not valid for mC
ε ± = ∓ . Now, since 1 0 C ± ≥
≥ , then the energy value mC ε ± = ∓ belongs to the negative/positive energy spectrum, respectively. Therefore, the top/bottom signs in Eqs. (6) and (7) correspond to positive/negative energy solutions, respectively. Since the two solution spaces are completely disconnected, we have to choose one of the two signs of the energy spectrum and obtain the corresponding solution, but not both. In what follows, we choose the top signs and obtain the positive energy solutions. The negative energy solutions are then obtained from these simply by applying the following map:
φ φ ± → ∓ , ε ε → − , κ κ → − , ν ν → − , μ μ → .
(8) Note also that under this map: C C ± → ∓ and S S ± → − ∓ .
We must now show that the nonrelativistic limit gives the correct Schrödinger-Coulomb problem defined by the solution of the following second order differential equation (written in the same units,
1 c = = ) 2 2 2 ( 1) 2 2 ( ) 0 d Z m mE r dr r r α ψ ⎡ ⎤ + − + − − = ⎢ ⎥ ⎣ ⎦ ,(9)
where is the orbital angular momentum quantum number. Now, the nonrelativistic limit of Eq. and/or natural way to select a fixed value for one of the two parameters μ or ν ? We leave the answer to this question for later and note that with these coupling parameters the Dirac equation (2) becomes is a scalar potential; both having equal magnitudes, αμ . We should also note that this constitutes a departure from earlier work on this problem, where a pure scalar Coulomb potential is added to the vector Coulomb potential (see, for example [7], and references therein). Moreover, the contribution of a pure scalar potential survives the non-relativistic limit, whereas this pseudo potential does not [8].
( ) 0 2 ( ) d Z r r dr d Z r r r dr m r m r κ μ κ α ε χ α α ε χ + − ⎛ ⎞ ⎛ ⎞ − − − ⎜ ⎟ ⎜ ⎟ = ⎜ ⎟ ⎜ ⎟ + − − + − ⎝ ⎠ ⎝ ⎠ .(10)
Energy spectrum
To obtain the bound states energy spectrum for this equivalent relativistic Coulomb problem, we first establish the parameter map between the relativistic equation (7) and the nonrelativistic equation (9). This map reads as follows: 0
ε > : ψ φ + → , m Z ε ν μ → + ,( ) 2{ , 0 1 , 0 γ γ γ γ > − − < → . (11a) −5− 0 ε < : ψ φ − → , m Z ε ν μ → + ,( ) 2( ) ( ) ( ) 1 2 2 2 2 | | | | | | ( ) 1 (1 ) ( 1) 1 (1 2 ) n Z Z Z n n n m α α α γ γ γ ε ξ ξ ξ ξ ξ − + + + ⎡ ⎤ ⎡ ⎤ = ± + − ± − + + − ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ ⎣ ⎦ ,(12)
where the parameter Z ξ μ = such that
( ) ( ) ( ) ( ) ( ) 1 2 2 2 2 0 ( ) 1 1 2 1 Z Z Z m m C α κ κ κ α α ε ξ ξ ξ ξ ξ − − ⎡ ⎤ ⎡ ⎤ = + − + + − = ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ ⎣ ⎦ .(14)
Taking the limit Z α → ∞ shows that this minimum "positive" energy can never be less than −m for any value of the parameter ξ in the allowed range ). In Fig. 2, we plot few of the lowest energy in the spectrum for a given Z and for a range of values of the parameter ξ with
1 1 ( ) Z ξ α − ≥ − .
Spinor wavefunction
−6−
One method to obtain the two components of the radial spinor wavefunction is to use the parameter map (11). Applying this map on the non-relativistic Coulomb wave function [9] transforms it into the sought after eigenfunctions. However, we exploit here an alternative approach by proposing the following ansatz for the upper radial spinor
− + + + + − − − − − − ⎧ > ⎪ = ⎨ < ⎪ ⎩(19)
The lowest energy state at 0 ε corresponds to 0 γ < and n = 0. It reads as follows
( ) 0 2 1 0 0 0 ( ) r r A r e γ λ γ φ λ − − + −− = .(20)
Substituting the upper spinor component (19) into the kinetic balance relation (6) with the top sign and using the differential and recursion properties of the Laguerre polynomials [10], we obtain the lower component as follows
⎡ ⎤ = Γ − + + ⎢ ⎥ ⎣ ⎦ .(23)
Equations (19) and (21) show that the spinor wavefunction ( ) n r ψ , whose components are ( ) n r φ ± , is associated with the energy ( ) −7−
Conclusion
We added to the Dirac-Coulomb Hamiltonian a one-parameter "pseudo Coulomb potential". The result is a relativistic model for the Coulomb problem that maintained reality of the Hamiltonian of Hydrogen-like atoms for all Z. The lowest positive energy state does not dive into the vacuum (negative energy continuum) avoiding the problem of a charged vacuum. In fact, imposing a certain constraint on the parameter of the theory prevents transition between positive and negative energy states. Therefore, the space of solutions consists of two disconnected energy subspaces. This work embodies a departure from earlier work on the subject wherein a pure scalar Coulomb potential is introduced not the pseudo Coulomb potential presented here. In addition to the relativistic energy spectrum, we also obtained the corresponding spinor wavefunction.
φ χ φ χ + + − − ⎛ ⎞ ⎛ ⎞ = ⎜ ⎟ ⎜ ⎟ ⎝ ⎠ ⎝ ⎠ .
We can write Eq. (A2) as follows
U U m U U U U U U φ φ μ ν κ α ε α − − − − − − + − ⎡ − + + − ⎣ ⎤ + + = ⎦ (A3)
Using the following relations
( ) ( ) 1 0 1 0 1 C S S C U U − − − − − = , ( ) ( ) 0 1 1 1 0 S C C S U U − − = , ( ) ( ) 0 1 1 0 1 1 0 1 0 U U − − − = ,(A4)mC S C mS C S mS C S mC S C ν ν α κ αμ ε κ αμ φ κ αμ α κ αμ ε φ + − ⎛ ⎞ − + − − − + + − ⎛ ⎞ = ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ − + + + − − − − − ⎝ ⎠ ⎝ ⎠ (A5)
This gives coupled first order differential equation for the two spinor components, φ ± .
By eliminating one component in terms of the other, we obtain a second order differential equation. We require that this becomes a Schrödinger-like equation (i.e., it contains no first order derivatives), which dictates that at lease one of the diagonal elements in (A5) be constant (i.e., independent of r). This means that we should choose the angle θ such that ( )
9
9(7)for positive energy (the top sign) is obtained by taking m E ε ≅ + , where E m << . This results in the same Schrödinger equation (electric charge Z ν μ = + . Thus, the question now is as follows: For 137 Z > , can we find a pair of real parameters ν and μ such that could take any desired value! Now, comes the question of uniqueness: Is there a unique
.
Therefore, the physical content of the potential coupling in the Dirac Hamiltonian is a combination of the potential ( ) The former is the usual vector Coulomb potential while the latter is neither a scalar nor a vector. We call it the "pseudo Coulomb potential". It could be written as −
we choose to keep the parameter arbitrary since we have established the "equivalence" criterion mentioned above independently of ξ. The new energy spectrum formula (12) shows that the lowest positive energy bound state corresponds to 0 γ < and n = 0, where it becomes
this theory does not suffer from the "charged vacuum" problem since the positive energy electron state can never become embedded into the negative energy continuum ( m ε < − ). Hence, the vacuum in this theory is stable. Moreover, if we require that positive and negative energy subspaces be disconnected, then the parameter ξ will be restricted even further otherwise, transition between positive and negative energy states can occur for large enough Z α . In fact, this stronger condition on ξ guarantees that the geometric cosine function satisfy 1 0 C ± ≥ ≥ . Now, since the lowest positive energy is mC − , then the highest negative energy is mC + − and the energy gap between the positive and negative energy spectra is
radial components of the spinor wavefunction obtained above in (19) and (21) are for positive energies. To obtain the corresponding negative energy solutions, we apply on them the map(8). Fig. 3, we plot the radial spinor components ( ) n r φ ± for some of the lowest positive energy states and for 200 Z = .
It should be obvious (e.g., by equating coefficients of r in this condition) that this constant must be zero. Therefore, the Schrödinger-, where sign is either + or − independent of the ± sign in front of ν. Squaring both sides of this equation and solving the resulting quadratic equation for C, we obtain C ± and S ± . Substituting (A6) in (A5) gives Eq.
Figure Captions
Figure Captions
Fig. 1 :Fig. 2 :Fig. 3 :
123The positive energy spectrum obtained from Eq. (12) as a function of Z with allowed positive/negative energy transitions (dashed curves) and with no transitions (solid curves) The lowest positive energy in the spectrum given by Eq. The radial spinor components n φ + (solid curve) and n φ − (dashed curve) for some of the lowest positive energy states and
Fig. 3b
Fig. 3b
Acknowledgements: This work is sponsored by the Saudi Center for Theoretical Physics. Partial support by King Fahd University of Petroleum and Minerals is highly appreciated.Appendix: Transformation of the Dirac equation (2) into equation (3)The unitary transformation2 2( ) exp( ) i U θ θσ = could be written as the following 2×2 matrixApplying this transformation to Eq. (2), we obtain
W Greiner, B Müller, J Rafelski, Quantum Electrodynamics of Strong Fields. BerlinSpringer-VerlagW. Greiner, B. Müller, and J. Rafelski, Quantum Electrodynamics of Strong Fields (Springer-Verlag, Berlin, 1985)
P W Milonni, The Quantum Vacuum: An Introduction to Quantum Electrodynamics. San DiegoAcademicP. W. Milonni, The Quantum Vacuum: An Introduction to Quantum Electrodynamics (Academic, San Diego, 1993);
V R Khalilov, Electrons in Strong Electromagnetic Fields: An Advanced Classical and Quantum Treatment. AmsterdamGordon & BreachV. R. Khalilov, Electrons in Strong Electromagnetic Fields: An Advanced Classical and Quantum Treatment (Gordon & Breach, Amsterdam, 1996)
. J Reinhardt, B Müller, W Greiner, Phys. Rev. A. 24103J. Reinhardt, B. Müller, and W. Greiner, Phys. Rev. A 24, 103 (1981).
. E Ackad, M Horbatsch, Phys. Rev. A. 7862711E. Ackad and M. Horbatsch, Phys. Rev. A 78, 062711 (2008)
W Greiner, Relativistic Quantum Mechanics: Wave Equations. BerlinSpringerW. Greiner, Relativistic Quantum Mechanics: Wave Equations (Springer, Berlin, 1994);
J D Bjorken, S D Drell, Relativistic Quantum Mechanics. New YorkMcGraw-HillJ. D. Bjorken, and S. D. Drell, Relativistic Quantum Mechanics (McGraw- Hill, New York, 1964)
. E Ackad, M Horbatsch, Phys. Rev. A. 7622503E. Ackad and M. Horbatsch, Phys. Rev. A 76, 022503 (2007)
. H Goudarzi, M Golmohamadi, Int. J. Theor. Phys. 473121H. Goudarzi and M. Golmohamadi, Int. J. Theor. Phys. 47, 3121 (2008);
. G-X Ju, Z-Z Ren, Comm. Theor. Phys. 49319G-X Ju and Z-Z Ren, Comm. Theor. Phys. 49, 319 (2008);
. V Y Lazur, O K Reity, V V Rubish, Theor. Math. Phys. 143559V. Y. Lazur, O. K. Reity, and V. V. Rubish, Theor. Math. Phys. 143, 559 (2005);
. A Leviatan, Int. J. Mod. Phys. E. 14111A. Leviatan, Int. J. Mod. Phys. E 14, 111 (2005);
. Phys. Rev. Lett. 92202501Phys. Rev. Lett. 92, 202501 (2004);
. S-H Dong, G-H Sun, D Popov, J. Math. Phys. 444467S-H Dong, G-H Sun, and D. Popov, J. Math. Phys. 44, 4467 (2003)
. A D Alhaidari, H Bahlouli, A Al-Hasan, Phys. Lett. A. 34987A. D. Alhaidari, H. Bahlouli, and A. Al-Hasan, Phys. Lett. A 349, 87 (2006)
. A Messiah, Quantum Mechanics. INorth-HollandA. Messiah, Quantum Mechanics, Vol. I (North-Holland, Amsterdam, 1958);
E Merzbacher, Quantum Mechanics, 2 nd ed. New YorkWileyE. Merzbacher, Quantum Mechanics, 2 nd ed. (Wiley, New York, 1970)
Formulas and Theorems for the Special Functions of Mathematical Physics. W Magnus, F Oberhettinger, R P Soni, Springer-VerlagNew YorkW. Magnus, F. Oberhettinger, and R. P. Soni, Formulas and Theorems for the Special Functions of Mathematical Physics (Springer-Verlag, New York, 1966)
| [] |
[
"The Fixed-Cycle Traffic-Light queue with multiple lanes and temporary blockages",
"The Fixed-Cycle Traffic-Light queue with multiple lanes and temporary blockages"
] | [
"Rik W Timmerman \nEindhoven University of Technology\n\n",
"Marko A A Boon \nEindhoven University of Technology\n\n"
] | [
"Eindhoven University of Technology\n",
"Eindhoven University of Technology\n"
] | [] | Traffic-light modelling is a complex task, because many factors have to be taken into account. In particular, capturing all traffic flows in one model can significantly complicate the model. Therefore, several realistic features are typically omitted from most models. We introduce a mechanism to include pedestrians and focus on situations where they may block vehicles that get a green light simultaneously. More specifically, we consider a generalization of the Fixed-Cycle Traffic-Light (FCTL) queue. Our framework allows us to model situations where (part of the) vehicles are blocked, e.g. by pedestrians that block turning traffic and where several vehicles might depart simultaneously, e.g. in case of multiple lanes receiving a green light simultaneously. We rely on probability generating function and complex analysis techniques which are also used to study the regular FCTL queue. We study the effect of several parameters on performance measures such as the mean delay and queue-length distribution. | 10.1080/23249935.2022.2133980 | [
"https://export.arxiv.org/pdf/2112.11292v3.pdf"
] | 245,353,348 | 2112.11292 | 08101c4f41852a380f00bc5ced7978db9c74f3f2 |
The Fixed-Cycle Traffic-Light queue with multiple lanes and temporary blockages
4th August 2022
Rik W Timmerman
Eindhoven University of Technology
Marko A A Boon
Eindhoven University of Technology
The Fixed-Cycle Traffic-Light queue with multiple lanes and temporary blockages
4th August 2022
Traffic-light modelling is a complex task, because many factors have to be taken into account. In particular, capturing all traffic flows in one model can significantly complicate the model. Therefore, several realistic features are typically omitted from most models. We introduce a mechanism to include pedestrians and focus on situations where they may block vehicles that get a green light simultaneously. More specifically, we consider a generalization of the Fixed-Cycle Traffic-Light (FCTL) queue. Our framework allows us to model situations where (part of the) vehicles are blocked, e.g. by pedestrians that block turning traffic and where several vehicles might depart simultaneously, e.g. in case of multiple lanes receiving a green light simultaneously. We rely on probability generating function and complex analysis techniques which are also used to study the regular FCTL queue. We study the effect of several parameters on performance measures such as the mean delay and queue-length distribution.
Introduction
Traffic lights are currently omnipresent in urban areas and one of their aims is to let vehicles drive across an intersection in such a way that the delay is as small as possible. The modelling of queues in front of traffic lights therefore has always been and still is an important topic of study in road-traffic engineering. The overall aim is to create a model that is as realistic as possible, which poses to be a difficult task. There are many studies devoted to traffic control at intersections, ranging from simulation studies and the use of artificial intelligence to analytical and explicit calculations to find good control strategies. This study provides a more realistic extension of the so-called Fixed-Cycle Traffic-Light (FCTL) queue, see e.g. [19], which allows us to perform analytical computations. We call the model that we consider in this paper the blocked Fixed-Cycle Traffic-Light (bFCTL) queue with multiple lanes. Our main aim is to provide an exact computation of the steady-state queue length of the bFCTL queue with multiple lanes, although a transient analysis (possibly with time-varying parameters) is also possible.
The regular FCTL queue is a well-studied model in traffic engineering, see [8,9,19,21,30,32,33,40,41]. The typical features of the FCTL queue are:
• A fixed cycle length, fixed green and red times;
• A general arrival process;
• Constant interdeparture times of queued vehicles;
• Whenever the queue becomes empty during a green period, it remains empty since newly arriving vehicles pass the crossing at full speed without experiencing any delay.
Due to all the fixed settings, the model focuses on a single lane and does not capture any dependencies or interactions with other lanes. Unfortunately, in many cases the FCTL queue cannot be applied as a realistic model to study the queue-length distribution in front of a traffic light. Take, for example, an intersection where vehicles from a single stream are spread onto two lanes which are both heading straight and where both lanes are governed by the same traffic light, see also Figure 1(a). Indeed, since there are two parallel lanes in each direction, two vehicles can cross the intersection simultaneously and vehicles will in general switch lanes (if needed) to join the lane with the shorter queue. Moreover, it might be the case that the vehicles are blocked during the green period, e.g. because of a pedestrian crossing the intersection (receiving a green light at the same time as the stream of vehicles that we model), see Figure 1(b) for a visualization. Such blockages might also occur in a multi-lane scenario (where all lanes are going in the same direction) as visualized in Figure 1(c). It is apparent that these situations cannot be modeled by the standard FCTL queue. However, it is extremely relevant to understand such intersections better as is also indicated in e.g. [37,44] and more generally, it is e.g. important to investigate pedestrian behaviour at intersections as is done in e.g. [48]. The study in this paper provides an extension of the FCTL queue to account for such situations. They seem to be the most common in practice, see e.g. [22] for another study on the case as in Figure 1(b). For extensions and other scenarios, we refer the reader to Section 5. Note that the blocking mechanisms discussed in this paper give rise to more complicated model dynamics and dependencies, which make it impossible to use traditional methods (e.g. Webster's approximation for the mean delay [41]).
A shared right-turn lane as in Figure 1(b), that is a lane with vehicles that are either turning right or are heading straight, has been studied before. However, to the best of our knowledge, there are no papers with a rigorous analysis taking stochastic effects into account while computing e.g. the mean queue length for such lanes. Shared right-turn lanes where vehicles are blocked by pedestrians crossing immediately after the right turn have been considered in e.g. [3,12,13,14,22,31,34,35]. Several case studies, such as [13] and [34], indicate that there is a potentially severe impact by pedestrians blocking vehicles. This is for example also reflected in the Highway Capacity Manual (HCM) as published by the Transportation Research Board [39], where the focus is on capacity estimation. Most papers have also (a) (b) (c) Figure 1 A visualization of three intersections that can be modeled by the bFCTL queue with multiple lanes. In (a), the blue rectangle indicates a combination of lanes which can be analyzed as a bFCTL queue with two lanes. The other lanes at the intersection, the complement of the blue rectangle, can be considered separately because of the fixed settings. In (b), the blue rectangle indicates a lane that can be modeled as a bFCTL queue with a single lane with blockages. In (c), the blue rectangle indicates two lanes that we can model as a bFCTL queue with two lanes where vehicles are potentially blocked by pedestrians.
focused on the estimation of the so-called saturation flow rate, or capacity, of shared lanes where turning vehicles are possibly blocked by pedestrians, see e.g. [14,31,35]. In [12], it is stated that the used functions for the capacity estimation for turning lanes (such as those in the HCM) might have to be extended to account for stochastic behaviour. In a small case study, [12] confirm that the capacity estimation by the HCM yields an overestimation in various cases. The overestimation of the capacity by the HCM is also observed in several other papers, such as in [13,14] and [22], and is probably due to random/stochastic effects. The bFCTL queue explicitly models such stochastic behaviour. A potential application of the bFCTL queue with a single lane as depicted in Figure 1(b) can be found in the model that is studied in [22], which has also been the source of inspiration for this paper. A description of the model in [22] is as follows, where we replace the left-turn assumption for left-driving traffic to a right-turn assumption for the more standard case of right-driving traffic. We have a shared lane with straight-going and right-turning traffic controlled by a traffic light, where immediately after the right turn there is a crossing for pedestrians. The pedestrians may block the right-turning vehicles as the vehicles and pedestrians may receive a green light simultaneously. The right-turning vehicles that are blocked, immediately block all vehicles behind them.
Another potential application of the bFCTL queue is to account for bike lanes. Bikes might make use of a dedicated lane or mix with other traffic and in both cases a turning vehicle might be (temporarily) blocked by bicycles because the bicycles happen to be in between the vehicle and the direction that the vehicle is going. As such, blockages have an influence on the performance measures of the traffic light. It is important to take such influences into account in order to find good traffic-light settings. Several papers studying the impact of bikes can be found in [4,15,20] and [11]. Also other types of blocking might occur, such as by a shared-left turn lane and opposing traffic receiving a green light simultaneously, see e.g. [10,25,26,27,28,43,45,46]. As such, the bFCTL queue (either with multiple lanes or not) is a relevant addition to the literature because it enables a more suitable modelling of traffic lights at intersections with crossing pedestrians and bikes, which leads to trafficlight control strategies for more realistic situations. In order to model a situation where two opposing streams of vehicles potentially block one another as in e.g. [45], the bFCTL queue would have to be extended. For more references on the topics discussed in this paragraph see also the review paper by [16]. Another related study is [33] who introduce a model with "distracted" drivers, which can be considered as an FCTL queue with independent blockages, but this blocking mechanism is a special case of the one discussed in the present paper.
As mentioned before, we call the model that we consider in this paper the bFCTL queue with multiple lanes. On the one hand we thus allow for the modelling of vehicle streams that are spread over multiple lanes and on the other hand we allow for vehicles to be (temporarily) blocked during the green phase. The key observation to constructing the mathematical model is that we can model multiple parallel (say m) lanes as one single queue where batches of (up to) m delayed vehicles can depart in one time slot, for more details see Section 2. The resulting queueing model is one-dimensional just like the standard FCTL queue, which allows us to obtain the probability generating function (PGF) of the steady-state queue-length distribution of the bFCTL queue with multiple lanes and to provide an exact characterization of the capacity.
In summary, our main contributions are as follows:
(i) We extend the general applicability of the Fixed-Cycle Traffic-Light (FCTL) queue. We allow for traffic streams with multiple lanes and for vehicles to be blocked during the green phase. We refer to this model variation as the blocked Fixed-Cycle Traffic-Light (bFCTL) queue with multiple lanes.
(ii) We provide an exact capacity analysis for the bFCTL queue relieving the need for simulation studies.
(iii) We provide a way to compute the PGF of the steady-state queue-length distribution of the bFCTL queue and show that it can be used to obtain several performance measures of interest.
(iv) We provide a queueing-theoretic framework for the study of shared lanes with potential blockages by pedestrians. This e.g. allows for the study of several performance measures and allows us to model the impact of randomness on the performance measures.
Paper outline
The remainder of this paper is organized as follows. In Section 2, we give a detailed model description. This is followed by a capacity analysis, a derivation of the PGF of the steadystate queue-length distribution, and a derivation of some of the main performance measures in Section 3. In Section 4, we provide an overview of relevant performance measures for some numerical examples and point out various interesting results. We wrap up with a conclusion and some suggestions for future research in Section 5.
Detailed model description
In this section we provide a detailed model description of the bFCTL queue with multiple lanes. We assume that there are multiple lanes for a traffic stream, that is a group of vehicles coming from the same road and heading into one (or several) direction(s), governed by a single traffic light. A visualization can be found in Figure 2(a). As can be seen in Figure 2(a), we assume that there are m lanes and that vehicles spread themselves among the available lanes in such a way that m vehicles can depart if there are at least m vehicles. In practice, this assumption makes sense as drivers gladly minimize their delay by choosing free lanes. The traffic-light model is then turned into a queueing model with a single queue with batch services of vehicles, see Figure 2(b). The batches generally consist of m delayed vehicles (we consider delayed vehicles as is done in the study of the FCTL queue, see e.g. [8]), except if less than m delayed vehicles are present at the moment that a batch is taken into service: then all vehicles are taken into service. We further assume that the time axis is divided into time intervals of constant length, where each interval corresponds to the time it takes for a batch of delayed vehicles to depart from the queue. We will refer to these intervals as slots.
We now turn to discuss two concrete, motivational examples that fit the framework of the bFCTL queue with multiple lanes. After that, we describe the assumptions of the bFCTL queue more formally.
Example 1 (Shared right-turn lane)
In this example we consider the scenario as in Figure 1(b): a single shared right-turn lane (m = 1) on which right-turning vehicles may be blocked by pedestrians crossing the road.

We are now set to formalize the assumptions for the bFCTL queue with multiple lanes. We number them for clarity and provide additional remarks where necessary. We start with a standard assumption for FCTL queues and a standard assumption on the independence of arriving vehicles, see, e.g., [40].
Assumption 1 (Discrete-time assumption)
We divide time into discrete slots. The red and green times, r and g respectively, are fixed multiples of those discrete slots and the total cycle length, c = g + r, thus consists of an integer number of slots. Each slot corresponds to the duration of the departure of a batch of maximally m delayed vehicles, where m is the maximum number of vehicles that can cross the intersection simultaneously. Any arriving vehicle that finds at least m other vehicles waiting in front of the traffic light is delayed and joins the queue.
Assumption 2 (Independence of arrivals)
All arrivals are assumed to be independent. In particular, the arrivals during slot i do not affect the arrivals in slot j when i ≠ j.
The next three assumptions, Assumptions 3, 4, and 5, relate to the blockages of vehicles and allow us to model such blockages explicitly.
Assumption 3 (Green period division)
For the green period we distinguish between two parts, g_1 and g_2, with g = g_1 + g_2. During the first part of the green period, blockages might occur (see also Assumption 4 below). During the second part of the green period there are no blockages at all. We further assume that g_2 > 0 for technical reasons.
We make a division of the green period into two parts as is done in e.g. [22]. Moreover, such a division is often present in reality and it slightly eases the computations later on. In practice, this means, for example, that during the second part of the green period a "no walk" sign is flashing, during which pedestrians are not allowed to cross the intersection. We note that if g_1 = 0 (and m = 1), we obtain the standard FCTL queue.
Further, we assume that the second part of the green period is strictly positive, mainly for technical reasons. This basically implies that at least one batch of vehicles can depart from the queue during each cycle and that there is no batch of vehicles in the queue at the end of the cycle that has caused a blockage before. If g_2 were zero and a batch of vehicles were blocked at the end of slot g_1, this would allow a blockage to carry over to the next cycle, leading to a slightly more complex model. Moreover, the red and green times could be taken random in the regular FCTL queue when the times are independent of one another, see e.g. [7]. At the expense of additional complexity, our framework for the bFCTL queue could be adjusted to account for such sources of randomness. This would allow one to model (to some extent) randomness in, for example, crossing times of pedestrians.
Next, we make an assumption about the blocking of batches of vehicles during the first part of the green period. We take into account that (i) not all batches of vehicles at the head of the queue are potentially blocked (e.g. because only turning batches of vehicles can be blocked); that (ii) if a batch of vehicles is blocked, all vehicles behind it are blocked as well; that (iii) once a blockage occurs, it carries over to the next slot; and that (iv) blockages occur only in the combined event of having a right-turning batch of vehicles at the head of the queue and pedestrians crossing the road.
Assumption 4 (Potential blocking of batches)
A batch of vehicles, arriving at the head of the queue in time slot i, turns right with probability p_i. Independently, in time slot j, pedestrians cross the road with probability q_j, blocking right-turning traffic. As a consequence, whenever a new batch arrives at the head of the queue, this batch will be served in that particular time slot if (i) the batch goes straight ahead, or (ii) the batch turns right but there are no crossing pedestrians. Once a batch (of right-turning vehicles) is blocked, it will remain blocked until the next time slot in which no pedestrians cross the road. Note that this will be time slot g_1 + 1 at the latest. If the batch at the head of the queue is blocked, it will also block all the other batches in the queue, including those that would go straight. Both p_i and q_i are allowed to depend on the slot i.
Remark 1
We make a couple of remarks on the values of the p_i. First, we note that p_i does not represent the probability that the batch at the head of the queue is a turning batch, but rather the probability that a newly arriving batch that gets to the head of the queue in slot i is a turning batch. In practice, this will usually not depend on the slot in which the batch gets to the head of the queue. This would imply that p_i = p (see, e.g., Example 2) and that we could drop the subscript i. However, we are able to let p_i depend on the slot in the derivation of the formulas and opt to provide the general case where p_i is allowed to depend on i.
Moreover, in the case that m > 1, we will often assume that either p_i = 0, as is the case in Figure 1(a), or p_i = 1, as is the case in Figure 1(c). This is mainly due to the fact that all vehicles in a batch have to be treated similarly: the framework of the bFCTL queue does not allow for batches consisting of one right-turning vehicle that is blocked and one straight-going vehicle that is allowed to depart because it is not blocked. That is, a case with mixed traffic and multiple lanes, such as the shared right-turn lane example in Figure 1(b) but with m > 1, is not modeled by the bFCTL queue. We do not consider this to be a severe restriction, as it will often be the case in practice that p_i = 0 or p_i = 1 if m > 1. We stress that the case with m = 1, as depicted by the blue rectangle in Figure 1(b), can be studied by the bFCTL queue.
Remark 2
We would like to stress that the blockage of a batch of vehicles carries over to the next slot. E.g. if a vehicle is a right-turning vehicle in Figure 1(b) and is blocked, it is still at the head of the queue in the next slot. So, as soon as a blockage actually takes place, we are essentially in a different state of the system than in the case where there is no blockage: if there is a blockage in time slot i, then we are sure that there is a right-turning batch at the head of the queue in time slot i + 1. This is why we have two mechanisms for the blocking: on the one hand we have the p_i to check whether new batches that get to the head of the queue are right-turning, and on the other hand we have the pedestrians crossing in slot i accounted for by the q_i.
We need one final assumption which is a slightly adapted version of the standard FCTL assumption. We require a slight change because of the potential blocking of vehicles during the first part of the green phase and because of the possibility that there is more than one delayed vehicle departing in a single slot during the green period because of the batch-service structure.
Assumption 5 (bFCTL assumption)
We assume that any vehicle arriving during a slot where m − 1 or fewer vehicles are in the queue may depart from the queue immediately, together with the m − 1 or fewer delayed vehicles. There are two exceptions: (i) if this batch of m − 1 or fewer vehicles is blocked, or (ii) if the queue was empty and there is an arriving vehicle that gets blocked, in which case that vehicle gets blocked together with any arriving vehicles after that vehicle. In the former case, all arriving vehicles together with the delayed vehicles remain at the queue. In the latter case, the first blocked vehicle is delayed and any arriving vehicles behind it (if any) are also delayed and blocked, where we restrict ourselves to the situation where the queue is empty. If the queue was not empty, then we assume that either all arriving vehicles in that slot are blocked and delayed (because the batch at the head of the queue is blocked) or that all arriving vehicles are allowed to depart along with the batch of delayed vehicles (because the batch at the head of the queue is not blocked). Summarizing, if the queue length at the start of the slot is at least 1 but at most m − 1, we either have no departures (in case of a blockage) or all vehicles are allowed to cross the intersection (including arriving vehicles). If the queue length is 0, we only have a non-zero queue at the end of the slot if a vehicle gets blocked: then the blocked vehicle and any vehicles arriving behind it are queued.
Remark 3
The bFCTL assumption allows one to model a situation where arriving vehicles get blocked if the queue was already empty before the start of the slot. Although, in principle, one can use any distribution for the number of arriving vehicles that are blocked, there are only few logical choices in practice. For example, in the case of Figure 1(b), the number of (potentially) blocked vehicles that arrive at the queue during slot i would correspond to the number of vehicles counting from the first right-turning vehicle among all vehicles arriving in slot i: these vehicles will be blocked if there is a crossing pedestrian in slot i. In Figure 1(c), any arriving vehicle is a turning vehicle. So, if there is a crossing pedestrian, all arriving vehicles in slot i are blocked.
The combination of all the above assumptions enables us to view the process as a discrete-time Markov chain, which in turn allows us to obtain the capacity and the PGF of the steady-state queue-length distribution of the bFCTL queue with multiple lanes. We do so in the next section.
Capacity analysis, PGFs, and performance measures for the bFCTL queue
In this section we provide an exact analysis for the bFCTL queue. We start with an exact characterization of the capacity in Subsection 3.1. In Subsection 3.2, we obtain the steady-state queue-length distribution in terms of PGFs; we thus focus on the transforms of the queue-length distribution, because we cannot directly obtain closed-form expressions for the probabilities. We can use the methods devised in e.g. [1] and [18] to obtain numerical values from the PGFs for the queue-length probabilities and moments, respectively. Without giving details, we stress that our recursive approach in Subsection 3.2 also allows us to provide a transient analysis, in which case we can also take time-varying parameters into account. In Subsection 3.3, we study several important performance measures of the bFCTL queue.
Capacity analysis for the bFCTL queue
In this subsection we develop a computational algorithm to determine the capacity of the bFCTL queue. The capacity is defined as the maximum number of vehicles that can cross the intersection in the given lane group, per time unit. In the standard FCTL queue, the capacity can simply be determined by multiplying the saturation flow with the ratio of the green time and the cycle length. In the bFCTL model, however, there are subtle dependencies which carry over from one cycle to the next cycle. We will capture these dependencies by means of a Markov reward model. The Markov chain with the associated transition probabilities that we use is depicted in Figure 3. We are interested in the number of departures of delayed vehicles in each time slot. For this reason, the Markov chain that we consider here only has states (i, s) for i = 1, ..., g_1, representing the slots during the first part of the green period, and s = u, b, representing the case where vehicles are not blocked (s = u) and the case where vehicles are blocked (s = b). We also have states i for i = g_1 + 1, ..., g_1 + g_2 + r, representing the slots during the second part of the green period and the red period. Finally, we create an artificial state 0 to gather the rewards from states (1, b) and (1, u). The long-term mean number of departures of delayed vehicles can now be determined by means of a Markov reward analysis.
Figure 3: Markov chain used to study the capacity of the bFCTL queue, with states 0, (i, u), and (i, b) for i = 1, ..., g_1, states g_1 + 1, ..., g_1 + g_2 + r, and the transition probabilities p_i q_i, q_i, and their complements as described above.
We use Markov reward theory to obtain the capacity of the bFCTL queue. In order to use Markov reward theory, we work backwards from state g_1 + g_2 + r to obtain the reward in state 0. Indeed, we get the mean number of vehicles that is able to depart from the queue in an arbitrary cycle when we compute the reward in state 0. The rewards that we assign to each transition are as follows: if we make a transition to a state (i, u) for i = 1, ..., g_1, we receive a reward m, reflecting the maximum of m delayed vehicles departing from the queue. We also get a reward m if we make a transition from state g_1 + i to state g_1 + i + 1 for i = 1, ..., g_2 − 1. For all other transitions, we receive no reward, as there are no vehicles departing. We denote the received reward up to state (i, s) with r_{i,s}, with i = 1, ..., g_1 and s = u, b, and the received reward up to state i with r_i for i = 0 and i = g_1 + 1, ..., g_1 + g_2 + r. Then we get the following relations between the rewards in the various states. We start with defining the total reward in state g_1 + g_2 + r to be 0 (there are no vehicle departures while being in state g_1 + g_2 + r), i.e. r_{g_1+g_2+r} = 0. For states i = g_1 + g_2, ..., g_1 + g_2 + r − 1, we obtain
r_i = r_{i+1},    (3.2)

as there are no departures during the red period. However, for states i = g_1 + 1, ..., g_1 + g_2 − 1, we have

r_i = m + r_{i+1}    (3.3)

as there are (potentially) m delayed vehicles departing. For state (g_1, b) we have that

r_{g_1,b} = r_{g_1+1},    (3.4)

as there are no departures when the vehicles are blocked. For state (g_1, u) we obtain

r_{g_1,u} = m + r_{g_1+1}    (3.5)

as there are, at most, m delayed vehicles departing from the queue when we transition from state (g_1, u) to g_1 + 1. Similarly, for states (i, b) with i = 1, ..., g_1 − 1, we get

r_{i,b} = q_{i+1} r_{i+1,b} + (1 − q_{i+1}) r_{i+1,u}    (3.6)

and for states (i, u) with i = 1, ..., g_1 − 1, we get

r_{i,u} = m + p_{i+1} q_{i+1} r_{i+1,b} + (1 − p_{i+1} q_{i+1}) r_{i+1,u}.    (3.7)

Finally, for state 0, we get

r_0 = p_1 q_1 r_{1,b} + (1 − p_1 q_1) r_{1,u}.    (3.8)
Then r_0 is the average reward received when traversing the Markov chain depicted in Figure 3. This average reward translates to the mean number of delayed vehicles that are able to depart from the queue during a cycle, which is exactly the capacity of this lane group. We can thus compute the capacity of the bFCTL queue for each set of input parameters. Along with the mean number of arrivals per cycle, we can also check whether the bFCTL queue renders a stable queueing model. If we denote the mean number of arrivals in slot i by E[Y_i], the mean number of arrivals per cycle is \sum_{i=1}^{c} E[Y_i], and the bFCTL queue is stable if r_0 > \sum_{i=1}^{c} E[Y_i].
The procedure to check for stability is summarized in Algorithm 1.
Algorithm 1 Algorithm to check for stability of the bFCTL queue.
1: Input: E[Y_i] for i = 1, ..., c; g_1, g_2, c; p_i for i = 1, ..., g_1; and q_i for i = 1, ..., g_1.
2: Use Equations (3.1) up to (3.8) to determine r_0.
3: if \sum_{i=1}^{c} E[Y_i] < r_0 then
4:     The bFCTL queue is stable.
5: else
6:     The bFCTL queue is not stable.
7: end if
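To make the backward recursion concrete, the following minimal Python sketch transcribes Equations (3.2)-(3.8), with the summation ranges exactly as printed, together with the stability check of Algorithm 1. The function names and the illustrative parameter values at the end are our own choices and are not part of the model specification.

def bfctl_capacity(g1, g2, m, p, q):
    # Backward Markov reward recursion, Eqs. (3.2)-(3.8).
    # p[i-1] and q[i-1] play the roles of p_i and q_i for i = 1, ..., g1.
    r_i = 0.0                         # r_{g1+g2} = ... = r_{g1+g2+r} = 0, Eq. (3.2)
    for _ in range(g2 - 1):           # states i = g1+g2-1, ..., g1+1, Eq. (3.3)
        r_i = m + r_i                 # r_i now equals r_{g1+1}
    r_b = r_i                         # r_{g1,b} = r_{g1+1}, Eq. (3.4)
    r_u = m + r_i                     # r_{g1,u} = m + r_{g1+1}, Eq. (3.5)
    for i in range(g1 - 1, 0, -1):    # states (i,b) and (i,u), Eqs. (3.6)-(3.7)
        r_b, r_u = (q[i] * r_b + (1 - q[i]) * r_u,
                    m + p[i] * q[i] * r_b + (1 - p[i] * q[i]) * r_u)
    return p[0] * q[0] * r_b + (1 - p[0] * q[0]) * r_u   # r_0, Eq. (3.8)

def bfctl_is_stable(mean_arrivals, g1, g2, m, p, q):
    # Algorithm 1: stable iff the mean number of arrivals per cycle is below r_0.
    return sum(mean_arrivals) < bfctl_capacity(g1, g2, m, p, q)

# Illustrative usage: g1 = 4, g2 = 6, r = 10, m = 2, Poisson(0.7) arrivals.
g1, g2, r, m = 4, 6, 10, 2
p, q = [0.5] * g1, [0.8] * g1
print(bfctl_capacity(g1, g2, m, p, q))
print(bfctl_is_stable([0.7] * (g1 + g2 + r), g1, g2, m, p, q))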
Derivation of the PGFs for the bFCTL queue
First, we need to introduce some further concepts and notation before we continue our quest to obtain the relevant PGFs of the queue-length distribution. We introduce two states, one corresponding to a situation where the queue is blocked and one where this is not the case, cf. Assumption 4 and Remark 2 and as is done in Subsection 3.1. We denote the random variable of being in either of the two states with S; S takes the values b (blocked) and u (unblocked). By definition, blocked states only occur during the first part of the green period and if there are vehicles in the queue. We define S to be equal to u if the queue is empty. We denote the joint steady-state queue length (measured in number of vehicles) and the state S at the end of slot i = 1, ..., g_1 with the tuple (X_i, S) and we denote its PGF with X_{i,j}(z), where i = 1, ..., g_1 and j = u, b. We note that X_{i,b}(z) and X_{i,u}(z) are partial generating functions: we have, for example, X_{i,b}(z) = E[z^{X_i} 1{S = b}], where 1{S = b} = 1 if S = b and 0 otherwise. For the slots i = 1, ..., c we denote the steady-state queue length with X_i and its PGF with X_i(z), so for i = 1, ..., g_1 we have that X_i(z) = X_{i,u}(z) + X_{i,b}(z).
We note that, as we are looking at the steady-state distribution of the number of vehicles in the queue, we need to require stability of the queueing model. We can check whether or not the stability condition is satisfied by means of Algorithm 1 devised in Subsection 3.1.
We further denote with Y_i the number of arrivals during slot i, and with Y_{i,b} we denote the total number of arrivals of potentially blocked vehicles during slot i, see also Assumption 5. We denote their PGFs respectively with Y_i(z) and Y_{i,b}(z). Later in this subsection, we provide Y_{i,b}(z) for several concrete examples.
In the next part of this subsection we provide the recursion between the X_{i,j}(z), i = 1, ..., g_1 and j = u, b, and the X_i(z), i = g_1 + 1, ..., c. Afterwards, we wrap up with some technicalities that need to be overcome to obtain a full characterization of all the PGFs.
Recursion for the X_{i,j}(z)
We start with the relation between X_{1,b}(z) and X_c(z). We distinguish several cases while making a transition from slot c to a blocked state in slot 1. We get

X_{1,b}(z) = p_1 q_1 E[z^{X_c + Y_1} 1{X_c > 0}] + q_1 E[z^{Y_{1,b}} 1{X_c = 0} 1{Y_{1,b} > 0}] + 0 · E[1{X_c = 0} 1{Y_{1,b} = 0}]
           = p_1 q_1 X_c(z) Y_1(z) + q_1 P(X_c = 0) (Y_{1,b}(z) − Y_{1,b}(0) − p_1 Y_1(z)).    (3.10)
We explain this relation as follows: if the queue is nonempty at the end of slot c, we need both a right-turning batch of vehicles and a crossing pedestrian in slot 1 to get a blockage, which happens with probability p_1 q_1. The queue length at the end of slot 1 is then X_c + Y_1.
The second term can be understood as follows: if X_c = 0, the queue at the end of slot c is empty, and we then get to a blocked state if there is a pedestrian crossing (which happens with probability q_1) and if Y_{1,b} > 0, in which case the queue length is Y_{1,b}. Note further that the case X_{1,b} = 0 cannot occur (by definition), as indicated by the term on the second line of Equation (3.10). Similarly, we derive X_{1,u}(z):
X_{1,u}(z) = (1 − p_1 q_1) E[z^{X_c + Y_1 − m} 1{X_c ≥ m}] + (1 − p_1 q_1) E[z^0 1{1 ≤ X_c ≤ m − 1}] + (1 − q_1) E[z^0 1{X_c = 0}] + q_1 E[z^0 1{X_c = 0} 1{Y_{1,b} = 0}]    (3.11)
           = (1 − p_1 q_1) X_c(z) Y_1(z)/z^m + (1 − p_1 q_1) \sum_{l=1}^{m−1} P(X_c = l) (1 − Y_1(z)/z^{m−l}) + P(X_c = 0) (1 − q_1 + q_1 Y_{1,b}(0) − (1 − p_1 q_1) Y_1(z)/z^m).
This relation can be understood in the following way: first, if there are at least m vehicles at the end of slot c and there is no blockage (which occurs with probability 1 − p_1 q_1, i.e. the complement of a blockage occurring), then the queue length at the end of slot 1 is X_c + Y_1 − m. Secondly, if there are at least 1 but at most m − 1 vehicles at the end of slot c, we have an empty queue at the end of slot 1 if there is no blockage (which is the case with probability 1 − p_1 q_1). Thirdly, if the queue is empty at the end of slot c, then the queue remains empty if there are no pedestrians crossing (occurring with probability 1 − q_1) or if there is a pedestrian crossing (occurring with probability q_1) while Y_{1,b} = 0. This fully explains Equation (3.11).
In a similar way, we obtain the following relations for slots i = 2, ..., g_1:

X_{i,b}(z) = p_i q_i E[z^{X_{i−1} + Y_i} 1{X_{i−1} > 0} 1{S = u}] + q_i E[z^{X_{i−1} + Y_i} 1{S = b}] + q_i E[z^{Y_{i,b}} 1{X_{i−1} = 0} 1{S = u} 1{Y_{i,b} > 0}]
           = p_i q_i X_{i−1,u}(z) Y_i(z) + q_i X_{i−1,b}(z) Y_i(z) + q_i P(X_{i−1} = 0, S = u) (Y_{i,b}(z) − Y_{i,b}(0) − p_i Y_i(z)),    (3.12)
where we have to take both transitions from slot i − 1 while being blocked (the case S = b) and not being blocked (the case S = u) into account, and
X_{i,u}(z) = (1 − p_i q_i) E[z^{X_{i−1} + Y_i − m} 1{X_{i−1} ≥ m} 1{S = u}] + (1 − q_i) E[z^{X_{i−1} + Y_i − m} 1{X_{i−1} ≥ m} 1{S = b}]
           + (1 − p_i q_i) E[z^0 1{1 ≤ X_{i−1} ≤ m − 1} 1{S = u}] + (1 − q_i) E[z^0 1{1 ≤ X_{i−1} ≤ m − 1} 1{S = b}]
           + (1 − q_i) E[z^0 1{X_{i−1} = 0} 1{S = u}] + q_i E[z^0 1{X_{i−1} = 0} 1{S = u} 1{Y_{i,b} = 0}]    (3.13)
           = (1 − p_i q_i) X_{i−1,u}(z) Y_i(z)/z^m + (1 − q_i) X_{i−1,b}(z) Y_i(z)/z^m
           + (1 − p_i q_i) \sum_{l=1}^{m−1} P(X_{i−1} = l, S = u) (1 − Y_i(z)/z^{m−l}) + (1 − q_i) \sum_{l=1}^{m−1} P(X_{i−1} = l, S = b) (1 − Y_i(z)/z^{m−l})
           + P(X_{i−1} = 0, S = u) (1 − q_i + q_i Y_{i,b}(0) − (1 − p_i q_i) Y_i(z)/z^m).
In order to derive X_{g_1+1}(z), we note that we need to take into account the cases where the queue was blocked or not during slot g_1. We then get

X_{g_1+1}(z) = E[z^{X_{g_1} + Y_{g_1+1} − m} 1{X_{g_1} ≥ m} 1{S = u}] + E[z^{X_{g_1} + Y_{g_1+1} − m} 1{X_{g_1} ≥ m} 1{S = b}]
             + E[z^0 1{X_{g_1} ≤ m − 1} 1{S = u}] + E[z^0 1{X_{g_1} ≤ m − 1} 1{S = b}]
             = (X_{g_1,u}(z) + X_{g_1,b}(z)) Y_{g_1+1}(z)/z^m + \sum_{l=0}^{m−1} P(X_{g_1} = l, S = u) (1 − Y_{g_1+1}(z)/z^{m−l})
             + \sum_{l=1}^{m−1} P(X_{g_1} = l, S = b) (1 − Y_{g_1+1}(z)/z^{m−l}).    (3.14)
For i = g_1 + 2, ..., g_1 + g_2, we obtain

X_i(z) = E[z^{X_{i−1} + Y_i − m} 1{X_{i−1} ≥ m}] + E[z^0 1{X_{i−1} ≤ m − 1}]
       = X_{i−1}(z) Y_i(z)/z^m + \sum_{l=0}^{m−1} P(X_{i−1} = l) (1 − Y_i(z)/z^{m−l}),    (3.15)
while for slots i = g_1 + g_2 + 1, ..., c we get

X_i(z) = E[z^{X_{i−1} + Y_i}] = X_{i−1}(z) Y_i(z).    (3.16)
The combination of all equations above provides us with a recursion with which we can express X_{g_1+g_2}(z) in terms of Y_i(z), Y_{i,b}(z), the probabilities P(X_i = l, S = u) and P(X_i = l, S = b) for i = 1, ..., g_1 and l = 0, ..., m − 1, and P(X_i = l) for i = g_1 + 1, ..., g_1 + g_2 − 1, i = c, and l = 0, ..., m − 1, with the following general form:
X_{g_1+g_2}(z) = \frac{X_n(z)}{X_d(z)},    (3.17)
with known X_n(z) and X_d(z). We refrain from giving X_n(z) and X_d(z) in the general case because of their complexity and only provide them under simplifying assumptions later in this subsection. The Y_i(z) are known, but we still need to obtain the Y_{i,b}(z), the P(X_i = l, S = u) and P(X_i = l, S = b) for i = 1, ..., g_1 and l = 0, ..., m − 1, and the P(X_i = l) for i = g_1 + 1, ..., g_1 + g_2 − 1, i = c, and l = 0, ..., m − 1. We start with the Y_{i,b}(z) and then come back to the unknown probabilities. The occurrence of the PGF Y_{i,b}(z) directly relates to Assumption 5. As mentioned before in Remark 3, one could, a priori, use any discrete, non-negative random variable. However, when we have a specific example in mind, there is usually one logical definition, see also Remark 5 below.
Remark 5
In general, we define Y_{i,b} to be the random variable of the total number of arrivals of potentially blocked vehicles during slot i, cf. Assumption 5. In case m = 1, such as in Figure 1(b), the interpretation of Y_{i,b}(z) is straightforward. We simply count the number of arriving vehicles starting from the first vehicle that is a turning vehicle. We get the following expression for Y_{i,b}(z):

Y_{i,b}(z) = \sum_{k=0}^{\infty} P(Y_{i,b} = k) z^k
           = \sum_{j=0}^{\infty} P(Y_i = j)(1 − p_i)^j + \sum_{k=1}^{\infty} \sum_{j=k}^{\infty} P(Y_i = j)(1 − p_i)^{j−k} p_i z^k
           = Y_i(1 − p_i) + \sum_{j=1}^{\infty} p_i P(Y_i = j)(1 − p_i)^j \sum_{k=1}^{j} \Big(\frac{z}{1 − p_i}\Big)^k
           = Y_i(1 − p_i) + \sum_{j=1}^{\infty} p_i P(Y_i = j)(1 − p_i)^j \frac{z (1 − (z/(1 − p_i))^j)}{1 − p_i − z}
           = Y_i(1 − p_i) + \frac{p_i z}{1 − p_i − z} \sum_{j=1}^{\infty} P(Y_i = j) ((1 − p_i)^j − z^j)
           = Y_i(1 − p_i) + \frac{p_i z}{1 − p_i − z} (Y_i(1 − p_i) − Y_i(z)),
where in the second step we condition on the total number of arrivals and take into account how we can get to k blocked vehicles; in the third step we interchange the order of summation; and in the fourth step we compute a geometric series. If m > 1, the interpretation as above for the case m = 1 is not necessarily meaningful. It is more difficult to compute the Y_{i,b} in a logical and consistent way. This has to do with the fact that if m > 1 we consider batches of vehicles that are either all blocked or not, whereas the Y_{i,b} are about individual vehicles. As mentioned before in Remark 1, if m > 1 we often have that either p_i = 0 or p_i = 1. If p_i = 0, the general expression for Y_{i,b}(z) reduces to

Y_{i,b}(z) = Y_i(1) + 0 · (Y_i(1) − Y_i(z)) = Y_i(1) = 1,

which makes sense as there are no turning vehicles in case p_i = 0. If p_i = 1, we have that

Y_{i,b}(z) = Y_i(0) − (Y_i(0) − Y_i(z)) = Y_i(z),

which is also logical: every arriving vehicle is a turning vehicle if p_i = 1, so we have that Y_{i,b}(z) = Y_i(z).
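As a quick sanity check of the closed form in Remark 5, one can compare it with a direct Monte Carlo estimate of E[z^{Y_{i,b}}]. The sketch below does so for Poisson arrivals; the Poisson assumption, the sampler, and the parameter values are purely illustrative choices of ours.

import math, random

def Y_pgf(z, mu):
    # PGF of Poisson(mu) arrivals in one slot: Y(z) = exp(mu * (z - 1)).
    return math.exp(mu * (z - 1))

def Yb_pgf(z, mu, p):
    # Closed form from Remark 5; z = 1 - p is a removable singularity.
    if abs(1 - p - z) < 1e-9:
        z += 1e-6
    return Y_pgf(1 - p, mu) + p * z / (1 - p - z) * (Y_pgf(1 - p, mu) - Y_pgf(z, mu))

def poisson_sample(mu):
    # Knuth's method, adequate for moderate mu.
    threshold, k, prod = math.exp(-mu), 0, random.random()
    while prod > threshold:
        k += 1
        prod *= random.random()
    return k

def Yb_sample(mu, p):
    # Number of arrivals counting from the first turning vehicle onwards.
    n = poisson_sample(mu)
    for pos in range(n):
        if random.random() < p:       # first turning vehicle found
            return n - pos            # that vehicle and all vehicles behind it
    return 0

z, mu, p = 0.5, 1.2, 0.3
mc = sum(z ** Yb_sample(mu, p) for _ in range(200_000)) / 200_000
print(mc, Yb_pgf(z, mu, p))           # the two values should nearly agree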
Except for the constants P(X_i = l, S = u) and P(X_i = l, S = b) for i = 1, ..., g_1 and l = 0, ..., m − 1, and P(X_i = l) for i = g_1 + 1, ..., g_1 + g_2 − 1, i = c, and l = 0, ..., m − 1, we are now done. We explain how to find the (so far) unknown constants in the next part of this subsection.
Finding the unknowns in X_{g_1+g_2}(z)
As mentioned before, we still need to find several unknowns before the expression for X_{g_1+g_2}(z) is complete. The standard framework for the FCTL queue as described in e.g. [40] is also applicable to the bFCTL queue with multiple lanes, with some minor differences. Although we are dealing with more complex formulas, the key ideas are identical. We have m(g_1 + g_2) + (m − 1)g_1 unknowns in the numerator X_n(z) of X_{g_1+g_2}(z) in Equation (3.17) and we have m(g_1 + g_2) roots with |z| ≤ 1 for the denominator X_d(z) of X_{g_1+g_2}(z), assuming stability of the queueing model. An application of Rouché's theorem, see e.g. [2], shows that X_d(z) indeed has m(g_1 + g_2) roots on or within the unit circle assuming stability. One root is z = 1, which leads to a trivial equation; as a substitute for this root, we put in the additional requirement that X_{g_1+g_2}(1) = 1. The remaining (m − 1)g_1 equations are implicitly given in Equations (3.10) and (3.12). We give them here separately for completeness. We have for k = 1, ..., m − 1
P(X_1 = k, S = b) = p_1 q_1 \sum_{l=1}^{k} P(X_c = l) P(Y_1 = k − l) + q_1 P(X_c = 0) P(Y_{1,b} = k),

and for i = 2, ..., g_1 and k = 1, ..., m − 1

P(X_i = k, S = b) = \sum_{l=1}^{k} (p_i q_i P(X_{i−1} = l, S = u) + q_i P(X_{i−1} = l, S = b)) P(Y_i = k − l) + q_i P(X_{i−1} = 0, S = u) P(Y_{i,b} = k),
which provides us with the (m − 1)g_1 additional equations. In total, we obtain a set of m(g_1 + g_2) + (m − 1)g_1 linear equations with m(g_1 + g_2) + (m − 1)g_1 unknowns, which we can solve to find the unknown P(X_i = l, S = u) for i = 1, ..., g_1 and l = 0, ..., m − 1, the unknown P(X_i = l, S = b) for i = 1, ..., g_1 and l = 1, ..., m − 1, and the unknown P(X_i = l) for i = g_1 + 1, ..., g_1 + g_2 − 1, i = c, and l = 0, ..., m − 1. Due to the complicated structure of our formulas, we do not obtain a similar, easy-to-compute Vandermonde system as for the standard FCTL queue (see [40]), but a linear solver is in general able to find the unknowns (we did not encounter any numerical issues in the examples that we studied).
There are several ways to obtain the roots of X_d(z) in Equation (3.17). Because those roots are subsequently used in solving a system of linear equations, we need to find the required roots with a sufficiently high precision, certainly if m(g_1 + g_2) + (m − 1)g_1 is large. In some cases, the roots can be found analytically, e.g. in case the number of arrivals per slot has a Poisson or geometric distribution. In other cases, the roots have to be obtained numerically. There are several ways to do so. An algorithm to find roots is given in [8], Algorithm 1, while two other methods, one based on a Fourier series representation and one based on a fixed-point iteration, are described in [23].
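For the special case of an FCTL-type denominator z^{mg} − Y(z)^c, as in Equation (4.5) below, a fixed-point iteration in the spirit of [23] is particularly simple. The sketch below assumes Poisson arrivals, so Y(z) = exp(μ(z − 1)), and stability; both the arrival assumption and the parameter values are illustrative choices of ours and do not cover the general bFCTL denominator.

import cmath

def fctl_type_roots(m, g, c, mu, iterations=500):
    # Roots on/inside the unit circle of z^{mg} - Y(z)^c for Poisson(mu)
    # arrivals, via the fixed point z = w_k * Y(z)^{c/(mg)}, with w_k the
    # (mg)-th roots of unity; the map is a contraction when mu * c < m * g.
    n = m * g
    roots = []
    for k in range(n):
        w_k = cmath.exp(2j * cmath.pi * k / n)
        z = 0j
        for _ in range(iterations):
            z = w_k * cmath.exp(mu * c * (z - 1) / n)
        roots.append(z)
    return roots

# Illustrative usage: the root for k = 0 should be z = 1.
print(fctl_type_roots(m=1, g=10, c=30, mu=0.3)[0])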
Performance measures
Now that we have a complete characterization of X_{g_1+g_2}(z), we can find the PGFs of the queue-length distribution at the end of the other slots by employing Equations (3.10) up to (3.16). This basically implies that we can find any type of performance measure related to the queue-length distribution. As an example, we find the PGF of the queue-length distribution at the end of an arbitrary slot. We denote this PGF with X(z) and obtain the following expression:

X(z) = \frac{1}{c} \sum_{i=1}^{c} X_i(z).
Another important performance measure is the delay distribution. The mean of the delay distribution, E[D], can easily be derived from the mean queue length at the end of an arbitrary slot by means of Little's law with a time-varying arrival rate (for a proof of Little's law in this setting see e.g. [36]):

E[D] = \frac{X'(1)}{\frac{1}{c} \sum_{i=1}^{c} Y_i'(1)}.
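In code, this amounts to a single numerical derivative of the PGF at z = 1. A minimal sketch follows; the finite-difference step size and function names are arbitrary choices of ours.

def pgf_mean(pgf, h=1e-6):
    # X'(1) by a one-sided finite difference from inside the unit disk,
    # using X(1) = 1; first-order accurate, which suffices for a sketch.
    return (1.0 - pgf(1.0 - h)) / h

def mean_delay(pgf_X, mean_arrivals, c):
    # Little's law with time-varying arrivals, as in the display above;
    # mean_arrivals is the list of E[Y_i] = Y_i'(1) for i = 1, ..., c.
    return pgf_mean(pgf_X) / (sum(mean_arrivals) / c)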
The PGF of the delay distribution can be derived (as is done for the FCTL queue in [40]), but such a derivation is more difficult. In the regular FCTL queue, the number of slots an arriving car has to wait is deterministic when conditioned on the number of vehicles in the queue and the time slot in which the car arrives. This is not the case for the bFCTL queue as the occurrence of blockages is random. By proper conditioning on the various blocked slots and queue lengths, one can obtain the delay distribution from the distribution of the queue length. We do not pursue this here.
If we want to obtain probabilities and moments from a PGF, we need to differentiate the PGF and put z = 0 or z = 1, respectively. In our experience, this has not proven to be a problem. However, differentiation might become prohibitive in various settings, e.g. when m(g_1 + g_2) + (m − 1)g_1 becomes large or if we want to obtain tail probabilities. There are ways to circumvent such problems. If we are pursuing probabilities and do not want to rely on differentiation, we might use the algorithm developed by Abate and Whitt in [1] to numerically obtain probabilities from a PGF. For obtaining moments of random variables from a PGF, an algorithm was developed in [18] which finds the first N moments of a PGF numerically. Essentially, this shows that, from the PGF, we can obtain any type of quantity related to the steady-state distribution of the queue length, in the form of a numerical approximation.
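As an illustration of the inversion step, the following sketch recovers probabilities from a PGF by numerically extracting its power-series coefficients on a circle inside the unit disk via the FFT. This is a generic inversion sketch in the spirit of [1], not the exact algorithm from that reference, and the radius and number of sample points are tuning choices of ours.

import numpy as np

def pgf_to_probabilities(pgf, k_max, n_points=2048, radius=0.95):
    # Coefficients of X(z) = sum_k P(X = k) z^k from samples on |z| = radius;
    # the inverse FFT of X(radius * e^{2*pi*i*j/N}) returns P(X = k) * radius^k
    # up to an aliasing error of order radius^N.
    z = radius * np.exp(2j * np.pi * np.arange(n_points) / n_points)
    samples = np.array([pgf(zz) for zz in z])
    coefficients = np.fft.ifft(samples).real
    return coefficients[: k_max + 1] / radius ** np.arange(k_max + 1)

# Illustrative check with a Poisson(2) PGF: should return e^{-2} 2^k / k!.
print(pgf_to_probabilities(lambda z: np.exp(2 * (z - 1)), k_max=5))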
All formulas computed in this section have been verified by comparing the numerical results with a simulation which mimics our discrete-time queueing model. More information about this simulation is given in Appendix A.
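For reference, a condensed version of such a slot-based simulation for a single lane (m = 1) could look as follows. The Poisson arrival assumption and the parameter values in the usage line are illustrative, and the full simulation used for the verification is the one described in Appendix A, not this sketch.

import numpy as np

def simulate_bfctl_m1(mu, p, q, g1, g2, r, n_cycles=100_000, seed=0):
    # Slot-based simulation of the single-lane (m = 1) bFCTL queue under
    # Assumptions 1-5 with Poisson(mu) arrivals per slot; returns an estimate
    # of the mean overflow queue E[X_{g1+g2}]. p and q have length g1.
    rng = np.random.default_rng(seed)
    X, blocked, overflow = 0, False, 0.0
    for _ in range(n_cycles):
        for i in range(g1 + g2 + r):
            Y = rng.poisson(mu)
            if i < g1:                                  # first part of green
                ped = rng.random() < q[i]               # pedestrians crossing?
                if X == 0:
                    if ped:                             # Assumption 5, empty queue
                        turning = rng.random(Y) < p[i]
                        first = int(np.argmax(turning)) if turning.any() else Y
                        X = Y - first                   # first turner and followers
                        blocked = X > 0
                    # otherwise all arrivals pass through unhindered
                elif blocked:
                    if ped:
                        X += Y                          # blockage persists
                    else:
                        X += Y - 1                      # blocked turner departs
                        blocked = False
                else:
                    if ped and rng.random() < p[i]:     # new head turns and blocks
                        X += Y
                        blocked = True
                    else:
                        X += Y - 1                      # head batch departs
            elif i < g1 + g2:                           # second part of green
                blocked = False                         # no blockages, Assumption 3
                if X > 0:
                    X += Y - 1
                # an empty queue stays empty: arrivals cross immediately
            else:                                       # red period
                X += Y
            if i == g1 + g2 - 1:
                overflow += X
    return overflow / n_cycles

# Illustrative usage, cf. Subsection 4.3: p = 0.5, q = 1, g1 = g2 = r = 10.
print(simulate_bfctl_m1(0.36, [0.5] * 10, [1.0] * 10, 10, 10, 10))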
Examples
We start in Subsection 4.1 with several special cases of the bFCTL queue for which we provide explicit expressions for the PGF of the overflow queue and relate those special cases to the existing literature. Subsequently, we make a comparison between the capacity obtained in the HCM [6] and the capacity in our model in Subsection 4.2. After that, we investigate the influence of several parameters on the performance measures in numerical examples. We consider performance measures like the mean and variance of the steady-state queue-length distribution, both at specific moments and at the end of an arbitrary slot, the mean delay, and several interesting queue-length probabilities. We study the influence of the p_i and q_i in Subsection 4.3. In Subsection 4.4, we compare the case of turning and straight-going traffic on a single lane, as present in the bFCTL queue where blockages of all vehicles might occur, and cases where we have dedicated lanes for the right-turning and straight-going traffic where only turning vehicles are blocked. Note that we will consider each lane separately in those examples, so there is no conflict with e.g. Remark 1.
Special cases of the bFCTL queue
We study several special cases of the bFCTL queue, e.g. cases where the bFCTL queue reduces to the FCTL queue.
If q_i = 1, an explicit expression for the PGF of the distribution of the overflow queue, X_{g_1+g_2}(z), can be written down relatively easily. When it is further assumed, for ease of exposition, that all p_i = p, Y_i =_d Y, Y_{i,b} =_d Y_b, and m = 1, the following expression for X_{g_1+g_2}(z) is obtained:
X_{g_1+g_2}(z) = \frac{X_n(z)}{X_d(z)},    (4.1)

with

X_n(z) = z^{g_1+g_2} \sum_{i=0}^{g_2-1} \Big(\frac{Y(z)}{z}\Big)^{g_2-i-1} \Big(1 - \frac{Y(z)}{z}\Big) P(X_{g_1+i} = 0)
       + z^{g_1} Y(z)^{g_2} \sum_{i=0}^{g_1-1} \Big[ P(X_i = 0, S = u) \Big( \big(Y_b(0) - (1-p)\tfrac{Y(z)}{z}\big) \big((1-p)\tfrac{Y(z)}{z}\big)^{g_1-i-1} + \big(Y_b(z) - Y_b(0) - pY(z)\big) Y(z)^{g_1-i-1} \Big)
       + pY(z)^{g_1-i} \sum_{j=0}^{i-1} P(X_j = 0, S = u) \big(Y_b(0) - (1-p)\tfrac{Y(z)}{z}\big) \big((1-p)\tfrac{Y(z)}{z}\big)^{i-j-1} \Big],    (4.2)

where P(X_0 = 0, S = u) is to be interpreted as P(X_c = 0), and

X_d(z) = z^{g_1+g_2} - \Big( (1-p)^{g_1} + p z^{g_1} \sum_{i=0}^{g_1-1} \Big(\frac{1-p}{z}\Big)^{i} \Big) Y(z)^c.    (4.3)
The reason that we provide an explicit formula for this particular case is that this formula is significantly simpler than the formula in the case where q_i < 1 for one or more i = 1, ..., g_1. The stability condition (cf. Algorithm 1 in Subsection 3.1) for this example is relatively easy to derive and reads as follows:

μc < g_1 + g_2, if p = 0,
μc < g_2, if p = 1,
μc < g_2 + (1 − (1 − p)^{g_1}) \frac{1 − p}{p}, otherwise,
where μ is the mean arrival rate per slot, i.e. μ = E[Y]. This can be understood as follows: if p = 0 there are no turning vehicles and we obtain the regular FCTL queue with green period g_1 + g_2. If p = 1, all vehicles are turning vehicles and there are no departures during the first part of the green period because q_i = 1, so we obtain the FCTL queue with green period g_2. The other case can be understood as follows: on the left-hand side we have the average number of arrivals per cycle, whereas on the right-hand side we have the average number of slots available for delayed vehicles to depart. Indeed, on the right-hand side we have g_2, the number of green slots during the second part of the green period, which are all available for vehicles to depart, and the number of green slots available for departures during the first green period:
\sum_{i=1}^{g_1} (1 − p)^i = (1 − (1 − p)^{g_1}) \frac{1 − p}{p}.
If p_i = 0 for all i, i.e. there are no blockages occurring at all (regardless of the q_i), the FCTL queue with multiple lanes (with green period g = g_1 + g_2) is obtained. Note that we do not have to include the state S, because there are no blockages of batches of vehicles. If m = 1, we obtain the regular FCTL queue as studied in e.g. [40]. This can e.g. be observed when putting p_i = 0 and m = 1 in Equations (4.1), (4.2), and (4.3). The expression for X_{g_1+g_2}(z) or, alternatively, X_g(z) is (after rewriting):
X_g(z) = \frac{(z − Y(z)) z^{g−1} \sum_{i=0}^{g−1} P(X_i = 0) (Y(z)/z)^{g−i−1}}{z^g − Y(z)^c},    (4.4)
where P(X_0 = 0) is to be interpreted as P(X_c = 0). For general m, we have the following formula:

X_g(z) = \frac{z^{mg} \sum_{i=0}^{g−1} \sum_{l=0}^{m−1} P(X_i = l) (1 − Y(z)/z^{m−l}) (Y(z)/z^m)^{g−i−1}}{z^{mg} − Y(z)^c},    (4.5)
where the P(X_0 = l), l = 0, ..., m − 1, are to be interpreted as P(X_c = l). The stability condition for this case can be verified to be μc < mg, which is in accordance with Algorithm 1. It can also be verified that the bFCTL queue reduces to the regular FCTL queue with green time g = g_2 and red time r + g_1 if p_i = 1 and q_i = 1.
We note that for the FCTL queue with a single lane and no blockages (i.e. p_i = 0, or p_i = 1 and q_i = 1) there is an alternative characterization of the PGF in terms of a complex contour integral, see [8].
It remains an open question whether such a contour-integral representation exists for the bFCTL queue with multiple lanes, as the polynomial structure in terms of Y(z)/z that is present in Equation (4.4) is absent in the general bFCTL queue. This feature of the FCTL queue seems essential to obtain a contour-integral expression as is done in [8].
In [8], a decomposition result is presented in Theorem 2. It shows that several related queueing processes can in fact be decomposed in the independent sum of the FCTL queue and some other queueing process. It is likely that the bFCTL queue with multiple lanes allows for some of those generalizations as well. We mention randomness in the green and red time distributions as a relevant potential extension.
Capacity
In order to compare our model with the existing literature (focusing on the HCM [6]), we consider the adjustment factor f_Rpb with which the HCM corrects the capacity of a lane for blockages by pedestrians. There is a procedure provided in the HCM to compute this factor, but in our model this simply corresponds to the q_i, and we will base the f_Rpb factor on the q_i. Further, in order to make a comparison with our model, we turn the saturation flow of the shared lane into a number of vehicles per cycle. More concretely, we choose the green period to be 30 seconds, split into the two phases as follows: g_1 = 20 and g_2 = 10. We pick the cycle length to be 90 seconds and the time slots to have length 2 seconds, and we focus on a single shared lane, so we have at most 1 vehicle departing per time slot. Further, we choose the right-turning portion of vehicles to be 1 or 0.9 in our examples. Lastly, for the HCM formula, we assume that vehicles heading straight have a crossing time of 1 second. To account for this effect in our bFCTL model, we use the correction discussed in Remark 4. In this example we have:
m* = p_i m_turn + (1 − p_i) m_through = p_i × 1 + (1 − p_i) × 2.
This enables us to compute the capacity in our model and in the HCM up to the q i .
We first focus on the cases with p_i = 1 and we display the capacity according to the HCM in Figure 4(a). Note that f_Rpb is at least 1/3 because g_1 = 20 and g_2 = 10, implying that during at least a fraction 1/3 of the green period, turning vehicles are not blocked. In Figure 4(a) we also depict two capacities according to the bFCTL queue. In case (1) we assume that all the q_i are the same and are chosen in such a way that the f_Rpb in the HCM formula is matched. E.g. in case f_Rpb = 1, we choose q_i = 0 as there are no pedestrians, and in case f_Rpb = 2/3, we choose q_i = 1/2. In case (2), we consider a step function for the q_i such that
q_i = 1 if i < k, q_i = 0 if i > k, and q_i = k* if i = k,
for some values of k and k* such that the q_i match the value of f_Rpb used in the HCM formula. The capacity according to the bFCTL queue when p_i = 1 is equal to (after simplification)

\frac{g_1 + g_2}{2} − \sum_{i=1}^{g_1} q_i.    (4.6)
This shows that when \sum_{i=1}^{g_1} q_i is translated into the factor f_Rpb in the HCM, we have an identical capacity. E.g. if all q_i = 0, then also in the bFCTL queue the capacity is equal to 15 vehicles per cycle. Equation (4.6) also indicates that it does not matter in which slots the pedestrians are crossing if p_i = 1 (when looking at the capacity). In this case, the q_i only influence the capacity through their sum; in general, however, the individual p_i and q_i have an impact on the capacity (and the queue-length process). Similar observations hold if p_i = 0, i.e. when there are no turning vehicles.
If the p_i are not equal to 1, there are differences between the capacity in the HCM and the bFCTL queue. We study an example where p_i = 0.2. The results are depicted in Figure 4(b). The values for the capacity obtained with the function in the HCM are slightly lower than the values that we obtain in both cases of the bFCTL queue.
In contrast with the previous example, there are differences between all three choices, which relate to various causes. The main reason for the difference between cases (1) and (2) in the bFCTL queue is that the individual q_i determine the capacity, rather than only the total value of the q_i as was the case when p_i = 1. Here we thus see that our detailed description of the queueing model in terms of slots is necessary to fully understand the capacity (and, more generally, the queueing process).
In this subsection we have been working under several assumptions. If one were, for example, to also incorporate start-up delays as is done in [22], we would see that the HCM overestimates the capacity, as is observed more generally in [22]. We also expect that the distribution of the q_i over the different slots has a bigger impact on the capacity and the queueing process if start-up delays are incorporated. Implementing such effects into our model is possible (probably in a similar way as including a departure variable as discussed above), but is beyond the scope of the present paper.
The bFCTL queue with turning vehicles and pedestrians
In this subsection, we study the bFCTL queue with a single lane, so m = 1. The setting in this subsection is as depicted in Figure 1(b). We mainly focus on the distribution of X_{g_1+g_2}, to which we refer as the overflow queue, as this is the distribution from which some interesting performance measures can be derived. This distribution reflects the probability distribution of the queue size at the moment that the green light switches to a red light. We also briefly consider some other performance measures.
Influence of the number of turning vehicles
First, we vary the fraction of right-turning vehicles p_i and study its influence on X_{g_1+g_2}. We choose the p_i to be the same for each i, so p_i = p, and we vary p. We choose the value of q_i = q to be 1, so there are always pedestrians on the pedestrian crossing during the first part of the green period with length g_1. In this way, we can isolate the influence of the fraction of turning vehicles on the performance measures. Further, we choose g_1 to be either 2 or 10 and we choose g_2 = r = 2g_1. The arrival process is taken to be Poisson with mean 0.39. Note that the lane is close to its point of saturation, because the capacity can be shown to be equal to 0.4. We display results for P(X_{g_1+g_2} ≤ j) for j = 0, ..., 10 in Figure 5.
As can be observed from Figure 5, the fraction of turning vehicles may dramatically influence the number of queueing vehicles. There is virtually no queue at the end of the green period when there are no turning vehicles (p = 0), whereas in more than 50% of the cases there is a queue of at least 10 vehicles at the end of the green period when all vehicles are turning vehicles (p = 1). The blockages of the turning vehicles in the latter case effectively reduce the green period by a factor 1/3 in our examples (as q = 1), which causes the huge difference in performance. We note that the distribution of X_{g_1+g_2} coincides with the overflow queue distribution in the FCTL queue when p = 0 (when we take g_1 + g_2 as the green period and r as the red period in the FCTL queue) and when p = 1 and q = 1 (with g_2 the green period and r + g_1 the red period). When comparing Figures 5(a) and 5(b), we see that the influence of p is not uniform across the two examples. In case p = 0 or p = 1, the probability of a large overflow queue is larger for the case where g_1 = 2. This might be clarified by noting that a larger cycle reduces the amount of within-cycle variance, which reduces the probabilities of a large queue length. If 0 < p < 1 this does not seem to be the case. This might be due to the fact that a relatively big part of the first green period is eaten away by turning vehicles that are blocked when g_1 = 10. For example, when p > 0 and the first vehicle is a turning vehicle, immediately the entire period g_1 is wasted because q = 1. This is of course also the case when g_1 = 2, but the blockage is resolved sooner, and during the second part of the green period the blocked vehicle may depart relatively soon in comparison with the case where g_1 = 10.
In Figure 6(a), we see the probability of an empty queue after slot i, where i = 1, 2, . . . , c, for two different values of p. For the case p = 0 (in orange) we have a monotone increasing sequence of probabilities during the green period as one would expect: this setup corresponds to a regular FCTL queue and once the queue empties during the green period, it stays empty. We see that for the case p = 0.6 (in blue) the probabilities of an empty queue after slot i are much lower (as there are more turning vehicles which might be blocked and hence cause the queue to be non-empty). In fact, the probability of an empty queue even decreases when going from slot 2 to slot 3. This can be clarified by the fact that the queue might start building again even when the queue is (almost) empty: e.g. if the queue is empty during the first green period and there is an arrival of a turning vehicle, that vehicle will be blocked as q = 1 in which case the queue is no longer empty.
The same type of behaviour is reflected in the mean queue length at the end of a slot, as can be observed in Figure 6(b). Even though the green period already started, the queue in the example with p = 0.6 still grows (in expected value) during the first part of the green period, see the first two blue bars. This is caused by the fact that vehicles might be blocked, which demonstrates the possibly severe impact of blocked vehicles on the performance of the system.
Influence of the pedestrians
Secondly, we investigate the influence of the presence of pedestrians by studying various values for the q_i. A high value of the q_i corresponds to a high density of pedestrians, as q_i corresponds to the probability that a turning vehicle is not allowed to depart during the first green period. Conversely, a low value of the q_i corresponds to a low density of pedestrians and a relatively high probability of a turning vehicle departing during the first green period. We choose p_i = p = 0.5 and take g_1 = g_2 = r = 10. We take Poisson arrivals with mean 0.36. We study one set of examples where the q_i are constant over the various slots, see Figure 7(a). We also study the influence of the dependence of the q_i on i by investigating two cases with all parameters as before in Figure 7(b). In one case we take q_i = 0.5 for all i, but in the other case we take q_i = 1 − (i − 1)/g_1. The latter case reflects a decreasing number of pedestrians blocking the turning flow of vehicles during the first part of the green period. We note that it is important to estimate the correct blocking probabilities q_i from data when applying our analysis to a real-life situation, as the q_i have an impact on the performance measures. In Figure 7(a), we clearly see that the more pedestrians there are, the longer the queue at the end of the green period is. Indeed, if there are more pedestrians, there are relatively many blockages of vehicles, which causes the queue to be relatively large.
Moreover, it is important to capture the dependence of the q_i on the slot i in the right way, see Figure 7(b). Even though, on average over all slots, the mean number of pedestrians present is similar in the two cases, we see a clear difference between the two examples. In the case with decreasing q_i (in blue), we see an initial increase of the mean queue length during the first green slots of the cycle, caused by a relatively large fraction of turning vehicles (p = 0.5) and a high value of q_i. This is not the case in the other example where q_i = 0.5 for all i. After some slots of the first green period, the decrease in the mean queue length is quicker for the example where the q_i decrease when i increases, which can (at least partly) be explained by the decreasing q_i. During the remaining part of the cycle, the queue in front of the traffic light behaves more or less the same in both examples, and even the mean overflow queue, E[X_{g_1+g_2}], is not that much different for the two examples. This implies, as can also be observed in Figure 7(b), that the mean queue length during the red period is comparable as well for our setting. This does not hold for the mean queue length at the end of an arbitrary slot and the mean delay, because of the differences in the queue length during the first part of the green period.
Shared right-turn lanes and dedicated lanes
We continue with a study of several numerical examples that focus on the differences between shared right-turn lanes and dedicated lanes for turning traffic. We do so in order to provide relevant insights into the benefit of splitting the vehicles into different streams. Firstly, we study the difference between a single shared right-turn lane (as visualized in Figure 8(a)) and a case where the straight-going and turning vehicles are split into two different lanes. In the latter case, we thus have two lanes, one for the straight-going traffic and one for the turning traffic (as visualized in Figure 8(b)), which we can analyze as two separate bFCTL queues. Secondly, we compare two two-lane settings. The first is visualized in Figure 8(b), while the other is a two-lane scenario where one lane is a dedicated lane for straight-going traffic and the other is a shared right-turn lane as depicted in Figure 8(c). We thus allow straight-going traffic to mix with some of the right-turning vehicles in the latter case. We do so in order to make sure that the shared right-turn lane together with the lane for vehicles heading straight has the same capacity as the two lanes where the two streams of vehicles are split (as opposed to the first example in this subsection). In both two-lane scenarios we, again, analyze the two lanes as two separate bFCTL queues.
One lane for the shared right-turn
We start with comparing the traffic performance of a single shared right-turn lane as in Figure 8(a), case (1), and a two-lane scenario where the turning vehicles and the straight-going vehicles are split as in Figure 8(b), case (2). In the latter case we refer to the lane which has right-turning vehicles as lane 1 and to the other lane as lane 2. We assume that the arrival process is Poisson and that the arrival rates of turning vehicles, μ_1, and straight-going vehicles, μ_2, are the same in both cases. The total arrival rate of vehicles is μ = μ_1 + μ_2 in case (1). We choose p_i = 0.3 for the shared right-turn lane, whereas in the two-lane case we have p_i = 1 for lane 1 and p_i = 0 for lane 2, and arrival rates μ_1 = 0.3μ at lane 1 and μ_2 = 0.7μ at lane 2. Further, we choose q_i = 1, g_1 = 8, g_2 = 20, and r = 20. We compute the mean queue length at the end of an arbitrary time slot for both lanes in case (2), denoted with E[X^{(i)}] for lane i, and the total mean queue length at the end of an arbitrary time slot, denoted with E[X_t], which equals E[X^{(1)}] + E[X^{(2)}]. For case (1) we denote the mean queue length at the end of an arbitrary time slot with E[X_t]. The delay of an arbitrary car is denoted with E[D] for both cases (1) and (2). We study an example with various values of μ in Figure 9.

Figure 9: The total Poisson arrival rate μ on the horizontal axis and (a) the mean queue length at the end of an arbitrary time slot for the various cases and lanes, where E[X_t] = E[X^{(1)}] + E[X^{(2)}] for case (2), and (b) the mean delay for the various cases.
In Figure 9, we can clearly see that the total mean queue length at the two lanes in case (2) is lower than the mean queue length at the single lane in case (1). This makes sense from various points of view: in case (2), we have twice as many lanes as in case (1), so we would expect a smaller total mean queue length in case (2). Moreover, in case (1), it might happen that straight-going vehicles are blocked. Such blockages cannot occur in case (2), as all turning traffic is on lane 1 and all vehicles that go straight are on lane 2. These two reasons are the main drivers for the performance difference in cases (1) and (2). From the point of view of the traffic performance, it thus makes sense to split the traffic on a shared right-turn lane into two separate streams of vehicles on two lanes while assuming one lane available for departures in case (1) and two lanes in case (2). We observe similar results when looking at the mean delay and comparing cases (1) and (2).
Remark 6
We emphasized before that the blocking mechanism makes it impossible to use existing methods to analyze the queue lengths and delays. However, in this particular example we have chosen the parameter settings in such a way that case (2) can be analyzed using existing methods. The reason is that we have two separate lanes, each with its own "extreme" blocking mechanism: lane 1 contains only turning vehicles and all of them are blocked during g_1. Essentially, this turns this lane into a regular FCTL queue with an extra long red period (r + g_1) and a shorter green period (g_2). Lane 2 contains only vehicles going straight, none of which are blocked. This means that this lane is essentially a regular FCTL queue as well. As a consequence, these two lanes can be analyzed separately using standard FCTL methods. When applying the method described in [40], the mean delay would be exactly the same as computed in Figure 9(b). Moreover, this means that we can also use Webster's well-known approximation for the mean delay for case (2). This has also been visualized in Figure 9(b) and, indeed, the approximation is remarkably accurate. Still, we stress that this is only possible because we have chosen an extreme blocking mechanism (q_i = 1) in combination with Poisson arrivals (Webster's approximation only works for Poisson arrival processes).
Two lanes for the shared right-turn
Now we turn to an example where we still have two dedicated lanes as in case (2) of the previous example, one for turning traffic and one for straight-going traffic, see Figure 8(b), but we compare it with a two-lane example where the vehicles mix, see Figure 8(c). All turning vehicles will be on lane 1, but we allow some straight-going traffic to be present on lane 1 too. Lane 1 is thus a shared right-turn lane. On lane 2, we only have vehicles that are heading straight. This could, e.g., model a scenario in which some straight-going vehicles desire to take a specific lane, strategically anticipating an upcoming exit. Anticipation in lane-changing behaviour in urban scenarios is investigated more generally in e.g. [17]. We could adapt the value of p depending on this number of strategic vehicles. In order to make the comparison between the various cases as fair as possible, we assume that the total arrival rate and the fraction of turning vehicles are the same across the cases.
We assume that the probability that an arbitrary vehicle is a turning vehicle is 0.3 and we vary the total Poisson arrival rate μ to study the influence of the strict splitting of the turning vehicles. In case (1), we thus have an arrival rate at the right-turning lane that satisfies μ_1 = 0.3μ, whereas on the other lane we have an arrival rate μ_2 = 0.7μ. At lane 1 we have p_i = 1 and at lane 2 we have p_i = 0. In case (2) we distinguish between two subcases. In subcase (2a) we assume that the total arrival rate at both lanes is the same and thus μ_1 = μ_2 = 0.5μ. In subcase (2b), we assume that the arrival rate is split in the ratio 2 : 3, so μ_1 = 0.4μ and μ_2 = 0.6μ. This implies that in subcase (2a) we choose p_i = 0.6 (the arrival rate of turning vehicles is then p μ_1 = 0.6 · 0.5μ = 0.3μ) and in subcase (2b) we choose p_i = 0.75 (the arrival rate of turning vehicles is then p μ_1 = 0.75 · 0.4μ = 0.3μ), to make sure that we match the number of turning vehicles in case (1). Further, we choose q_i = 1, g_1 = 8, g_2 = 16 and r = 16. Then, we study the mean queue length at the end of an arbitrary time slot at both lanes, E[X^{(1)}] and E[X^{(2)}], and the total mean queue length at the end of an arbitrary time slot, denoted with E[X_t]. We obtain Figure 10.
In Figure 10, we see only small differences in the total mean queue lengths at the end of an arbitrary time slot for low arrival rates. At both lanes, there are few vehicles in the queue. This is different for the examples in Figure 10 with a higher arrival rate. In all examples for case (1) we see that the mean queue length at lane 2, the straight-going traffic lane, is higher than for lane 1. This is due to the relatively high fraction of vehicles that have to use lane 2 due to the strict splitting between turning and straight-going vehicles. In some sense, lane 1, which only has turning vehicles, has overcapacity that cannot be used for the busier lane 2 with only straight-going traffic. This is different for the other two cases, where the traffic is split more evenly across the two lanes. As one would expect, the longest queue in subcase (2a) is present at lane 1, as the arrival rate at both lanes is the same and because vehicles are only blocked at lane 1, the shared right-turn lane. This points towards another potential improvement, which is found in subcase (2b) where we balance the arrival rate differently. The right balance leads to a more economic use of both lanes and, hence, also the best performance in this example when looking at E[X_t].
The results in Figures 9 and 10 might seem conflicting at first glance, but they are not. In the case of a single, shared right-turn lane as in Figure 9, we see a higher mean queue length than for the two dedicated lanes in Figure 9. This is the other way around in Figure 10 (considering case (2b)). This is mainly explained by the fact that in case (2b) in Figure 10, we have two lanes and thus twice as many potential departures as in case (1) in Figure 9. This is one of the main factors in the explanation of the differences in the mean performance between the examples studied in Figures 9 and 10.
The two examples in this subsection tell us that a separate or dedicated lane for turning traffic does not necessarily improve the traffic flow. The intuition behind this is that a dedicated lane might have overcapacity which is not employed (e.g. in the case of an asymmetric load on both lanes). This issue is less present when the two dedicated lanes are turned into two lanes, one exclusively for straight-going traffic and one shared lane. This is confirmed by our simulations. As such, an in-depth study is needed to obtain the best layout of the intersection and the best traffic-light control. As a side remark, we leave aside the possibility that in Figure 10, case (1), we might control the two lanes in a different way, e.g. by prolonging the green period for one of the lanes. This is not possible in cases (2a) and (2b).
Conclusion and discussion
Figure 10 The total Poisson arrival rate µ and the mean queue length at the end of an arbitrary time slot for the various cases, split among lane 1 (a), lane 2 (b) and the total over the two lanes (c), where E[X_t] = E[X^(1)] + E[X^(2)]. Panel (c) compares E[X_t] for case (1), subcase (2a) and subcase (2b).

In this paper, we have established a recursion for the PGFs of the queue-length distribution at the end of each slot, which can be used to provide a full queue-length analysis of the bFCTL queue with multiple lanes. This is an extension of the regular FCTL queue that allows us to account for temporal blockages of vehicles receiving a green light, for example because of a
crossing pedestrian at the turning lane or because of a (separate) bike lane, and to account for a vehicle stream that is spread over multiple lanes. These features might impact the traffic-light performance, as we have shown by means of various numerical examples. The blocking of turning vehicles and the number of lanes corresponding to a vehicle stream therefore have to be taken into account when choosing the settings for a traffic light.
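All mean queue lengths reported in this paper can be extracted from the corresponding PGF via E[X] = G'(1) (and moments more generally via numerical transform methods, cf. [1, 18]). The snippet below is a small self-contained illustration of this standard step, not the paper's implementation; the Poisson PGF is used only as a test case.

from math import exp

def pgf_mean(G, h=1e-6):
    # E[X] = G'(1), approximated by a central difference at z = 1
    return (G(1 + h) - G(1 - h)) / (2 * h)

lam = 0.39
G_poisson = lambda z: exp(lam * (z - 1))  # PGF of a Poisson(0.39) random variable
print(pgf_mean(G_poisson))                # approximately 0.39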
We briefly touched upon how one should design the layout of an intersection. Interestingly, it might be suboptimal to have a dedicated lane for turning traffic. It seems that mixing turning and straight-going traffic has benefits over a strict separation of those two traffic streams when there are two lanes for this turning and straight-going traffic. We advocate a further investigation into the influence of separating or mixing different streams of vehicles in front of traffic lights. It might be possible to find the optimal division of straight-going and turning vehicles over the various lanes, e.g. by enumerating several possibilities. A more structured optimization seems difficult because of the intricate expressions involved, but would definitely be worthwhile to investigate. Some research on the splitting of different traffic streams has already been done in e.g. [24, 38, 42] and [47], and the present study can be seen as an alternative way of modelling the situation at hand.
A possible extension of the results on the bFCTL queue is a study of (the PGF of) the delay distribution. We have refrained from deriving the delay distribution because of its (notational) complexity. Using proper conditioning, one can obtain (the PGF of) the delay distribution for the bFCTL queue.
The work in [22], in which a simulation study of a similar model is performed, has been a source of inspiration for the study in this paper. There are some extensions possible when comparing our work with [22]. For example, we did not study the influence of start-up delays as is done in [22]. Investigating such start-up delays at the beginning of the green period is easily done in our framework: we simply need to adjust the Y_i for the first few slots. Another approach to deal with start-up delays is presented in [29]. Start-up delays which depend on the blocking of vehicles, and different slot lengths for different combinations of turning/straight-going vehicles, are harder to tackle. One could e.g. introduce additional states (besides states u and b) to deal with this. Although the developed recursion does not directly allow for such a generalization, it seems possible to account for this at the expense of a more complex recursion. For the ease of exposition, we have refrained from doing so and we leave a full study on this topic for future research.
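To illustrate the simplest version of such an adjustment in the simulation of Appendix A, one could suppress departures during the first few slots of the green period; the helper below is our own sketch with assumed names, not code from [22] or from Listing 1, and adjusting the Y_i plays the analogous role in the exact analysis.

def departures_this_slot(slot, m, startup=1):
    # A minimal sketch, assuming departures are entirely lost during the first
    # `startup` slots of the green period (our assumption, not the paper's model).
    return 0 if slot < startup else m

print([departures_this_slot(s, m=1, startup=2) for s in range(6)])  # [0, 0, 1, 1, 1, 1]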
A further possible extension of the bFCTL queue would be to consider different blocking behaviours: instead of e.g. a fixed probability q_i for each slot i, a more general blocking process might be considered. For example, if there are no pedestrians during slot i for the model depicted in Figure 1(b), then the probability that there are also no pedestrians in slot i + 1 might be relatively high. In other words, there might be dependence between the various slots when considering the presence of pedestrians. We gave an example where there is dependence between the current and the next slot, but it is also possible to consider such dependencies among more than two slots. It is worthwhile to investigate generalizations of the blocking process in order to further increase the general applicability of the bFCTL queue with multiple lanes.
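Such slot-to-slot dependence can be captured, for instance, by a two-state Markov chain for pedestrian presence; the sketch below is our own illustration with arbitrary transition probabilities, not a model analysed in this paper.

import random

def pedestrian_trace(n, p_stay_free=0.9, p_stay_busy=0.7, seed=0):
    # Two-state Markov chain: state 1 = pedestrian present (vehicles blocked),
    # state 0 = crossing free. Successive slots are positively correlated.
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(n):
        stay = p_stay_free if state == 0 else p_stay_busy
        if rng.random() > stay:
            state = 1 - state
        trace.append(state)
    return trace

print(pedestrian_trace(20))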
Another generalization of the blocking mechanism is to block only a part of the m vehicles that are at the head of the queue. Indeed, we restrict ourselves to the cases where either all vehicles in a batch of size m are blocked or none are. In various real-life examples, it might be the case that only part of the m vehicles are blocked. It would be interesting to investigate whether such a model can be analyzed. Further, a scenario in which "a right turn is always permitted" might be investigated. In such a case, right-turning vehicles are always free to turn, but might be blocked by straight-going vehicles in front of them, which have to wait for a red traffic light, or are blocked by pedestrians. Straight-going vehicles might be blocked by turning traffic waiting for pedestrians. It seems that such a case, at the expense of additional complexity, can be tackled by a similar type of recursion as the one developed in this paper, by extending and generalizing the blocking mechanism (and, thus, the recursion) to the red period.
Discussion. We end this paper with a discussion of its practical applicability. Although we have extended the standard model for traffic signals with fixed settings, there are still quite some possible improvements, as discussed in the above paragraphs. Still, to the best of our knowledge, this paper is the first to present analytical results for traffic intersections with blocking mechanisms based on a queueing-theoretic approach. Note that standard formulas like Webster's approximation for the mean delay [41] cannot be used in these situations. From a practical point of view, the most relevant extension of the current analysis would be to deal with start-up delays that depend on the blocking of vehicles. One way to do this is by considering different slot lengths for different combinations of turning/straight-going vehicles, inspired by an analysis in [29]. This would make it possible to compute a saturation flow adjustment factor due to the right-turning movements under shared-lane conditions (see also Biswas et al. [5]). Finally, we also advocate an investigation of whether the bFCTL queue with a vehicle-actuated mechanism (rather than the fixed green and red times that we consider) results in a tractable model.
A Simulation of the bFCTL queue
All numerical results in this paper have been obtained by implementing the bFCTL analysis in Mathematica. To validate these results, we have written a simulation of the model in Python. This simulation is a slightly more general version of the Python simulation used in the paper by Huang et al. [22]. In this paper, we wanted to show that their model can (also) be analysed using methods from queueing theory. Although the main message of the current paper is that this exact analysis is the preferred solution method, we want to give more insight into the simulation used to validate our results. For this reason, we include the most relevant parts of the Python code to simulate the bFCTL queue (see Listing 1).
To show the accuracy of the simulation, we have repeated the experiment of Subsection 4.3.1. In more detail, we have run the bFCTL simulation for the example shown in Figure 6(b), with Poisson arrivals with rate 0.39, g_1 = 2, g_2 = 4, r = 4, q_i = 1 and m = 1. The exact results (using our theoretical analysis) and the simulation results are given in Table 1. The confidence intervals are based on 100 runs of 10,000 cycles each. Indeed, the simulation is accurate and confirms the correctness of the formulas derived in this paper. However, to obtain this level of accuracy, the simulations ran for almost two minutes, whereas the analytical results are obtained in just a few seconds. Admittedly, the efficiency of the code in Listing 1 can surely be improved, and more time could be gained by running the simulations in parallel on multiple cores. Still, the analytical methods will always outperform the simulation in terms of computation time and accuracy.
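The reported confidence intervals can be computed from the 100 per-run sample means with a standard normal approximation; the snippet below is our reconstruction of this step (it is not part of the paper's code), shown with toy data rather than the actual runs.

from statistics import mean, stdev

def ci95(run_means):
    # run_means: one sample mean per independent simulation run
    m, s, n = mean(run_means), stdev(run_means), len(run_means)
    half = 1.96 * s / n ** 0.5
    return m - half, m + half

print(ci95([1.29, 1.30, 1.28, 1.31, 1.30]))  # toy data, not the paper's runs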
Listing 1 Simulation of the bFCTL queue in Python.
from numpy import random, zeros, mean   # imports assumed; not shown in the recovered listing

arrs = random.poisson(arrRate, (c, ncycles))   # random arrivals
noBlocks = zeros((c, ncycles))
noDeps = zeros((c, ncycles))
# Lines 4-17 of the original listing (filling noBlocks/noDeps with the
# blocking/turning indicators and initializing X, Xg, Xi, slot, time and
# blocked) were lost in extraction; a plausible completion is sketched after
# the listing. The loop headers below are our reconstruction of how the
# recovered body is indexed.
for i in range(ncycles):
    for j in range(c):
        # handle arrivals and departures if blocked
        if slot < g1:                            # g1 period
            if blocked:                          # blocked during g1 period
                if noBlocks[j, i] == 0:          # with probability 1 - q[j]
                    blocked = False              # blockage resolved
                else:
                    X += arrs[j, i]              # queue remains blocked
            else:                                # not blocked during g1 period
                if noDeps[j, i] == 1:            # with probability p[j]
                    if noBlocks[j, i] == 1:      # with prob. q[j]
                        blocked = True           # block turning vehicles
                        X += arrs[j, i]          # arrivals join the queue
        elif slot < g1 + g2:                     # g2 period
            blocked = False                      # blockages are always resolved in this period
        else:
            X += arrs[j, i]                      # red period

        # handle arrivals and departures if NOT blocked (or blockage was resolved)
        if slot < g1 + g2 and not blocked:
            if X < m:                            # all vehicles can depart
                X = 0                            # and no new ones will arrive
            else:
                X += arrs[j, i] - m              # m vehicles depart, new ones arrive
                if X < 0:                        # in this case, the queue becomes empty
                    X = 0

        if slot == g1 + g2 - 1:                  # at end of green period, store the queue length X_g
            Xg.append(X)
        Xi[slot] += X
        time += 1
        slot += 1
        if slot == c:                            # reset slot number at end of cycle
            slot = 0

Xi = Xi / ncycles
print(Xi)        # Print the mean queue length at end of time slot
print(mean(Xg))  # Print the mean queue length at end of green period
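To actually run the listing for, e.g., the Table 1 experiment, a parameter block and the initialization lost from lines 4-17 are needed. The following is a self-contained sketch of our own, with assumed variable names and an assumed Bernoulli completion of the indicator arrays; it replaces the opening lines of the listing and is followed by the main loop shown above.

from numpy import random, zeros

g1, g2, r = 2, 4, 4           # green (g1, g2) and red period lengths, cf. Table 1
c = g1 + g2 + r               # number of slots per cycle
m = 1                         # batch size (single lane)
arrRate = 0.39                # Poisson arrival rate per slot
ncycles = 10000               # number of simulated cycles per run
q = [1.0] * c                 # assumed: per-slot blocking probability
p = [0.6] * c                 # assumed: per-slot turning-vehicle probability

arrs = random.poisson(arrRate, (c, ncycles))        # as in the listing
noBlocks = zeros((c, ncycles))
noDeps = zeros((c, ncycles))
for j in range(c):            # assumed Bernoulli completion of the lost lines 4-17
    noBlocks[j, :] = random.binomial(1, q[j], ncycles)  # 1 = pedestrian present
    noDeps[j, :] = random.binomial(1, p[j], ncycles)    # 1 = turning vehicle at head
X, slot, time, blocked = 0, 0, 0, False
Xg, Xi = [], zeros(c)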
Figure 2 Visualization of (a) the bFCTL model in terms of an intersection with a traffic stream spread over m lanes and (b) the corresponding queueing model, where the server takes batches of m vehicles into service simultaneously unless there are fewer than m vehicles present; in that case all vehicles are taken into service.
…[6]), we provide several examples in this subsection. The formula for the capacity of a permitted right-turn lane in a shared lane in the HCM is s_sr = … (see [6], equation (31-105)). Here, s_sr is the saturation flow of the shared lane, s_th the saturation flow of an exclusive through lane, P_r the right-turning portion of vehicles, E_R the equivalent number of through vehicles for a protected right-turn vehicle, and f_Rpb is the bicycle-pedestrian adjustment factor for right-turn groups. The latter is defined as the average amount of time during the green period during which right-turning vehicles are not blocked, i.e., in our model, during which there are no pedestrians crossing.
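Reading this definition in terms of our model parameters, one could approximate the adjustment factor as the average non-blocking probability over the green slots; the helper below is our illustrative interpretation, not a formula taken from the HCM [6].

def f_Rpb(q, green_slots):
    # average fraction of green time without crossing pedestrians,
    # with q[i] the blocking probability in slot i (our approximation)
    return sum(1 - q[i] for i in range(green_slots)) / green_slots

print(f_Rpb(q=[0.4] * 30, green_slots=30))  # 0.6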
Figure 4 Capacity in vehicles per cycle for the example according to the HCM and to the bFCTL queue with two different choices for the q_i (case (1) and case (2)), as detailed in the text. We have p_i = 1 in (a) and p_i = 0.2 in (b).
Figure 4(a) makes sense: if f_Rpb is, for example, equal to 1, there are no pedestrians crossing (i.e. q_i = 0), and then the number of vehicles departing per cycle is (g_1 + g_2)/2 = 15.
Figure 5 Cumulative Distribution Function (CDF) of the overflow queue for various values of p_i = p, q_i = q = 1, and Poisson arrivals with mean 0.39. In (a) we have g_2 = r = 2g_1 = 4 and in (b) we have g_2 = r = 2g_1 = 20.
Figure 6 In (a), P(X_i = 0) for slot number i = 1, . . . , 10 is displayed for two different values of p_i, where orange corresponds to p_i = p = 0 and blue to p_i = p = 0.6, with 2g_1 = g_2 = r = 4, q_i = q = 1, and with Poisson arrivals with mean 0.39. In (b) the same two examples are studied, but the mean queue length E[X_i] at the end of slot i is shown.
Figure 7 In (a) the CDF of the overflow queue is displayed for various values of the q_i with all q_i = q the same, p_i = p = 0.5, Poisson arrivals with mean 0.36, and g_1 = g_2 = r = 10. In (b) the E[X_i] are compared for slot number i = 1, . . . , 30, with in orange q_i = 0.5 and in blue q_i = 1 − (i − 1)/g_1 for i = 1, . . . , g_1. Further, it is assumed that p_i = p = 0.5, that the number of arrivals in each slot follows a Poisson distribution with mean 0.36, and that g_1 = g_2 = r = 10.
Figure 8 The various lane configurations considered in Subsection 4.4. In (a) we have a single lane with a shared right turn. In (b) we have two dedicated lanes: one for straight-going vehicles and one for right-turning traffic, whereas in (c) we have a two-lane setup with one lane for straight-going vehicles only and a shared right turn.
Figure 9 The total Poisson arrival rate, µ, on the horizontal axis and in (a) the mean queue length at the end of an arbitrary time slot for the various cases and lanes, where E[X_t] = E[X^(1)] + E[X^(2)] for case (2), and in (b) the mean delay for the various cases.
…(b). We have batches of vehicles of size 1, i.e. batches are individual vehicles. We distinguish between vehicles that are going straight ahead and vehicles that turn right. We do so because only right-turning vehicles can be blocked by crossing pedestrians. The probability that an arbitrary vehicle at the head of the queue is a turning vehicle is p. Such a turning vehicle is blocked by a pedestrian in slot i with probability q_i, i.e. a pedestrian is present on the crossing with probability q_i. If a turning vehicle is blocked, all vehicles behind it are also blocked. Then, we proceed to the next slot, i + 1, and check whether there are any pedestrians crossing (with probability q_{i+1}): if there are pedestrians crossing, all vehicles in the queue keep being blocked and otherwise, the turning vehicle at the head of the queue may depart and the blockage of all other vehicles is removed. Moreover, if the queue becomes empty during the green period, it will in general not start building again (cf. the FCTL assumption for the regular FCTL queue, see e.g. [40]), except if there arrives a turning vehicle and there is a crossing pedestrian. The turning vehicle is then blocked and any vehicles arriving in the same slot behind this vehicle are also blocked.

Example 2 (Two turning lanes) In this example we consider the scenario as in Figure 1(c). We have batches of vehicles of size 2. In this example, there is no need to make a distinction between vehicles: each vehicle is a turning vehicle with probability 1, i.e. p = 1. During each slot i, there are pedestrians on the crossing with probability q_i and if there is a pedestrian, all vehicles in the batch are blocked, as are all other vehicles in the queue: there are no vehicles that can complete the right turn. All vehicles in the queue keep being blocked until there are no pedestrians crossing anymore. Also in this example, the queue of vehicles might dissolve entirely during the green period. If that happens, it only starts building again if there are vehicles arriving and if there are pedestrians crossing. In such cases, all arriving vehicles get blocked and remain blocked until there are no pedestrians anymore.
Remark 4 One of our model restrictions (Assumption 1) is that vehicles depart at the end of each time slot, meaning that we do not correct for the fact that turning vehicles might need more time to accelerate. A simple method to account for this effect, which reduces the capacity in practice, is to modify the reward structure of the Markov chain. One can modify the value of m in Equations (3.3), (3.5), and (3.7) to account for the lower departure rate of turning vehicles. For example, one can use

m* = p_i m_turn + (1 − p_i) m_through,   (3.9)

where m_through and m_turn represent the average number of through-vehicles and turning vehicles, respectively, crossing the intersection per time unit. For this capacity calculation, these numbers do not need to be integers. See Section 4.2 for a numerical example and a comparison to the HCM capacity formula.
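As a quick numerical illustration of Equation (3.9) with values of our own choosing, a turning share p_i = 0.3 with m_turn = 0.8 and m_through = 1.0 gives m* = 0.3 · 0.8 + 0.7 · 1.0 = 0.94, a non-integer effective batch size. The same computation in code:

def m_star(p_i, m_turn, m_through):
    # Equation (3.9): effective number of departures per slot; the
    # illustrative values below are our own, not from the paper
    return p_i * m_turn + (1 - p_i) * m_through

print(m_star(0.3, 0.8, 1.0))  # 0.94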
Table 1 Simulation results for the bFCTL queue with Poisson arrivals with rate 0.39, g_1 = 2, g_2 = 4, r = 4, q_i = 1 and m = 1. The confidence intervals are based on 100 runs of 10,000 cycles each.

E[X_i]              p_i = 0                               p_i = 0.6
  i      Exact   Sim CI Lower   Sim CI Upper      Exact   Sim CI Lower   Sim CI Upper
  1      1.297   1.289          1.298             3.901   3.852          3.927
  2      0.926   0.917          0.925             4.148   4.102          4.178
  3      0.657   0.649          0.657             3.610   3.562          3.638
  4      0.465   0.458          0.465             3.126   3.078          3.154
  5      0.329   0.323          0.330             2.699   2.652          2.726
  6      0.233   0.228          0.233             2.325   2.280          2.352
  7      0.623   0.617          0.623             2.715   2.669          2.742
  8      1.013   1.007          1.014             3.105   3.059          3.133
  9      1.404   1.396          1.405             3.495   3.448          3.522
 10      1.793   1.785          1.794             3.885   3.838          3.912
Acknowledgements

We would like to thank Onno Boxma for several interesting discussions that, among other things, improved the readability of this manuscript. We are also thankful to Joris Walraevens, who suggested the current exposition of the PGF recursion, and to the reviewers, who suggested several improvements of the paper.

Funding

The work in this paper is supported by the Netherlands Organization for Scientific Research (NWO) under grant number 438-13-206.

Disclosure statement

The authors report there are no competing interests to declare.
References

[1] Abate, Joseph, and Ward Whitt. 1992. "Numerical inversion of probability generating functions." Operations Research Letters 12 (4): 245-251.
[2] Adan, Ivo J. B. F., Johan S. H. van Leeuwaarden, and Erik M. M. Winands. 2006. "On the application of Rouché's theorem in queueing theory." Operations Research Letters 34 (3): 355-360.
[3] Alhajyaseen, Wael K. M., Miho Asano, and Hideki Nakamura. 2013. "Left-turn gap acceptance models considering pedestrian movement characteristics." Accident Analysis and Prevention 50: 175-185.
[4] Allen, D. Patrick, Joseph E. Hummer, Nagui M. Rouphail, and Joseph S. Milazzo. 1998. "Effect of bicycles on capacity of signalized intersections." Transportation Research Record 1646 (1): 87-95.
[5] Biswas, Sabyasachi, Souvik Chakraborty, Indrajit Ghosh, and Satish Chandra. 2018. "Saturation Flow Model for Signalized Intersection under Mixed Traffic Condition." Transportation Research Record 2672 (15): 55-65.
[6] Transportation Research Board. 2016. Highway Capacity Manual, 6th Edition. The National Academies of Sciences, Engineering, and Medicine.
[7] Boon, M. A. A., A. J. E. M. Janssen, J. S. H. van Leeuwaarden, and R. W. Timmerman. 2021. "Optimal capacity allocation for heavy-traffic fixed-cycle traffic-light queues and intersections." In preparation.
[8] Boon, Marko A. A., Guido J. E. M. Janssen, Johan S. H. van Leeuwaarden, and Rik W. Timmerman. 2019. "Pollaczek contour integrals for the fixed-cycle traffic-light queue." Queueing Systems 91 (1-2): 89-111.
[9] Boon, Marko A. A., and Johan S. H. van Leeuwaarden. 2018. "Networks of fixed-cycle intersections." Transportation Research Part B: Methodological 117: 254-271.
[10] Chai, Chen, and Yiik D. Wong. 2014. "Traffic performance of shared lanes at signalized intersections based on cellular automata modeling." Journal of Advanced Transportation 48 (8): 1051-1065.
[11] Chen, Jingxu, Zhibin Li, Wei Wang, and Hang Jiang. 2018. "Evaluating bicycle-vehicle conflicts and delays on urban streets with bike lane and on-street parking." Transportation Letters 10 (1): 1-11.
[12] Chen, Peng, Hideki Nakamura, and Miho Asano. 2011. "Saturation flow rate analysis for shared left-turn lane at signalized intersections in Japan." Procedia-Social and Behavioral Sciences 16: 548-559.
[13] Chen, Peng, Hongsheng Qi, and Jian Sun. 2014. "Investigation of saturation flow on shared right-turn lane at signalized intersections." Transportation Research Record 2461 (1): 66-75.
[14] Chen, Xiaoming, Chunfu Shao, and Yue Hao. 2008. "Influence of pedestrian traffic on capacity of right-turning movements at signalized intersections." Transportation Research Record 2073 (1): 114-124.
[15] Chen, Xiaoming, Chunfu Shao, and Hao Yue. 2007. "Influence of bicycle traffic on capacity of typical signalized intersection." Tsinghua Science and Technology 12 (2): 198-203.
[16] Cheng, Cheng, Yuchuan Du, Lijun Sun, and Yuxiong Ji. 2016. "Review on theoretical delay estimation model for signalized intersections." Transport Reviews 36 (4): 479-499.
[17] Choudhury, Charisma F., and Moshe E. Ben-Akiva. 2013. "Modelling driving decisions: a latent plan approach." Transportmetrica A: Transport Science 9 (6): 546-566.
[18] Choudhury, Gagan L., and David M. Lucantoni. 1996. "Numerical computation of the moments of a probability distribution from its transform." Operations Research 44 (2): 368-381.
[19] Darroch, John N. 1964. "On the traffic-light queue." The Annals of Mathematical Statistics 35: 380-388.
[20] Guo, Yanming, Quan Yu, Yunlong Zhang, and Jian Rong. 2012. "Effect of bicycles on the saturation flow rate of turning vehicles at signalized intersections." Journal of Transportation Engineering 138 (1): 21-30.
[21] Hagen, Lawrence T., and Kenneth G. Courage. 1989. "Comparison of macroscopic models for signalized intersection analysis." Transportation Research Record 1225.
[22] Huang, Shaoluen, Azusa Toriumi, and Takashi Oguchi. 2020. "Random nature of shared left-turn lanes at signalized intersections." In 2020 IEEE Intelligent Transportation Systems Conference (ITSC), 3159-3166. IEEE.
[23] Janssen, Guido J. E. M., and Johan S. H. van Leeuwaarden. 2005. "Analytic computation schemes for the discrete-time bulk service queue." Queueing Systems 50 (2-3): 141-163.
[24] Kikuchi, Shinya, Nopadon Kronprasert, and Masanobu Kii. 2007. "Lengths of turn lanes on intersection approaches: three-branch fork lanes - left-turn, through, and right-turn lanes." Transportation Research Record 2023 (1): 92-101.
[25] Levinson, Herbert S. 1989. "Capacity of shared left-turn lanes - a simplified approach." Transportation Research Record (1225).
[26] Liu, Yue, and Gang-Len Chang. 2011. "An arterial signal optimization model for intersections experiencing queue spillback and lane blockage." Transportation Research Part C: Emerging Technologies 19 (1): 130-144.
[27] Liu, Yue, Jie Yu, Gang-Len Chang, and Saed Rahwanji. 2008. "A lane-group based macroscopic model for signalized intersections account for shared lanes and blockages." In 2008 11th International IEEE Conference on Intelligent Transportation Systems, 639-644. IEEE.
[28] Ma, Zian, Jian Sun, and Yunpeng Wang. 2017. "A two-dimensional simulation model for modelling turning vehicles at mixed-flow intersections." Transportation Research Part C: Emerging Technologies 75: 103-119.
[29] Maes, Klaas J. 2015. "Networks of fixed-cycle traffic-lights." Master's thesis, Eindhoven University of Technology.
[30] McNeil, Donald R. 1968. "A solution to the fixed-cycle traffic light problem for compound Poisson arrivals." Journal of Applied Probability 5 (3): 624-635.
[31] Milazzo, Joseph S., Nagui M. Rouphail, Joseph E. Hummer, and D. Patrick Allen. 1998. "Effect of pedestrians on capacity of signalized intersections." Transportation Research Record 1646 (1): 37-46.
[32] Newell, Gordon F. 1965. "Approximation methods for queues with application to the fixed-cycle traffic light." SIAM Review 7 (2): 223-240.
[33] Oblakova, Anna, Ahmad Al Hanbali, Richard J. Boucherie, Jan-Kees W. van Ommeren, and Henk H. M. Zijm. 2019. "An exact root-free method for the expected queue length for a class of discrete-time queueing systems." Queueing Systems 92 (3-4): 257-292.
[34] Roshani, Mostafa, and Iraj Bargegol. 2017. "Effect of pedestrians on the saturation flow rate of right turn movements at signalized intersection - case study from Rasht city." In IOP Conference Series: Materials Science and Engineering, Vol. 245, 042032. IOP Publishing.
[35] Rouphail, Nagui M., and Brian S. Eads. 1997. "Pedestrian impedance of turning-movement saturation flow rates: comparison of simulation, analytical, and field observations." Transportation Research Record 1578 (1): 56-63.
[36] Stidham Jr., Shaler. 1972. "L = λW: a discounted analogue and a new proof." Operations Research 20 (6): 1115-1126.
[37] Tageldin, Ahmed, and Tarek Sayed. 2019. "Models to evaluate the severity of pedestrian-vehicle conflicts in five cities." Transportmetrica A: Transport Science 15 (2): 354-375.
[38] Tian, Zong Z., and Ning Wu. 2006. "Probabilistic model for signalized intersection capacity with a short right-turn lane." Journal of Transportation Engineering 132 (3): 205-212.
[39] Transportation Research Board. 2010. Highway Capacity Manual, 5th Edition (HCM2010). Transportation Research Board, Washington, D.C.
[40] van Leeuwaarden, Johan S. H. 2006. "Delay analysis for the fixed-cycle traffic-light queue." Transportation Science 40 (2): 189-199.
[41] Webster, F. V. 1958. Traffic signal settings. Technical Report. Road Research Board.
[42] Wu, Ning. 1999. "Capacity of shared-short lanes at unsignalized intersections." Transportation Research Part A: Policy and Practice 33 (3-4): 255-274.
[43] Wu, Ning. 2011. "Modelling blockage probability and capacity of shared lanes at signalized intersections." Procedia-Social and Behavioral Sciences 16: 481-491.
[44] Yan, Yadan, Xiaobo Qu, and Hui Li. 2018. "On the design and operational performance of waiting areas in at-grade signalized intersections: an overview." Transportmetrica A: Transport Science 14 (10): 901-928.
[45] Yang, Qiaoli, Zhongke Shi, Shaowei Yu, and Jie Zhou. 2018. "Analytical evaluation of the use of left-turn phasing for single left-turn lane only." Transportation Research Part B: Methodological 111: 266-303.
[46] Yao, Ronghan, and H. Michael Zhang. 2013. "Optimal allocation of lane space and green splits of isolated signalized intersections with short left-turn lanes." Journal of Transportation Engineering 139 (7): 667-677.
[47] Zhang, Yunlong, and Jiaxin Tong. 2008. "Modeling left-turn blockage and capacity at signalized intersection with short left-turn bay." Transportation Research Record 2071 (1): 71-76.
[48] Zhou, Zhuping, Yang Zhou, Ziyuan Pu, and Yongneng Xu. 2019. "Simulation of pedestrian behavior during the flashing green signal using a modified social force model." Transportmetrica A: Transport Science 15 (2): 1019-1040.
[
"BINOMIAL VANISHING IDEALS",
"BINOMIAL VANISHING IDEALS"
] | [
"Azucena Tochimani ",
"Rafael H Villarreal "
] | [] | [] | In this note we characterize, in algebraic and geometric terms, when a graded vanishing ideal is generated by binomials over any field K. | 10.13069/jacodesmath.12847 | [
"https://arxiv.org/pdf/1503.02323v2.pdf"
] | 119,736,650 | 1503.02323 | 27d2418500d8814d5c8f9b8192208204099564c9 |
BINOMIAL VANISHING IDEALS
21 Apr 2015
Azucena Tochimani
Rafael H Villarreal
BINOMIAL VANISHING IDEALS
21 Apr 2015
In this note we characterize, in algebraic and geometric terms, when a graded vanishing ideal is generated by binomials over any field K.
Introduction
Let S = K[t 1 , . . . , t s ] be a polynomial ring over a field K with the standard grading induced by setting deg(t i ) = 1 for all i. By the dimension of an ideal I ⊂ S we mean the Krull dimension of S/I. The affine and projective spaces over the field K of dimensions s and s − 1 are denoted by A s and P s−1 , respectively. Points of P s−1 are denoted by [α], where 0 = α ∈ A s . Given a set Y ⊂ P s−1 define I(Y), the vanishing ideal of Y, as the graded ideal generated by the homogeneous polynomials in S that vanish at all points of Y. Conversely, given a homogeneous ideal I ⊂ S define V (I), the zero set of I, as the set of all [α] ∈ P s−1 such that f (α) = 0 for all homogeneous polynomial f ∈ I. The zero sets are the closed sets of the Zariski topology of P s−1 . The Zariski closure of Y is denoted by Y.
We will use the following multi-index notation: for a = (a 1 , . . . , a s ) ∈ Z s , set t a = t a 1 1 · · · t as s . We call t a a Laurent monomial. If a i ≥ 0 for all i, t a is called a monomial of S. A binomial of S is an element of the form f = t a − t b , for some a, b in N s . An ideal I ⊂ S generated by binomials is called a binomial ideal. A binomial ideal I ⊂ S with the property that t i is not a zero-divisor of S/I for all i is called a lattice ideal.
In this note we classify binomial vanishing ideals in algebraic and geometric terms. There are some reasons to study vanishing ideals. They are used in algebraic geometry [5] and algebraic coding theory [4,8]. They are also used in polynomial interpolation problems [3,6,11].
The set S = P s−1 ∪ {[0]} is a monoid under componentwise multiplication, that is, given [α] = [(α 1 , . . . , α s )] and [β] = [(β 1 , . . . , β s )] in S, the product operation is given by
[α] · [β] = [α · β] = [(α 1 β 1 , . . . , α s β s )],
where [1] = [(1, . . . , 1)] is the identity element. Accordingly the affine space A s is also a monoid under componentwise multiplication.
The contents of this note are as follows. In Section 2 we recall some preliminaries on projective varieties and vanishing ideals. Let Y be a subset of If Y is a submonoid of an affine torus (see Definition 3.9), then I(Y ) is a non-graded lattice ideal [2, Proposition 2.3]. We give a graded version of this result, namely, if Y is a submonoid of a projective torus, then I(Y) is a lattice ideal (Corollary 3.10).
P s−1 . If Y∪{[0]} is a submonoid of P s−1 ∪{[0]}, we show that I(Y) is
Let I(Y) be a vanishing ideal of dimension 1. According to [9, Proposition 6.7(a)] I(Y) is a lattice ideal if and only if Y is a finite subgroup of a projective torus. We complement this result by showing that-over an algebraically closed field-Y is a finite subgroup of a projective torus if and only if there is a finite subgroup H of K * = K \ {0} and v 1 , . . . , v s ∈ Z n that parameterize Y relative to H (Proposition 3.12). For finite fields, this result was shown in [9, Proposition 6.7(b)].
Finally, we classify the graded lattice ideals of dimension 1 over an algebraically closed field of characteristic zero. It turns out that they are the vanishing ideals of finite subgroups of projective tori (Proposition 3.14).
For all unexplained terminology and additional information, we refer to [1,5] (for algebraic geometry and vanishing ideals) and [2,10,12] (for binomial and lattice ideals).
Preliminaries
In this section, we present a few results that will be needed in this note. All results of this section are well-known.
Definition 2.1. Let K be a field. We define the projective space of dimension s − 1 over K, denoted by P s−1 K or P s−1 if K is understood, to be the quotient space (K s \ {0})/ ∼ where two points α, β in K s \ {0} are equivalent under ∼ if α = cβ for some c ∈ K.
It is usual to denote the equivalence class of α by [α]. The affine space of dimension s over the field K,
denoted A s K or A s , is K s .
For any set Y ⊂ P s−1 define I(Y), the vanishing ideal of Y, as the ideal generated by the homogeneous polynomials in S that vanish at all points of Y. Conversely, given a homogeneous ideal I ⊂ S define its zero set as
V (I) = [α] ∈ P s−1 | f (α) = 0, ∀f ∈ I homogeneous .
A projective variety is the zero set of a homogeneous ideal. It is not difficult to see that the members of the family
τ = {P s−1 \ V (I) | I is a graded ideal of S}
are the open sets of a topology on P s−1 , called the Zariski topology. In a similar way we can define affine varieties, vanishing ideals of subsets of the affine space A s , and the corresponding Zariski topology of A s . The Zariski closure of Y is denoted by Y.
Lemma 2.2. Let K be a field.
(a) [1, pp. 191-192 The converse of Lemma 2.4 is true. This follows from the next result.
] If Y ⊂ A s and Y ⊂ P s−1 , then Y = V (I(Y )) and Y = V (I(Y)). (b) If K is a finite field, then Y = V (I(Y )) and Y = V (I(Y)). Proof. Part (b) follows from (a) because Y = Y and Y = Y, if K is finite.
Lemma 2.5. Let Y and Y be finite subsets of P s−1 and A s respectively, let P and [P ] be points in Y and Y , respectively, with P = (α 1 , . . . , α s ), and let I [P ] and I P be the vanishing ideal of [P ] and P , respectively. Then
(2.1) I [P ] = ({α k t i − α i t k | k = i ∈ {1, . . . , s}}), I P = (t 1 − α 1 , . . . , t s − α s ), where α k = 0 for some k. Furthermore I(Y) = [Q]∈Y I [Q] , I(Y ) = Q∈Y I Q , I [P ]
is a prime ideal of height s − 1 and I P is a prime ideal of height s.
A classification of vanishing ideals generated by binomials
We continue to employ the notations and definitions used in Sections 1 and 2. In this part we classify vanishing ideals generated by binomials.
Let (S, · , 1) be a monoid and let K be a field. As usual we define a character χ of S in K (or a K-character of S) to be a homomorphism of S into the multiplicative monoid (K, ·, 1). Thus χ is a map of S into K such that χ(1) = 1 and χ(αβ) = χ(α)χ(β) for all α, β in S.
λ 1 χ 1 (α) + · · · + λ m χ m (α) = 0
for all α ∈ S are λ 1 = · · · = λ m = 0.
= {x ∈ A s | [x] ∈ Y ∪ {[0]
}} is a submonoid of A s . Take a homogeneous polynomial 0 = f = λ 1 t a 1 + · · · + λ m t am that vanishes at all points of Y, where λ i ∈ K \ {0} for all i and a 1 , . . . , a m are distinct non-zero vectors in N s . We set a i = (a i 1 , . . . , a is ) for all i. For each i consider the K-character of S given by χ i : S → K, (α 1 , . . . , α s ) → α a i1 1 · · · α a is s .
As f ∈ I(Y), one has that λ 1 χ 1 + · · · + λ m χ m = 0. Hence, by Theorem 3.1, we get that m ≥ 2 and χ i = χ j for some i = j. Thus t a i − t a j is in I(Y). For simplicity of notation we assume that i = 1 and j = 2. Since [1] ∈ Y, we get that λ 1 + · · · + λ m = 0. Thus f = λ 2 (t a 2 − t a 1 ) + · · · + λ m (t am − t a 1 ).
Since f − λ 2 (t a 2 − t a 1 ) is a homogeneous polynomial in I(Y), by induction on m, we obtain that f is a sum of homogeneous binomials in I(Y).
This result can be restated as:
Proposition 3.4. [2]
If Y is a submonoid of A s and τ ∈ K * , then I(Y ) is a binomial ideal and I(τ Y ) is a non-pure binomial ideal. = (a 1 , . . . , a s ) ∈ N s , we set |a| = i a i . Then it is not hard to see that the set 1 · · · α as s = α b 1 1 · · · α bs s and β a 1 1 · · · β as s = β b 1 1 · · · β bs s , and consequently (α 1 β 1 ) a 1 · · · (α s β s ) as = ( Proof. This is a direct consequence of Lemma 2.4 and Corollary 3.7 because any finite set is closed in the Zariski topology. Proof. (a) ⇒ (b): By Lemma 2.4 the set Y is finite. Using Corollary 3.8 and Lemma 2.5 it follows that Y is a submonoid of T . As the cancellation laws hold in T and Y is finite, we get that Y is a group.
Proof. That I(Y ) is a binomial ideal follows readily by adapting the proof of Theorem 3.2. Let
{t b i − t c i } r i=1 be a set of generators of I(Y ) with b i , c i in N s for all i. If a{t b i /τ |b i | − t c i /τ |c i | } r i=1 generates I(τ Y ), that is, I(τ Y ) is a non-pure binomial ideal.α 1 β 1 ) b 1 · · · (α s β s ) bs , i.e., f vanishes at [α] · [β] = [α · β] if α · β = 0. Thus [α] · [β] ∈ V (I(Y)) ∪ {[0]}.
(b) ⇒ (a): This is a direct consequence of Corollary 3.10.
Proposition 3.12. Let K be an algebraically closed field. If Y ⊂ P s−1 , then the following are equivalent:
(a) Y is a finite subgroup of a projective torus T .
(b) There is a finite subgroup H of K * and v 1 , . . . , v s ∈ Z n such that
Y = {[(x v 1 , . . . , x vs )] | x = (x 1 , . . . , x n ) and x i ∈ H for all i} ⊂ P s−1 .
Proof. (b) ⇒ (a): It is not hard to verify that Y is a subgroup of T using the parameterization of Y relative to H.
Y = [α 1 ] i 1 · · · [α n ] in i 1 , . . . , i n ∈ Z .
We set α i = (α i1 , . . . , α is ) for i = 1, . . . , n.
≤ i ≤ n there is m i = o([α i ]) such that [α i ] m i = [1]. Thus (α m i i1 , . . . , α m i is ) = (λ i , . . . , λ i ) for some λ i ∈ K * . Pick µ i ∈ K * such that µ m i i = λ i . Setting, β ij = α ij /µ i ,
one has β m i ij = 1 for all i, j, that is all β ij 's are in K * and have finite order. Consider the subgroup H of K * generated by all β ij 's. This group is cyclic because K is a field. If β is a generator of (H, · ), we can write α ij /µ i = β v ji for some v ji in N. Hence We set v i = (v i1 , . . . , v in ) for i = 1, . . . , s. Let Y H be the set in P s−1 parameterized by the monomials y v 1 , . . . , y vs relative to H. If [γ] ∈ Y, then we can write
[γ] = [α 1 ] i 1 · · · [α n ] in = [((β i 1 ) v 11 · · · (β in ) v 1n , . . . , (β i 1 ) v s1 · · · (β in ) vsn )] for some i 1 , . . . , i n ∈ Z. Thus [γ] ∈ Y H . Conversely if [γ] ∈ Y H , then [γ] = [(x v 1 , . . . , x vs )]
for some x 1 , . . . , x n in H. Since any x k is of the form β i k for some integer i k , one can write [γ] = [α 1 ] i 1 · · · [α n ] in , that is, [γ] ∈ Y.
Remark 3.13. The equivalence between (a) and (b) was shown in [9, Proposition 6.7(b)] under the assumption that K is a finite field. Proposition 3.14. Let K be an algebraically closed field of characteristic zero and let I be a graded ideal of S of dimension 1. Then I is a lattice ideal if and only if I is the vanishing ideal of a finite subgroup Y of a projective torus T .
Proof. ⇒) Assume that I = I(L) is the lattice ideal of a lattice L in Z s . Since I is graded and dim(S/I) = 1, for each i ≥ 2, there is a i ∈ N + such that f i := t a i i − t a i 1 ∈ I. This polynomial has a factorization into linear factors of the form t i − µt 1 with µ ∈ K * . In characteristic zero a lattice ideal is radical [12,Theorem 8.2.27]. Therefore I is the intersection of its minimal primes and each minimal prime is generated by s − 1 linear polynomials of the form t i − µt 1 . It follows that I is the vanishing ideal of some finite subset Y of a projective torus T . By Corollary 3.7, Y is a submonoid of T . As the cancellation laws hold in T and Y is finite, we get that Y is a group.
⇐) This implication follows at once from Corollary 3.10.
Lemma 2. 3 .
3Let K be a field. If Y is a subset of A s or a subset of P s−1 and Z = V (I(Y )), then I(Z) = I(Y ). In particular I(Y ) = I(Y ). Proof. Since Y ⊂ Z, we get I(Z) ⊂ I(Y ). As I(Z) = I(V (I(Y ))) ⊃ I(Y ), one has equality. Lemma 2.4. [1, Proposition 6, p. 441] If Y ⊂ P s−1 and dim(S/I(Y)) = 1, then |Y| < ∞.
Theorem 3.1. (Dedekind's Theorem [7, p. 291]) If χ 1 , . . . , χ m are distinct characters of a monoid S into a field K, then the only elements λ 1 , . . . , λ m in K such that
Theorem 3. 2 .
2If Y is a subset of P s−1 and Y ∪ {[0]} is a submonoid of P s−1 ∪ {[0]} under componentwise multiplication, then I(Y) is a binomial ideal.Proof. The set S
Theorem 3 . 3 .
33Let Y be a subset of P s−1 such that[1] ∈ Y and [α] · [β] ∈ Y for all [α], [β] in Y with α · β = 0. Then I(Y) is a binomial ideal.The next result was observed in the Remark after [2, Proposition 2.3].
Theorem 3. 5 .
5Let K be a field and let Y be a subset of P s−1 . Then I(Y) is a binomial ideal if and only if V (I(Y)) ∪ {[0]} is a monoid under componentwise multiplication. Proof. ⇒) Consider an arbitrary non-zero binomial f = t a −t b in I(Y) with a = (a i ) and b = (b i ) in N s . As I(Y) is graded, f is homogeneous. First notice that [1] ∈ V (I(Y)) because f vanishes at [1]. Take [α], [β] in V (I(Y)) with α = (α i ), β = (β i ). Then α a 1
⇐) Thanks to Theorem 3.2 one has that I(V (I(Y))) is a binomial ideal. Recall that V (I(Y)) is equal to Y (see Lemma 2.2). On the other hand, by Lemma 2.3, I(Y) = I(Y). Thus I(Y) is a binomial ideal.
Remark 3 . 6 .
36If Y ⊂ A s , then I(Y ) is a binomial ideal if and only if V (I(Y )) is a submonoid of A s under componentwise multiplication. This follows by adapting the proof of Theorem 3.5.
Corollary 3 . 7 .
37If Y is a subset of P s−1 which is closed in the Zariski topology, then I(Y) is a binomial ideal if and only if Y ∪ {[0]} is a submonoid of P s−1 ∪ {[0]}.Proof. Thanks to Theorem 3.5 it suffices to recall that V (I(Y)) is equal to Y (see Lemma 2.2).
Corollary 3 . 8 .
38If Y is a subset of P s−1 and dim(S/I(Y)) = 1, then I(Y) is a binomial ideal if and only if Y ∪ {[0]} is a submonoid of P s−1 ∪ {[0]}.
Definition 3 . 9 .
39The set T = {[(x 1 , . . . , x s )] ∈ P s−1 | x i ∈ K * for all i} is called a projective torus in P s−1 , and the setT * = (K * ) s is called an affine torus in A s , where K * = K \ {0}.If Y is a submonoid of an affine torus T * , then I(Y ) is a non-graded lattice ideal (see[2, Proposition 2.3]). The following corollary is the graded version of this result.
Corollary 3. 10 .
10If Y is a submonoid of a projective torus T , then I(Y) is a lattice ideal.Proof. By Theorem 3.2, I(Y) is a binomial ideal. Thus it suffices to show that t i is not a zerodivisor of S/I(Y) for all i. If f ∈ S and t i f vanishes at all points of Y, then so does f , as required.
Corollary 3 .
311. [9, Proposition 6.7(a)] If Y ⊂ P s−1 and dim(S/I(Y)) = 1, then the following are equivalent: (a) I(Y) is a lattice ideal. (b) Y is a finite subgroup of a projective torus T .
(a) ⇒ (b): By the fundamental theorem of finitely generated abelian groups, Y is a direct product of cyclic groups. Hence, there are [α 1 ], . . . , [α n ] in Y such that
[α 1
1] = [(β v 11 , . . . , β v s1 )], . . . , [α n ] = [(β v 1n , . . . , β vsn )].
a binomial ideal (Theorem 3.2). The same type of result holds if Y is a subset of A s (Proposition 3.4). Then we show that I(Y) is a binomial ideal if and only if V (I(Y)) ∪ {[0]} is a monoid under componentwise multiplication (Theorem 3.5). As a result if Y is finite, then I(Y) is a binomial ideal if and only if Y ∪ {0} is a monoid (Corollary 3.7). This essentially classifies all graded binomial vanishing ideals of dimension 1 (Corollary 3.8)
As [α 1 ], . . . , [α n ] have finite order, for each 1
Acknowledgments. We thank Thomas Kahle for his comments and for pointing out the Remark after [2, Proposition 2.3]. The authors would also like to thank the referees for their careful reading of the paper and for the improvements that they suggested.
D Cox, J Little, D O'shea, Ideals, Varieties, and Algorithms. Springer-VerlagD. Cox, J. Little and D. O'Shea, Ideals, Varieties, and Algorithms, Springer-Verlag, 1992.
. D Eisenbud, B Sturmfels, Binomial ideals, Duke Math. J. 84D. Eisenbud and B. Sturmfels, Binomial ideals, Duke Math. J. 84 (1996), 1-45.
Polynomial interpolation in several variables. M Gasca, Mariano , T Sauer, Adv. Comput. Math. 124M. Gasca, Mariano and T. Sauer, Polynomial interpolation in several variables, Adv. Comput. Math. 12 (2000), no. 4, 377-410.
. M González-Sarabia, C Rentería, H Tapia-Recillas, Finite Fields Appl. 84Reed-Muller-type codes over the Segre varietyM. González-Sarabia, C. Rentería and H. Tapia-Recillas, Reed-Muller-type codes over the Segre variety, Finite Fields Appl. 8 (2002), no. 4, 511-518.
Algebraic Geometry. A first course. J Harris, Graduate Texts in Mathematics. 133Springer-VerlagJ. Harris, Algebraic Geometry. A first course, Graduate Texts in Mathematics 133, Springer-Verlag, New York, 1992.
Interpolation in affine and projective space over a finite field. M Hellus, R Waldi, J. Commut. Algebra. to appearM. Hellus and R. Waldi, Interpolation in affine and projective space over a finite field, J. Commut. Algebra, to appear.
Basic Algebra I , Second Edition. N Jacobson, Freeman and CompanyNew YorkN. Jacobson, Basic Algebra I , Second Edition, W. H. Freeman and Company, New York, 1996.
Affine cartesian codes. H H López, C Rentería, R H Villarreal, Des. Codes Cryptogr. 711H. H. López, C. Rentería and R. H. Villarreal, Affine cartesian codes, Des. Codes Cryptogr. 71 (2014), no. 1, 5-19.
Regularity and algebraic properties of certain lattice ideals. J Neves, M Pinto, R H Villarreal, Bull. Braz. Math. Soc. (N.S.). 454J. Neves, M. Vaz Pinto and R. H. Villarreal, Regularity and algebraic properties of certain lattice ideals, Bull. Braz. Math. Soc. (N.S.) 45 (2014), no. 4, 777-806.
Degree and algebraic properties of lattice and matrix ideals. L Carroll, F Planas-Vilanova, R H Villarreal, SIAM J. Discrete Math. 281L. O'Carroll, F. Planas-Vilanova and R. H. Villarreal, Degree and algebraic properties of lattice and matrix ideals, SIAM J. Discrete Math. 28 (2014), no. 1, 394-427.
Vanishing ideals over rational parameterizations. A Tochimani, R H Villarreal, arXiv:1502.05451v1PreprintA. Tochimani and R. H. Villarreal, Vanishing ideals over rational parameterizations. Preprint, 2015, arXiv:1502.05451v1.
R H Villarreal, Monomial Algebras, Second Edition, Monographs and Research Notes in Mathematics. CRC PressR. H. Villarreal, Monomial Algebras, Second Edition, Monographs and Research Notes in Mathematics, CRC Press, 2015.
| [] |
[
"Proximity superconductivity in atom-by-atom crafted quantum dots",
"Proximity superconductivity in atom-by-atom crafted quantum dots"
] | [
"Lucas Schneider *e-mail:[email protected] \nDepartment of Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n",
"Khai That Ton \nDepartment of Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n",
"Ioannis Ioannidis \nI. Institute for Theoretical Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n\nCentre for Ultrafast Imaging\nLuruper Chaussee 149D-22761HamburgGermany\n",
"Jannis Neuhaus-Steinmetz \nDepartment of Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n",
"Thore Posske \nI. Institute for Theoretical Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n\nCentre for Ultrafast Imaging\nLuruper Chaussee 149D-22761HamburgGermany\n",
"Roland Wiesendanger \nDepartment of Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n",
"Jens Wiebe \nDepartment of Physics\nUniversity of Hamburg\nD-20355HamburgGermany\n"
] | [
"Department of Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"Department of Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"I. Institute for Theoretical Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"Centre for Ultrafast Imaging\nLuruper Chaussee 149D-22761HamburgGermany",
"Department of Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"I. Institute for Theoretical Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"Centre for Ultrafast Imaging\nLuruper Chaussee 149D-22761HamburgGermany",
"Department of Physics\nUniversity of Hamburg\nD-20355HamburgGermany",
"Department of Physics\nUniversity of Hamburg\nD-20355HamburgGermany"
] | [] | Gapless materials in electronic contact with superconductors acquire proximity-induced superconductivity in a region near the interface 1,2 . Numerous proposals build on this addition of electron pairing to originally non-superconducting systems like ferromagnets and predict intriguing quantum phases of matter, including topological-3-7 , odd-frequency-8 , or nodal-point 9 superconductivity. However, atomic-scale experimental investigations of the microscopic mechanisms leading to proximity-induced Cooper pairing in surface or interface states are missing. Here, we investigate the most miniature example of the proximity effect on only a single quantum level of a surface state confined in a quantum corral 10,11 on a superconducting substrate, built atom-by-atom by a scanning tunneling microscope. Whenever an eigenmode of the corral is pitched close to the Fermi energy by adjusting the corral's size, a pair of particle-hole symmetric states enters the superconductor's gap. We identify the in-gap states as scattering resonances theoretically predicted 50 years ago by Machida and Shibata 12 , which had so far eluded detection. We further show that the observed anticrossings of the in-gap states indicate proximity-induced pairing in the quantum corral's eigenmodes. Our results have direct consequences on the interpretation of in-gap states in unconventional or topological superconductors, corroborate concepts to induce superconductivity into a single quantum level and further pave the way towards superconducting artificial lattices.MainCombining the individual properties of different quantum materials in hybrid structures offers a seemingly inexhaustible variety of exotic phases of matter 13 , including strongly correlated electron systems 14 , topologically non-trivial spin textures 15 , quantum anomalous Hall effects 16 , or unconventional superconductivity 4,6-8 . Particularly interesting states are formed when superconductivity (SC) is induced into intrinsically non-superconducting materials by the proximity effect 1,2 , giving rise to topological-3-7 , spin-triplet-8 , nodal-point-9 or Fulde-Ferrell-Larkin-Ovchinnikov-SC 17 . A good understanding and control of the proximity effect in metallic nanostructures is crucial for the development of such novel heterostructures. Pairing in the normal metal is induced via Andreev reflection processes at the interface with the superconductor. If the transparency of the interface between a normal metal in the clean limit and the superconductor is high, SC is induced over a length scale which can exceed dozens of nanometers 18 . However, for many of the exciting heterostructures, SC has to be induced through interface states or into surface states 6,19-22 . These are typically well decoupled from | null | [
"https://export.arxiv.org/pdf/2212.00657v1.pdf"
] | 254,125,446 | 2212.00657 | 33224a74bbbec22ed72c902796e216be5fe4ea69 |
Proximity superconductivity in atom-by-atom crafted quantum dots
Lucas Schneider *e-mail:[email protected]
Department of Physics
University of Hamburg
D-20355HamburgGermany
Khai That Ton
Department of Physics
University of Hamburg
D-20355HamburgGermany
Ioannis Ioannidis
I. Institute for Theoretical Physics
University of Hamburg
D-20355HamburgGermany
Centre for Ultrafast Imaging
Luruper Chaussee 149D-22761HamburgGermany
Jannis Neuhaus-Steinmetz
Department of Physics
University of Hamburg
D-20355HamburgGermany
Thore Posske
I. Institute for Theoretical Physics
University of Hamburg
D-20355HamburgGermany
Centre for Ultrafast Imaging
Luruper Chaussee 149D-22761HamburgGermany
Roland Wiesendanger
Department of Physics
University of Hamburg
D-20355HamburgGermany
Jens Wiebe
Department of Physics
University of Hamburg
D-20355HamburgGermany
Proximity superconductivity in atom-by-atom crafted quantum dots
Main
Combining the individual properties of different quantum materials in hybrid structures offers a seemingly inexhaustible variety of exotic phases of matter 13 , including strongly correlated electron systems 14 , topologically non-trivial spin textures 15 , quantum anomalous Hall effects 16 , or unconventional superconductivity 4,6-8 . Particularly interesting states are formed when superconductivity (SC) is induced into intrinsically non-superconducting materials by the proximity effect 1,2 , giving rise to topological-3-7 , spin-triplet-8 , nodal-point-9 or Fulde-Ferrell-Larkin-Ovchinnikov-SC 17 . A good understanding and control of the proximity effect in metallic nanostructures is crucial for the development of such novel heterostructures. Pairing in the normal metal is induced via Andreev reflection processes at the interface with the superconductor. If the transparency of the interface between a normal metal in the clean limit and the superconductor is high, SC is induced over a length scale which can exceed dozens of nanometers 18 . However, for many of the exciting heterostructures, SC has to be induced through interface states or into surface states 6,19-22 . These are typically well decoupled from
the bulk bands and thus it is unclear a priori whether they will acquire sufficient pairing if their distance to the superconductor is larger than a few nanometers 18,22 . To study this effect in detail, we downscale the problem as much as possible by investigating only a single resonance mode of a surface state. This is achieved by laterally confining the surface state in a quantum corral, forming a particular quantum dot (QD). These can naturally occur in nanoscopic islands 23,24 or, in a more tunable platform, in artificially designed adsorbate arrays 10,25,26 where the QD walls are built atom-by-atom using the tip of a scanning tunneling microscope as a tool. Although the surface states are typically well decoupled from metallic bulk states in the direction perpendicular to the surface plane, scattering at step edges or the adsorbates is known to introduce a measurable coupling to the bulk electronic states leading to a lifetime broadening of the QD's eigenmodes on the order of several meV 27,28 . Notably, in contrast to the more widely studied semiconductor or molecular QDs 29 , the electron density screening the metallic QDs investigated here is by orders of magnitude larger, which leads to largely suppressed electron-electron interactions, i.e., the QD charging energy is negligible, and thereby, the QD can be described by spin-degenerate single-particle eigenmodes. Recently, coupled arrays of such QDs with tunable interactions between adjacent sites have evolved as an exciting platform for the simulation of quantum materials 30,31 . However, while there has been progress in choosing different material templates for incorporating more complex phenomena like, e.g., Rashba spin-orbit coupling into these QDs 32 , pathways for inducing SC into their individual eigenmodes have not been studied so far. To this end, artificial lattices including SC pairing terms are among the most promising platforms for the realization and control of topological superconductivity and in particular of Majorana zero modes 33,34 .
Here, we study artificial QDs defined by a cage of Ag atoms on thin Ag(111) islands (see Figs. 1a and b, Methods, and Supplementary Note 1) grown on superconducting Nb(110) using scanning tunneling microscopy (STM) and -spectroscopy (STS). We employ superconducting Nb tips for enhanced energy resolution in the STS experiments. The use of Nb tips leads to a shift of spectral features to higher energies by the value of the tip's superconducting gap ∆t, i.e., states at the sample's Fermi energy EF are found at bias voltages of e·V = ±∆t. The proximity to Nb(110) opens a superconducting gap of 2∆s in the bulk states of Ag(111) for island thicknesses dAg well below 100 nm 18,35 . We measure a value of ∆s = 1.35 meV (Supplementary Note 2), which is similar to the gap of elemental Nb, ∆Nb = 1.50 meV 3,36 , indicating a high interface quality between Nb and Ag. The outline of the experiment is shown in Fig. 1b: the scattered Ag(111) surface state electrons visible as wavy patterns at the surface of Ag islands (Fig. 1a) are confined within a couple of lattice constants in the direction perpendicular to the surface 37 but still have a finite coupling ν ∝ √Γ to the superconducting Ag bulk electrons 28,38 . We further confine these electrons laterally within QDs built of walls of Ag atoms resulting in spin-degenerate eigenmodes of energies Er which can be pitched to the Fermi energy EF by adjusting the width Lx of the QD. We then investigate the proximity effect of the bulk electrons onto these QD eigenmodes.
b, Sketch of the experimental setup with the QD having walls built of single atoms, laterally confining the surface state electrons such that spin-degenerate QD eigenmodes of energies Er are formed. The QD eigenmodes couple to the gapped superconducting substrate (∆s) with a strength ν ∝ √Γ. Er can be pitched with respect to the Fermi energy EF by adjusting the width Lx of the QD. c, Constant-current STM image of a rectangular QD with side-lengths Lx and Ly consisting of 44 non-magnetic Ag atoms. Lx and Ly are defined as the distance between the Ag atoms in the inner ring. The QD is surrounded by another 44 Ag atoms preventing surface state modes located outside the QD from leaking into the structure. d, Constant-current STM image of the same structure with one of the QD walls moved to the lower end as indicated by the white arrow. e, Upper panels: Constant-height dI/dV maps at bias voltages indicated in the respective panels measured in the interior of the QD in panel d (area marked by the yellow dashed lines). All panels are 15 × 7.5 nm² in size. Lower panels: Simulation of a rectangular box with infinitely high potential walls and dimensions Lx = 16.4 nm, Ly = 9.1 nm assuming a parabolic dispersion of the quasiparticles in 2D with meff = 0.51 m0 and E0 = -30.3 meV (see Supplementary Note 4 for details). The quantum numbers [nx, ny] of the dominant eigenmodes at the energies of the experimental maps are given below each simulated map. Note that some maps are a mixture of two eigenmodes because of energy level broadening. f, dI/dV line profiles along the dashed orange vertical lines of the two QDs marked in panels c and d obtained in constant height mode. QD eigenmodes with ny = 1 and nx as indicated by the arrows above the panel are observed. Their respective energy is shifted when the length Lx is altered as illustrated by the black arrows. Parameters: V = 50 mV, I = 1 nA, Vmod = 5 mV for panel a; V = 5 mV, I = 1 nA for panels c and d; Vstab = -100 mV, Istab = 2 nA, Vmod = 2 mV for e and f.
Design and pitching of quantum dot states
As shown by Limot et al. 39 , individual Ag atoms can be reproducibly gathered by approaching the STM tip close to the Ag(111) surface (see Supplementary Note 3 for details). These can be arranged to form rectangular artificial QDs of tunable sizes (Figs. 1c, d) using lateral atom manipulation techniques (see Methods). Since the Ag walls of the QDs have a finite transparency for the surface state electrons, a second wall of Ag atoms is constructed around the central QD wall in order to screen the interior from surface state modes located outside of the structure. The spatial structure of the QD's eigenmodes can be mapped by measuring the differential conductance dI/dV(x, y, E) at a particular bias voltage eV = E. The resulting patterns (Fig. 1e, upper panels) closely resemble the eigenmodes of a two-dimensional rectangular box potential with infinite walls having a well-defined number of antinodes in x and y direction [nx, ny] (Fig. 1e, lower panels, see Supplementary Note 4 for details). In the following, the width Ly of the QD is kept fixed while the length Lx is tuned by moving the upper Ag wall laterally (see Fig. 1d). This leads to a change in the confinement conditions such that the eigenenergies of the QD states are shifted. Experimentally, this can be verified by measuring dI/dV line profiles along lines close to the central axis of a given QD (Fig. 1f, upper panel): the eigenmodes with ny = 1 and nx = 1, 2, 3, … can be identified and are marked by black arrows. When the QD length Lx is changed from 24.0 nm to 16.4 nm (lower panel), a shift of the individual states to higher energies can be observed (black arrows) 26 . Note that it can already be seen by comparison of the top and bottom panels of Fig. 1f that, upon decreasing the length Lx of the QD, the linewidth of the eigenmodes and thereby their coupling ν ∝ √Γ to the bulk superconducting electrons increases, which is a well-known effect due to increased surface-bulk scattering 28,38 . These effects will be used in the following to continuously pitch QD eigenmodes with different couplings through EF by accordingly tuning Lx.
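The Lx^-2 pitching of the eigenmodes can be checked with a short particle-in-a-box estimate. The following is a minimal sketch (not the authors' analysis code), assuming the hard-wall box dispersion with the effective mass meff = 0.51 m0 and band onset E0 = -30.3 meV quoted in the caption of Fig. 1e; all other names are ours.

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
M0 = 9.1093837015e-31   # kg
EV = 1.602176634e-19    # J

def box_energy_meV(nx, ny, Lx, Ly, meff=0.51 * M0, E0=-30.3):
    """Eigenenergy (meV, relative to EF) of mode [nx, ny] of a hard-wall
    2D box, offset by the surface-state band bottom E0."""
    kin = (HBAR * np.pi) ** 2 / (2.0 * meff) * ((nx / Lx) ** 2 + (ny / Ly) ** 2)
    return E0 + kin / EV * 1e3  # J -> meV

# Pitching the [3, 1] mode towards EF by shrinking Lx (Ly fixed at 9.1 nm)
for Lx_nm in (24.0, 20.0, 16.4):
    E = box_energy_meV(3, 1, Lx_nm * 1e-9, 9.1e-9)
    print(f"Lx = {Lx_nm:5.1f} nm  ->  E[3,1] = {E:+6.1f} meV")
```

With these numbers the [3, 1] mode moves from roughly -10 meV to about +3 meV as Lx shrinks from 24.0 nm to 16.4 nm, consistent with the near-EF mode mapped in Fig. 2b.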
Emergence of in-gap states
We now focus on the low-energy properties of the QDs in the region of the superconducting gap. dI/dV spectroscopy of the QD presented in Fig. 1d shows clean superconductor-insulator-superconductor (SIS) tunneling without any in-gap states at spatial locations where no QD eigenmodes are present (gray curve in Fig. 2a and gray cross in Fig. 1e): sharp and prominent peaks appear at bias voltages corresponding to e·V = ±(∆t + ∆s), indicating tunneling between the coherence peaks of tip and sample. Only a weaker thermal resonance peak at e·V = ±(∆t − ∆s), i.e., at roughly zero bias, is observed (the bias range |e·V| < ∆t is left out in Figs. 2a,c, see Supplementary Note 2 for more details). This confirms that the bulk gap of Ag(111) is fully developed for the given island thickness. In contrast, when measuring on a maximum of the QD eigenmode closest to EF, we find a pair of sharp electronic states at particle-hole symmetric energies ±(∆t + ε±) within the gap (blue curve in Fig. 2a and blue cross in Fig. 1e). When mapping the spatial distribution of these states (Fig. 2b), we find that they closely resemble the shape of the expected QD eigenmode at E ≈ EF as obtained from particle-in-a-box simulations (rightmost panel). Similar sharp, individual in-gap states are also observed in naturally occurring Ag(111) regions exhibiting strong 2D confinement on the sample (see Supplementary Note 5). To gain more insight into the nature of these in-gap states, we tune the QD's length Lx and study the evolution of both the eigenmodes outside and inside the gap (Fig. 2c). As expected, the eigenmodes with quantum numbers [nx, 1] outside the gap move in energy following the well-known Lx^-2 behavior (white dashed lines, see also Supplementary Note 4). Moreover, it can be seen that the peaks at ±(∆t + ∆s) (white vertical dashed lines) remain at the same energy for all QD sizes, indicating that they stem from the proximitized Ag bulk states. Most notably, it can be observed that the in-gap states at varying energies ±(∆t + ε±) appear whenever a QD eigenmode energy Er approaches EF. The absolute value for ε± is lowest when the QD's length Lx is such that Er would cross EF if extrapolated from outside the superconducting gap to the energetical region inside the gap (see dashed lines in Fig. 2c).
We evaluate this minimum value εmin for different eigenmodes of the QD and compare the results with their estimated energetic broadening Γ at energies outside of the gap (see Supplementary Note 4 for details on the analysis). The energetic broadening Γ is known to be predominantly related to the inverse lifetime of quasiparticles in the respective QD eigenmode for energies close to EF. Furthermore, as noted above, Γ of the eigenmodes close to EF decreases with increased QD size 28,38 . Indeed, this trend can be seen in Fig. 2d for the eigenmodes with increasing nx, i.e., for wider QDs. As a main result of this work, there is a clear correlation between εmin and Γ ∝ ν²: For increased couplings Γ of a zero-energy QD eigenmode to the substrate superconductor, εmin is shifted from the Fermi energy towards the substrate's gap edge ∆s, i.e., the QD eigenmode's gap is gradually getting wider (see Fig. 2d). In the following, the origin of the in-gap states will be theoretically investigated and the link to the superconducting proximity effect will be substantiated.
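To make the εmin(Γ) trend of Fig. 2d quantitative, one can solve the in-gap pole condition of the resonance-level model numerically. The sketch below is our own illustration (not the published fit), assuming the standard bound-state condition ε(√(∆s² − ε²) + Γ) = Γ∆s that follows from the hybridization self-energy of a single level at εr = 0; see also the Methods section.

```python
import numpy as np

def eps_min(Gamma, Delta_s=1.0):
    """Lowest in-gap state energy for eps_r = 0: root of
    f(e) = e*(sqrt(Delta_s^2 - e^2) + Gamma) - Gamma*Delta_s on (0, Delta_s)."""
    f = lambda e: e * (np.sqrt(Delta_s**2 - e**2) + Gamma) - Gamma * Delta_s
    lo, hi = 0.0, Delta_s * (1.0 - 1e-9)   # f(lo) < 0 < f(hi)
    for _ in range(60):                     # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for G in (0.1, 1.0, 3.0):
    print(f"Gamma = {G:3.1f} Delta_s  ->  eps_min = {eps_min(G):.2f} Delta_s")
```

Under this assumption εmin grows from about 0.09 ∆s at Γ = 0.1 ∆s towards about 0.85 ∆s at Γ = 3 ∆s, i.e., the eigenmode's gap approaches ∆s for strong coupling, as observed.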
Theoretical model of a spin-degenerate level coupled to a superconducting bath
The observation of in-gap states is a surprising result, since particle-hole-symmetric states inside the gap of an s-wave superconductor are commonly believed to only appear for magnetic impurities 40,41 . In-gap states emerging around non-magnetic impurities are mostly considered to be evidence for unconventional SC 42,43 . In our samples, we exclude that magnetism plays a role on the pure and well-characterized noble-metal surface with only nonmagnetic adatoms. Furthermore, Nb is a conventional s-wave superconductor and the proximity effect induced in a normal metal with negligible spin-orbit coupling is not expected to induce considerable unconventional pairing. However, as shown theoretically 50 years ago by Machida and Shibata 12 , there is always a sub-gap solution for the problem of a localized spin-degenerate level, as present in the QDs in our samples, coupled to a superconducting bath due to resonance scattering 12,40 . We consider the Hamiltonian of Machida and Shibata in Ref. 12,

ℋ = Σkσ ξk c†kσ ckσ − Σk (∆s c†k↑ c†−k↓ + h.c.) + εr Σσ d†σ dσ + ν Σkσ (c†kσ dσ + h.c.), (1)

where ckσ (c†kσ) and dσ (d†σ) refer to the annihilation (creation) operators of superconducting bath electrons and the localized level, respectively. ξk denotes the superconductor's normal electronic dispersion, ν ∝ √Γ is the coupling strength of the localized level at energy εr to the bath and ∆s is the order parameter of s-wave superconductivity in the bath. Calculating the local density of states (LDOS) of the localized level (see Methods) confirms that there is always a pair of bound states at in-gap energies 12 for all nonvanishing ν. In the following, we will refer to these states as Machida-Shibata-States (MSSs). We depict the energy evolution of MSSs as a function of the localized level's energy εr in the normal state for different choices of Γ in Fig. 3. For Γ/∆s << 1 (Fig. 3a), the localized level couples only weakly to the superconductor and its energy evolves mostly continuously through the gap while its particle-hole-symmetric partner state at −ε features negligible spectral weight in the LDOS. As Γ/∆s is increased (Fig. 3b), the states at ±ε show a pronounced anti-crossing behavior as εr approaches zero. Moreover, both states at ±ε acquire a finite spectral weight in the LDOS, indicating that the superconductor mixes particle- and hole-like states. This situation is closely reminiscent of the experimental data in Fig. 2c. For strong coupling Γ/∆s >> 1, the in-gap states shift close to ∆s irrespective of εr, consistent with the regular proximity effect being induced into the localized resonance level leading to a full superconducting gap. We observe a similar effect in a tight-binding description of a QD weakly coupled to a superconducting surface layer (Supplementary Note 6), corroborating that the simplified description of the QD's eigenmode as a single localized quantum level εr shown in Fig. 3 is appropriate. The predicted shift of the MSSs' minimal energy with increasing Γ is included as a grey dashed line in Fig. 2d. Its good quantitative agreement with the experimental data without additional fitting parameters suggests that the resonances found experimentally are indeed the previously unobserved MSSs. While these results demonstrate that the lowest-energy quasiparticle excitations of the local level become gradually gapped out with increasing coupling to the superconducting bath, it is not a priori clear whether the local level experiences proximity superconductivity. To this end, we perform a Schrieffer-Wolff transformation of Eq.
(1) to obtain the effective low-energy theory of the level when εr lies within the superconductor's gap (see Supplementary Note 7 for details). The resulting Hamiltonian reads
ℋ′D = Σσ εr (1 + ∆ind/∆s) d†σ dσ − (∆ind d†↑ d†↓ + h.c.). (2)
Indeed, it includes a term for the induced pairing energy ∆ind of the level's quasiparticles, resembling the BCS-like mean-field expression for superconductivity. Based on Eq. (2), it can be seen that, for εr = 0, the lowest energy eigenstates of the system are energetically located at ε± = ±∆ind. Thereby, the values of εmin we measured for the different QD eigenmodes (Fig. 2d) can indeed be identified with the proximity gap magnitudes ∆ind, which approach ∆s for strong coupling Γ.
Visualization of particle-hole-mixing
Notably, the observed in-gap states at +ε and −ε are not symmetric in intensity. Their peak asymmetry in spectral weight can be analyzed in terms of the Bogoliubov mixing angle θB = ArcTan(√(|u|²/|v|²)) = ArcTan(√(h+/h−)).
Here, u and v are the respective particle- and hole-amplitudes of the Bogoliubov quasiparticles, which are related to the peak heights h± at positive and negative peak energies ε± measured in tunneling spectroscopy 44 . The results are shown in Fig. 4. For maximal particle-hole mixing (|u|² = |v|²), the angle θB equals π/4. For Bogoliubov quasiparticles, this case is expected when their energy approaches the pairing energy ε± ≈ ±∆ind. In the experimental data, we find a value of θB ≈ π/4 whenever ε̄ ≈ εmin (εr ≈ 0, see Supplementary Note 4). This finding further supports the above conjecture that εmin can be interpreted as a proximity-induced superconducting pairing ∆ind in the QD resonance level. When moving to larger in-gap state energies, θB either increases (for εr > 0) or decreases (for εr < 0). This trend is found consistently for all eigenmodes and qualitatively agrees well with the expectations for Bogoliubov excitation solutions of Eq. (2) (dashed gray line in Fig. 4).
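The gray dashed line of Fig. 4 can be reproduced from the standard BCS coherence factors of the quasiparticles of Eq. (2). The snippet below is a hedged sketch under that assumption (εr > 0 branch; for εr < 0 the roles of u and v are swapped and θB falls below π/4):

```python
import numpy as np

def bogoliubov_angle(E, Delta_ind=1.0):
    """theta_B = ArcTan(sqrt(u^2/v^2)) for a quasiparticle of energy E >= Delta_ind,
    with u^2 = (1 + xi/E)/2, v^2 = (1 - xi/E)/2 and xi = sqrt(E^2 - Delta_ind^2)."""
    xi = np.sqrt(E**2 - Delta_ind**2)
    u2 = 0.5 * (1.0 + xi / E)
    v2 = 0.5 * (1.0 - xi / E)
    return np.arctan(np.sqrt(u2 / v2))

for ratio in (1.0, 1.2, 1.5, 2.0):
    print(f"E/Delta_ind = {ratio:3.1f}  ->  theta_B = {bogoliubov_angle(ratio)/np.pi:.3f} pi")
```

At E = ∆ind the amplitudes are equal and θB = π/4, while θB grows towards π/2 for energies well above the induced gap, matching the trend of the measured angles.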
Discussion
The appearance of sharp in-gap states in STM experiments on superconductors is typically considered to be a fingerprint of either a local magnetic moment 40,41 or unconventional superconducting pairing 42,43 . Our experimental observation of MSSs clearly challenges this conclusion. The energy of the MSSs critically depends on the ratio Γ/∆s of the resonance level linewidth Γ measured outside of the superconducting gap and the superconducting pairing energy ∆s present in the bath. For typical localized levels residing on single atomic impurities, this ratio is Γ/∆s >> 1 and thus the bound states are located at energies very close to the coherence peaks of the bath superconductor. Therefore, these resonances were previously neither studied extensively theoretically nor experimentally observed. However, the linewidths Γ of the eigenmodes of the QDs studied here are of similar magnitude as the superconducting gap, which leads to the low-energy in-gap states depicted in Fig. 2c that are well split off from the coherence peaks. It should be noted that the spectral function displays two main features (see Fig. 3): the MSSs, which are - in theory - infinitely sharp in-gap states for all values of Γ, and the Lorentzian resonance level of linewidth Γ outside of the gap. The measured linewidth of the MSSs is ΓMSS ≈ 80 µeV, which is just at the border of our experimental resolution, so the intrinsic linewidth of the MSSs is even smaller, ΓMSS < 80 µeV. In turn this implies that the lifetime of Bogoliubov quasiparticles in the MSSs exceeds τ = ℏ/ΓMSS > 8 ps at T = 4.5 K. The sharpness of the in-gap states can be understood as a consequence of negligible scattering into the gapped bulk states, unlike in the metallic state at energies above ∆s where the level obtains a broadening Γ >> ΓMSS.
As we have shown, the continuous shift of the MSSs to energies ±∆s as the level's coupling Γ to a superconductor is increased can be interpreted as the most miniature version of the proximity effect being induced into a single quantum level. The strongest coupling is observed for the narrowest investigated quantum dot (nx = 1 in Fig. 2d), resulting in a comparably large gap ∆ind of up to 85% of ∆s induced into the QD eigenmode. This strongly suggests that the proximity effect originates from scattering of the surface state at the QDs' walls, which is maximal for the narrowest QDs, as also speculated by recent works 19,22 . Since the coupling Γ is controlled by the QD size, the induced gap ∆ind is found to be tunable as well (see Fig. 2d). Moreover, as demonstrated in Fig. 4, the experimentally observed resonance peaks behave like Bogoliubov excitations, which are expected to carry an energy dependent fractional charge 45 of e(|u|² − |v|²). This could potentially be directly probed by STM-based shot-noise measurements 46 , opening avenues for studying quasiparticles with tunable fractional charge on the atomic-scale.
We anticipate that the concept of impurity-supported proximity-induced Cooper pairing in atom-by-atom designed quantum corrals could be helpful in general to induce SC into arbitrary surface states, potentially also combined with non-trivial topology. Amongst others, the latter presents a pathway for the creation of unconventional SC and Majorana bound states 5,6,19,22,47,48 . Moreover, patterning the surface states of (111) noble metal surfaces by precisely positioned scattering centers has evolved into one of the most promising platforms in the direction of artificial lattices. These have been shown to host Dirac fermions 49,50 , flat bands [51][52][53] , wavefunctions in fractal geometries 54 or topologically non-trivial states 53,55 . Eventually, our results facilitate studying the interaction of these exotic phenomena with superconducting pairing in a simple and tunable platform. Notably, while Coulomb interactions inside the noble metal QDs we study here are typically screened well by the charge carriers in the system's bulk, it would be interesting to extend this platform towards reduced screening, potentially enabling atomic scale studies of the crossover from spin-degenerate to spinful quantum dots coupled to superconductors 56 .
Methods
Experimental procedures
The experiments were performed in a commercially purchased SPECS STM system operated at T = 4.5 K which is equipped with home-built UHV chambers for sample preparation 57 . A Nb(110) single crystal was used as a substrate and cleaned by high temperature flashes to T ≈ 2000 K with an e-beam heater. As shown previously 36 , this method yields an ordered but oxygen-reconstructed Nb(110) surface. Ag was deposited from an e-beam evaporator using a high-purity rod at a deposition rate of about 0.1 monolayers (ML)/min. In agreement with previous studies, evaporation of Ag at elevated temperatures leads to the formation of two pseudomorphic monolayers of Ag followed by Stranski-Krastanov growth of large Ag(111) islands (see Supplementary Note 1). In order to get preferably small and thin islands, we grew Ag islands in a three-step process, starting with the deposition of 2 MLs at T ≈ 600 K creating two closed wetting layers. In a second step, the temperature was reduced to T ≈ 400 K to limit the lateral diffusion of Ag on the surface and to create more nucleation centers for the Stranski-Krastanov islands. Under these conditions, another 2 MLs of Ag were deposited, followed by three additional MLs grown at T ≈ 600 K again to guarantee a well annealed surface of the topmost layers. STM images were obtained by regulating the tunneling current Istab to a constant value with a feedback loop while applying a constant bias voltage Vstab across the tunneling junction. For measurements of differential tunneling conductance (dI/dV) spectra, the tip was stabilized at bias voltage Vstab and current Istab as individually noted in the figure captions. In a next step, the feedback loop was switched off and the bias voltage was swept from -Vstab to +Vstab. The dI/dV signal was measured using standard lock-in techniques with a small modulation voltage Vmod (RMS) of frequency f = 1097.1 Hz added to Vstab. dI/dV line-profiles were acquired recording multiple dI/dV spectra along a one-dimensional line of lateral positions on the sample, respectively. Note that the tip was not stabilized again after each individual spectrum was acquired but the line-profiles were measured in constant-height mode. This avoids artifacts stemming from a modulated stabilization height. At the chosen stabilization parameters, the contribution of multiple Andreev reflections and direct Cooper pair tunneling to the superconducting tip can be neglected (see Supplementary Note 2). Throughout this work, we use Nb tips made from a mechanically cut and sharpened high-purity Nb wire. The tips were flashed in situ to about 1500 K to remove residual contaminants or oxide layers. The use of superconducting tips increases the effective energy resolution of the experiment beyond the Fermi-Dirac limit 58 but requires careful interpretation of the acquired dI/dV spectra. These are proportional to the convolution of the sample's LDOS, the superconducting tip DOS and the difference of the Fermi-Dirac distributions of tip and sample. Notably, the latter can play a large role when measuring at T = 4.5 K. Details on the interpretation of SIS tunneling spectra and on the determination of the tip's superconducting gap ∆t can be found in Supplementary Note 2. Ag atoms were reproducibly extracted out of the Ag(111) surface by approaching the tip to the surface as shown in Ref. 39 and Supplementary Note 3. Ag QDs were constructed by lateral atom manipulation 59,60 at low tunneling resistances of R ≈ 100 kΩ.
Theoretical model for resonance scattering at a spin-degenerate level coupled to a superconducting bath
We consider a system of a single spin-degenerate local level coupled to an s-wave superconducting 3D bath, following the model introduced in Ref. 12 and the Hamiltonian given in Eq. (1). We calculate the LDOS at the local level. For that, we use the Green functions' equations of motion in energy space 61 ,

E′ Gσ(E) = 〈{dσ, d†σ}〉 + 〈〈[dσ, ℋ]; d†σ〉〉E′ , (4)

where dσ (d†σ) is an electron annihilation (creation) operator, Gσ(E) = 〈〈dσ; d†σ〉〉E is the shorthand notation for the usual retarded Green's function 61 , E′ = E + iη, and η is a small and positive real number. By solving the system of equations of motion Eq. (4) for the Hamiltonian in Eq. (1), one obtains the local Green's function Gσ(E) of the level,
where Γ = πρν², ρ = V kF m/(2π²ℏ²) is the density of states per spin species of the substrate above the critical temperature at the Fermi level, and V is its volume. Furthermore, σ = ±1 represents the spin up and down contribution, respectively, and the standard approximation of linearizing ξk = ℏvF(k − kF) around the Fermi energy has been used, with vF and kF being the Fermi velocity and momentum, respectively. The LDOS is given by

LDOS(E) = −(1/π) Σσ Im Gσ(E). (7)
We note the emergence of in-gap states as found in Ref. 12 . In contrast to a metallic bath, where the scattering results in a spectral broadening of the local level, the superconducting bath induces superconductivity by proximity to the local level. Hence, when εr lies within the superconductor's gap, the state at εr splits into two particle-hole symmetric ones around EF. Notably, for energy scales much larger than ∆s, Eq. (7) reduces to a typical Lorentzian LDOS of width Γ at position εr, as observed in the experiment.
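A compact way to visualize both features of the spectral function (the sharp MSSs inside the gap and the Lorentzian of width Γ outside) is to evaluate the level's Green's function numerically. The following sketch is ours, assuming the standard Nambu-space hybridization self-energy Σ(E) = −Γ(E τ0 + ∆s τ1)/√(∆s² − E²) for a single level coupled to a BCS bath; η plays the role of the small broadening used in Fig. 3.

```python
import numpy as np

def ldos(E, eps_r, Gamma, Delta_s, eta=0.03):
    """Electron LDOS of a spin-degenerate level coupled to an s-wave bath."""
    z = E + 1j * eta                        # retarded energy E' = E + i*eta
    s = np.sqrt(Delta_s**2 - z**2)          # principal branch, Re(s) >= 0
    tau0 = np.eye(2)
    tau1 = np.array([[0.0, 1.0], [1.0, 0.0]])
    tau3 = np.diag([1.0, -1.0])
    Ginv = z * tau0 - eps_r * tau3 + Gamma * (z * tau0 + Delta_s * tau1) / s
    return -np.linalg.inv(Ginv)[0, 0].imag / np.pi

# Machida-Shibata peaks for eps_r = 0 and Gamma = 1.0 * Delta_s (cf. Fig. 3b)
Delta_s = 1.0
E = np.linspace(-2.5, 2.5, 2001) * Delta_s
rho = np.array([ldos(e, 0.0, 1.0 * Delta_s, Delta_s) for e in E])
sub = np.where(np.abs(E) < Delta_s)[0]
print("in-gap peak near E =", E[sub[np.argmax(rho[sub])]])
```

For Γ/∆s much larger than one the same routine pushes the in-gap peaks towards ±∆s, and for |E| well above ∆s it reproduces a Lorentzian of width Γ centered at εr.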
The obtained spin-degenerate single-level Hamiltonian with proximity induced pairing (Eq. (2)) is equivalent to the Green's function approach above to second order in the coupling constant ν ∝ √Γ. Assuming a spherical Fermi surface, we derive the explicit form of the induced superconducting term and the correction to the chemical potential of the quantum level in Supplementary Note 7.
Data availability
The authors declare that the data supporting the findings of this study are available within the paper and its supplementary information files.
Code availability
The analysis codes that support the findings of the study are available from the corresponding authors on reasonable request.
Fig. 1 | Atom-by-atom built quantum dots coupled to the superconducting substrate. a, 3D rendering of the constant-current STM topography of a Ag island with a thickness of 12 nm. The simultaneously measured dI/dV signal is used as the 3D model's texture, showing quasiparticle interference patterns of the surface state electrons on the Ag(111) surface termination. The island grows on top of a pseudomorphic Ag double layer on Nb(110) (sketched profile, see Supplementary Note 1 for details).
Fig. 2 | In-gap states of near-zero-energy pitched QD eigenmodes. a, dI/dV spectra measured at two different positions in the QD with Lx = 16.4 nm, Ly = 9.1 nm shown in Figs. 1d-f. The respective positions are marked by gray and blue crosses in Fig. 1e. Both spectra were acquired at the same tip height. The values of the tip's superconducting gap eV = ±∆t and the sum eV = ±(∆t + ∆s) of the tip gap and the proximity induced Ag bulk gap ∆s are marked by dashed orange and purple lines, respectively. The bias range |e·V| < ∆t is left out in a and c, see Supplementary Note 2 for examples of spectra over the full voltage range. In-gap states appear at particle-hole symmetric energies ±(∆t + ε±), which are marked by black arrows. b, Left: Constant-height dI/dV maps measured at the energies of the in-gap state peaks in the same area as in Fig. 1e. Right: Particle-in-a-box simulation evaluated at zero energy with dominant contribution of the eigenmode with [nx, ny] = [3, 1]. c, Evolution of averaged dI/dV spectra from dI/dV line-profiles measured along the central vertical axis of different QDs (see, e.g., dashed orange vertical lines in Figs. 1c and d) as a function of the QD length Lx. The white dashed lines mark the evolution of the eigenmodes with ny = 1 and nx = {1, 2, 3, 4} obtained from fitting the dI/dV spectra at energies outside of the gap in Supplementary Note 4. The length of the QD presented in panels a and b is marked by the blue arrow on the left side. d, Linewidths Γ of different QD eigenmodes extracted from fitting data from different QDs to Lorentzian peaks at energies outside of the gap. These are compared to the minimal energies of the in-gap states found when εr ≈ 0 (see Supplementary Note 4 for details of the fitting). The gray dashed line is the expected theoretical relation between εmin and Γ for a spin-degenerate level coupled to a superconducting bath 12 . Parameters: Vstab = -15 mV, Istab = 4 nA, Vmod = 50 µV for panels a and c; Vstab = -15 mV, Istab = 4 nA, Vmod = 100 µV for panel b. Note that further QDs constructed and analyzed as described in Supplementary Note 5 are included in panel d.
Fig. 3 | Machida-Shibata states (MSS) from resonance scattering at a spin-degenerate localized level. a, Energy dependent local electron density of states LDOS(E) of a single localized level at energy εr coupled to a superconducting bath with the parameter ∆s. The coupling strength Γ ∝ ν² (see Methods) equals 0.1 ∆s. The induced gap ∆ind and the energies of the in-gap states ε± are marked. b, Same as panel a but for Γ = 1.0 ∆s. c, Same as panel a but for Γ = 3.0 ∆s. An energetic broadening of η = 0.03 ∆s has been added in all panels (see Methods).
Fig. 4 | Particle-hole mixture of the in-gap states. Bogoliubov angle θB of the in-gap states with different energies ε̄ normalized to their minimal energies εmin. The gray dashed line represents the expected relation for Bogoliubov quasiparticles with an induced gap of ∆ind = εmin as derived from the effective Hamiltonian in Eq. (2) (see Supplementary Note 7 for details). Inset: Bogoliubov quasiparticles are coherent combinations of electrons (filled circle) and holes (empty circle). The Bogoliubov angle θB of a quasiparticle quantifies the amount of particle-hole mixing.
Competing interests
The authors declare no competing interests.
1. de Gennes, P. G. Boundary Effects in Superconductors. Rev. Mod. Phys. 36, 225-237 (1964).
2. Buzdin, A. I. Proximity effects in a superconductor/ferromagnet junction. J. Phys. Chem. Solids 69, 3257-3260 (2008).
3. Schneider, L. et al. Topological Shiba bands in artificial spin chains on superconductors. Nat. Phys. 17, 943-948 (2021).
4. Kezilebieke, S. et al. Topological superconductivity in a van der Waals heterostructure. Nature 588, 424-428 (2020).
5. Jäck, B. et al. Observation of a Majorana zero mode in a topologically protected edge channel. Science 364, 1255-1259 (2019).
6. Fu, L. & Kane, C. L. Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator. Phys. Rev. Lett. 100, 096407 (2008).
7. Palacio-Morales, A. et al. Atomic-scale interface engineering of Majorana edge modes in a 2D magnet-superconductor hybrid system. Sci. Adv. 5, eaav6600 (2019).
8. Bergeret, F. S., Volkov, A. F. & Efetov, K. B. Odd triplet superconductivity and related phenomena in superconductor-ferromagnet structures. Rev. Mod. Phys. 77, 1321-1373 (2005).
9. Zhang, R.-X., Cole, W. S., Wu, X. & Das Sarma, S. Higher-Order Topology and Nodal Topological Superconductivity in Fe(Se,Te) Heterostructures. Phys. Rev. Lett. 123, 167001 (2019).
10. Crommie, M. F., Lutz, C. P. & Eigler, D. M. Confinement of Electrons to Quantum Corrals on a Metal Surface. Science 262, 218-220 (1993).
11. Manoharan, H. C., Lutz, C. P. & Eigler, D. M. Quantum mirages formed by coherent projection of electronic structure. Nature 403, 512-515 (2000).
12. Machida, K. & Shibata, F. Bound States Due to Resonance Scattering in Superconductor. Prog. Theor. Phys. 47, 1817-1823 (1972).
13. Novoselov, K. S., Mishchenko, A., Carvalho, A. & Castro Neto, A. H. 2D materials and van der Waals heterostructures. Science 353, (2016).
14. Vaňo, V. et al. Artificial heavy fermions in a van der Waals heterostructure. Nature 599, 582-586 (2021).
15. Romming, N. et al. Writing and Deleting Single Magnetic Skyrmions. Science 341, 636-639 (2013).
16. Liu, C.-X., Zhang, S.-C. & Qi, X.-L. The Quantum Anomalous Hall Effect: Theory and Experiment. Annu. Rev. Condens. Matter Phys. 7, 301-321 (2016).
17. Eschrig, M. Spin-polarized supercurrents for spintronics: a review of current progress. Reports Prog. Phys. 78, 104501 (2015).
18. Tomanic, T., Schackert, M., Wulfhekel, W., Sürgers, C. & Löhneysen, H. V. Two-band superconductivity of bulk and surface states in Ag thin films on Nb. Phys. Rev. B 94, 220503 (2016).
19. Potter, A. C. & Lee, P. A. Topological superconductivity and Majorana fermions in metallic surface states. Phys. Rev. B 85, 094516 (2012).
20. Manna, S. et al. Signature of a pair of Majorana zero modes in superconducting gold surface states. Proc. Natl. Acad. Sci. U. S. A. 117, 8775-8782 (2020).
21. Hasan, M. Z. & Kane, C. L. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045-3067 (2010).
22. Wei, P., Manna, S., Eich, M., Lee, P. & Moodera, J. Superconductivity in the Surface State of Noble Metal Gold and its Fermi Level Tuning by EuS Dielectric. Phys. Rev. Lett. 122, 247002 (2019).
23. Chen, M. et al. Selective trapping of hexagonally warped topological surface states in a triangular quantum corral. Sci. Adv. 5, eaaw3988 (2019).
24. Li, J., Schneider, W.-D., Berndt, R. & Crampin, S. Electron Confinement to Nanoscale Ag Islands on Ag(111): A Quantitative Study. Phys. Rev. Lett. 80, 3332-3335 (1998).
25. Stilp, F. et al. Very weak bonds to artificial atoms formed by quantum corrals. Science 372, 1196-1200 (2021).
26. Freeney, S., Borman, S. T. P., Harteveld, J. W. & Swart, I. Coupling quantum corrals to form artificial molecules. SciPost Phys. 9, 085 (2020).
27. Eiguren, A. et al. Role of Bulk and Surface Phonons in the Decay of Metal Surface States. Phys. Rev. Lett. 88, 066805 (2002).
28. Heers, S., Mavropoulos, P., Lounis, S., Zeller, R. & Blügel, S. Lifetime reduction of surface states at Cu, Ag, and Au(111) caused by impurity scattering. Phys. Rev. B 86, 125444 (2012).
29. De Franceschi, S., Kouwenhoven, L., Schönenberger, C. & Wernsdorfer, W. Hybrid superconductor-quantum dot devices. Nat. Nanotechnol. 5, 703-711 (2010).
30. Freeney, S. E., Slot, M. R., Gardenier, T. S., Swart, I. & Vanmaekelbergh, D. Electronic Quantum Materials Simulated with Artificial Model Lattices. ACS Nanosci. Au 2, 198-224 (2022).
31. Khajetoorians, A. A., Wegner, D., Otte, A. F. & Swart, I. Creating designer quantum states of matter atom-by-atom. Nat. Rev. Phys. 1, 703-715 (2019).
32. Jolie, W. et al. Creating Tunable Quantum Corrals on a Rashba Surface Alloy. ACS Nano 16, 4876-4883 (2022).
33. Dvir, T. et al. Realization of a minimal Kitaev chain in coupled quantum dots. arXiv:2206.08045 1-35 (2022).
34. Schneider, L. et al. Precursors of Majorana modes and their length-dependent energy oscillations probed at both ends of atomic Shiba chains. Nat. Nanotechnol. 17, 384-389 (2022).
35. Stalzer, H., Cosceev, A., Sürgers, C. & Löhneysen, H. V. Field-screening properties of proximity-coupled Nb/Ag double layers. Europhys. Lett. 76, 121-127 (2006).
36. Odobesko, A. B. et al. Preparation and electronic properties of clean superconducting Nb(110) surfaces. Phys. Rev. B 99, 115437 (2019).
37. Kevan, S. D. & Gaylord, R. H. High-resolution photoemission study of the electronic structure of the noble-metal (111) surfaces. Phys. Rev. B 36, 5809-5818 (1987).
38. Crampin, S., Jensen, H., Kröger, J., Limot, L. & Berndt, R. Resonator design for use in scanning tunneling spectroscopy studies of surface electron lifetimes. Phys. Rev. B 72, 035443 (2005).
39. Limot, L., Kröger, J., Berndt, R., Garcia-Lekue, A. & Hofer, W. A. Atom Transfer and Single-Adatom Contacts. Phys. Rev. Lett. 94, 126102 (2005).
40. Balatsky, A. V., Vekhter, I. & Zhu, J. X. Impurity-induced states in conventional and unconventional superconductors. Rev. Mod. Phys. 78, 373-433 (2006).
41. Heinrich, B. W., Pascual, J. I. & Franke, K. J. Single magnetic adsorbates on s-wave superconductors. Prog. Surf. Sci. 93, 1-19 (2018).
42. Pan, S. H. et al. Imaging the effects of individual zinc impurity atoms on superconductivity in Bi2Sr2CaCu2O8+δ. Nature 403, 746-750 (2000).
43. Yazdani, A., Howald, C. M., Lutz, C. P., Kapitulnik, A. & Eigler, D. M. Impurity-Induced Bound Excitations on the Surface of Bi2Sr2CaCu2O8. Phys. Rev. Lett. 83, 176-179 (1999).
44. Fujita, K. et al. Bogoliubov angle and visualization of particle-hole mixture in superconductors. Phys. Rev. B 78, 054510 (2008).
45. Ronen, Y. et al. Charge of a quasiparticle in a superconductor. Proc. Natl. Acad. Sci. 113, 1743-1748 (2016).
46. Massee, F., Dong, Q., Cavanna, A., Jin, Y. & Aprili, M. Atomic scale shot-noise using cryogenic MHz circuitry. Rev. Sci. Instrum. 89, 093708 (2018).
47. Lüpke, F. et al. Proximity-induced superconducting gap in the quantum spin Hall edge state of monolayer WTe2. Nat. Phys. 16, 526-530 (2020).
48. Hart, S. et al. Induced superconductivity in the quantum spin Hall edge. Nat. Phys. 10, 638-643 (2014).
49. Gomes, K. K., Mar, W., Ko, W., Guinea, F. & Manoharan, H. C. Designer Dirac fermions and topological phases in molecular graphene. Nature 483, 306-310 (2012).
50. Gardenier, T. S. et al. p Orbital Flat Band and Dirac Cone in the Electronic Honeycomb Lattice. ACS Nano 14, 13638-13644 (2020).
51. Slot, M. R. et al. Experimental realization and characterization of an electronic Lieb lattice. Nat. Phys. 13, 672-676 (2017).
52. Drost, R., Ojanen, T., Harju, A. & Liljeroth, P. Topological states in engineered atomic lattices. Nat. Phys. 13, 668-671 (2017).
53. Kempkes, S. N. et al. Robust zero-energy modes in an electronic higher-order topological insulator. Nat. Mater. 18, 1292-1297 (2019).
54. Kempkes, S. N. et al. Design and characterization of electrons in a fractal geometry. Nat. Phys. 15, 127-131 (2019).
55. Freeney, S. E., van den Broeke, J. J., Harsveld van der Veen, A. J. J., Swart, I. & Morais Smith, C. Edge-Dependent Topology in Kekulé Lattices. Phys. Rev. Lett. 124, 236404 (2020).
56. Yoshioka, T. & Ohashi, Y. Numerical Renormalization Group Studies on Single Impurity Anderson Model in Superconductivity: A Unified Treatment of Magnetic, Nonmagnetic Impurities, and Resonance Scattering. J. Phys. Soc. Japan 69, 1812-1823 (2000).
57. Löptien, P., Zhou, L., Khajetoorians, A. A., Wiebe, J. & Wiesendanger, R. Superconductivity of lanthanum revisited: Enhanced critical temperature in the clean limit. J. Phys. Condens. Matter 26, (2014).
58. Pan, S. H., Hudson, E. W. & Davis, J. C. Vacuum tunneling of superconducting quasiparticles from atomically sharp scanning tunneling microscope tips. Appl. Phys. Lett. 73, 2992-2994 (1998).
59. Eigler, D. M. & Schweizer, E. K. Positioning single atoms with a scanning tunnelling microscope. Nature 344, 524-526 (1990).
60. Stroscio, J. A. & Eigler, D. M. Atomic and Molecular Manipulation with the Scanning Tunneling Microscope. Science 254, 1319-1326 (1991).
61. Odashima, M. M., Prado, B. G. & Vernek, E. Pedagogical introduction to equilibrium Green's functions: condensed-matter examples with numerical implementations. Rev. Bras. Ensino Física 39, (2016).
| [] |
[
"Numerical wetting benchmarks -advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method",
"Numerical wetting benchmarks -advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method"
] | [
"Muhammad Hassan Asghar \nMathematical Modeling and Analysis Group\nDarmstadtTU\n",
"Mathis Fricke \nMathematical Modeling and Analysis Group\nDarmstadtTU\n",
"Dieter Bothe \nMathematical Modeling and Analysis Group\nDarmstadtTU\n",
"Tomislav Marić \nMathematical Modeling and Analysis Group\nDarmstadtTU\n"
] | [
"Mathematical Modeling and Analysis Group\nDarmstadtTU",
"Mathematical Modeling and Analysis Group\nDarmstadtTU",
"Mathematical Modeling and Analysis Group\nDarmstadtTU",
"Mathematical Modeling and Analysis Group\nDarmstadtTU"
] | [] | The numerical simulation of wetting and dewetting of geometrically complex surfaces benefits from unstructured numerical methods because they discretize the domain with second-order accuracy. A recently developed unstructured geometric Volume-of-Fluid (VOF) method, the plicRDF-isoAdvector method, is chosen to investigate wetting processes because of its volume conservation property and high computational efficiency. The present work verifies and validates the plicRDF-isoAdvector method for wetting problems. We present four verification studies. The first study investigates the accuracy of the interface advection near walls. The method is further investigated for the spreading of droplets on a flat and a spherical surface, respectively, for which excellent agreement with the reference solutions is obtained. Furthermore, a 2D capillary rise is considered, and a benchmark comparison based on results from previous work is performed. The benchmark suite, input data, and Jupyter Notebooks used in this study are publicly available to facilitate further research and comparison with other numerical codes. | null | [
"https://export.arxiv.org/pdf/2302.02629v1.pdf"
] | 256,616,059 | 2302.02629 | 4f87f3c674adcb4443c652104ba808072ed8b0cf |
Numerical wetting benchmarks -advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method
Muhammad Hassan Asghar
Mathematical Modeling and Analysis Group
DarmstadtTU
Mathis Fricke
Mathematical Modeling and Analysis Group
DarmstadtTU
Dieter Bothe
Mathematical Modeling and Analysis Group
DarmstadtTU
Tomislav Marić
Mathematical Modeling and Analysis Group
DarmstadtTU
Numerical wetting benchmarks -advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method
wetting, geometrically complex surface, unstructured Volume-of-Fluid method
Introduction
The wetting of a solid surface by a liquid is encountered in many natural and technical processes, including the spreading of paint, ink, lubricant, dye, or pesticides. In many of the typical applications, the solid surface is not flat and homogeneous but shows geometrically complex structures, chemical heterogeneity or porosity. In particular, it has been demonstrated extensively [1] that features like surface structure and roughness can be used as a tool to significantly enhance the process performance. A good example is the increase in heat transfer in boiling using textured surfaces [2]. In order to describe and predict wetting processes on complex surfaces, it is necessary to develop simulation tools that can handle the complex geometry of the boundary, strong deformations of the fluid interface, as well as separation and merging of fluid structures. The two most well-known simulation methods for multiphase flows that naturally handle these requirements are the unstructured Volume of Fluid (VOF) method (cf. [3] for a recent review) and the Level Set method (cf. [4,5] for recent reviews). In this study, we have chosen the Volume of Fluid (VOF) method because it is widely used for two-phase flows in technical systems and since it allows for highly accurate conservation of phase-specific volumes based on the use of a phase indicator function.
The unstructured VOF method can be classified into two categories regarding the underlying approach for the advection of volume fractions, i.e., the discretized version of the phase indicator function - algebraic and geometric VOF methods. Algebraic VOF methods solve a linear algebraic system for the advection of the volume fraction field. A well-known OpenFOAM [6] solver that uses the algebraic VOF method is interFoam [7]. Although computationally very efficient, algebraic VOF methods may lead to inaccurate results [7,8] caused by artificial diffusion of the interface. On the other hand, geometric VOF methods reconstruct the fluid interface to approximate the phase-specific volumes fluxed through each face of the cell (see [3] for a recent review). A face-based fluxed volume can be obtained using either geometric volume calculation or the Reynolds transport theorem to compute the rate of change of volume passing through each face. The latter approach is underlying the plicRDF-isoAdvector geometric VOF method [8,9] developed by Roenby et al. [8].
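The artificial interface diffusion of purely algebraic advection is easy to demonstrate in one dimension. The following sketch is an illustration of the general phenomenon (first-order upwind advection of a volume-fraction step); it is not taken from interFoam, whose compressive schemes mitigate exactly this effect:

```python
import numpy as np

# Advect a sharp volume-fraction step with a first-order upwind scheme (u > 0).
n, cfl, steps = 200, 0.5, 200
alpha = np.where(np.arange(n) < n // 4, 1.0, 0.0)   # alpha = 1 left of the interface
for _ in range(steps):
    alpha[1:] -= cfl * (alpha[1:] - alpha[:-1])     # upwind update, inflow fixed at 1
width = np.count_nonzero((alpha > 0.01) & (alpha < 0.99))
print(f"initially sharp interface is smeared over {width} cells after {steps} steps")
```

Geometric methods such as plicRDF-isoAdvector avoid this smearing by reconstructing the interface before computing the fluxed volumes.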
The plicRDF-isoAdvector method is based on geometric approximations both in the interface reconstruction step and in the interface advection step (cf. section 2). It achieves second-order convergence of the geometrical VOF advection error in the L1-norm on unstructured meshes for time steps restricted to CFL numbers below 0.2 [9]. Gamet et al. [10] have validated the plicRDF-isoAdvector method for rising bubbles and benchmarked it against interFoam [7], Basilisk [11] and results from the Finite Element Methods available in the existing literature (TP2D [12,13], FreeLIFE [14], and MooNMD [15]). In particular, the plicRDF-isoAdvector method performed better than interFoam in capturing the helical trajectory and shape of the bubble for surface-tension-dominated flow. Siriano et al. [16] have tested the numerical method for rising bubbles with high-density ratios. These studies show that the plicRDF-isoAdvector method is an efficient and accurate unstructured geometrical VOF method for two-phase flow problems.
In order to verify and validate the plicRDF-isoAdvector method's suitability for wetting problems, we investigated its performance in four test cases. The first case study is the near-wall interface advection verification test proposed by Fricke et al. [17]. It investigates the numerical contact angle evolution when the interface is advected using a known divergence-free velocity field. The results are presented in section 3.
Next, droplet spreading on a flat surface is considered -a classical wetting validation case study. Dupont and Legendre [18], and Fricke et al. [19] have studied the droplet shape with stationary geometrical relations. The droplet spreading with and without the influence of gravity is also studied. The results are presented in section 4.
Subsequently, droplet spreading on a spherical surface [20] is considered for testing the accuracy of the plicRDF-isoAdvector method on unstructured near-wall refined meshes for a geometrically more complex surface. The results are presented in section 5.
Lastly, in line with Gründing et al. [21], we study the transient capillary rise based on full continuum mechanical simulations. Here, the Navier slip condition is used as a regularization of the moving contact line singularity described by Huh and Scriven [22]. Gründing et al. [21] show that both the dimensionless group identified in [23,24], and the Navier slip length have a major impact on the rise dynamics. The present study compares the simulation results for the capillary rise dynamics with the data from [21]. The results are presented in section 6.
For all four case studies, the input data, the primary data, the secondary data, and the post-processing utilities are publicly available online [25,26]. The post-processing, based on Jupyter notebooks [27], simplifies the verification/validation of wetting processes, not just for OpenFOAM, but for any other simulation software, provided the files storing the secondary data (error norms) are organized as described in the README.md file [28].

Figure 1: Schematic diagram of a multiphase domain Ω with contact line Γ(t). The interface Σ(t) intersects the solid domain boundary ∂Ω_wall at the contact angle θ.

This study uses the ESI OpenFOAM version (git tag OpenFOAM-v2112) [29]. The solver interFlow from the TwoPhaseFlow OpenFOAM project [30] (branch of2112) is used for the plicRDF-isoAdvector method. A Python library, PyFoam [31], version 2021.6, has been used for setting up parameter studies. For the initialization of the droplet, the volume fraction field is computed using the Surface-Mesh/Cell Approximation Algorithm (SMCA) [32] and exact implicit surfaces. An OpenFOAM submodule, cfMesh [33], is employed for discretizing the domain with unstructured meshes. The open-source software FreeCAD [34], version 0.18.4, is used to create .stl files.
Numerical method
In this section, we provide an overview of the plicRDF-isoAdvector numerical method [9,30].
Volume-of-Fluid method
Consider a physical domain Ω as illustrated in fig. 1, composed of two sub-domains occupied with different incompressible fluids, denoted by Ω + (t) and Ω − (t). The phase-indicator function
$$\chi(t, x) = \begin{cases} 1, & x \in \Omega^+(t) \\ 0, & x \notin \Omega^+(t) \end{cases} \qquad (1)$$
distinguishes the sub-domains Ω ± . Evidently, the sub-domain Ω + is then
Ω + (t) := {x ∈ Ω : χ(t, x) = 1}.(2)
The volume fraction α c (t) of the phase Ω + (t) inside a fixed control volume Ω c at time t is defined as
$$\alpha_c(t) := \frac{1}{|\Omega_c|} \int_{\Omega_c} \chi(t, x) \, dV. \qquad (3)$$
The value of the volume fraction inside a cell indicates whether phase Ω + (t) is present inside the cell. Indeed, it holds that
$$\alpha_c = 1 \iff \text{the cell is inside } \Omega^+(t),$$
$$\alpha_c \in (0, 1) \iff \text{the cell intersects the interface ("interface cell")},$$
$$\alpha_c = 0 \iff \text{the cell is inside } \Omega^-(t). \qquad (4)$$
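The classification in eq. (4) is straightforward to evaluate on a discrete volume fraction field. The following minimal Python sketch does so; the function name and the tolerance are our own illustrative choices, not part of the method, and a small tolerance absorbs round-off in $\alpha_c$:

```python
import numpy as np

def classify_cells(alpha, tol=1e-8):
    """Classify cells by volume fraction, cf. eq. (4).

    Returns an integer array: +1 for bulk Omega+, 0 for interface cells,
    -1 for bulk Omega-."""
    labels = np.zeros_like(alpha, dtype=int)
    labels[alpha >= 1.0 - tol] = 1
    labels[alpha <= tol] = -1
    return labels

alpha = np.array([1.0, 0.73, 0.0, 1e-12])
print(classify_cells(alpha))  # [ 1  0 -1 -1]
```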
Within each phase Ω + (t) or Ω − (t), mass conservation for incompressible fluids has the form of
∇ · v = 0,(5)
where v is the fluid velocity. In situations without phase change, the phase indicator function keeps its value along trajectories of the two-phase flow, i.e., it satisfies (in a distributional sense) the transport equation
∂ t χ + v · ∇χ = 0.(6)
The phase-indicator χ(t, ·) determines the phase-dependent local values of the physical quantities such as the single-field density ρ and viscosity µ in the single-field formulation of the two-phase Navier Stokes equations. For constant densities ρ + , ρ − and constant viscosities µ + , µ − , we have
ρ(t, x) = χ(t, x)ρ + + (1 − χ(t, x))ρ − ,(7)
and
µ(t, x) = χ(t, x)µ + + (1 − χ(t, x))µ − .(8)
The momentum balance reads as
∂ t (ρv) + ∇ · (ρvv) = −∇p + ∇ · µ(∇v + ∇v T ) + ρg + f Σ ,(9)
where $p$ is the pressure, $g$ is the gravitational acceleration, and $f_\Sigma = \sigma \kappa \, n_\Sigma \, \delta_\Sigma$ is the surface tension force. The VOF methods integrate eq. (6) over a fixed control volume $\Omega_c$ within a time step $[t^n, t^{n+1}]$, followed by the application of the Reynolds transport theorem. This leads to the integral form of the volume fraction transport equation (see [3] for details) according to
$$\alpha_c(t^{n+1}) = \alpha_c(t^n) - \frac{1}{|\Omega_c|} \int_{t^n}^{t^{n+1}} \int_{\partial\Omega_c} \chi(t, x) \, v \cdot n \, dS \, dt. \qquad (10)$$
The boundary ∂Ω c of the cell Ω c , used on the r.h.s. of eq. (10), is a union of surfaces (faces) that are bounded by line segments (edges), namely
∂Ω c = ∪ f ∈Fc S f .(11)
Using this decomposition (eq. (11)) of ∂Ω c , eq. (10) can be written as
$$\alpha_c(t^{n+1}) = \alpha_c(t^n) - \frac{1}{|\Omega_c|} \sum_{f \in F_c} \int_{t^n}^{t^{n+1}} \int_{S_f} \chi(t, x) \, v \cdot n \, dS \, dt. \qquad (12)$$
The double integral on the right-hand side of eq. (12) defines the phase-specific volume $V^\alpha_f$ fluxed over the face $S_f$ within the time interval $[t^n, t^{n+1}]$. Equation (12) is still exact since no approximations have been made so far. It is the basis of every geometric VOF method [3]; these methods all take the form
$$\alpha_c(t^{n+1}) = \alpha_c(t^n) - \frac{1}{|\Omega_c|} \sum_{f \in F_c} V^\alpha_f, \qquad (13)$$
differing only in how the fluxed phase-specific volume $V^\alpha_f$ is calculated. If one could calculate $V^\alpha_f$ exactly for arbitrary $v$ and $\chi$, and exactly represent the domain boundary $\cup_{c\in C} \, \partial\Omega_c$, eq. (13) would be an exact equation.
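To make the structure of eq. (13) concrete, the following Python sketch performs one explicit geometric VOF update from precomputed fluxed volumes. It assumes an owner/neighbour face addressing (as in OpenFOAM) and handles interior faces only; the names and data layout are illustrative, and the computation of $V^\alpha_f$ itself is left to the advection scheme:

```python
import numpy as np

def vof_update(alpha, cell_volumes, face_cells, fluxed_volumes):
    """One explicit geometric VOF step, cf. eq. (13).

    face_cells[f] = (owner, neighbour) cell indices of interior face f;
    fluxed_volumes[f] = V^alpha_f, the phase-specific volume fluxed from
    owner to neighbour during [t^n, t^{n+1}] (computed by the advection
    scheme, e.g. isoAdvector)."""
    alpha_new = alpha.copy()
    for f, (owner, neigh) in enumerate(face_cells):
        V = fluxed_volumes[f]
        alpha_new[owner] -= V / cell_volumes[owner]  # phase volume leaves owner
        alpha_new[neigh] += V / cell_volumes[neigh]  # and enters the neighbour
    return alpha_new
```

With this owner/neighbour convention, the signed sum over a cell's faces reproduces the face sum in eq. (13); boundary faces would contribute only to their owner cell.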
The plicRDF-isoAdvector method
Geometric VOF methods rely on a cell-based geometrical approximation of the interface. Each interface reconstruction algorithm thus aims to accurately compute the interface normal $n_{\Sigma,c}$ and position $x_{\Sigma,c}$. At first, the interface orientation algorithm approximates $n_{\Sigma,c}$. Then the interface positioning algorithm places the interface at $x_{\Sigma,c}$. The well-known Piecewise Linear Interface Calculation (PLIC) approach is the most common interface reconstruction technique. The Youngs' algorithm [35], the Least Squares Fit (LSF) algorithm [36], and the plicRDF reconstruction method [9] are examples of PLIC-based interface orientation algorithms.
The plicRDF reconstruction method [9] is an iterative variant of the Reconstructed Distance Function (RDF) method [37]: signed distance functions (RDFs) are reconstructed from the PLIC interface, and the interface normals are approximated as a discrete gradient of the RDFs. The method iteratively updates the approximated normals, and a residual stopping criterion is applied to minimize the number of reconstruction iterations. In the first time step, the plicRDF method estimates the initial interface normal as
$$n^{k=0}_{\Sigma,c} \equiv n_{\Sigma,c}(t = 0) \approx \frac{\nabla_c \alpha_c}{|\nabla_c \alpha_c|}, \qquad (14)$$
where $\nabla_c$ is the discrete unstructured Finite Volume least-squares gradient. The initial interface-normal estimates are used by a PLIC positioning algorithm (e.g., [38]) to place the interface, resulting in PLIC centroid positions $\{x^{k=0}_{\Sigma,i}\}_{i\in I}$, with $I$ the cell-local index set of all interface cells. For time steps $n > 1$, the estimate of $n^{k=0}_{\Sigma,c}$ in the cell $\Omega_c$ is based on the weighted average of the interface-normal values $\{n^{n-1}_{\Sigma,l}\}_{l\in N_c}$ from the previous time step $(t^{n-1})$, associated with the interface cells $\{\Omega_{\Sigma,l}\}_{l\in N_c}$ in the point-neighborhood of $\Omega_c$ (given by the cell-local index set $N_c$), and is obtained as
$$n^{k=0}_{\Sigma,c} = \frac{\sum_l w^{k=0}_l \, n^{n-1}_{\Sigma,l}}{\sum_l w^{k=0}_l}, \qquad (15)$$
and the weights are calculated using a Semi-Lagrangian method as
$$w^{k=0}_l = \left| n^{n-1}_{\Sigma,l} \cdot \left[(x_c - v_c \Delta t) - x^{n-1}_{\Sigma,l}\right] \right|, \qquad (16)$$
where $x_c$ is the centre of the cell $\Omega_c$, and $v_c$ is the velocity in the cell $\Omega_c$. Starting with an initial interface-normal estimate $n^{k=0}_{\Sigma,c}$, plicRDF improves the interface-normal orientation iteratively by reconstructing the signed distance function (RDF) in the tubular neighborhood of the interface cells (cf. fig. 2). The RDF $\Psi_c$ reconstructed at the centroid $x_c$ of the finite volume $\Omega_c$ is obtained as a weighted average of the distances $\{\tilde{\Psi}_n\}_{n\in N_c}$ associated with the centroids of the cells $\{\Omega_n\}_{n\in N_c}$ in the point-neighborhood of $\Omega_c$, given by the cell-index set $N_c$. The signed distance of the centroid $x_c$ to the interface plane in a neighboring interface cell $\Omega_n$ is
$$\tilde{\Psi}^k_n = n^k_{\Sigma,n} \cdot (x_c - x^k_{\Sigma,n}). \qquad (17)$$
From these distances, the RDF in the centre of the cell Ω c is obtained as
$$\Psi^k_c = \frac{\sum_n w^k_n \, \tilde{\Psi}^k_n}{\sum_n w^k_n}, \qquad (18)$$
with weights calculated as
$$w^k_n = \frac{\left| n^k_{\Sigma,n} \cdot (x_c - x^k_{\Sigma,n}) \right|^2}{\left| x_c - x^k_{\Sigma,n} \right|^2}. \qquad (19)$$
The Least Squares Finite Volume gradient of the RDF is used to update the interface-normals
$$n^{k+1}_{\Sigma,c} = \frac{\nabla_c \Psi^k_c}{\left| \nabla_c \Psi^k_c \right|}, \qquad (20)$$
where $k = 1, \ldots, 5$ iterations are used in [9]. The interface positioning algorithm uses $\{n^{k+1}_{\Sigma,n}\}_{n\in I}$ to position the PLIC interface planes, resulting in the centroids $\{x^{k+1}_{\Sigma,n}\}_{n\in I}$ needed for the next iteration.
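The RDF averaging of eqs. (17)-(19) is local and easy to express in code. The sketch below is our own illustrative implementation, not the OpenFOAM source: it evaluates the reconstructed distance at one cell centroid from the PLIC planes of its interface-cell neighbours; the updated normal would then follow from the least-squares gradient of these values, eq. (20):

```python
import numpy as np

def rdf_at_centroid(x_c, plic_normals, plic_centroids):
    """Reconstructed distance at the cell centroid x_c from the PLIC planes
    of neighbouring interface cells, cf. eqs. (17)-(19)."""
    x_c = np.asarray(x_c, float)
    # Signed distance to each neighbouring PLIC plane, eq. (17)
    psi = np.array([np.dot(n, x_c - x_s)
                    for n, x_s in zip(plic_normals, plic_centroids)])
    d = np.array([x_c - x_s for x_s in plic_centroids])
    # Weights of eq. (19): |n . (x_c - x_s)|^2 / |x_c - x_s|^2
    w = psi**2 / np.einsum('ij,ij->i', d, d)
    # Weighted average, eq. (18)
    return np.sum(w * psi) / np.sum(w)

# One neighbouring PLIC plane through the origin with normal e_z:
print(rdf_at_centroid([0.0, 0.0, 0.3],
                      plic_normals=[np.array([0.0, 0.0, 1.0])],
                      plic_centroids=[np.zeros(3)]))  # 0.3
```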
The isoAdvector advection scheme
The isoAdvector numerical method [8] calculates the phase-specific fluxed volume V α f as
$$V^\alpha_f = \int_{t^n}^{t^{n+1}} \int_{S_f} \chi(t, x) \, v \cdot n \, dS \, dt = \int_{t^n}^{t^{n+1}} v_f(t) \cdot n_f \int_{S_f} \chi(t, x) \, dS \, dt + O(h^2)$$
$$= \int_{t^n}^{t^{n+1}} \frac{F_f(t)}{|S_f|} \int_{S_f} \chi(t, x) \, dS \, dt + O(h^2) = \frac{F^n_f + F^{n+1}_f}{2\,|S_f|} \int_{t^n}^{t^{n+1}} \int_{S_f} \chi(t, x) \, dS \, dt + O(\Delta t^2) + O(h^2) \qquad (21)$$
where $v_f(t)$ is the face-centered velocity (introducing the $O(h^2)$ error), and $F_f := v_f \cdot n_f \, |S_f|$ is the volumetric flux across the face $S_f$; the vector $n_f$ is the unit normal of the face $S_f$. For a fixed prescribed velocity (for example, for the purpose of verifying advection), the velocity $v_f$ is known directly. Otherwise, it is obtained from the solution of the Navier-Stokes equations, as detailed in [39]. The isoAdvector scheme geometrically evaluates the integral
$$\int_{t^n}^{t^{n+1}} \int_{S_f} \chi(t, x) \, dS \, dt = \int_{t^n}^{t^{n+1}} A_f(t) \, dt, \qquad (22)$$
where A f (t) is the instantaneous face-area submerged in Ω + (t) at time t. Details on the submerged face-area integration are available in [8].
With the calculation of the phase-specific fluxed volume V α f , the cell volume fraction value is updated using eq. (13). The isoAdvector scheme restores strict boundedness α c ∈ [0, 1] by redistributing over-and-undershoots in α c in the upwind direction.
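A sketch of the flux evaluation of eqs. (21)-(22) is given below. IsoAdvector integrates the submerged face area analytically from the motion of the PLIC plane; here, purely for illustration, the time integral is approximated by a trapezoidal rule on samples of $A_f(t)$, and the bounding step is replaced by plain clipping, which, unlike the upwind redistribution of the actual scheme, is not volume conservative:

```python
import numpy as np

def phase_flux(F_n, F_np1, area_f, t, A_f):
    """Phase-specific fluxed volume V^alpha_f, cf. eq. (21).

    t, A_f: samples of the instantaneous submerged face area A_f(t) over
    [t^n, t^{n+1}]; the time integral of eq. (22) is approximated by a
    trapezoidal rule on these samples (illustration only)."""
    area_time_integral = np.sum(0.5 * (A_f[1:] + A_f[:-1]) * np.diff(t))
    return 0.5 * (F_n + F_np1) / area_f * area_time_integral

def clip_bounded(alpha):
    """Crude stand-in for isoAdvector's bounding step (see text):
    plain clipping is shown here and is NOT volume conservative."""
    return np.clip(alpha, 0.0, 1.0)

# Fully submerged face of area 1e-4 over one time step of 1e-3 s:
t = np.linspace(0.0, 1e-3, 5)
print(phase_flux(F_n=1e-4, F_np1=1e-4, area_f=1e-4, t=t, A_f=np.full(5, 1e-4)))
# -> 1e-07, i.e. the full volumetric flux F * dt carries phase "+"
```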
Boundary conditions at the solid boundary
In the following, we work in the frame of reference where the solid boundary ∂Ω is at rest. We assume the boundary of the domain to be impermeable, i.e.,
v ⊥ | ∂Ω = 0,(23)
where v ⊥ is the velocity component normal to the boundary. The Navier slip boundary condition (for a flat boundary) is given as
$$v_\parallel \big|_{\partial\Omega} = \lambda \left. \frac{\partial v_\parallel}{\partial y} \right|_{\partial\Omega}, \qquad (24)$$
where $v_\parallel$ is the velocity component tangential to the boundary, $\lambda$ is the slip length, and $\partial v_\parallel / \partial y$ is the velocity gradient in the wall-normal direction $y$. Note that the no-slip boundary condition, i.e.,
v || | ∂Ω = 0,(25)
is recovered from eq. (24) for λ = 0. In this study, we apply the impermeability condition (23) together with either the Navier slip (eq. (24)) or the no-slip (eq. (25)) condition.
It is important to note that for the no-slip case, the numerical method relies on the so-called "numerical slip" [40] to move the contact line. The numerical slip is an inherent property of the advection algorithm, which uses the face-centred velocity to transport the volume fraction field. The face-centred velocity at the boundary cell is not strictly zero and therefore allows for the motion of the contact line. Since the amount of numerical slip is related to the mesh size, one will typically find a mesh dependence of the contact line speed if a numerical slip is used. On the other hand, mesh convergence of the contact line speed can be reached with the Navier slip condition provided that the slip length λ is resolved by the mesh (see Section 6).
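A one-dimensional finite-difference sketch illustrates how a discrete Navier slip condition couples the wall velocity to the first cell value; this is our own illustrative discretization, not the implementation in the solver. Approximating the wall-normal gradient in eq. (24) one-sidedly, $v_w = \lambda (v_c - v_w)/\Delta y$, yields $v_w = \lambda v_c / (\Delta y + \lambda)$, which recovers the no-slip value for $\lambda = 0$:

```python
def wall_tangential_velocity(v_cell_tangential, dy, slip_length):
    """Discrete Navier slip, cf. eq. (24), sketched with a one-sided
    first-order difference for the wall-normal gradient:
        v_w = slip_length * (v_c - v_w) / dy
    solved for v_w. slip_length = 0 recovers no slip, eq. (25)."""
    return slip_length * v_cell_tangential / (dy + slip_length)

print(wall_tangential_velocity(1.0, dy=1e-4, slip_length=0.0))   # 0.0 (no slip)
print(wall_tangential_velocity(1.0, dy=1e-4, slip_length=1e-3))  # ~0.91
```

The second call also illustrates the mesh dependence of numerical slip: for a fixed slip length, refining $\Delta y$ increases the wall velocity towards $v_c$, so mesh convergence requires the slip length to be resolved.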
Boundary condition at the contact line
At the contact line, i.e., the region where the interface touches the domain boundary, the contact angle θ (see fig. 1) is defined by the geometric relation
cos θ = n Σ · n ∂Ω ,(26)
where n Σ is the interface unit normal and n ∂Ω is the outer unit normal vector of the domain boundary ∂Ω. In this paper, for simplicity, the contact angle θ is always prescribed as the equilibrium contact angle.
For the numerical treatment of boundary interface cells $\Omega_c$, Scheufler and Roenby [41] have introduced the concept of a ghost interface point $x_{G,c}$ placed on the opposite side of the boundary face $b$ of the cell $\Omega_c$ (fig. 3). It is the unique point at a distance of $2\Delta x$ from the PLIC centroid position $x_{\Sigma,c}$ for which the ghost interface normal $n_{G,c}$ is oriented such that it satisfies the contact angle boundary condition. To transmit this information about the contact angle into the algorithm, the point $x_{G,c}$ and the normal $n_{G,c}$ are then used as an additional contribution in eq. (19) to reconstruct the RDF via eq. (18) at the centroid $x_c$ of the cell $\Omega_c$.

Figure 3: Contact angle treatment by the plicRDF-isoAdvector method. The ghost interface point $x_{G,c}$ with ghost interface normal $n_{G,c}$ satisfies the contact angle boundary condition and is used as an additional contribution in eq. (19) to reconstruct the RDF at the centroid $x_c$ of cell $\Omega_c$.
The curvature model
For the curvature calculation in an interface cell $\Omega_c$, we have used the parabolic fit curvature model [41]. A quadratic surface is computed by fitting it to the PLIC centroids $\{x_{\Sigma,n}\}_{n\in N_c}$ inside all neighboring interface cells. The curvature of the interface is then approximated by that of the fitted quadratic surface.
Of the curvature models evaluated, the parabolic fit showed the best performance. The simulation results obtained with an arbitrarily chosen slip length and with the height function and RDF curvature models [41] are publicly available online [42,43,44].
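The idea of the parabolic fit is easy to illustrate in two dimensions: express the neighbouring PLIC centroids in a local frame aligned with the estimated interface normal, fit a parabola by least squares, and evaluate its curvature. The sketch below is a simplified 2D version of the model in [41], not the actual implementation:

```python
import numpy as np

def parabolic_fit_curvature(points, normal):
    """Curvature from a parabola fitted to neighbouring PLIC centroids
    (2D sketch of the parabolic-fit idea [41]): express the points in a
    local frame (t, n) aligned with the estimated interface normal, fit
    y = a x^2 + b x + c by least squares, evaluate kappa at x = 0."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    t = np.array([-n[1], n[0]])                  # tangent direction
    p0 = points.mean(axis=0)
    x = (points - p0) @ t                        # local tangential coordinate
    y = (points - p0) @ n                        # local normal coordinate
    a, b, c = np.polyfit(x, y, 2)                # least-squares parabola
    return 2.0 * a / (1.0 + b**2) ** 1.5         # signed curvature at x = 0

# Centroids sampled from a circle of radius 2, so |kappa| = 0.5:
phi = np.linspace(-0.3, 0.3, 7)
pts = 2.0 * np.column_stack([np.sin(phi), 1.0 - np.cos(phi)])
print(parabolic_fit_curvature(pts, normal=np.array([0.0, 1.0])))  # ~0.5
```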
Verification of the advection accuracy near the contact line
Definition of case study
In this study, we consider the advection of the droplet interface using a divergence-free velocity field and report the accuracy of the interface advection near walls. It has been shown in [45,46] that the contact line advection problem is a well-posed initial value problem if the velocity field is sufficiently regular and tangential to the domain boundary. The interface motion and the contact angle evolution can be computed from the velocity field and the initial geometry. Notably, the full time evolution of the contact angle can be inferred from the solution of a system of ordinary differential equations [45]. In this study, we replicate one of the case studies presented in Fricke et al. [17].
Kinematics of contact angle transport: In the following, we will study the time evolution of the contact angle along a flow trajectory $x(t)$ defined as the solution of the initial value problem
$$\dot{x}(t; t_0, x_0) = v(t, x(t; t_0, x_0)), \qquad x(t_0; t_0, x_0) = x_0. \qquad (27)$$
We may also write x(t) for short, keeping in mind that an initial position must be specified. It has been shown in [45] that the contact line is invariant with respect to the flow generated by (27) provided that v is tangential, i.e., if v ⊥ = 0 at the solid boundary. This means that a trajectory that starts at the contact line will always stay at the contact line, and the function
θ(t) := θ(t, x(t))
is well-defined. Mathematically, the time-evolution of the contact angle can be deduced from the time-evolution of the normal field n Σ along x(t) via
cos θ(t, x(t)) = −n Σ (t, x(t)) · n ∂Ω (t, x(t)),(28)
where n ∂Ω is the outer unit normal vector of the domain boundary ∂Ω. The normal field along x(t) satisfies the evolution equation (see [45])
$$\frac{d}{dt} n_\Sigma(t, x(t)) = -P_\Sigma \, \nabla v(t, x(t))^T \, n_\Sigma(t, x(t)). \qquad (29)$$
Here $P_\Sigma = I - (n_\Sigma \otimes n_\Sigma)$ is the orthogonal projection onto the tangent space of $\Sigma$ at $(t, x(t))$. Note that this projection appears because $\dot{n}_\Sigma$ must be orthogonal to $n_\Sigma$ due to the normalization of the vector field. In practice, it may be simpler to solve the ordinary differential equation (ODE) without $P_\Sigma$, i.e.,
$$\dot{\nu}(t) = -\nabla v(t, x(t))^T \, \nu(t), \qquad \nu(t_0) = n_\Sigma(t_0, x_0), \qquad (30)$$
and then obtain the normal field by normalization of ν according to
$$n_\Sigma(t, x(t)) = \frac{\nu(t)}{|\nu(t)|}. \qquad (31)$$
It is easy to show that both methods are, in fact, equivalent.
For this case study, we follow one of the examples in [17]. The interface Σ is advected using a velocity field called "vortex-in-a-box" given by
$$v(t, x_1, x_2) = v_0 \cos\!\left(\frac{\pi t}{\tau}\right) \left(-\sin(\pi x_1)\cos(\pi x_2), \; \cos(\pi x_1)\sin(\pi x_2)\right). \qquad (32)$$
The periodicity of the field in time allows for comparing the droplet's initial shape at $t = t_0$ with the shape after one time period $\tau$. If the advection problem is solved exactly, the droplet shape at $t = 0$ coincides with the shape at $t = \tau$; otherwise, the difference in the volume fraction fields can be used to quantify the error. Moreover, the full time evolution of the transported contact angle is obtained from the solution of the ODE system (eqs. (27), (30) and (31)).

Figure 5: Initial configuration of the interface advection problem: a droplet $\Omega^+$ with an interface $\Sigma$ on a flat surface $\partial\Omega_b$. The contact angle $\theta$ is the angle formed by the interface $\Sigma$ at the contact line $\Gamma$.
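The reference contact angle evolution can be reproduced by integrating the coupled system of eqs. (27) and (30) for the field (32) and normalizing according to eq. (31). The following Python sketch does this with scipy's solve_ivp for the left contact point of the initial spherical cap (defined in the next subsection); the finite-difference velocity gradient is an implementation convenience, since the gradient of eq. (32) could equally be coded analytically:

```python
import numpy as np
from scipy.integrate import solve_ivp

v0, tau = 0.1, 0.2

def velocity(t, x):
    # Vortex-in-a-box field, eq. (32)
    f = v0 * np.cos(np.pi * t / tau)
    return f * np.array([-np.sin(np.pi * x[0]) * np.cos(np.pi * x[1]),
                         np.cos(np.pi * x[0]) * np.sin(np.pi * x[1])])

def grad_velocity(t, x, h=1e-7):
    # Finite-difference Jacobian J[i, j] = dv_i / dx_j
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (velocity(t, x + e) - velocity(t, x - e)) / (2.0 * h)
    return J

def rhs(t, y):
    # Coupled system of eqs. (27) and (30) with y = (x1, x2, nu1, nu2)
    x, nu = y[:2], y[2:]
    return np.concatenate([velocity(t, x), -grad_velocity(t, x).T @ nu])

# Left contact point of the initial cap (center (0.4, -0.1), radius 0.2)
center, R0 = np.array([0.4, -0.1]), 0.2
x0 = np.array([0.4 - R0 * np.sin(np.pi / 3.0), 0.0])
n0 = (x0 - center) / R0                  # outward interface normal
n_wall = np.array([0.0, -1.0])           # outer normal of the bottom boundary

sol = solve_ivp(rhs, [0.0, tau], np.concatenate([x0, n0]),
                rtol=1e-10, atol=1e-12, dense_output=True)

for ti in np.linspace(0.0, tau, 5):
    nu = sol.sol(ti)[2:]
    n = nu / np.linalg.norm(nu)                   # eq. (31)
    theta = np.degrees(np.arccos(-n @ n_wall))    # eq. (28)
    print(f"t = {ti:.2f}  theta_ref = {theta:.2f} deg")  # 60 deg at t = 0
```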
Computational setup
We consider a droplet Ω + with an interface Σ on a flat surface ∂Ω b (fig. 5). The interface Σ with a unit normal vector n Σ meets the domain bottom boundary ∂Ω b , and the intersection is called the contact line Γ. A 2D computational domain of dimensions [1 × 0.25] in the xy-plane is simulated. The initial shape of the droplet is a spherical cap with a dimensionless radius of R 0 = 0.2, placed on the bottom boundary
∂Ω b = {(x, 0) : 0 ≤ x ≤ 1}.(33)
The initial position of the spherical cap is located at the coordinates (0.4, −0.1), resulting in an initial contact angle
$$\theta_0 = \cos^{-1}\!\left(\frac{0.1}{0.2}\right) = 60°. \qquad (34)$$
In this study, we choose $v_0 = 0.1$, $\tau = 0.2$, and ensure that the Courant number satisfies $Co = U \Delta t / \Delta x < 0.01$. We quantify the geometrical shape error as
$$E_1 = \sum_c |\alpha_c(\tau) - \alpha_c(0)| \, V_c, \qquad (35)$$
and the maximum error in the transported contact angle as
$$E_\infty = \max_{t \in [0,T]} |\theta_{num}(t) - \theta_{ref}(t)|, \qquad (36)$$
where T is the maximum simulation time.
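Both error measures are simple reductions over the discrete fields; a minimal sketch, assuming $\theta_{num}$ and $\theta_{ref}$ are sampled at the same time instants, reads:

```python
import numpy as np

def shape_error(alpha_tau, alpha_0, cell_volumes):
    """Geometrical shape error E_1, eq. (35)."""
    return float(np.sum(np.abs(alpha_tau - alpha_0) * cell_volumes))

def angle_error(theta_num, theta_ref):
    """Maximum error in the transported contact angle E_inf, eq. (36)."""
    return float(np.max(np.abs(theta_num - theta_ref)))
```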
Remark: For practical reasons, a no-slip boundary condition at the bottom boundary and a zero-gradient boundary condition for the volume fractions are formally specified for the solver. The solution is not affected, however, since this study is a pure initial value problem.
Simulation results
We discretized the domain using uniform meshes. Figure 6 shows the convergence of the $E_1$ errors for the velocity field given by eq. (32). As the mesh resolution increases, the order of convergence of the $E_1$ errors also increases (see table 1 for $E_1$ error values and orders of convergence at different mesh resolutions). Figure 7 shows the simulation results of the numerical evolution of the transported contact angle. The solutions of the ODE system (eqs. (27), (28), (30) and (31)) provide the reference value of the instantaneous transported contact angle $\theta_{ref}(t)$. The numerical solution converges to the reference solution and delivers near first-order convergent results for $E_\infty$ (see table 1).

4. Droplet spreading on a flat surface

4.1. Definition of the case study

In this study, we investigate the spreading of a droplet on a flat surface [18]. The focus is on the effect of the static contact angle boundary condition and the Bond number, $Bo = \rho_l g R_0^2 / \sigma$, on the equilibrium shape of the droplet. For a droplet that spreads with $Bo \ll 1$, the surface tension forces dominate, and the droplet at equilibrium maintains a spherical cap shape and satisfies the contact angle boundary condition. On the other hand, for $Bo \gg 1$, the gravitational forces dominate, and the droplet forms a puddle whose height is directly proportional to the capillary length, $l_{Ca} = \sqrt{\sigma / (\rho_l g)}$. The droplet's volume $V$ and the equilibrium contact angle $\theta_e$ are used to derive the geometrical relations for the equilibrium shape of the droplet [18,19,20]. Furthermore, we have also studied the mesh convergence of the spreading droplets. Figure 8 illustrates the schematic diagram of a semi-spherical droplet initialized on a flat surface with an initial radius $R_0$. Note that $\theta_0 = 90°$ is an (arbitrary) choice for the initial contact angle. The droplet spreads and attains an equilibrium state at $\theta_e$, having height $e$ and wetted radius $L$. The droplet wets the surface if the initial contact angle is larger than the equilibrium contact angle; conversely, we observe dewetting if the initial contact angle is smaller than the equilibrium contact angle.
We have considered droplets of water-glycerol (75% glycerol) and pure water. The viscosity of water-glycerol is larger than that of pure water by a factor of 30, while the surface tension is slightly smaller. The physical properties of both liquids are presented in table 2. A three-dimensional computational domain (see table 3 for domain parameters) is simulated. The droplet is initialized at the center of the domain's bottom boundary
∂Ω b = {(x, y, 0) : 0 ≤ x ≤ 5, 0 ≤ y ≤ 5}.(37)
Parameter                                  Value            Unit
Droplet initial radius, R0                 1                mm
Droplet initial position, (x0, y0, z0)     (2.5, 2.5, 0)    mm
Domain size                                (5, 5, 4)        mm
(nx, ny, nz)                               (100, 100, 80)   cells

Table 3: Computational and geometrical parameters for the droplet spreading on a flat surface with an equilibrium contact angle θe.
The bottom boundary has a no-slip boundary condition for the velocity. The time step ∆t is restricted to CFL number below 0.01.
Geometrical relations for a droplet at equilibrium
A droplet spreading with a very small Bond number attains a spherical cap shape at equilibrium. The wetted radius $L$ and the height $e$ of the spherical cap are given by the geometrical relations
$$\frac{L}{V^{1/3}} = g(\theta) := \sin\theta \left[\frac{\pi (1 - \cos\theta)^2 (2 + \cos\theta)}{3}\right]^{-1/3}, \qquad (38)$$
$$e = L \tan\frac{\theta}{2}. \qquad (39)$$
The intersection of a spherical cap and the horizontal flat surface produces a circular contact line Γ (see fig. 9), whose area is referred to as wetted area A and can be calculated as
A = πL 2 .(40)
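The stationary-shape relations (38)-(40) are convenient to evaluate programmatically when post-processing the spreading simulations; a small sketch, with a semi-spherical initial droplet of radius R0 = 1 mm as in table 3, is given below:

```python
import numpy as np

def cap_geometry(V, theta):
    """Wetted radius L, height e and wetted area A of a spherical-cap
    droplet of volume V at contact angle theta (radians), eqs. (38)-(40)."""
    g = np.sin(theta) * (np.pi * (1.0 - np.cos(theta))**2
                         * (2.0 + np.cos(theta)) / 3.0) ** (-1.0 / 3.0)
    L = g * V ** (1.0 / 3.0)          # eq. (38)
    e = L * np.tan(theta / 2.0)       # eq. (39)
    A = np.pi * L**2                  # eq. (40)
    return L, e, A

# Semi-spherical initial droplet of radius R0 = 1 mm:
R0 = 1e-3
V = 2.0 / 3.0 * np.pi * R0**3
print(cap_geometry(V, np.radians(50.0)))  # (L, e, A) ~ (1.39e-3, 6.5e-4, 6.0e-6)
```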
For a droplet spreading with a large Bond number ($Bo \gg 1$), the puddle height $e$ is given by
$$e = 2 \sqrt{\frac{\sigma}{\rho_l g}} \, \sin\frac{\theta}{2}. \qquad (41)$$
The estimate for the wetted area A is obtained by adding up the wetted area of each boundary face f , which is calculated as follows:
$$\int_{\partial\Omega} \alpha(x) \, dS = \sum_{f \in F_c} \alpha_f \, \|S_f\|_2 + O(h^2). \qquad (42)$$
Here, $\alpha_f$ represents the volume fraction value of the boundary face, obtained using OpenFOAM functionalities in the function object, and $S_f$ is the face-area normal vector of the face $f$.
Spreading of a droplet with a very small Bond number
Convergence study
A mesh convergence study for the wetted area is conducted with four levels of mesh refinement with 10, 16, 20, and 40 cells per droplet radius. The results shown in fig. 10a illustrate a convergent behavior of the water-glycerol droplet with respect to the stationary state at θe = 50°. For a coarser mesh (10 cells per radius), an overshoot in the wetted area curve is observed because the numerical dissipation is small on a coarse mesh. With numerical slip, any overshoot would disappear with mesh refinement (as observed in fig. 10a), as the contact line would not be able to move in the limit ∆x → 0 (∆x is the mesh cell size). A similar convergent behavior with respect to the stationary state is observed for the water-glycerol droplet with θe = 110° (see fig. 10b).
In contrast to this behavior, water droplets exhibit oscillatory spreading behavior (see figs. 10c and 10d), taking longer to reach equilibrium, but ultimately converge towards the stationary state. Figure 11 illustrates the different shapes of the droplet during its spreading process. In the beginning, due to the difference between the initial and equilibrium contact angles, the droplet is far from reaching its equilibrium state, leading to a rapid movement of the contact line without any significant change in the droplet's global shape, as depicted in fig. 11b. As the spreading continues, the droplet's apex velocity increases in a downward direction ( fig. 11c), and the droplet's overall shape starts to change until it reaches its equilibrium state ( fig. 11d). Similar spreading behavior is reported in the literature ( [47,48,49]). The comparison of dimensionless geometrical quantities at equilibrium with reference solutions for a range of contact angles is shown in fig. 12. The simulation results are in very good agreement with the reference solution for both hydrophilic and hydrophobic cases. We note that for the highly hydrophobic case (e.g., for θ e = 150 • ), the stationary state is not reached in a reasonable time if the droplet is initialized as a semisphere. This can be understood in terms of the initial potential energy, which is very high in the case of a semisphere (initial droplet). The quick release of this potential energy may even cause a droplet detachment from the surface. However, if the droplet is initialized closer to the equilibrium state (θ 0 ≈ 180 • ), convergence to the reference stationary state is observed. As shown in fig. 13, the equilibrium shape of the water-glycerol droplet, represented by α = 0.5 contours in ParaView [50], is compared to the reference spherical cap for a specific contact angle. The comparison illustrates that the equilibrium shape of the droplet has a very good qualitative match with the reference shape.
Geometrical characteristics of the droplet
Spreading of a droplet with varying Bond numbers
We now consider droplet spreading for a range of Bond numbers (see table 4). Here we have two spreading behaviors: surface-tension-dominant spreading ($Bo \ll 1$) and gravity-dominant spreading ($Bo \gg 1$), with the transition of behavior observed at $Bo \approx 1$. Figure 14 shows the comparison of the non-dimensional equilibrium droplet height with the reference solutions in the limiting cases $Bo \to \infty$ and $Bo \to 0$. The simulation results for the spreading of the water and water-glycerol droplets are in excellent agreement with the reference solution for hydrophobic and hydrophilic cases.
Figure 12: Droplet spreading on a horizontal flat surface: geometrical characteristics of the droplet against the equilibrium contact angle θe (wetted radius and equilibrium height for water and for water-glycerol). The stationary solutions for the wetted radius and the equilibrium droplet height are given by eqs. (38) and (39), respectively.

Bo - water-glycerol   1e-05     1e-02     1e-01     5e-01     1         5      10
Bo - water            7.4e-06   7.4e-03   7.4e-02   3.7e-01   7.4e-01   3.7    7.4

Table 4: Range of the Bond numbers for droplets spreading on a flat surface under the influence of gravity.

In summary, the simulation results using the plicRDF-isoAdvector method are in excellent agreement with the reference solution in terms of the mesh convergence study, the droplet shape comparison, and the incorporation of gravity effects on droplet spreading. It is noted, however, that for contact angles greater than 150 degrees, the simulation results show a significant dependence on the initial conditions. This is a common issue for two-phase flow solvers when dealing with extreme contact angles (< 10°, > 150°), as reported in [18,20].
Droplet spreading on a spherical surface
Definition of the case study
This study investigates the spreading of a droplet on a geometrically more complex spherical surface with a very small Bond number ($Bo \ll 1$) [20]. As discussed in section 4, a droplet that spreads with $Bo \ll 1$ maintains a spherical cap shape at equilibrium.
A three-dimensional computational domain is simulated, with domain parameters provided in table 5. The domain is discretized using an unstructured Cartesian mesh, with a refined local mesh around the sphere and a uniform mesh size of 20 cells per radius. The droplet, with an initial radius R 0 , is placed on the top of the spherical surface. The spherical surface has a no-slip boundary condition for the velocity (it applies numerical slip [40] for the motion of the contact line). The droplet spreads on the spherical surface until it reaches the equilibrium, satisfying the static contact angle boundary condition. The physical parameters of the water-glycerol droplet are provided in table 2. The time step ∆t is restricted to CFL number below 0.01 to ensure stability.
Parameter                      Value   Unit
Droplet initial radius, R0     1       mm

Table 5: Computational and geometrical parameters for the droplet spreading on a spherical surface with given equilibrium contact angle θe.

Figure 15: Schematic diagram of the initial configuration of a droplet (--) with initial radius R0 (left) and final equilibrium shape (--) (right) spreading on a spherical surface (-) with equilibrium contact angle θe. The equilibrium height is e, and the contact radius is r.
Geometrical relations for a droplet at equilibrium
The conservation of the droplet's volume V allows the formulation of geometrical relations for the contact radius r and the droplet height e, which define the spherical cap at the equilibrium as
$$\alpha + \beta = \theta_e, \qquad r = R\sin\beta = R_0\sin\alpha, \qquad e = R(1 + \cos\beta) - R_0(1 - \cos\alpha),$$
$$V = \frac{\pi}{3}\left[R^3 (1 + \cos\beta)^2 (2 - \cos\beta) - R_0^3 (1 - \cos\alpha)^2 (2 + \cos\alpha)\right]. \qquad (43)$$
Here, the unknown parameters are $\alpha$, $\beta$, $r$, $e$, and $R$. With the known droplet volume ($V_{exact} = \frac{4\pi R_0^3}{3}$) and initial guesses for $\alpha$ and $\beta$, an intermediate volume $V^k(\alpha, \beta, R_0)$ is calculated by solving eq. (43) iteratively for $k$ iterations using the bisection method. The values of $\alpha$ and $\beta$ that minimize $|V_{exact} - V^k(\alpha, \beta, R_0)|$ are then used to approximate the contact radius $r$ and droplet height $e$ (a code sketch of this procedure follows below). Figure 16 shows the droplet's shapes at different instants during spreading. The droplet's spreading dynamics are similar to those observed on a flat surface, as previously discussed in section 4. At the beginning of spreading, rapid initial spreading localized at the contact line is observed, as shown in fig. 16b. As the simulation time progresses, the droplet's apex velocity increases in the downward direction (fig. 16c), and the global shape of the droplet starts to change until the droplet attains the equilibrium shape (fig. 16d).
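A sketch of the volume-matching procedure is given below. The spherical-cap volume relation used here is our reconstruction of eq. (43) from standard cap volumes, and Brent's method stands in for the bisection of the original; with $\alpha + \beta = \theta_e$, the problem reduces to a one-dimensional root find in $\alpha$:

```python
import numpy as np
from scipy.optimize import brentq

def cap_volume(h, R):
    """Volume of a spherical cap of height h on a sphere of radius R."""
    return np.pi * h**2 * (3.0 * R - h) / 3.0

def droplet_volume(alpha, theta_e, R0):
    """Droplet volume for a given solid-side half-angle alpha, cf. eq. (43):
    cap of the droplet sphere (radius R, angle beta = theta_e - alpha)
    minus the cap of the solid sphere covered by the droplet."""
    beta = theta_e - alpha
    R = R0 * np.sin(alpha) / np.sin(beta)   # from r = R sin(beta) = R0 sin(alpha)
    return cap_volume(R * (1.0 + np.cos(beta)), R) - \
           cap_volume(R0 * (1.0 - np.cos(alpha)), R0)

def equilibrium_shape(theta_e, R0):
    """Find alpha such that the droplet volume matches V_exact = 4 pi R0^3 / 3,
    then return (alpha, contact radius r, height e)."""
    V_exact = 4.0 * np.pi * R0**3 / 3.0
    f = lambda a: droplet_volume(a, theta_e, R0) - V_exact
    alpha = brentq(f, 1e-6, theta_e - 1e-6)   # sign change guaranteed in (0, theta_e)
    beta = theta_e - alpha
    R = R0 * np.sin(alpha) / np.sin(beta)
    r = R0 * np.sin(alpha)
    e = R * (1.0 + np.cos(beta)) - R0 * (1.0 - np.cos(alpha))
    return alpha, r, e

print(equilibrium_shape(np.radians(50.0), R0=1.0e-3))
```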
Post-processing
The numerical solution of the contact radius r and equilibrium height e involves identifying the boundary cells that contain the contact line. The contact line position x Γ is determined by identifying the intersection point of the interface element Σ and the domain boundary ∂Ω (as shown in fig. 17). To locate this point, the signed distances Φ v are reconstructed at the vertices v ∈ V of the boundary interface cells Ω c,Σ that have V vertices and F c faces. For a face f ∈ F c with an intersection point, the signed distance values Φ v must change signs when looping over its vertices (as depicted in fig. 17). If the intersection point is within the cell, it is marked as a contact line cell, and the contact angle is subsequently calculated using the interface normal n Σ (t, x(t)) and the outward unit normal vector n ∂Ω to the boundary as
$$\theta = \cos^{-1}\!\left(-n_\Sigma(t, x(t)) \cdot n_{\partial\Omega}(x(t))\right). \qquad (44)$$
Identifying a contact line can be challenging due to the formation of wisps -small artificial interface elements that appear in the bulk phase (as discussed by Marić et al. [51]). To address this issue, an OpenFOAM function object was developed to detect and remove wisps from the contact line detection process.
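The sign-change test and the angle evaluation of eq. (44) translate directly into code; the following sketch (with illustrative names, vertices given as coordinate arrays) mirrors the procedure described above:

```python
import numpy as np

def face_cuts_contact_line(face_vertices, plic_normal, plic_centroid):
    """Detect whether a boundary face of an interface cell contains the
    contact line: the PLIC signed distances Phi_v at the face vertices
    change sign when looping over the vertices (cf. fig. 17)."""
    phi = np.array([np.dot(plic_normal, v - plic_centroid)
                    for v in face_vertices])
    signs = np.sign(phi)
    signs[signs == 0] = 1      # treat vertices lying on the plane as positive
    return bool(np.any(signs != signs[0]))

def contact_angle(n_sigma, n_wall):
    """Contact angle in degrees from interface and boundary normals, eq. (44)."""
    return np.degrees(np.arccos(np.clip(-np.dot(n_sigma, n_wall), -1.0, 1.0)))

# Face with vertices straddling a PLIC plane through the origin (normal e_y):
verts = [np.array([0.0, -0.1, 0.0]), np.array([0.0, 0.2, 0.0]),
         np.array([1.0, 0.2, 0.0]), np.array([1.0, -0.1, 0.0])]
print(face_cuts_contact_line(verts, np.array([0.0, 1.0, 0.0]), np.zeros(3)))  # True
print(contact_angle(np.array([np.sin(1.0), np.cos(1.0), 0.0]),
                    np.array([0.0, -1.0, 0.0])))  # ~57.3 deg (1 rad)
```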
Geometrical characteristics of the droplet
The simulation results for the non-dimensional equilibrium contact radius and height of a droplet spreading on a spherical surface for a range of equilibrium contact angles are presented in fig. 18. These results are compared to the reference solution from eq. (43) and are in excellent agreement. The equilibrium droplet shapes for various contact angles are illustrated in fig. 19 using α = 0.5 contours. The comparison shows a good qualitative match between the reference and simulated droplet shapes.
The simulation results using the plicRDF-isoAdvector method have shown good agreement with the reference solution for droplet shape and geometrical characteristics. However, it should be noted that the symmetrical spreading of the droplet on the spherical surface is very sensitive to numerical noise. Simulations with small perturbations in the initial spherical shape can lead to a different equilibrium shape, with the droplet tilting to one side of the spherical surface. Even in such a case, the droplet shape remains close to a spherical cap, and the static contact angle boundary condition is still satisfied.
6. 2D capillary rise

6.1. Definition of the case study
The process of liquid flowing through narrow spaces, known as capillary action, has been studied profoundly in the literature (see, e.g., [52,23,24]). The process can be observed in the distribution of water from plants' roots to the rest of the body, the rising of liquids in porous media such as paper, and oil extraction from reservoirs, among others. In this validation study, we consider the two-dimensional case which corresponds to the rise of a liquid column between two planar parallel surfaces, as shown in [21]. We present a mesh convergence study for both the case of a no-slip (numerical slip) and a Navier slip boundary condition with a resolved slip length. As reported in [21], resolving the slip length with the computational mesh is crucial for finding the mesh convergence of the contact line dynamics. We present the comparison of the plicRDF-isoAdvector method with other numerical methods:
1. the OpenFOAM solver interTrackFoam, an Arbitrary Lagrangian-Eulerian (ALE) method;
2. the Free Surface 3D (FS3D) solver, an in-house two-phase flow solver employing the geometric Volume-of-Fluid (VOF) method;
3. the OpenFOAM-based algebraic VOF solver interFoam;
4. the Bounded Support Spectral Solver (BoSSS), based on the extended discontinuous Galerkin method.
Quéré et al. [23] and Fries and Dreyer [24] study the capillary rise based on a simplified model introduced by Bosanquet [53]. The latter model is an ODE resulting from empirical modeling of the forces acting on the liquid column. It can be shown easily [23,24] that the dynamics of the capillary rise in Bosanquet's model is controlled by a single dimensionless group Ω defined as
$$\Omega = \sqrt{\frac{9\,\sigma \cos\theta\, \mu^2}{\rho^3 g^2 R^5}}. \qquad (45)$$
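Equation (45) is directly evaluated from the fluid properties; the sketch below reproduces the first parameter set of table 6, for which Ω = 0.1 is expected:

```python
import numpy as np

def omega(sigma, theta, mu, rho, g, R):
    """Dimensionless group controlling the rise dynamics, eq. (45)."""
    return np.sqrt(9.0 * sigma * np.cos(theta) * mu**2
                   / (rho**3 * g**2 * R**5))

# First row of table 6:
print(omega(sigma=0.2, theta=np.radians(30.0), mu=0.01,
            rho=1663.8, g=1.04, R=0.005))  # ~0.100
```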
Moreover, Quéré et al. [23] showed that Bosanquet's model shows a regime transition at a critical value Ω c = 2. The column approaches its stationary state monotonically for Ω > 2 while it shows rise height oscillations for Ω < 2. Notably, Gründing et al. [21] showed in their study that the Navier slip length, which is not accounted for in Bosanquet's model, can significantly influence the rise dynamics and the transient regime. In fact, rise height oscillations are increasingly damped out as the slip length decreases. This observation led to improved ODE modeling of capillary rise, taking into account the flow near the contact line [54].
In the stationary state, the height of the liquid column can be estimated by Jurin's law as
$$h_{Jurin,2D} = \frac{\sigma \cos\theta}{R \rho g}, \qquad (46)$$
where σ is the surface tension coefficient, R is the radius of the capillary, ρ is the density of the liquid, g is the gravitational acceleration, and θ is the contact angle. However, equation (46) neglects the liquid volume in the interface region, hence, overestimating the true stationary rise height. Gründing et al. [21,54] computed a corrected stationary capillary height from the liquid volume in the interface region (assuming a spherical cap shape). The corrected formula reads as
$$h = h_{Jurin,2D} - \frac{R}{2\cos\theta} \left(2 - \sin\theta - \frac{\sin^{-1}(\cos\theta)}{\cos\theta}\right). \qquad (47)$$
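Both the Jurin height (46) and the corrected height (47) are simple closed-form evaluations; the sketch below uses the Ω = 1 parameter set of table 6:

```python
import numpy as np

def rise_heights(sigma, theta, R, rho, g):
    """Stationary 2D rise heights: Jurin's law, eq. (46), and the
    meniscus-volume correction, eq. (47). theta in radians."""
    h_jurin = sigma * np.cos(theta) / (R * rho * g)
    corr = R / (2.0 * np.cos(theta)) * (
        2.0 - np.sin(theta) - np.arcsin(np.cos(theta)) / np.cos(theta))
    return h_jurin, h_jurin - corr

# Omega = 1 parameter set of table 6:
print(rise_heights(sigma=0.04, theta=np.radians(30.0),
                   R=0.005, rho=83.1, g=4.17))  # ~ (0.020, 0.019) m
```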
Computational domain
The initial configuration of the two-dimensional computational domain is shown in fig. 20. The domain is discretized using the blockMesh utility of OpenFOAM, which creates a uniform Cartesian mesh in both the x and the y direction. The volume fraction field is initialized as a box 2R × 2R at the bottom of the capillary. As the simulation starts, the interface evolves to satisfy the contact angle boundary condition and then rises. The set of physical parameters taken from [21] is designed to achieve different values for Ω while keeping the Bond number constant (see table 6).
Mesh convergence study
A mesh convergence study for the 2D capillary rise with mesh resolutions of 16 to 256 cells per diameter of the capillary was conducted. We keep Ω = 1 and the slip length λ = 0.1 mm for this study. The simulations are done using both Navier slip and no-slip (numerical slip) boundary conditions. The results in fig. 21a are obtained using the no-slip boundary condition. However, since the method has some implicit "numerical slip", we still observe a contact line motion. As a consequence of the numerical slip being linked to the grid size, the results show a strong dependence on the mesh resolution regarding the dynamics. In particular, the oscillations are increasingly dampened with increasing mesh resolution. This is expected since, for no-slip, it is well known that the viscous dissipation near the contact line is non-integrably singular. As the slip length decreases, thus approaching the no-slip limit, the numerical solution starts showing the signature of the ill-posedness of the limiting problem (Huh and Scriven paradox [22]). However, for Navier slip with a positive slip length, pressure and viscous dissipation are integrable [55]. Therefore, the simulations with Navier slip (see fig. 21b) show mesh convergence. Although the solutions with the numerical slip and Navier slip differ in the rise dynamics, it is to be noted that the stationary rise height is mesh-independent in both cases and levels at the corrected stationary rise height (given by eq. (47)).
6.3. Effect of the dimensionless parameter Ω on the capillary rise

Figure 22 shows the simulation results for Ω = 0.1 and Ω = 0.5, together with a comparison of several other numerical methods with the plicRDF-isoAdvector method. As the Ω values chosen here are less than the critical value Ω_c = 2, Bosanquet's theory predicts oscillations during the capillary rise. The slip length is chosen as λ = R/5 = 1 mm. For Ω = 0.1 (see fig. 22a), we observe strong oscillations whose amplitude decreases with time; the solution asymptotically reaches the reference (corrected) rise height value. It can also be noted that for the first two peaks, the plicRDF-isoAdvector result resembles that of the interFoam solver. With time, however, the oscillations for the plicRDF-isoAdvector method are dampened slightly faster than with the other numerical methods. Similar behavior is observed for Ω = 0.5 (see fig. 22b), but with fewer oscillations, and the simulation results level off at the stationary height according to eq. (47).
Conclusions and Outlook
We have benchmarked the plicRDF-isoAdvector method for wetting through four case studies. The first study investigated interface advection, where a first- to second-order convergence transition is observed. The second study validated the contact line spreading dynamics on a flat surface with excellent agreement with reference solutions, except for highly hydrophobic cases. The third study tested the method for droplet spreading on a spherical surface, showing accurate results but with some sensitivity to numerical noise. The last study was a 2D capillary rise, with mesh-convergent results for the stationary state and the contact line dynamics. The benchmark's input data and post-processing Jupyter Notebooks are publicly available [25,26] and provide a valuable starting point for benchmarking numerical methods for wetting processes.

The plicRDF-isoAdvector method can effectively simulate a broad spectrum of wetting problems. However, there are still challenges in simulating contact angles that are either very small (hydrophilic support) or very large (hydrophobic support). The plicRDF reconstruction considers the contact angle at the wall only indirectly, through a weighted sum of signed-distance values whose discrete gradient determines the cell-centered interface orientation. Likewise, the influence of the contact angle on the kinematic motion of a PLIC interface is mediated by the cell-centered PLIC interface orientation. Therefore, future developments should reconsider the approach of incorporating the contact angle via weighted contributions in the RDF PLIC reconstruction. From the gained experience, we recommend local adaptive mesh refinement near the interface with at least three cell layers of as uniform as possible mesh resolution surrounding the interface. Although plicRDF-isoAdvector can handle mesh grading, an interface passing through non-uniform mesh grading near a wall causes a loss in convergence order. An unstructured geometrical VOF interface reconstruction for contact line evolution is necessary that exactly satisfies the prescribed contact angle in wall-adjacent cells without incurring a loss of accuracy away from the wall-adjacent cell layer.
Acknowledgements
We acknowledge the financial support by the German Research Foundation (DFG) within the Collaborative Research Centre 1194 (Project-ID 265191195).
The use of the high-performance computing resources of the Lichtenberg High-Performance Cluster at the TU Darmstadt is gratefully acknowledged.
Figure 2: The reconstructed signed distance function field for a droplet.
Figure 4: Kinematic transport of the contact angle.
Figure 6: Convergence of the E1 errors for the velocity field (32).
Figure 7: Left: Numerical contact angle evolution for the velocity field (eq. (32)) with a uniform mesh. Right: Maximum error in the transported contact angle (eq. (36)).
Figure 8: Schematic diagram of the initial configuration (-) of a droplet with initial radius R0 and the final equilibrium shape (--) attained by spreading on a flat surface with an equilibrium contact angle θe.
Figure 9: A spherical cap sitting on a flat surface.
Figure 10: Convergence study of the wetted area of the spreading droplet on a flat horizontal surface. The top images (figs. 10a and 10b) and bottom images (figs. 10c and 10d) show the droplet spreading simulation results for water-glycerol and water droplets, respectively, with mesh sizes of 10, 16, 20, and 40 cells per radius. The initial contact angle is θ0 = 90°. The equilibrium contact angles are θe = 50° (left, wetting) and 110° (right, dewetting). The stationary solution (eq. (40)) is represented by the dotted horizontal (black) line.
Figure 11: Simulation results of droplet spreading on a horizontal flat surface at different instants of time. The initial and equilibrium contact angles are θ0 = 90° and θe = 50°, respectively. The arrows represent the velocity field.
Figure 13: Droplet spreading on a horizontal flat surface: equilibrium drop shapes for equilibrium contact angles θe = 10°, 50°, 90°, 110°: numerical (-) and theoretical (--).
Figure 14: Normalized droplet height e* (e* = e/e0, where e0 is the equilibrium height at Bo ≪ 1) on a flat surface with an equilibrium contact angle θe; left: θe = 50°, right: θe = 110°.
Figure 16: Simulation results of droplet spreading on a spherical surface at different instants of time. The equilibrium contact angle is θe = 50°. The arrows represent the velocity field.
Figure 17: Contact angle and contact line position from the PLIC reconstruction. The marked cell is the interface cell at the boundary. The signed distances are calculated for each vertex of this cell; a change in the signs of these distances detects the presence of the contact line.
Figure 18: Dimensionless geometrical characteristics of the equilibrium droplet shape on a spherical surface: height (-), contact radius (-), numerical height, numerical contact radius.
Figure 19: Equilibrium drop shapes for equilibrium contact angles θe = 30°, 50°, 90°, 110°, and 140°: geometrical reference solution (--) (eq. (43)), numerical (-).
Figure 20: Schematic diagram of the initial configuration of the 2D capillary rise.
Figure 21: Mesh convergence study using a no-slip and a Navier slip boundary condition (Navier slip with slip length λ = 0.1 mm). Dimensionless number Ω = 1 and θe = 30°.
Figure 22: Numerical results for the capillary rise for the Ω-study (oscillatory regime).
Table 2: Physical parameters of the liquid droplet.
Solid
R 0
R 0
r
e
R
θe
R 0
α
β
Ω     R (m)   ρ (kg m⁻³)   µ (Pa s)   g (m s⁻²)   σ (N m⁻¹)   θe (°)   Ca_max   Bo
0.1   0.005   1663.8       0.01       1.04        0.2         30       0.0033   0.217
0.5   0.005   133.0        0.01       6.51        0.1         30       0.015    0.217
1     0.005   83.1         0.01       4.17        0.04        30       0.029    0.217

Table 6: Physical parameters for different Ω.
References

M. Marengo, J. D. Coninck (Eds.), The Surface Wettability Effect on Phase Change, Springer International Publishing, 2022. doi:10.1007/978-3-030-82992-6.

N. S. Dhillon, J. Buongiorno, K. K. Varanasi, Critical heat flux maxima during boiling crisis on textured surfaces, Nature Communications 6 (2015). doi:10.1038/ncomms9247.

T. Marić, D. B. Kothe, D. Bothe, Unstructured un-split geometrical Volume-of-Fluid methods - A review, Journal of Computational Physics 420 (2020) 109695. doi:10.1016/j.jcp.2020.109695.

F. Gibou, R. Fedkiw, S. Osher, A review of level-set methods and some recent applications, Journal of Computational Physics 353 (2018) 82-109. doi:10.1016/j.jcp.2017.10.006.

R. I. Saye, J. A. Sethian, A review of level set methods to model interfaces moving under complex physics: Recent challenges and advances, in: Handbook of Numerical Analysis, volume 21, Elsevier, 2020. doi:10.1016/bs.hna.2019.07.003.

OpenCFD Ltd., OpenFOAM: user Guide v2006, 2006. https://openfoam.com/documentation/guides/latest/doc/, last accessed 2020-10-12.

S. S. Deshpande, L. Anumolu, M. F. Trujillo, Evaluating the performance of the two-phase flow solver interFoam, Computational Science & Discovery 5 (2012) 014016. doi:10.1088/1749-4699/5/1/014016.

J. Roenby, H. Bredmose, H. Jasak, A computational method for sharp interface advection, Royal Society Open Science 3 (2016) 160405. doi:10.1098/rsos.160405.

H. Scheufler, J. Roenby, Accurate and efficient surface reconstruction from volume fraction data on general meshes, Journal of Computational Physics 383 (2019) 1-23. doi:10.1016/j.jcp.2019.01.009.

L. Gamet, M. Scala, J. Roenby, H. Scheufler, J.-L. Pierson, Validation of volume-of-fluid OpenFOAM isoAdvector solvers using single bubble benchmarks, Computers & Fluids 213 (2020) 104722. doi:10.1016/j.compfluid.2020.104722.

S. Popinet, A quadtree-adaptive multigrid solver for the Serre-Green-Naghdi equations, Journal of Computational Physics 302 (2015) 336-358. doi:10.1016/j.jcp.2015.09.009.

S. Osher, J. A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, Journal of Computational Physics 79 (1988) 12-49. doi:10.1016/0021-9991(88)90002-2.

S. Turek, Efficient Solvers for Incompressible Flow Problems: An Algorithmic and Computational Approach, volume 6, Springer Science & Business Media, 1999.

N. Parolini, E. Burman, A finite element level set method for viscous free-surface flows, in: Applied and Industrial Mathematics in Italy, World Scientific, 2005, pp. 416-427. doi:10.1142/9789812701817_0038.

V. John, G. Matthies, MooNMD - a program package based on mapped finite element methods, Computing and Visualization in Science 6 (2004) 163-170. doi:10.1007/s00791-003-0120-1.

S. Siriano, N. Balcázar, A. Tassone, J. Rigola, G. Caruso, Numerical Simulation of High-Density Ratio Bubble Motion with interIsoFoam, Fluids 7 (2022) 152. doi:10.3390/fluids7050152.

M. Fricke, T. Marić, D. Bothe, Contact line advection using the geometrical Volume-of-Fluid method, Journal of Computational Physics 407 (2020) 109221. doi:10.1016/j.jcp.2019.109221.

J.-B. Dupont, D. Legendre, Numerical simulation of static and sliding drop with contact angle hysteresis, Journal of Computational Physics 229 (2010) 2453-2478. doi:10.1016/j.jcp.2009.07.034.

M. Fricke, B. Fickel, M. Hartmann, D. Gründing, M. Biesalski, D. Bothe, A geometry-based model for spreading drops applied to drops on a silicon wafer and a swellable polymer brush film, arXiv preprint arXiv:2003.04914 (2020). doi:10.48550/ARXIV.2003.04914.

H. Patel, S. Das, J. Kuipers, J. Padding, E. Peters, A coupled Volume of Fluid and Immersed Boundary Method for simulating 3D multiphase flows with contact line dynamics in complex geometries, Chemical Engineering Science 166 (2017) 28-41. doi:10.1016/j.ces.2017.03.012.

D. Gründing, M. Smuda, T. Antritter, M. Fricke, D. Rettenmaier, F. Kummer, P. Stephan, H. Marschall, D. Bothe, A comparative study of transient capillary rise using direct numerical simulations, Applied Mathematical Modelling 86 (2020) 142-165. doi:10.1016/j.apm.2020.04.020.

C. Huh, L. E. Scriven, Hydrodynamic model of steady movement of a solid/liquid/fluid contact line, Journal of Colloid and Interface Science 35 (1971) 85-101. doi:10.1016/0021-9797(71)90188-3.

D. Quéré, É. Raphaël, J.-Y. Ollitrault, Rebounds in a Capillary Tube, Langmuir 15 (1999) 3679-3682. doi:10.1021/la9801615.

N. Fries, M. Dreyer, Dimensionless scaling methods for capillary rise, Journal of Colloid and Interface Science 338 (2009) 514-518. doi:10.1016/j.jcis.2009.06.036.

M. H. Asghar, T. Marić, plicRDF-isoAdvector benchmarks for wetting processes, 2022. https://github.com/CRC-1194/b01-wetting-benchmark, last accessed 2022-10-26.

M. H. Asghar, M. Fricke, D. Bothe, T. Marić, Validation and verification of the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method for wetting problems - parabolic-fit curvature - input data, 2022. https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3621. doi:10.48328/tudatalib-982.

T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla, C. Willing, Jupyter development team, Jupyter Notebooks - a publishing format for reproducible computational workflows, in: F. Loizides, B. Scmidt (Eds.), Positioning and Power in Academic Publishing: Players, Agents and Agendas, IOS Press, Netherlands, 2016, pp. 87-90. https://eprints.soton.ac.uk/403913/.

M. H. Asghar, M. Fricke, D. Bothe, T. Marić, Validation and verification of the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method for wetting problems - parabolic-fit curvature - Jupyter notebooks, CSV files, secondary data, parameter variation file, 2022. https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3622. doi:10.48328/tudatalib-983.

OpenFOAM.com, OpenFOAM-v2112, 2022. https://develop.openfoam.com/Development/openfoam/-/tree/OpenFOAM-v2112.

H. Scheufler, TwoPhaseFlow, 2022. https://github.com/DLR-RY/TwoPhaseFlow/tree/of2112.

B. Gschaider, Contrib/PyFoam, 2005. https://openfoamwiki.net/index.php/Contrib/PyFoam, last accessed 2020-10-19.

T. Tolle, D. Gründing, D. Bothe, T. Marić, triSurfaceImmersion: Computing volume fractions and signed distances from triangulated surfaces immersed in unstructured meshes, Computer Physics Communications 273 (2022) 108249. doi:10.1016/j.cpc.2021.108249.

F. Juretic, integration-cfmesh, 2021. https://develop.openfoam.com/Community/integration-cfmesh/-/commit/f362ee65334e08056abdabab45e588503553e0ef, build 14aeaf8dab-20211220, last accessed 2022-10-19.

J. Riegel, W. Mayer, Y. van Havre, FreeCAD: A 3D parametric modeler, 2022. https://www.freecadweb.org/.

D. L. Youngs, Time-dependent multi-material flow with large fluid distortion, Numerical Methods for Fluid Dynamics (1982). https://cir.nii.ac.jp/crid/1571417126191472512.

R. Scardovelli, S. Zaleski, Interface reconstruction with least-square fit and split Eulerian-Lagrangian advection, International Journal for Numerical Methods in Fluids 41 (2003) 251-274. doi:10.1002/fld.431.

S. J. Cummins, M. M. Francois, D. B. Kothe, Estimating curvature from volume fractions, Computers & Structures 83 (2005) 425-434. doi:10.1016/j.compstruc.2004.08.017.

T. Marić, Iterative Volume-of-Fluid interface positioning in general polyhedrons with Consecutive Cubic Spline interpolation, Journal of Computational Physics: X 11 (2021) 100093. doi:10.1016/j.jcpx.2021.100093.

T. Tolle, D. Bothe, T. Marić, SAAMPLE: A segregated accuracy-driven algorithm for multiphase pressure-linked equations, Computers & Fluids 200 (2020) 104450. doi:10.1016/j.compfluid.2020.104450.

M. Renardy, Y. Renardy, J. Li, Numerical simulation of moving contact line problems using a volume-of-fluid method, Journal of Computational Physics 171 (2001) 243-263. doi:10.1006/jcph.2001.6785.

H. Scheufler, J. Roenby, TwoPhaseFlow: An OpenFOAM based framework for development of two phase flow solvers, arXiv preprint arXiv:2103.00870 (2021). doi:10.48550/arXiv.2103.00870.

M. H. Asghar, M. Fricke, D. Bothe, T. Marić, Numerical Wetting Benchmarks - Advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method using the parabolic fit curvature model - Jupyter Notebooks, CSV files, Secondary Data, Parameter variation file, 2023. https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3622.5. doi:10.48328/tudatalib-983.5.
Numerical Wetting Benchmarks -Advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method using the RDF curvature model-Jupyter Notebooks, CSV files, Secondary Data, Parameter variation file. M H Asghar, M Fricke, D Bothe, T Marić, 10.48328/tudatalib-1069.2M. H. Asghar, M. Fricke, D. Bothe, T. Marić, Numerical Wetting Benchmarks -Advanc- ing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method using the RDF curvature model-Jupyter Notebooks, CSV files, Secondary Data, Parameter variation file, 2023. URL: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3730.2. doi:10. 48328/tudatalib-1069.2.
Numerical Wetting Benchmarks -Advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method using height-function curvature model-Jupyter Notebooks, CSV files, Secondary Data, Parameter variation file. M H Asghar, M Fricke, D Bothe, T Marić, 10.48328/tudatalib-1068.2M. H. Asghar, M. Fricke, D. Bothe, T. Marić, Numerical Wetting Benchmarks -Advancing the plicRDF-isoAdvector unstructured Volume-of-Fluid (VOF) method using height-function curvature model-Jupyter Notebooks, CSV files, Secondary Data, Parameter variation file, 2023. URL: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3729.2. doi:10. 48328/tudatalib-1068.2.
A kinematic evolution equation for the dynamic contact angle and some consequences. M Fricke, M Köhne, D Bothe, 10.1016/j.physd.2019.01.008Physica D: Nonlinear Phenomena. 394M. Fricke, M. Köhne, D. Bothe, A kinematic evolution equation for the dynamic contact angle and some consequences, Physica D: Nonlinear Phenomena 394 (2019) 26-43. doi:https: //doi.org/10.1016/j.physd.2019.01.008.
On the kinematics of contact line motion. M Fricke, M Köhne, D Bothe, 10.1002/pamm.201800451PAMM. 18201800451M. Fricke, M. Köhne, D. Bothe, On the kinematics of contact line motion, PAMM 18 (2018) e201800451. doi:https://doi.org/10.1002/pamm.201800451.
A mesh-dependent model for applying dynamic contact angles to VOF simulations. S Afkhami, S Zaleski, M Bussmann, 10.1016/j.jcp.2009.04.027Journal of Computational Physics. 228S. Afkhami, S. Zaleski, M. Bussmann, A mesh-dependent model for applying dynamic con- tact angles to VOF simulations, Journal of Computational Physics 228 (2009) 5370-5389. doi:https://doi.org/10.1016/j.jcp.2009.04.027.
Capillary spreading of a droplet in the partially wetting regime using a diffuse-interface model. V Khatavkar, P Anderson, H Meijer, 10.1017/S0022112006003533Journal of Fluid Mechanics. 572V. Khatavkar, P. Anderson, H. Meijer, Capillary spreading of a droplet in the partially wet- ting regime using a diffuse-interface model, Journal of Fluid Mechanics 572 (2007) 367-387. doi:https://doi.org/10.1017/S0022112006003533.
Some generic capillary-driven flows. W Villanueva, G Amberg, 10.1016/j.ijmultiphaseflow.2006.05.003International Journal of Multiphase Flow. 32W. Villanueva, G. Amberg, Some generic capillary-driven flows, International Journal of Multiphase Flow 32 (2006) 1072-1086. doi:https://doi.org/10.1016/j.ijmultiphaseflow. 2006.05.003.
. Inc Kitware, 9Los Alamos National LaboratoryParaView-v5Kitware, Inc, Los Alamos National Laboratory , ParaView-v5.9, 2021. https://www. paraview.org/documentation/, Last accessed on 2022-10-19.
An enhanced un-split face-vertex flux-based VoF method. T Marić, H Marschall, D Bothe, 10.1016/j.jcp.2018.03.048Journal of Computational Physics. 371T. Marić, H. Marschall, D. Bothe, An enhanced un-split face-vertex flux-based VoF method, Journal of Computational Physics 371 (2018) 967-993. doi:https://doi.org/10.1016/j.jcp. 2018.03.048.
The Dynamics of Capillary Flow. E W Washburn, 10.1103/PhysRev.17.273Physical Review. 17E. W. Washburn, The Dynamics of Capillary Flow, Physical Review 17 (1921) 273-283. doi:10.1103/PhysRev.17.273.
On the flow of liquids into capillary tubes, The London, Edinburgh, and Dublin Philosophical Magazine and. C H Bosanquet, 10.1080/14786442308634144Journal of Science. 45C. H. Bosanquet, On the flow of liquids into capillary tubes, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 45 (1923) 525-531. doi:https://doi. org/10.1080/14786442308634144.
An enhanced model for the capillary rise problem. D Gründing, 10.1016/j.ijmultiphaseflow.2020.103210International Journal of Multiphase Flow. 128103210D. Gründing, An enhanced model for the capillary rise problem, International Journal of Multiphase Flow 128 (2020) 103210. doi:https://doi.org/10.1016/j.ijmultiphaseflow. 2020.103210.
The steady movement of a liquid meniscus in a capillary tube. C Huh, S G Mason, 10.1017/S0022112077002134Journal of Fluid Mechanics. 81C. Huh, S. G. Mason, The steady movement of a liquid meniscus in a capillary tube, Journal of Fluid Mechanics 81 (1977) 401-419. doi:https://doi.org/10.1017/S0022112077002134.
| [
"https://github.com/CRC-1194/b01-wetting-benchmark,",
"https://github.com/DLR-RY/TwoPhaseFlow/"
] |
[
"Minimal Dirac Neutrino Mass Models from U(1) R Gauge Symmetry and Left-Right Asymmetry at Colliders",
"Minimal Dirac Neutrino Mass Models from U(1) R Gauge Symmetry and Left-Right Asymmetry at Colliders"
] | [
"Sudip Jana [email protected] \nDepartment of Physics\nOklahoma State University\n74078StillwaterOKUSA\n",
"Vishnu P K \nDepartment of Physics\nOklahoma State University\n74078StillwaterOKUSA\n",
"Shaikh Saad \nDepartment of Physics\nOklahoma State University\n74078StillwaterOKUSA\n"
] | [
"Department of Physics\nOklahoma State University\n74078StillwaterOKUSA",
"Department of Physics\nOklahoma State University\n74078StillwaterOKUSA",
"Department of Physics\nOklahoma State University\n74078StillwaterOKUSA"
] | [] | In this work, we propose minimal realizations for generating Dirac neutrino masses in the context of a right-handed abelian gauge extension of the Standard Model. Utilizing only U (1) R symmetry, we address and analyze the possibilities of Dirac neutrino mass generation via (a) tree-level seesaw and (b) radiative correction at the one-loop level. One of the presented radiative models implements the attractive scotogenic model that links neutrino mass with Dark Matter (DM), where the stability of the DM is guaranteed from a residual discrete symmetry emerging from U (1) R . Since only the right-handed fermions carry non-zero charges under the U (1) R , this framework leads to sizable and distinctive Left-Right asymmetry as well as Forward-Backward asymmetry discriminating from U (1) B−L models and can be tested at the colliders.We analyze the current experimental bounds and present the discovery reach limits for the new heavy gauge boson Z at the LHC and ILC. Furthermore, we also study the associated charged lepton flavor violating processes, dark matter phenomenology and cosmological constraints of these models. * metry [30] for generating the Dirac neutrino mass [28, 31-35]. Both of the two possibilities are attractive and can be regarded as the minimal gauge extensions of the SM. However, the phenomenology of U (1) R model is very distinctive compared to the U (1) B−L case. In the literature, gauged U (1) B−L symmetry has been extensively studied whereas gauged U (1) R extension has received very little attention. Unlike the U (1) B−L case, in our set-up, the SM Higgs doublet is charged under this U (1) R symmetry to allow the desired Yukawa interactions to generate mass for the charged fermions, this leads to interactions with the new gauge boson that is absent in U (1) B−L model. The running of the Higgs quartic coupling gets modified due to having such interactions with the new gauge boson Z that can make the Higgs vacuum stable [36]. Due to the same reason, the SM Higgs phenomenology also gets altered [37].We show by detail analysis that despite their abelian nature, U (1) R and U (1) B−L have distinguishable phenomenology. The primary reason that leads to different features is:U (1) R gauge boson couples only to the right-handed chiral fermions, whereas U (1) B−L is chirality-universal. As a consequence, U (1) R model leads to large left-right (LR) asymmetry and also forward-backward (FB) asymmetry that can be tested in the current and future colliders that make use of the polarized initial states, such as in ILC. We also comment on the differences of our U (1) R scenario with the other U (1) R models existing in the literature.Slightly different features emerge as a result of different charge assignment of the righthanded neutrinos in our set-up for the realization of Dirac neutrino mass. In the existing U (1) R models, flavor universal charge assignment for the right-handed neutrinos are considered and neutrinos are assumed to be Majorana particles. Whereas, in our set-up, neutrinos are Dirac particles that demands non-universal charge assignment for the right-handed neutrinos under U (1) R . 
Neutrinos being Dirac in nature also leads to null neutrinoless double beta decay signal.The originality of this work is, by employing only the gauged U (1) R symmetry, we construct Dirac neutrino masses at the tree-level and one-loop level (with or without DM) which has not been done before and, by a detailed study of the phenomenology associated to the new heavy gauge boson, we show that U (1) R model is very promising to be discovered in the future colliders. Due to the presence of the TeV or sub-TeV scale BSM particles, these models can give rise to sizable rate for the charged lepton flavor violating processes which we also analyze. On top of that, we bring both the dark matter and the neutrino mass generation issues under one umbrella without imposing any additional symmetry and, work out the associated dark matter phenomenology. We also discuss the cosmological consequences due to the presence of the light right-handed neutrinos in our framework.The paper is organized as follows. In Section 2, we discuss the framework where SM is extended by an abelian gauge symmetry U (1) R . In Section 3, we present the minimal | 10.1140/epjc/s10052-019-7441-9 | [
"https://export.arxiv.org/pdf/1904.07407v2.pdf"
] | 119,120,122 | 1904.07407 | 806ec0be06e0aa50e3fe69d5f1ab0e692ca197cc |
Minimal Dirac Neutrino Mass Models from U(1) R Gauge Symmetry and Left-Right Asymmetry at Colliders
29 Sep 2019
Sudip Jana [email protected]
Department of Physics
Oklahoma State University
74078StillwaterOKUSA
Vishnu P K
Department of Physics
Oklahoma State University
74078StillwaterOKUSA
Shaikh Saad
Department of Physics
Oklahoma State University
74078StillwaterOKUSA
Minimal Dirac Neutrino Mass Models from U(1) R Gauge Symmetry and Left-Right Asymmetry at Colliders
29 Sep 2019
In this work, we propose minimal realizations for generating Dirac neutrino masses in the context of a right-handed abelian gauge extension of the Standard Model. Utilizing only U (1) R symmetry, we address and analyze the possibilities of Dirac neutrino mass generation via (a) tree-level seesaw and (b) radiative correction at the one-loop level. One of the presented radiative models implements the attractive scotogenic model that links neutrino mass with Dark Matter (DM), where the stability of the DM is guaranteed from a residual discrete symmetry emerging from U (1) R . Since only the right-handed fermions carry non-zero charges under the U (1) R , this framework leads to sizable and distinctive Left-Right asymmetry as well as Forward-Backward asymmetry discriminating from U (1) B−L models and can be tested at the colliders.We analyze the current experimental bounds and present the discovery reach limits for the new heavy gauge boson Z at the LHC and ILC. Furthermore, we also study the associated charged lepton flavor violating processes, dark matter phenomenology and cosmological constraints of these models. * metry [30] for generating the Dirac neutrino mass [28, 31-35]. Both of the two possibilities are attractive and can be regarded as the minimal gauge extensions of the SM. However, the phenomenology of U (1) R model is very distinctive compared to the U (1) B−L case. In the literature, gauged U (1) B−L symmetry has been extensively studied whereas gauged U (1) R extension has received very little attention. Unlike the U (1) B−L case, in our set-up, the SM Higgs doublet is charged under this U (1) R symmetry to allow the desired Yukawa interactions to generate mass for the charged fermions, this leads to interactions with the new gauge boson that is absent in U (1) B−L model. The running of the Higgs quartic coupling gets modified due to having such interactions with the new gauge boson Z that can make the Higgs vacuum stable [36]. Due to the same reason, the SM Higgs phenomenology also gets altered [37].We show by detail analysis that despite their abelian nature, U (1) R and U (1) B−L have distinguishable phenomenology. The primary reason that leads to different features is:U (1) R gauge boson couples only to the right-handed chiral fermions, whereas U (1) B−L is chirality-universal. As a consequence, U (1) R model leads to large left-right (LR) asymmetry and also forward-backward (FB) asymmetry that can be tested in the current and future colliders that make use of the polarized initial states, such as in ILC. We also comment on the differences of our U (1) R scenario with the other U (1) R models existing in the literature.Slightly different features emerge as a result of different charge assignment of the righthanded neutrinos in our set-up for the realization of Dirac neutrino mass. In the existing U (1) R models, flavor universal charge assignment for the right-handed neutrinos are considered and neutrinos are assumed to be Majorana particles. Whereas, in our set-up, neutrinos are Dirac particles that demands non-universal charge assignment for the right-handed neutrinos under U (1) R . 
Neutrinos being Dirac in nature also leads to null neutrinoless double beta decay signal.The originality of this work is, by employing only the gauged U (1) R symmetry, we construct Dirac neutrino masses at the tree-level and one-loop level (with or without DM) which has not been done before and, by a detailed study of the phenomenology associated to the new heavy gauge boson, we show that U (1) R model is very promising to be discovered in the future colliders. Due to the presence of the TeV or sub-TeV scale BSM particles, these models can give rise to sizable rate for the charged lepton flavor violating processes which we also analyze. On top of that, we bring both the dark matter and the neutrino mass generation issues under one umbrella without imposing any additional symmetry and, work out the associated dark matter phenomenology. We also discuss the cosmological consequences due to the presence of the light right-handed neutrinos in our framework.The paper is organized as follows. In Section 2, we discuss the framework where SM is extended by an abelian gauge symmetry U (1) R . In Section 3, we present the minimal
Introduction
Neutrino oscillation data [1] indicates that at-least two neutrinos have tiny masses. The origin of the neutrino mass is one of the unsolved mysteries in Particle Physics. The minimal way to obtain the non-zero neutrino masses is to introduce three right-handed neutrinos that are singlets under the Standard Model (SM). Consequently, Dirac neutrino mass term at the tree-level is allowed and has the form: L Y ⊃ y ν L L Hν R . However, this leads to unnaturally small Yukawa couplings for neutrinos (y ν ≤ 10 −11 ). There have been many proposals to naturally induce neutrino mass mostly by using the seesaw mechanism [2][3][4][5][6] or via radiative mechanism [7]. Most of the models of neutrino mass generation assume that the neutrinos are Majorana 1 type in nature. Whether neutrinos are Dirac or Majorana type particles is still an open question. This issue can be resolved by neutrinoless double beta decay experiments [10]. However, up-to-now there is no concluding evidence from these experiments.
Recently, there has been a growing interest in models where neutrinos are assumed to be Dirac particles. Many of these models use ad hoc discrete symmetries [11][12][13][14][15][16][17][18][19][20][21] to forbid the aforementioned unnaturally small tree-level Yukawa term as well as Majorana mass terms.
However, it is more appealing to forbid all these unwanted terms utilizing simple gauge extension of the SM instead of imposing discrete or continuous global symmetries. This choice is motivated by the fact that contrary to gauge symmetries, global symmetries are known not to be respected by the gravitational interactions [22][23][24][25][26].
In this work, we extend the SM with U (1) R gauge symmetry, under which only the SM right-handed fermions are charged and the left-handed fermions transform trivially. This realization is very simple in nature and has several compelling features to be discussed in great details. Introducing only the three right-handed neutrinos all the gauge anomalies can be canceled and U (1) R symmetry can be utilized to forbid all the unwanted terms to build desired models of Dirac neutrino mass. Within this framework, by employing the U (1) R symmetry we construct a tree-level Dirac seesaw model [27] and two models where neutrino mass appears at the one-loop level. One of these loop models presented in this work is the most minimal model of radiative Dirac neutrino mass [28] and the second model uses the scotogenic mechanism [29] that links two seemingly uncorrelated phenomena: neutrino mass with Dark Matter (DM). As we will discuss, the stability of the DM in the latter scenario is a consequence of a residual Z 2 discrete symmetry that emerges from the spontaneous breaking of the U (1) R gauge symmetry.
Among other simple possibilities, one can also extend the SM with U (1) B−L gauge sym-Quarks Q Li (3, 2, 1 6 , 0) u Ri (3, 1, 2 3 , R H ) d Ri (3, 1 Dirac neutrino mass models in details, along with the particle spectrum and charge assignments. In Section 4, we discuss the running of the U (1) R coupling. Charged lepton flavor violating processes are analyzed in Section 5. We have also done the associated dark matter phenomenology in Section 6 for the scotogenic model. Furthermore, we analyze the collider implications in Section 7. In Section 8, we study the constraints from cosmological measurement and finally, we conclude in Section 9.
Framework
Our framework is a very simple extension of the SM: an abelian gauge extension under which only the right-handed fermions are charged. Such a charge assignment is anomalous, however, all the gauge anomalies can be canceled by the minimal extension of the SM with just three right-handed neutrinos. Within this framework the minimal choice to generate the charged fermion masses is to utilize the already existing SM Higgs doublet, hence the associated Yukawa couplings have the form:
L Y ⊃ y u Q L Hu R + y d Q L Hd R + y e L L H R + h.c. (2.1)
As a result, the choice of the U (1) R charges of the right-handed fermions of the SM must be universal and obey the following relationship:
R u = −R d = −R = R H . (2.2)
Here R k represents the U (1) R charge of the particle k. Hence, all the charges are determined once R H is fixed, which can take any value. The anomaly is canceled by the presence of the right-handed neutrinos that in general can carry non-universal charge under U (1) R . Under the symmetry of the theory, the quantum numbers of all the particles are shown in Table I.
In our set-up, all the anomalies automatically cancel except for the following two:
[U (1) R ] : R ν 1 + R ν 2 + R ν 3 = 3R H ,(2.
3)
[U (1) R ] 3 : R 3 ν 1 + R 3 ν 2 + R 3 ν 3 = 3R 3 H . (2.4)
This system has two different types of solutions. The simplest solution corresponds to the case of flavor universal charge assignment that demands: R ν1,2,3 = R H which has been studied in the literature [38][39][40][41][42]. In this work, we adopt the alternative choice of flavor non-universal solution and show that the predictions and phenomenology of this set-up can be very different from the flavor universal scenario. We compare our model with the other U (1) R extensions, as well as U (1) B−L extensions of the SM. As already pointed out, a different charge assignment leads to distinct phenomenology in our model and can be distinguished in the neutrino and collider experiments.
Since SM is a good symmetry at the low energies, U (1) R symmetry needs to be broken around O(10) TeV scale or above. We assume that U (1) R gets broken spontaneously by the VEV of a SM singlet χ(1, 1, 0, R χ ) that must carry non-zero charge (R χ = 0) under U (1) R .
As a result of this symmetry breaking, the imaginary part of χ will be eaten up by the corresponding gauge boson X µ to become massive. Since EW symmetry also needs to break down around the O(100) GeV scale, one can compute the masses of the gauge bosons from the covariant derivatives associated with the SM Higgs H and the SM singlet scalar χ:
D µ H = (∂ µ − igW µ − ig Y H B µ − ig R R H X µ ) H, (2.5) D µ χ = (∂ µ − ig R R χ X µ ) χ. (2.6)
As a consequence of the symmetry breaking, the neutral components of the gauge bosons will all mix with each other. Inserting the following VEVs:
H = 0 v H √ 2 , χ = v χ √ 2 , (2.7)
one can compute the neutral gauge boson masses as:
B W 3 X v 2 H 4 g 2 −g g 2g g R R H −g g g 2 −2gg R R χ 2g g R R H −2gg R R χ 4g 2 R R 2 H (1 + r 2 v ) B W 3 X . (2.8)
Where, r v = Rχvχ R H v H and the well-known relation tan θ w = g /g and furthermore v H = 246 GeV. In the above mass matrix denoted by M 2 , one of the gauge bosons remains massless, which must be identified as the photon field, A µ . Moreover, two massive states appear which are the SM Z-boson and a heavy Z -boson (M Z < M Z ). The corresponding masses are given by:
M Z,Z = gv H 2c w 1 2 1 + r 2 X c 2 w (1 + r 2 v ) ∓ r X c w sin(2θ X ) 1 2 ,(2.9)
here we define:
r X = (2g R R H )/g, (2.10) sin(2θ X ) = 2r X c w [2r X c w ] 2 + [(1 + r 2 v )r 2 X c 2 w − 1] 2 1 2 . (2.11)
Which clearly shows that for g R = 0, the mass of the SM gauge boson is reproduced:
M SM Z = 1 2 v H (g 2 + g 2 ) 1/2 = 1 2 gv H /c w .
To find the corresponding eigenstates, we diagonalize the mass matrix as:
M 2 = U † M 2 diag U * , with: B W 3 X = U A Z Z , U = c w −s w c X s w s X s w c w c X −c w s X 0 s X c X .(2.|∆M Z | = M SM Z 1 − r 2 v 1 + r 2 v ≤ 2.1 MeV.(2.L ⊃ g ψ ψγ µ Z µ ψ.
(2.14)
The couplings g ψ of all the fermions in our theory are collected in Table II and will be useful for our phenomenological study performed later in the text. Note that the couplings of the left-handed SM fermions are largely suppressed compared to the right-handed ones, since they are always proportional to sin θ X and θ X must be small and is highly constrained by the experimental data.
Based on the framework introduced in this section, we construct various minimal models of Dirac neutrino masses in Sec 3 and study various phenomenology in the subsequent sections.
Fermion, ψ
Coupling, g ψ Quarks
g u L = − 1 6 g c w (1 + 2c 2w )s X g d L = 1 6 g c w (2 + c 2w )s X g u R = 2 3 g c w s 2 w s X + g R c X R H g d R = − 1 3 g c w s 2 w s X − g R c X R H Leptons g ν L = − 1 2 g c w s X g L = 1 2 g c w c 2w s X g R = − g c w s 2 w s X − g R c X R H g ν R i = g R c X R ν i Vector-like fermions g N = g R c X R N
Dirac Neutrino Mass Models
By adopting the set-up as discussed above in this section, we construct models of Dirac neutrino masses. Within this set-up, if the solution R ν i = R H is chosen which is allowed by the anomaly cancellation conditions, then tree-level Dirac mass term y ν v H ν L ν R is allowed and observed oscillation data requires tiny Yukawa couplings of order y ν ∼ 10 −11 . This is expected not to be a natural scenario, hence due to aesthetic reason we generate naturally small Dirac neutrino mass by exploiting the already existing symmetries in the theory. This requires the implementation of the flavor non-universal solution of the anomaly cancellation conditions, in such a scenario U (1) R symmetry plays the vital role in forbidding the direct Dirac mass term and also all Majorana mass terms for the neutrinos.
In this section, we explore three different models within our framework where neutrinos receive naturally small Dirac mass either at the tree-level or at the one-loop level. Furthermore, we also show that the stability of DM can be assured by a residual discrete symmetry resulting from the spontaneous symmetry breaking of U (1) R . In the literature, utilizing U (1) R symmetry, two-loop Majorana neutrino mass is constructed with the imposition of an additional Z 2 symmetry in [38,39] and three types of seesaw cases are discussed, standard type-I seesaw in [40], type-II seesaw in [41] and inverse seesaw model in [42]. In constructing the inverse seesaw model, in addition to U (1) R , additional flavor dependent U(1) symmetries are also imposed in [42]. In all these models, neutrinos are assumed to be Majorana particles which is not the case in our scenario.
Tree-level Dirac Seesaw
In this sub-section, we focus on the tree-level neutrino mass generation via Dirac seesaw mechanism [27] 2 . For the realization of this scenario, we introduce three generations of vector-like fermions that are singlets under the SM: N L,R (1, 1, 0, R N ). In this model, the quantum numbers of the multiplets are shown in Table III and the corresponding Feynman diagram for neutrino mass generation is shown in Fig. 1. This choice of the particle content allows one to write the following Yukawa coupling terms relevant for neutrino mass generation:
L Y ⊃ y H L L HN R + M N N L N R + y χ N L ν R χ * + h.c. (3.15)
Here, we have suppressed the generation and the group indices. And the Higgs potential is given by:
V = −µ 2 H H † H + λ(H † H) 2 − µ 2 χ χ * χ + λ 1 (χ * χ) 2 + λ 2 H † Hχ * χ. (3.16)
When both the U (1) R and EW symmetries are broken, the part of the above Lagrangian responsible for neutrino mass generation can be written as:
L Y ⊃ ν L N L M ν,N ν R N R , M ν,N = 0 v H √ 2 y H vχ √ 2 y χ M N . (3.17)
Where M ν,N is a 6 × 6 matrix and, since ν R 1 carries a different charge we have y χ i1 = 0. The bare mass term M N of the vector-like fermions can in principle be large compared to the two VEVs, M N v H,χ , assuming this scenario the light neutrino masses are given by:
m ν ∼ v H v χ 2 y H y χ M N . (3.18)
Assuming v χ = 10 TeV, y H = y χ ∼ 10 −3 , to get m ν = 0.1 eV one requires M N ∼ 10 10 GeV.
Dirac neutrino mass generation of this type from a generic point of view without specifying the underline symmetry is discussed in [17].
In this scenario two chiral massless states appear, one of them is ν R 1 , which is a consequence of its charge being different from the other two generations. In principle, all three generations of neutrinos can be given Dirac mass if the model is extended by a second SM singlet χ (1, 1, 0, −6). When this field acquires an induced VEV all neutrinos become massive. This new SM singlet scalar, if introduced, gets an induced VEV from a cubic coupling of the form: µχ 2 χ + h.c.. Alternatively, without specifying the ultraviolet completion of the model, a small Dirac neutrino mass for the massless chiral states can be generated via the dimension-5 operator N L ν R χ χ /Λ once U (1) R is broken spontaneously.
Multiplets SU (3) C × SU (2) L × U (1) Y × U (1) R Leptons L Li (1, 2, − 1 2 , 0) Ri (1, 1, −1, −1) ν Ri (1, 1, 0, {−5, 4, 4}) Scalars H(1, 2, 1 2 , 1) χ(1, 1, 0, 3)
Vector-like fermion N L,R (1, 1, 0, 1) Table III: Quantum numbers of the fermions and the scalars in Dirac seesaw model.
Simplest one-loop implementation
In this sub-section, we consider the most minimal [28] model of radiative Dirac neutrino mass in the context of U (1) R symmetry. Unlike the previous sub-section, we do not introduce any vector-like fermions, hence neutrino mass does not appear at the tree-level. All treelevel Dirac and Majorana neutrino mass terms are automatically forbidden due to U (1) R symmetry reasons. This model consists of two singly charged scalars S + i to complete the loop-diagram and a neutral scalar χ to break the U (1) R symmetry, the particle content with their quantum numbers is presented in Table IV. With this particle content, the gauge invariant terms in the Yukawa sector responsible for generating neutrino mass are given by:
L Y ⊃ y H L L R H + y S 1 L c L L L S + 1 + y S 2 ν c R R S + 2 + h.c. (3.19)
And the complete Higgs potential is given by:
V = −µ 2 H H † H + µ 2 1 |S + 1 | 2 + µ 2 2 |S + 2 | 2 − µ 2 χ χ * χ + (µS + 2 S − 1 χ + h.c.) + λ(H † H) 2 + λ 1 |S + 1 | 4 + λ 2 |S + 2 | 4 + λ χ (χ * χ) 2 + λ 3 |S + 1 | 2 |S + 2 | 2 + λ 4 |S + 1 | 2 H † H + λ 5 |S + 2 | 2 H † H + λ 6 H † Hχ * χ. (3.20)
By making use of the existing cubic term V ⊃ µS + 2 S − 1 χ + h.c. one can draw the desired Fig. 2. The neutrino mass matrix in this model is given by:
Leptons L Li (1, 2, − 1 2 , 0) Ri (1, 1, −1, −1) ν Ri (1, 1, 0, {−5, 4, 4}) Scalars H(1, 2, 1 2 , 1) χ(1, 1, 0, 3) S + 1 (1, 1, 1, 0) S + 2 (1, 1, 1, −3)m ν ab = sin(2θ) 16π 2 ln m 2 H 2 m 2 H 1 y S 1 ai m E i y S 2 ib . (3.21)
Here θ represents the mixing between the singly charged scalars and m H i represents the mass of the physical state H + i . Here we make a crude estimation of the neutrino masses: This is the most minimal radiative Dirac neutrino mass mechanism which was constructed by employing a Z 2 symmetry in [44] and just recently in [28,33] by utilizing U (1) B−L symmetry. As a result of the anti-symmetric property of the Yukawa couplings y S 1 , one pair of chiral states remains massless to all orders, higher dimensional operators cannot induce mass to all the neutrinos. As already pointed out, neutrino oscillation data is not in conflict with one massless state.
for θ = 0.1 radian, m H 2 /m H 1 = 1.1 and y S i ∼ 10 −
Scotogenic Dirac neutrino mass
The third possibility of Dirac neutrino mass generation that we discuss in this sub-section contains a DM candidate. The model we present here belongs to the radiative scotogenic [29] class of models and contains a second Higgs doublet in addition to two SM singlets.
Furthermore, a vector-like fermion singlet under the SM is required to complete the oneloop diagram. The particle content of this model is listed in Table V and the associated loop-diagram is presented in Fig. 3. The relevant Yukawa interactions are given as follows:
Multiplets SU (3) C × SU (2) L × U (1) Y × U (1) R Leptons L Li (1, 2, − 1 2 , 0) Ri (1, 1, −1, −1) ν Ri (1, 1, 0, {−5, 4, 4}) Scalars H(1, 2, 1 2 , 1) χ(1, 1, 0, 3) S(1, 1, 0, − 7 2 ) η(1, 2, 1 2 , 1 2 ) Vector-like fermion N L,R (1, 1, 0, 1 2 )y η L L N R η + M N N L N R + y S N L ν R S + h.c. (3.22)
And the complete Higgs potential is given by:
V = −µ 2 H H † H + λ(H † H) 2 + µ 2 η η † η + λ η (η † η) 2 − µ 2 χ χ * χ + λ χ (χ * χ) 2 + µ 2 S S * S + λ S (S * S) 2 + λ 1 H † Hη † η + λ 2 H † HS * S + λ 3 H † Hχ * χ + λ 4 η † ηS * S + λ 5 η † ηχ * χ + λ 6 χ * χS * S + (λ 7 H † ηη † H + h.c.) + (λ D η † HχS + h.c.).
(3.23)
The SM singlet S and the second Higgs doublet η do not acquire any VEV and the loop-diagram is completed by making use of the quartic coupling V ⊃ λ D η † HχS + h.c..
Here for simplicity, we assume that the SM Higgs does not mix with the other CP-even states, consequently, the mixing between S 0 and η 0 originates from the quartic coupling λ D (and similarly for the CP-odd states). Then the neutrino mass matrix is given by:
m ν ab = 1 16π 2 sin θ cos θ 2 y η ai M N i y S ib F m 2 H 0 2 M 2 N i − F m 2 H 0 1 M 2 N i (3.24) − 1 16π 2 sin θ cos θ 2 y η ai M N i y S ib F m 2 A 0 2 M 2 N i − F m 2 A 0 1 M 2 N i . (3.25)
Where the mixing angle θ ( θ ) between the CP-even (CP-odd) states are given by:
θ = 1 2 sin −1 λ D v H v χ m 2 H 0 2 − m 2 H 0 1 , θ = 1 2 sin −1 λ D v H v χ m 2 A 0 2 − m 2 A 0 1 . (3.26)
For a rough estimation we assume no cancellation among different terms occurs. Then by setting m H = 1 TeV, M N = 10 3 TeV, λ D = 0.1, v χ = 10 TeV, y η,S ∼ 10 −3 one can get the correct order of neutrino mass m ν ∼ 0.1 eV.
Since ν R 1 carries a charge of −5, a pair of chiral states associated with this state remains massless. However, in this scotogenic version, unlike the simplest one-loop model presented in the previous sub-section, all the neutrinos can be given mass by extending the model further. Here just for completeness, we discuss a straightforward extension, even though this is not required since one massless neutrino is not in conflict with the experimental data. If the model defined by Table V is extended by two SM singlets χ (1, 1, 0, −6) and a S (1, 1, 0, 11 2 ), all the neutrinos will get non-zero mass. The VEV of the field χ can be induced by the allowed cubic term of the form µχ 2 χ + h.c. whereas, S does not get any induced VEV.
Here we comment on the DM candidate present in this model. As aforementioned, we do not introduce new symmetries by hand to stabilize the DM. In search of finding the unbroken symmetry, first, we rescale all the U (1) R charges of the particles in the theory given in Table V including the quark fields in such a way that the magnitude of the minimum charge is unity. From this rescaling, it is obvious that when the U (1) R symmetry is broken spontaneously by the VEV of the χ field that carries six units of rescaled charge leads to:
U (1) R → Z 6 .
However, since the SM Higgs doublet carries a charge of two units under this surviving Z 6 symmetry, its VEV further breaks this symmetry down to: Z 6 → Z 2 . This unbroken discrete Z 2 symmetry can stabilize the DM particle in our theory. Under this residual symmetry, all the SM particles are even, whereas only the scalars S, η and vectorlike fermions N L,R are odd and can be the DM candidate. Phenomenology associated with the DM matter in this scotogenic model will be discussed in Sec. 6.
Running of the U (1) R Gauge Coupling
In this section, we briefly discuss the running of the U (1) R gauge coupling g R , at the one-loop level in our framework. The associated β-function can be written as:
β R = 1 16π 2 b R g 3 R . (4.27)
Where the coefficient b R can be calculated from [45]:
b R = f i 4 3 κN g S 2 (f i ) + s i 1 6 ηS 2 (s i ). (4.28)
The first (second) sum is over the fermions (scalars), f i (s i ). Here, κ = 1/2 for Weyl fermions, N g is the number of fermion generations, η = 2 for complex scalars and S 2 are the Dynkin indices of the representations with the appropriate multiplicity factors. By solving Eq. (4.27), the Landau pole can be found straightforwardly: Fig. 4 for the three different models discussed in this work. As expected, the higher the value of g R , smaller the Λ Landau gets.
Λ Landau = µ 0 e 16π 2 2b R (g R (µ 0 )) 2 .
Lepton Flavor Violation
In this section, we pay special attention to the charged lepton flavor violation (cLFV) which is an integral feature of these Dirac neutrino mass models. These lepton flavor violating processes provide stringent constraints on TeV-scale extensions of the standard model and,
as a consequence put restrictions on the free parameters of our theories. For the first model we discussed, where neutrino masses are generated via Dirac seesaw mechanism, the cLFV decay rates induced by the neutrino mixings (cf. The cLFV decay processes α → β + γ arise from one-loop diagrams are shown in the SU (2) L singlet charged scalars (H ± 1,2 ). However, the charged scalar S ± 1 determines the chirality of the initial and final-state charged leptons to be left-handed, whereas S ± 2 mediated process fixes the chirality to be right-handed and hence there will be no interference between these two contributions. The Yukawa term y S 1 is anti-symmetric in nature, whereas y S 2 has completely arbitrary elements in the second and third rows (recall the restriction y S 2 i1 = 0). We can always make such a judicious choice that no more than one entry in a given row of y S 2 can be large and thus we can suppress the contribution from the charged scalar H ± 2 for the cLFV processes. The expression for α → β + γ decay rates can be expressed as 3 : Similarly, we analyze the major cLFV processes in scotogenic Dirac neutrino mass model.
Γ ( α → β + γ) = α 4 (16π 2 ) 2 m 5 α 144 cos 2 θ m 2 H 1 + sin 2 θ m 2 H 2 2 y S 1 iα y S 1 * iβ 2 + sin 2 θ m 2 H 1 + cos 2 θ m 2 H 2 2 y S 2 iα y S 2 * iβ 2 .
The representative Feynman diagram for the cLFV process α → β + γ is shown in Fig. 5 (right diagram). Here also, charged Higgs H ± , which is the part of the SU (2) L doublet η, mainly contributes to the cLFV process α → β + γ (cf. Fig. 5). The decay rate for α → β + γ solely depends on the two mass terms m H + , m N and Yukawa term y η . The decay width expression for this process can be written as:
Γ (l α → l β + γ) = α 4 y η αi y η * βi 2 (16π 2 ) 2 m 2 α − m 2 β 3 m 2 α + m 2 β m 3 α m 4 H + [f B (t)] 2 . (5.31)
Here, t = m 2 F /m 2 B , and the function f B (t) is expressed as [46,47] f
B (t) = 2t 2 + 5t − 1 12(t − 1) 3 − t 2 log t 2(t − 1) 4 . (5.32)
In Fig. 7, we have shown the branching ratio predictions for the different cLFV processes:
µ → e + γ (top left), τ → e + γ (top right) and τ → µ + γ (bottom) as a function of mass (m H + ) in scotogenic one-loop Dirac neutrino mass model for three benchmark values of Yukawas: y η αi y η * βi = 10 −1 , 10 −2 and 10 −3 . For our analysis, we set the vector-like fermion mass m N to be 5 TeV. The µ → eγ process imposes the most stringent bounds. In this setup, for the Yukawas: y η αi y η * βi = 10 −1 , 10 −2 and 10 −3 , we get charged Higgs mass bounds to be m H + = 3.1 TeV, 4.6 TeV and 5 TeV respectively. As we can see from Fig. 7, most of the parameter space in this model is well-consistent with these cLFV processes and which can be testable at the future experiments. We have shown the future projection reach for these cLFV processes by red dashed lines in Fig. 7.
Dark Matter Phenomenology
In this section, we briefly discuss the Dark Matter phenomenology in the scotogenic Dirac neutrino mass model. As aforementioned, in this model, a Z 2 subgroup of the original U (1) R symmetry remains unbroken that can stabilize the DM particle. Under this residual symmetry, all the SM particles are even, whereas only the scalars S, η and vector-like Dirac fermion Ref. [52] in a different set-up and corresponding study has been done for the neutral singlet scalar, S in Ref. [53,54]. In the following analysis, we consider N 1 to be the lightest among all of these particles, hence serves as a good candidate for DM (for simplicity we will drop the subscript from N 1 in the following). We aim to study the DM phenomenology associated with the vector-like Dirac fermion N L,R here. Due to Dirac nature of the dark matter, the phenomenology associated with it is very different from the Majorana fermionic dark matter scenario [55].
In our case, N pairs can annihilate through s-channel Z exchange process to a pair of SM fermions and right-handed neutrinos. Furthermore, if m DM > m Z , then N may also annihilate directly into pairs of on-shell Z bosons, which subsequently decay to SM fermions. It can also annihilate to SM fermions and right-handed neutrinos via t− channel scalar (S, η 0 , η + ) exchanges. The representative Feynman diagrams for the annihilation of DM particle are shown in Fig. 8. It is important to mention that for the Majorana fermionic dark matter case, the annihilation rate is p− wave (∼ v 2 ) suppressed since the vector coupling to a self-conjugate particle vanishes, on the contrary, the annihilation rate is not suppressed for the Dirac scenario (s-wave). The non-relativistic form for this annihilation cross-section can be found here [58]. In Fig. 9, we analyze the dark matter relic abundance as a function of dark matter mass m DM for various gauge couplings g R (left) and Z boson masses (right). In addition to the relic density, we also take into account the constraints from DM direct detection experiments. In case of Majorana fermionic dark matter, at the tree-level, the spin-independent DM-nucleon scattering cross-section vanishes. However, at the loop-level, the spin-independent operators can be generated and hence it is considerably suppressed.
The dominant direct detection signal remains the spin-dependent DM-nucleon scattering cross-section which for the Majorana fermionic dark matter is four times that for the Diracfermionic dark matter case. In general, the Z interactions induce both spin-independent (SI) and spin-dependent (SD) scattering with nuclei. The representative Feynman diagram for the DM-nucleon scattering is shown in Fig. 10. Particularly, in the scotogenic Dirac neutrino mass model, DM can interact with nucleon through t− channel Z exchange. Hence, Figure 11: Spin-independent dark matter-nucleon scattering cross-section, σ (in pb) as a function of the dark matter mass m DM with different gauge coupling g R = 0.2, 0.277.
Here we set m Z = 10 TeV. Yellow, blue and green color solid lines represent current direct detection cross-section limit from LUX-2017 [59], XENON1T [60] and PandaX-II (2017) [61] experiment respectively. large coherent spin-independent scattering may occur since both dark matter and the valence quarks of nucleons possess vector interactions with Z and this process is severely constrained by present direct detection experiment bounds. The DM-nucleon scattering cross-section is estimated in Ref. [58]. In Fig. 11, we analyze the spin-independent dark matter-nucleon
Collider Implications
Models with extra U (1) R implies a new Z neutral boson, which contains a plethora of phenomenological implications at colliders. Here we mainly focus on the phenomenology of the heavy gauge boson Z emerging from U (1) R .
Constraint on Heavy Gauge Boson Z from LEP
There are two kinds of Z searches: indirect and direct. In case of indirect searches, one can look for deviations from the SM which might be associated with the existence of a new gauge boson Z . This generally involves precision EW measurements below and above the Z-pole. e + e − collision at LEP experiment [62] above the Z boson mass provides significant constraints on contact interactions involving e + e − and fermion pairs. One can integrate out the new physics and express its influence via higher-dimensional (generally dim-6) operators.
For the process e + e − → ff , contact interactions can be parameterized by an effective
Lagrangian, L ef f , which is added to the SM Lagrangian and has the form:
L ef f = 4π Λ 2 (1 + δ ef ) i,j=L,R η f ij (ē i γ µ e i )(f j γ µ f j ).L ef f = 1 1 + δ ef g 2 R M 2 Z (ēγ µ P R e)(f γ µ P R f ). (7.34)
Due to the nature of U (1) R gauge symmetry, the above interaction favors only the righthanded chirality structure. Thus, the constraint on the scale of the contact interaction for the process e + e − → l + l − from LEP measurements [62] will indirectly impose bound on Z mass and the gauge coupling (g R ) that can be translated into:
M Z g R 3.59 TeV. (7.35)
Other processes such as e + e − → cc and e + e − → bb impose somewhat weaker bounds than the ones quoted in Eq. 7.35. on the Z mass and U (1) R coupling constant g R in our model as the production cross-section solely depends on these two free parameters. Throughout our analysis, we consider that the mixing Z −Z angle is not very sensitive (s X = 0). In order to obtain the constraints on these parameter space, we use the dedicated search for new resonant high-mass phenomena in dielectron and di-muon final states using 36.1 fb −1 of proton-proton collision data, collected at √ s = 13 TeV by the ATLAS collaboration [63]. The searches for high mass phenomena in dijet final states [64] will also impose bound on the model parameter space, but it is somewhat weaker than the di-lepton searches due to large QCD background. For our analysis, we implement our models in FeynRules_v2.0 package [65] and simulate the events for the process pp → Z → e + e − (µ + µ − ) with MadGraph5_aMC@NLO_v3_0_1 code [66]. Then, using parton distribution function (PDF) NNPDF23_lo_as_0130 [67], the cross-section and cut efficiencies are estimated. Since no significant deviation from the SM prediction is observed in experimental searches [63] for high-mass phenomena in di-lepton final states, the upper limit on the cross-section is derived from the experimental analyses [63] using σ×
Heavy Gauge Boson Z at the LHC
BR = N rec /(A × × Ldt),
where N rec is the number of reconstructed heavy Z candidate, σ is the resonant production cross-section of the heavy Z , BR is the branching ratio of Z decaying into di-lepton final states , A × is the acceptance times efficiency of the cuts for the analysis. In Fig. 12 In Fig. 13, we have shown all the current experimental bounds in M Z − g R plane. Red meshed zone is excluded from the current experimental di-lepton searches [63]. The cyan meshed zone is forbidden from the LEP constraint [62] and the blue meshed zone is excluded from the limit on SM Z boson mass correction: 1 3 M Z /g R > 12.082 TeV as aforementioned. We can see from Fig. 13 that the most stringent bound in M Z − g R plane is coming from direct Z searches at the LHC. After imposing all the current experimental bounds, we analyze the future discovery prospect of this heavy gauge boson Z within the allowed parameter space in M Z − g R plane looking at the prompt di-lepton resonance signature at the LHC. We find that a wider region of parameter space in M Z − g R plane can be tested at the future collider experiment. Black, green, purple and brown dashed lines represent the projected discovery reach at 5σ significance at 13 TeV LHC for 100 fb −1 , 300 fb −1 , 500 fb −1 and 1 ab −1 luminosities. On the top of that, the right-handed chirality structure 5 For related works see also [68,69].
of U (1) R can be investigated at the LHC by measuring Forward-Backward (FB) and top polarization asymmetries in Z → tt mode [70] and which can discriminate our U (1) R Z interaction from the other Z interactions in U (1) B−L model. The investigation of other exotic decay modes (N N , χχ, S + 2 S − 2 ) of heavy Z is beyond the scope of this article and shall be presented in a future work since these will lead to remarkable multi-lepton or displaced vertex signature [71][72][73][74][75][76][77] at the colliders.
Heavy Gauge Boson Z at the ILC
Due to the point-like structure of leptons and polarized initial and final state fermions, lepton colliders like ILC will provide much better precision of measurements. The purpose of the Z search at the ILC would be either to help identifying any Z discovered at the LHC or to extend the Z discovery reach (in an indirect fashion) following effective interaction.
Even if the mass of the heavy gauge boson Z is too heavy to directly probe at the LHC, we will show that by measuring the process e + e − → f + f − , the effective interaction dictated by Eq. 7.34 can be tested at the ILC. Furthermore, analysis with the polarized initial states at ILC can shed light on the chirality structure of the effective interaction and thus it can distinguish between the heavy gauge boson Z emerging from U (1) R extended model and the Z from other U (1) extended model such as U (1) B−L . The process e + e − → f + f − typically exhibits asymmetries in the distributions of the final-state particles isolated by the angularor polarization-dependence of the differential cross-section. These asymmetries can thus be utilized as a sensitive measurement of differences in interaction strength and to distinguish a small asymmetric signal at the lepton colliders. In the following, the asymmetries (Forward-Backward asymmetry, Left-Right asymmetry) related to this work will be described in great detail.
Forward-Backward Asymmetry
The differential cross-section in Eq. 7.44 is asymmetric in polar angle, leading to a difference of cross-sections for Z decays between the forward and backward hemispheres. Earlier, LEP experiment [62] used Forward-backward asymmetries to measure the difference in the interaction strength of the Z-boson between left-handed and right-handed fermions, which gives a precision measurement of the weak mixing angle. Here we will show that our framework leads to sizable and distinctive Forward-Backward (FB) asymmetry discriminating from other models and which can be tested at the ILC, since only the right-handed fermions carry non-zero charges under the U (1) R . For earlier analysis of FB asymmetry in the context of other models as well as model-independent analysis see for example Refs. [40,42,[78][79][80][81][82][83][84][85][86][87][88]. At the ILC, Z effects have been studied for the following processes:
e − (k 1 , σ 1 ) + e + (k 2 , σ 2 ) → e − (k 3 , σ 3 ) + e + (k 4 , σ 4 ), (7.36) e − (k 1 , σ 1 ) + e + (k 2 , σ 2 ) → µ − (k 3 , σ 3 ) + µ + (k 4 , σ 4 ), (7.37) e − (k 1 , σ 1 ) + e + (k 2 , σ 2 ) → τ − (k 3 , σ 3 ) + τ + (k 4 , σ 4 ),(7.38)
where σ i = ±1 are the helicities of initial (final)-state leptons and k i 's are the momenta.
Since the e + e − → µ + µ − process is the most sensitive one at the ILC, we will focus on this process only for the rest of our analysis. One can write down the corresponding helicity amplitudes as: where s = (k 1 + k 2 ) 2 = (k 3 + k 4 ) 2 , s Z = s − m 2 Z + im Z Γ Z , and cos θ indicates the scattering polar angle. e 2 = 4πα with α = QED coupling constant, c R = tan θ W and c L = − cot 2θ W and θ W is the weak mixing angle.
M(+ − +−) = −e 2 (1 + cos θ) 1 + c 2 R s s Z + 4s α(Λ e R ) 2 ,(7.
For a purely polarized initial state, the differential cross-section is expressed as:
dσ σ 1 σ 2 d cos θ = 1 32πs σ 3 ,σ 4 M {σ i } 2 . (7.43)
Then the differential cross-section for the partially polarized initial state with a degree of polarization P e − for the electron beam and P e + for the positron beam can be written as [40,78]:
dσ(P e − , P e + ) d cos θ = 1 + P e − 2 1 + P e + 2 dσ ++ d cos θ + 1 + P e − 2 1 − P e + 2 dσ +− d cos θ + 1 − P e − 2 1 + P e + 2 dσ −+ d cos θ + 1 − P e − 2 1 − P e + 2 dσ −− d cos θ . (7.44)
One can now define polarized cross-section σ L,R (for the realistic values at the ILC [89]) as:
dσ R d cos θ = dσ(0.8, −0.3) d cos θ , (7.45) dσ L d cos θ = dσ(−0.8, 0.3) d cos θ ,(7.46)
Using this one can study the initial state polarization-dependent forward-backward asymmetry as: (7.48) where L represents the integrated luminosity, indicates the efficiency of observing the events, and c max is a kinematical cut chosen to maximize the sensitivity. For our analysis we consider = 1, and c max = 0.95. Then we estimate the sensitivity to Z contribution by:
A F B (σ L,R ) = N F (σ L,R ) − N B (σ L,R ) N F (σ L,R ) + N B (σ L,R ) , where N F (σ L,R ) = L cmax 0 d cos θ dσ (σ L,R ) d cos θ , (7.47) N B (σ L,R ) = L 0 −cmax d cos θ dσ (σ L,R ) d cos θ ,∆A F B (σ L,R ) = |A SM +Z F B (σ L,R ) − A SM F B (σ L,R ) |,(7.49)
where A SM +Z 2σ sensitivity for FB asymmetry by looking at e + e − → µ + µ − process at the ILC. We can also expect much higher sensitivity while combining different final fermionic states such as other leptonic modes (e + e − , τ + τ ) as well as hadronic modes jj. Moreover, the sensitivity to Z interactions can be enhanced by analyzing the scattering angular distribution in details, although it is beyond the scope of our paper.
Left-Right Asymmetry
The simplest example of the EW asymmetry for an experiment with a polarized electron beam is the left-right asymmetry A LR , which measures the asymmetry at the initial vertex.
Since there is no dependence on the final state fermion couplings, one can get an advantage by looking at LR asymmetry at lepton collider. Another advantage of this LR asymmetry measurement is that it is barely sensitive to the details of the detector. As long as at each value of cos θ, its detection efficiency of fermions is the same as that for anti-fermions, the efficiency effects should be canceled within the ratio because the Z decays into a back-toback fermion-antifermion pair and about the midplane perpendicular to the beam axis, the detector was designed to be symmetric. For earlier studies on LR asymmetry in different contexts, one can see for example Refs. [78][79][80][81][82][83][84][85][86][87][88]90]. LR asymmetry is defined as: where N L is the number of events in which initial-state particle is left-polarized, while N R is the corresponding number of right-polarized events.
A LR = N L − N R N L + N R ,N L = L cmax −cmax d cos θ dσ L d cos θ , (7.51) N R = L cmax −cmax d cos θ dσ R d cos θ . (7.52)
Similarly, one can estimate the sensitivity to Z contribution in LR asymmetry by [79,82,90]:
∆A LR = |A SM +Z LR − A SM LR |,(7.53)
with a statistical error of the asymmetry δA LR , given [79,82,90] as
δA LR = 1 − (A SM LR ) 2 N SM L + N SM R .
(7.54)
In Fig. 15, we analyze the strength of the LR asymmetry ∆A_LR for the e⁺e⁻ → µ⁺µ⁻ process as a function of the VEV v_χ (= M_Z′/3g_R). In order to distinguish the Z′ interaction, we have analyzed both cases: a Z′ emerging from U(1)_R and from U(1)_{B−L}. We have considered the center-of-mass energy for the ILC at √s = 500 GeV and the integrated luminosities quoted in the caption of Fig. 15.

The contribution of the right-handed neutrinos to the relativistic degrees of freedom is parametrized by ∆N_eff, and to compute it we follow the procedure discussed in Ref. [91]. After the ν_R states decouple, specifically for T < T^{ν_L}_dec < T^{ν_R}_dec (where T^{ν_{L/R}}_dec represents the decoupling temperature of the ν_{L/R} neutrinos), their total contribution is given by:

$$\Delta N_{eff} = N_{\nu_R} \left( \frac{g(T^{\nu_L}_{dec})}{g(T^{\nu_R}_{dec})} \right)^{4/3}, \quad (8.55)$$

where N_{ν_R} is the number of massless or light right-handed neutrinos and g(T) is the relativistic degrees of freedom at temperature T, with the well-known quantities g(T^{ν_L}_dec) = 43/4 and T^{ν_L}_dec = 2.3 MeV [92]. For the following computation, we take the temperature-dependent degrees of freedom from the data listed in Table S2 of Ref. [93], and by utilizing the cubic spline interpolation method, we present g as a function of T in Fig. 17 (left plot).
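As a worked example of Eq. (8.55): with three right-handed neutrinos and an assumed decoupling temperature above the electroweak scale, where g ≈ 106.75, one obtains ∆N_eff ≈ 0.14, comfortably below the current bound of 0.285 quoted below.

```python
# Worked example of Eq. (8.55); g_nuR_dec is an assumed value for illustration.
N_nuR = 3
g_nuL_dec = 43.0 / 4.0     # well-known value at T^{nuL}_dec = 2.3 MeV [92]
g_nuR_dec = 106.75         # assumed: decoupling above the EW scale
dNeff = N_nuR * (g_nuL_dec / g_nuR_dec) ** (4.0 / 3.0)
print(f"Delta N_eff ~ {dNeff:.3f}")   # ~0.14, below the current bound 0.285
```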
The current cosmological measurement of this quantity is N_eff = 2.99^{+0.34}_{−0.33} [94], which is completely consistent with the SM prediction N^{SM}_eff = 3.045 [95]. These data limit the contribution of the right-handed neutrinos to ∆N_eff < 0.285. However, future measurements [96] can put even tighter constraints on this deviation, ∆N_eff < 0.06. The right-handed neutrinos decouple from the thermal bath when the interaction rate drops below the expansion rate of the Universe:
Γ (T ν R dec ) = H (T ν R dec ) . (8.56)
Here the Hubble expansion parameter is defined as:

$$H^2(T) = \frac{4\pi^3 T^4}{45 M_{Pl}^2} \left[ g(T) + N_{\nu_R} \frac{7}{8} g_{\nu_R} \right], \quad (8.57)$$

where M_Pl is the Planck mass and g_{ν_R} = 2 is the spin degrees of freedom of the right-handed neutrinos. And the interaction rate that keeps the right-handed neutrinos in the thermal bath is given by:

$$\Gamma(T) = \sum_f \frac{g^2_{\nu_R}}{n_{\nu_R}(T)} \int \frac{d^3p}{(2\pi)^3} \frac{d^3q}{(2\pi)^3}\, f_{\nu_R}(p)\, f_{\nu_R}(q)\, \sigma_f(s)\, v. \quad (8.58)$$
Here, the Fermi-Dirac distribution is f_{ν_R}(p) = 1/(e^{p/T} + 1), the number density is n_{ν_R} = (3/(2π²)) ζ(3) T³, s = 2pq(1 − cos θ) and v = 1 − cos θ. Furthermore, the annihilation cross-section σ(ν_R ν̄_R → f_i f̄_i) is as follows:
$$\sigma_f(s) = \frac{N^f_C\, Q_f^2\, g_R^4}{12\pi \sqrt{s}}\, \frac{\sqrt{s - 4m_f^2}\,(s + 2m_f^2)}{(s - M_{Z'}^2)^2 + \Gamma_{Z'}^2 M_{Z'}^2}, \quad (8.59)$$

where N^f_C and Q_f represent the color degrees of freedom and the charge under U(1)_R for a fermion f, respectively.
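The sketch below illustrates how the decoupling condition of Eq. (8.56) can be solved numerically, using Eq. (8.57) for H(T) and a simplified thermal average of the cross-section of Eq. (8.59) far below the Z′ pole (s ≪ M_Z′²), where σ_f ∝ s/M_Z′⁴. The constant g_star and the O(1) factor KAPPA lumping colors and charges are assumptions; the paper instead interpolates g(T) from the tabulated data of Ref. [93].

```python
import math
from scipy.optimize import brentq

M_PL  = 1.22e19   # Planck mass in GeV
M_ZP  = 5.0e3     # assumed Z' mass in GeV
G_R   = 0.1       # assumed U(1)_R gauge coupling
KAPPA = 10.0      # assumed sum over final-state color/charge factors

def hubble(T, g_star=100.0):
    # Eq. (8.57) with g(T) + (7/8) N_nuR g_nuR ~ g_star held constant (assumption)
    return math.sqrt(4.0 * math.pi**3 / 45.0 * g_star) * T**2 / M_PL

def gamma(T):
    # Gamma ~ n_nuR * <sigma v>, with <s> ~ (3.15 T)^2 and, far below the pole,
    # sigma_f ~ KAPPA g_R^4 s / (12 pi M_Z'^4)  (simplified from Eq. (8.59))
    n_nuR = 3.0 * 1.2020569 / (2.0 * math.pi**2) * T**3   # zeta(3) T^3 density
    sv = KAPPA * G_R**4 * (3.15 * T)**2 / (12.0 * math.pi * M_ZP**4)
    return n_nuR * sv

T_dec = brentq(lambda T: gamma(T) - hubble(T), 1e-3, 1e3)  # in GeV
print(f"T_dec ~ {T_dec:.2f} GeV")
```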
Conclusions
We believe that the scale of new physics is not far from the EW scale and that a simple extension of the SM should be able to address a few of its unsolved problems. Adopting this view, in this work we have explored one of the most minimal gauge extensions of the SM, U(1)_R, which is responsible for generating Dirac neutrino masses and may also stabilize the DM particle. Cancellation of the gauge anomalies is guaranteed by the presence of the right-handed neutrinos, which pair up with their left-handed partners to form Dirac neutrinos. Furthermore, this U(1)_R symmetry is sufficient to forbid all the unwanted terms for constructing naturally light Dirac neutrino mass models, without imposing any additional symmetries by hand.

The chiral non-universal structure of our framework induces asymmetries, such as the forward-backward asymmetry and especially the left-right asymmetry, that are very distinct compared to any other U(1) models. By performing detailed phenomenological studies of the associated gauge boson, we have derived the constraints on the U(1)_R model parameter space and analyzed the prospects for its testability at colliders such as the LHC and the ILC. We have shown that a heavy Z′ (emerging from U(1)_R), even if its mass is substantially higher than the center-of-mass energy available at the ILC, would manifest itself at tree level through its propagator effects, producing sizable contributions to the LR or FB asymmetries. This can be taken as an initial guide to explore the U(1)_R model at colliders. These models can also lead to large lepton-flavor-violating observables, which we have studied and which provide a complementary test of these models.

In this work, we have also analyzed the possibility of a viable Dirac fermionic DM candidate, stabilized by the residual discrete symmetry originating from U(1)_R, which connects to the SM via the Z′ portal coupling in a framework that also caters for neutrino mass generation. The DM phenomenology is shown to be crucially dictated by the interaction of N with the Z′. Furthermore, we have inspected the constraints coming from cosmological measurements and compared them with the various collider bounds.

For comparison, we provide a benchmark point by fixing the gauge coupling g_R = 0.056. With this, the current lower bound on the Z′ mass is M_Z′ > 4.25 TeV from the 13 TeV LHC data with 36.1 fb⁻¹ luminosity, and the future projection translates into a reach of M_Z′ > 4.67 TeV with 100 fb⁻¹. For the same value of the gauge coupling, the ILC has a discovery reach of 4.63 TeV at the 2σ confidence level from the left-right asymmetry. The corresponding bounds from LEP, the Z-boson mass correction and cosmology are M_Z′ > 0.2, 2 and 1.49 TeV, respectively, which are somewhat weaker than the LHC and ILC bounds. To summarize, the presented Dirac neutrino mass models are well motivated and have rich phenomenology.
Figure 1: Representative Feynman diagram for the tree-level Dirac seesaw.
[...] one gets the correct order of neutrino mass, m_ν = 0.1 eV.
Figure 2: Representative Feynman diagram for the simplest one-loop Dirac neutrino mass.
Figure 3: Representative Feynman diagram for the scotogenic Dirac neutrino mass model.
Figure 4: Possible presence of Landau poles associated with the U(1)_R gauge coupling running. For this plot, we have fixed µ_0 = 10 TeV. Red, gray and blue lines correspond to the Dirac seesaw, simplest one-loop and scotogenic models, respectively. The scale of the Landau pole depends on the value of the coupling g_R at the input scale µ_0. Depending on the choice, both the Λ_Landau < M_Planck and Λ_Landau > M_Planck scenarios can emerge.

Utilizing the basic set-up defined in Sec. 2, we have constructed three different models in Sec. 3, which correspond to three different coefficients b_R = {179/3, 56, 731/12} for the Dirac seesaw, simplest one-loop, and scotogenic models, respectively. For demonstration purposes, we choose µ_0 = 10 TeV and show the scale Λ_Landau as a function of the gauge coupling in Fig. 4.
The cLFV processes in the tree-level Dirac seesaw model (cf. Fig. 5) are highly suppressed by the requirement that the scale of new physics (the vector-like fermions N_{L,R}) is at 10^15 GeV to satisfy the neutrino oscillation data, with Yukawa couplings of order one, and hence are well below the current experimental bounds. Here, we can therefore safely ignore the cLFV processes associated with the Dirac seesaw model. On the other hand, in the simplest one-loop Dirac neutrino mass model and in the scotogenic model, several new contributions appear due to the charged scalars (cf. Fig. 5), which could lead to sizable cLFV rates.
Figure 5: Representative one-loop Feynman diagrams contributing to ℓ_α → ℓ_β + γ processes mediated by charged bosons in the minimal tree-level Dirac seesaw model (left), the simplest one-loop Dirac neutrino mass model (middle) and the scotogenic Dirac neutrino mass model (right).
Figure 6: Contour plots of the branching-ratio predictions for the processes µ → e + γ (top left), τ → e + γ (top right) and τ → µ + γ (bottom) in the mass (m_{H_1})-Yukawa plane of the simplest one-loop Dirac neutrino mass model. Red solid lines indicate the current bounds on the branching ratios and red dashed lines indicate the future projected bounds.
[...] Fig. 5. Let us now focus on the major cLFV processes ℓ_α → ℓ_β + γ in the simplest one-loop Dirac neutrino mass model. Processes of these types are most dominantly mediated by both [...]
Figure 7: Branching-ratio predictions for the processes µ → e + γ (top left), τ → e + γ (top right) and τ → µ + γ (bottom) as a function of the mass m_{H^+} in the scotogenic one-loop Dirac neutrino mass model, for three benchmark values of the Yukawas: y^η_{αi} y^{η*}_{βi} = 10⁻¹, 10⁻² and 10⁻³. Red solid lines indicate the current bounds on the branching ratios and red dashed lines indicate the future projected bounds.

In Fig. 6, we have shown the contour plots of the branching-ratio predictions for the cLFV processes µ → e + γ (top left), τ → e + γ (top right) and τ → µ + γ (bottom) in the mass (m_{H_1})-Yukawa y^{S_1}_{iα} y^{S_1*}_{iβ} plane of the simplest one-loop Dirac neutrino mass model.
Figure 8: Representative Feynman diagrams for the annihilation of the DM particle.

[...] N_{L,R} are odd, and the lightest among these can be the DM candidate. The DM phenomenology associated with the neutral component of the inert scalar doublet η is extensively studied in [...]
Figure 9: Dark matter relic abundance as a function of the dark matter mass m_DM for various gauge couplings g_R (left) and Z′ boson masses (right). For simplicity, we set m_Z′ = 10 TeV (left) and g_R = 0.1 (right). Horizontal red and blue lines represent the WMAP [56] relic density constraint 0.094 ≤ Ω_DM h² ≤ 0.128 and the PLANCK constraint 0.112 ≤ Ω_DM h² ≤ 0.128 [57], respectively.
For simplicity, we set m_Z′ = 10 TeV for the left panel and provide the relic abundance prediction for two different values of the gauge coupling (g_R = 0.1 and 0.277). For the right plot in Fig. 9, the DM relic abundance is analyzed for two different values of the Z′ mass, m_Z′ = 10 and 20 TeV, setting g_R = 0.1. As expected, we can satisfy the WMAP [56] relic density constraint 0.094 ≤ Ω_DM h² ≤ 0.128 and the PLANCK constraint 0.112 ≤ Ω_DM h² ≤ 0.128 [57] for most of the parameter space in our model, as long as m_DM is not too far from m_Z′/2. Throughout our DM analysis, we make sure that we are consistent with the SM Z-boson mass correction constraint when choosing specific g_R and m_Z′ values.
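A back-of-the-envelope illustration of why the relic band is reached near the Z′ resonance: the standard WIMP thumb rule Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ combined with a schematic Breit-Wigner annihilation cross-section. The DM mass, the Z′ width and the multiplicity factor kappa below are assumed values, so only the order of magnitude is meaningful.

```python
import math

g_R, M_ZP = 0.1, 10.0e3   # as used for the right panel of Fig. 9
m_DM = 4.95e3             # assumed DM mass in GeV, close to M_Z'/2
Gamma_ZP = 0.02 * M_ZP    # assumed total Z' width
kappa = 10.0              # assumed sum over final states (colors, charges)

s = 4.0 * m_DM**2         # threshold kinematics (v -> 0 limit)
bw = 1.0 / ((s - M_ZP**2)**2 + (Gamma_ZP * M_ZP)**2)    # Breit-Wigner factor
sigma_v = kappa * g_R**4 * s * bw / (4.0 * math.pi)     # schematic, in GeV^-2
sigma_v_cgs = sigma_v * 1.17e-17                        # GeV^-2 -> cm^3/s (v ~ c)
print(f"<sigma v> ~ {sigma_v_cgs:.1e} cm^3/s, Omega h^2 ~ {3e-27/sigma_v_cgs:.2f}")
```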
Figure 10: Representative Feynman diagram for the DM-nucleon scattering relevant for DM direct detection.
Figure 11: DM-nucleon scattering cross-section σ (in pb) as a function of the dark matter mass m_DM for different gauge couplings g_R = 0.2, 0.277. For this plot, we set m_Z′ = 10 TeV. Yellow, blue and green solid lines represent the current direct-detection cross-section limits from the LUX-2017 [59], XENON1T [60] and PandaX-II (2017) [61] experiments, respectively.

As can be seen from Fig. 11, we can satisfy all the present direct-detection experimental bounds as long as we are consistent with the other severe bounds on the mass m_Z′ and on g_R arising from colliders, to be discussed in the next section.
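For orientation, the spin-independent cross-section probed in Fig. 11 scales as σ_SI ∼ µ² g_R⁴/(π M_Z′⁴) for a Z′-mediated vector interaction. The snippet below evaluates this rough scaling with an assumed coherent factor of 9 for the three valence quarks and assumed masses; it is an order-of-magnitude estimate only, not the full model computation.

```python
import math

g_R, M_ZP = 0.2, 10.0e3     # gauge coupling and Z' mass (GeV), as in Fig. 11
m_DM, m_N = 1.0e3, 0.939    # assumed DM mass and nucleon mass (GeV)

mu = m_DM * m_N / (m_DM + m_N)                            # reduced mass
sigma = 9.0 * mu**2 / math.pi * (g_R**2 / M_ZP**2) ** 2   # rough scaling, GeV^-2
sigma_pb = sigma * 3.894e8                                # 1 GeV^-2 = 3.894e8 pb
print(f"sigma_SI ~ {sigma_pb:.2e} pb")
```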
[...] where Λ is the new physics scale, δ_{ef} is the Kronecker delta function, f runs over all the fermions in the model, and η takes care of the chirality-structure coefficients. The exchange of the new Z′ boson state emerging from U(1)_R can be expressed in a similar way:
[...] we analyze the physics of the heavy neutral gauge boson Z′ at the Large Hadron Collider (LHC). At the LHC, the Z′ can be resonantly produced via the quark fusion process qq̄ → Z′, since the couplings of the Z′ to the right-handed quarks (u_R, d_R) are not suppressed. Once resonantly produced at the LHC, the Z′ will decay into SM fermions and also into the exotic scalars (S₂⁺S₂⁻, χχ) or fermions (NN̄), depending on the model, if kinematically allowed.⁴ The present lack of any signal for di-lepton resonances at the LHC dictates the stringent bound [...]

⁴ Even if we include the Z′ → NN̄, S₂⁺S₂⁻, χχ decay modes, the branching fraction (∼4%) for the Z′ → e⁺e⁻/µ⁺µ⁻ mode does not change much.
Figure 12: Upper limits at 95% C.L. on the cross-section for the process pp → Z′ → l⁺l⁻ as a function of the di-lepton invariant mass, using ATLAS results at √s = 13 TeV with 36.1 fb⁻¹ integrated luminosity. The black solid line is the observed limit, whereas the green and yellow regions correspond to the 1σ and 2σ bands on the expected limits. The red solid (dashed) [dotted] line is the model-predicted cross-section for the three different values of the U(1)_R gauge coupling constant g_R = 0.5 (0.3) [0.1], respectively.
In Fig. 12, we have shown the upper limits on the cross-section at 95% C.L. for the process pp → Z′ → l⁺l⁻ as a function of the di-lepton invariant mass, using the ATLAS results [63] at √s = 13 TeV with 36.1 fb⁻¹ integrated luminosity. Red solid, dashed and dotted lines in Fig. 12 indicate the model-predicted cross-section for three different values of the U(1)_R gauge coupling constant, g_R = 0.5, 0.3, 0.1, respectively. We find that the Z′ mass should be heavier than 4.4, 3.9 and 2.9 TeV for g_R = 0.5, 0.3 and 0.1, respectively.
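The three mass limits quoted above define a crude coupling-to-limit map; the snippet below log-interpolates them to estimate the limit at intermediate couplings. This is illustrative only; a proper recast would rescan the predicted cross-section against the ATLAS limit curve.

```python
import numpy as np

g_vals = np.array([0.1, 0.3, 0.5])    # couplings quoted in the text
m_lims = np.array([2.9, 3.9, 4.4])    # corresponding limits in TeV, from the text

def m_limit(g):
    # log-linear interpolation in the coupling (crude approximation)
    return np.interp(np.log(g), np.log(g_vals), m_lims)

print(f"estimated limit at g_R = 0.2: M_Z' > {m_limit(0.2):.2f} TeV")
```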
Figure 13: The red meshed zone in the M_Z′-g_R plane indicates the region excluded by the upper limit on the cross-section for the process pp → Z′ → l⁺l⁻ at 95% C.L., using ATLAS results at √s = 13 TeV with 36.1 fb⁻¹ integrated luminosity. The cyan meshed zone is excluded by the LEP constraint. The blue meshed zone is excluded by the limit on the SM Z boson mass correction: (1/3) M_Z′/g_R > 12.082 TeV. Black, green, purple and brown dashed lines represent the projected discovery reach at 5σ significance at the 13 TeV LHC for 100 fb⁻¹, 300 fb⁻¹, 500 fb⁻¹ and 1 ab⁻¹ luminosities.
Figure 14: The strength of the FB asymmetry ∆A_FB as a function of the VEV v_χ (= M_Z′/3g_R) for both left- and right-handed polarized cross-sections of the e⁺e⁻ → µ⁺µ⁻ process at the ILC. The red dashed (solid) line represents ∆A_FB for the U(1)_R case for the left- (right-) handed polarized cross-section, whereas the blue dotted (dashed) line indicates ∆A_FB for the U(1)_{B−L} case. Here, we set the COM energy of the ILC at √s = 500 GeV with 1 ab⁻¹ (left) and 5 ab⁻¹ (right) integrated luminosity. The horizontal solid black lines correspond to the 1σ and 2σ (2σ and 3σ) sensitivities for the left (right) figure, and the grey shaded region corresponds to the region excluded by the SM Z boson mass correction.
$$\mathcal{M}(-+-+) = -e^2 (1 + \cos\theta)\left[1 + c_L^2\, \frac{s}{s_{Z'}}\right], \quad (7.40)$$
$$\mathcal{M}(+--+) = \mathcal{M}(-++-) = e^2 (1 - \cos\theta)\left[1 + c_R c_L\, \frac{s}{s_{Z'}}\right], \quad (7.41)$$
where $A^{SM+Z'}_{FB}$ and $A^{SM}_{FB}$ are the FB asymmetries originating from the combined SM and Z′ contributions and from the SM alone, respectively. Next, ∆A_FB is compared with the statistical error of the asymmetry (in the SM-only case), δA_FB [40, 78]:

$$\delta A_{FB}(\sigma_{L,R}) = \sqrt{\frac{1 - (A^{SM}_{FB}(\sigma_{L,R}))^2}{N^{SM}_F(\sigma_{L,R}) + N^{SM}_B(\sigma_{L,R})}}. \quad (7.50)$$

In Fig. 14, we analyze the strength of the FB asymmetry ∆A_FB as a function of the VEV v_χ (= M_Z′/3g_R) for both left- and right-handed polarized cross-sections of the e⁺e⁻ → µ⁺µ⁻ process. For comparison, we have done the analysis for both cases: a Z′ from U(1)_R and from U(1)_{B−L}. We have considered the center-of-mass energy of the ILC at √s = 500 GeV, and the integrated luminosity L is set to 1 ab⁻¹ (5 ab⁻¹) for the left (right) panel of Fig. 14. The grey shaded region corresponds to the region excluded by the SM Z boson mass correction. The red dashed (solid) line represents ∆A_FB for the U(1)_R case for the left- (right-) handed polarized cross-section of the e⁺e⁻ → µ⁺µ⁻ process, whereas the blue dotted (dashed) line indicates ∆A_FB for the U(1)_{B−L} case. From Fig. 14, we find that the U(1)_R model yields a significant difference in ∆A_FB between σ_R and σ_L, due to the right-handed chirality structure of the Z′ interaction from U(1)_R, while the U(1)_{B−L} model yields only a small difference. Hence, by comparing ∆A_FB for the differently polarized cross-sections σ_R and σ_L at the ILC, we can easily discriminate a Z′ interaction from U(1)_R from that of a U(1)_{B−L} model. As we can see from Fig. 14, there are significant regions with M_Z′/3g_R > 12.082 TeV which can give more than 2σ sensitivity.
Figure 15: The strength of the LR asymmetry ∆A_LR as a function of the VEV v_χ (= M_Z′/3g_R), built from the left- and right-handed polarized cross-sections of the e⁺e⁻ → µ⁺µ⁻ process at the ILC. The red solid line represents ∆A_LR for the U(1)_R case, whereas the blue solid line indicates ∆A_LR for the U(1)_{B−L} case. Here, we set the COM energy of the ILC at √s = 500 GeV with 1 ab⁻¹ (left) and 5 ab⁻¹ (right) integrated luminosity. The horizontal lines correspond to the 3σ and 5σ sensitivity confidence levels, and the grey shaded region corresponds to the region excluded by the Z boson mass correction.
Figure 16: Current existing bounds and projected discovery reach at the ILC in the M_Z′-g_R plane. Green and yellow shaded zones correspond to the 1σ and 2σ sensitivity confidence levels from the LR asymmetry for the U(1)_R-extended model at the ILC. The red meshed zone indicates the region excluded by the upper limit on the cross-section for the process pp → Z′ → l⁺l⁻ at 95% C.L., using ATLAS results at √s = 13 TeV with 36.1 fb⁻¹ integrated luminosity. The cyan meshed zone is excluded by the LEP constraint. The blue meshed zone is excluded by the limit on the SM Z boson mass correction: (1/3) M_Z′/g_R > 12.082 TeV.

The integrated luminosity L is set to 1 ab⁻¹ (5 ab⁻¹) for the left (right) panel of Fig. 15. The grey shaded region corresponds to the region excluded by the SM Z boson mass correction. The red (blue) solid line represents ∆A_LR for the U(1)_R (U(1)_{B−L}) case. From Fig. 15, we find that the U(1)_R model provides a remarkably large LR asymmetry ∆A_LR, due to the right-handed chirality structure of the Z′ interaction from U(1)_R, while the U(1)_{B−L} model gives a smaller contribution.
Figure 17: On the left, we plot the effective number of degrees of freedom as a function of the temperature, without including the contribution of the right-handed neutrinos. On the right, we present the contribution of the right-handed neutrinos to ∆N_eff as a function of M_Z′/g_R. The horizontal dashed red line represents the current upper bound on the shift in N_eff [94].
By plugging Eqs. (8.57)-(8.59) into Eq. (8.56) and then solving numerically, we present our result for ∆N_eff as a function of M_Z′/g_R in Fig. 17 (right plot). From this figure, one sees that cosmology provides a strong bound on the mass of the new gauge boson, based on the associated decoupling temperature of the right-handed neutrinos. The blue curve corresponds to the contribution of all three right-handed neutrinos, and the red dashed line represents the current experimental upper bound on the deviation ∆N_eff. This bound puts the restriction M_Z′/g_R ≳ 26.5 TeV, which is considerably stronger than the LEP bound M_Z′/g_R ≳ 3.59 TeV, but lies within the constraint provided by the SM Z-boson mass correction, M_Z′/g_R ≳ 36.2 TeV. The framework presented in this work puts a stronger bound on the mass of the new gauge boson from cosmology, due to the large charge assignment of the right-handed neutrinos, compared to the conventional U(1)_{B−L} models with universal charges, where M_Z′/g_{B−L} ≳ 14 TeV [97, 98].
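Inverting Eq. (8.55) shows what the bound ∆N_eff < 0.285 demands of the decoupling: with N_νR = 3 and g(T^{νL}_dec) = 43/4, the relativistic degrees of freedom at ν_R decoupling must exceed a minimum value, which can then be read off the g(T) curve of Fig. 17 (left).

```python
# Inverting Eq. (8.55) for the minimum g(T^{nuR}_dec) allowed by the data.
g_nuL = 43.0 / 4.0
dNeff_max = 0.285
g_min = g_nuL * (3.0 / dNeff_max) ** (3.0 / 4.0)
print(f"g(T_dec) must exceed ~{g_min:.1f}")   # ~63, i.e. decoupling above the QCD crossover
```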
Table I: Quantum numbers of the fermions and the SM Higgs doublet.
From Eq. (2.9) one can see that the mass of the SM Z-boson gets modified as a consequence of the U(1)_R gauge extension. Precision measurements of the SM Z-boson put a bound on the scale of the new physics. From the experimental measurements, the lower limit on the new-physics scale can be found by imposing the constraint ∆M_Z ≤ 2.1 MeV [43]. For our case, with M^{SM}_Z = 91.1876 GeV [43], this bound can be translated into

$$v_\chi \geq \frac{v_H R_H}{R_\chi}\,\sqrt{21708.8}, \quad (2.12)$$

which corresponds to v_χ ≥ 12.08 TeV for R_H = 1 and R_χ = 3 (this charge assignment for the SM Higgs doublet H and the SM singlet scalar χ that breaks U(1)_R will be used in Secs. 3 and 7). Furthermore, the coupling of all the fermions with the new gauge boson can be computed from the relevant part of the Lagrangian given in Eq. (2.13).
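A quick arithmetic check of the reconstructed Eq. (2.12) (the square root is inferred from the quoted numbers): with v_H = 246 GeV, R_H = 1 and R_χ = 3 it reproduces the 12.08 TeV bound used throughout the paper.

```python
import math
v_H, R_H, R_chi = 246.0, 1.0, 3.0           # GeV; charges as in the text
v_chi_min = v_H * R_H / R_chi * math.sqrt(21708.8)
print(f"v_chi >= {v_chi_min/1e3:.3f} TeV")  # ~12.082 TeV, matching the text
```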
Table II: Couplings of the fermions with the new gauge boson. Here we use the notation c_{2w} = cos(2θ_w). N_{L,R} is any vector-like fermion, singlet under the SM, that carries charge R_N under U(1)_R. If a model does not contain vector-like fermions, we set R_N = 0.
Table IV: Quantum numbers of the fermions and the scalars in the radiative Dirac model.

[...] one-loop Feynman diagram that is presented in Fig. 2.
Table V: Quantum numbers of the fermions and the scalars in the scotogenic Dirac neutrino mass model.
Red solid lines indicate the current bounds on the branching ratios: [...] for the µ → e + γ (top left) process, 3.3×10⁻⁸ [49] for the τ → e + γ (top right) process and 4.4×10⁻⁸ [49] for the τ → µ + γ (bottom) process. Red dashed lines indicate the future projected bounds on the branching ratios: 6×10⁻¹⁴ [50] for the µ → e + γ (top left), 3×10⁻⁹ [51] for the τ → e + γ (top right) and 3×10⁻⁹ [51] for the τ → µ + γ (bottom) processes, respectively. For simplicity, we choose m_{H_2} = m_{H_1} + 100 GeV. As we can see from Fig. 6, µ → e + γ is the most constraining cLFV process in this model. Since it could lead to sizable rates, it can be tested in the upcoming experiments.
Hence, by comparing the difference in ∆A_LR at the ILC, we can easily discriminate the Z′ interaction of the U(1)_R model from that of other U(1)-extended models such as the U(1)_{B−L} model. As we can see from Fig. 15, there is a significant region with M_Z′/3g_R > 12.082 TeV which can give more than 3σ sensitivity for the LR asymmetry by looking at the e⁺e⁻ → µ⁺µ⁻ process at the ILC. We can even achieve 5σ sensitivity over a larger parameter space in our framework if the integrated luminosity of the ILC is upgraded to 5 ab⁻¹. Although measurements of both the FB and LR asymmetries at the ILC can discriminate the Z′ interaction of the U(1)_R model from other U(1)-extended models such as the U(1)_{B−L} model, it is needless to mention that the LR asymmetry provides much better sensitivity than the FB asymmetry in our case.

In Fig. 16, we have shown the surviving parameter space in the M_Z′-g_R plane satisfying all existing bounds, which can be probed at the ILC in the future via the strength of the LR asymmetry. Green and yellow shaded zones correspond to the 1σ and 2σ sensitivity confidence levels from measuring the LR asymmetry for the U(1)_R-extended model at the ILC. For higher Z′ masses (above ∼10 TeV), the Z′ is too heavy to be directly produced and probed at the LHC through the prompt di-lepton signature. On the other hand, the ILC can probe the heavy Z′ effective interaction, and the LR asymmetry can pin down and distinguish our U(1)_R model from other existing U(1)-extended models over a large region of the parameter space. Thus, a Z′ search at the ILC would help to identify the origin of the Z′ boson as well as to extend the Z′ discovery reach through its effective interactions.

Constraint from Cosmology

In the previous section, we extensively analyzed the collider implications of the new gauge boson Z′. In this section, we aim to study the constraints on the mass of the new gauge boson from cosmological measurements and compare them with the collider bounds. Since the right-handed neutrinos carry non-zero U(1)_R charge in our set-up, they couple to the SM sector via the Z′ boson interactions. Furthermore, since they are either massless or very light, they contribute to the relativistic degrees of freedom N_eff, and hence in principle can increase the expansion rate of the Universe. Their contribution to this process is parametrized by ∆N_eff, as defined in Eq. (8.55).
For a recent review of models based on Majorana neutrinos, see Ref. [8]. For Majorana neutrino mass models within the context of simple grand unified theories, see Ref. [9].
For correlating the Dirac seesaw with leptogenesis, see for example Refs. [99, 100].
The general expression for this decay rate can be found in Refs. [46, 47].
Acknowledgments

We thank K. S. Babu, Bhupal Dev and S. Nandi for useful discussions. The work of SJ and VPK was in part supported by US Department of Energy Grant Number DE-SC 0016013. The work of SJ was also supported in part by the Neutrino Theory Network Program. SJ thanks the Theoretical Physics Department at Washington University in St. Louis for warm hospitality during the completion of this work.
L. H. Whitehead [MINOS Collaboration], "Neutrino Oscillations with MINOS and MINOS+," Nucl. Phys. B 908, 130 (2016) [arXiv:1601.05233 [hep-ex]];
M. P. Decowski [KamLAND Collaboration], "KamLAND's precision neutrino oscillation measurements," Nucl. Phys. B 908, 52 (2016);
K. Abe et al. [T2K Collaboration], "Combined Analysis of Neutrino and Antineutrino Oscillations at T2K," Phys. Rev. Lett. 118, no. 15, 151801 (2017) [arXiv:1701.00432 [hep-ex]];
P. F. de Salas, D. V. Forero, C. A. Ternes, M. Tortola and J. W. F. Valle, "Status of neutrino oscillations 2018: 3σ hint for normal mass ordering and improved CP sensitivity," Phys. Lett. B 782, 633 (2018) [arXiv:1708.01186 [hep-ph]].
P. Minkowski, Phys. Lett. B 67, 421 (1977);
T. Yanagida, in Proceedings of the Workshop on Unified Theories and Baryon Number in the Universe, Tsukuba, 1979, eds. A. Sawada and A. Sugamoto;
S. Glashow, in Cargese 1979, Proceedings, Quarks and Leptons (1979);
M. Gell-Mann, P. Ramond and R. Slansky, in Proceedings of the Supergravity Stony Brook Workshop, New York, 1979, eds. P. Van Nieuwenhuizen and D. Freedman;
R. Mohapatra and G. Senjanovic, "Neutrino Mass and Spontaneous Parity Violation," Phys. Rev. Lett. 44, 912 (1980).
M. Magg and C. Wetterich, "Neutrino Mass Problem and Gauge Hierarchy," Phys. Lett. B 94, 61 (1980);
J. Schechter and J. W. F. Valle, "Neutrino Masses in SU(2) x U(1) Theories," Phys. Rev. D 22, 2227 (1980);
G. Lazarides, Q. Shafi and C. Wetterich, "Proton Lifetime and Fermion Masses in an SO(10) Model," Nucl. Phys. B 181, 287 (1981);
R. N. Mohapatra and G. Senjanovic, "Neutrino Masses and Mixings in Gauge Models with Spontaneous Parity Violation," Phys. Rev. D 23, 165 (1981).
R. Foot, H. Lew, X. G. He and G. C. Joshi, "Seesaw Neutrino Masses Induced by a Triplet of Leptons," Z. Phys. C 44, 441 (1989).
R. N. Mohapatra, "Mechanism for Understanding Small Neutrino Mass in Superstring Theories," Phys. Rev. Lett. 56, 561 (1986);
R. N. Mohapatra and J. W. F. Valle, "Neutrino Mass and Baryon Number Nonconservation in Superstring Models," Phys. Rev. D 34, 1642 (1986).
E. Bertuzzo, S. Jana, P. A. N. Machado and R. Zukanovich Funchal, "Neutrino Masses and Mixings Dynamically Generated by a Light Dark Sector," Phys. Lett. B 791, 210 (2019) [arXiv:1808.02500 [hep-ph]].
T. P. Cheng and L. F. Li, "On Weak Interaction Induced Neutrino Oscillations," Phys. Rev. D 17, 2375 (1978);
A. Zee, "A Theory of Lepton Number Violation, Neutrino Majorana Mass, and Oscillation," Phys. Lett. 93B, 389 (1980), Erratum: Phys. Lett. 95B, 461 (1980);
T. P. Cheng and L. F. Li, "Neutrino Masses, Mixings and Oscillations in SU(2) x U(1) Models of Electroweak Interactions," Phys. Rev. D 22, 2860 (1980);
K. S. Babu, "Model of "Calculable" Majorana Neutrino Masses," Phys. Lett. B 203, 132 (1988).
Y. Cai, J. Herrero-Garcia, M. A. Schmidt, A. Vicente and R. R. Volkas, "From the trees to the forest: a review of radiative neutrino mass models," Front. in Phys. 5, 63 (2017) [arXiv:1706.08524 [hep-ph]].
S. Saad, "Origin of a two-loop neutrino mass from SU(5) grand unification," Phys. Rev. D 99, no. 11, 115016 (2019) [arXiv:1902.11254 [hep-ph]].
M. Agostini et al. [GERDA Collaboration], "GERDA results and the future perspectives for the neutrinoless double beta decay search using 76Ge," Int. J. Mod. Phys. A 33, no. 09, 1843004 (2018);
A. Gando et al. [KamLAND-Zen Collaboration], "Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen," Phys. Rev. Lett. 117, no. 8, 082503 (2016), Addendum: Phys. Rev. Lett. 117, no. 10, 109903 (2016) [arXiv:1605.02889 [hep-ex]];
M. Agostini et al., "Background-free search for neutrinoless double-β decay of 76Ge with GERDA," Nature 544, 47 (2017) [arXiv:1703.00570 [nucl-ex]];
J. Kaulard et al. [SINDRUM II Collaboration], "Improved limit on the branching ratio of µ⁻ → e⁺ conversion on titanium," Phys. Lett. B 422, 334 (1998).
Dirac or inverse seesaw neutrino masses with B − L gauge symmetry and S 3 flavor symmetry. E Ma, R Srivastava, arXiv:1411.5042Phys. Lett. B. 741217hep-phE. Ma and R. Srivastava, "Dirac or inverse seesaw neutrino masses with B − L gauge symmetry and S 3 flavor symmetry," Phys. Lett. B 741, 217 (2015) [arXiv:1411.5042 [hep-ph]].
Gauge B − L Model with Residual Z 3 Symmetry. E Ma, N Pollard, R Srivastava, M Zakeri, arXiv:1507.03943Phys. Lett. B. 750135hep-phE. Ma, N. Pollard, R. Srivastava and M. Zakeri, "Gauge B − L Model with Residual Z 3 Symmetry," Phys. Lett. B 750, 135 (2015) [arXiv:1507.03943 [hep-ph]].
Naturally light neutrinos in Diracon model. C Bonilla, J W F Valle, 10.1016/j.physletb.2016.09.022arXiv:1605.08362Phys. Lett. B. 762162hep-phC. Bonilla and J. W. F. Valle, "Naturally light neutrinos in Diracon model," Phys. Lett. B 762, 162 (2016) doi:10.1016/j.physletb.2016.09.022 [arXiv:1605.08362 [hep-ph]].
CP violation from flavor symmetry in a lepton quarticity dark matter model. S Chulia, R Srivastava, J W F Valle, arXiv:1606.06904Phys. Lett. B. 761431hep-phS. Centelles Chulia, R. Srivastava and J. W. F. Valle, "CP violation from flavor sym- metry in a lepton quarticity dark matter model," Phys. Lett. B 761, 431 (2016) [arXiv:1606.06904 [hep-ph]].
Dirac Neutrinos and Dark Matter Stability from Lepton Quarticity. S Chulia, E Ma, R Srivastava, J W F Valle, arXiv:1606.04543Phys. Lett. B. 767209hep-phS. Centelles Chulia, E. Ma, R. Srivastava and J. W. F. Valle, "Dirac Neutrinos and Dark Matter Stability from Lepton Quarticity," Phys. Lett. B 767, 209 (2017) [arXiv:1606.04543 [hep-ph]].
Two-loop Dirac neutrino mass and WIMP dark matter. C Bonilla, E Ma, E Peinado, J W F Valle, arXiv:1607.03931Phys. Lett. B. 762214hep-phC. Bonilla, E. Ma, E. Peinado and J. W. F. Valle, "Two-loop Dirac neutrino mass and WIMP dark matter," Phys. Lett. B 762, 214 (2016) [arXiv:1607.03931 [hep-ph]].
Pathways to Naturally Small Dirac Neutrino Masses. E Ma, O Popov, arXiv:1609.02538Phys. Lett. B. 764142hep-phE. Ma and O. Popov, "Pathways to Naturally Small Dirac Neutrino Masses," Phys. Lett. B 764, 142 (2017) [arXiv:1609.02538 [hep-ph]].
Naturally Small Dirac Neutrino Mass with Intermediate SU (2) L Multiplet Fields. W Wang, Z L Han, arXiv:1611.03240JHEP. 1704166hep-phW. Wang and Z. L. Han, "Naturally Small Dirac Neutrino Mass with Intermediate SU (2) L Multiplet Fields," JHEP 1704, 166 (2017) [arXiv:1611.03240 [hep-ph]].
A 4 flavour model for Dirac neutrinos: Type I and inverse seesaw. D Borah, B Karmakar, arXiv:1712.06407Phys. Lett. B. 780461hep-phD. Borah and B. Karmakar, "A 4 flavour model for Dirac neutrinos: Type I and inverse seesaw," Phys. Lett. B 780, 461 (2018) [arXiv:1712.06407 [hep-ph]].
Systematic analysis of Dirac neutrino masses from a dimension five operator. C Y Yao, G J Ding, arXiv:1802.05231Phys. Rev. D. 97995042hep-phC. Y. Yao and G. J. Ding, "Systematic analysis of Dirac neutrino masses from a dimen- sion five operator," Phys. Rev. D 97, no. 9, 095042 (2018) [arXiv:1802.05231 [hep-ph]].
Bound-state dark matter and Dirac neutrino masses. M Reig, D Restrepo, J W F Valle, O Zapata, arXiv:1803.08528Phys. Rev. D. 9711115032hep-phM. Reig, D. Restrepo, J. W. F. Valle and O. Zapata, "Bound-state dark matter and Dirac neutrino masses," Phys. Rev. D 97, no. 11, 115032 (2018) [arXiv:1803.08528 [hep-ph]].
. S B Giddings, A Strominger, Nucl. Phys. B. 306890S. B. Giddings and A. Strominger, Nucl. Phys. B 306, 890 (1988).
String Wormholes. S B Giddings, A Strominger, Phys. Lett. B. 23046S. B. Giddings and A. Strominger, "String Wormholes," Phys. Lett. B 230, 46 (1989).
Baby Universes, Third Quantization and the Cosmological Constant. S B Giddings, A Strominger, Nucl. Phys. B. 321481S. B. Giddings and A. Strominger, "Baby Universes, Third Quantization and the Cos- mological Constant," Nucl. Phys. B 321, 481 (1989).
Wormholes and Global Symmetries. L F Abbott, M B Wise, Nucl. Phys. B. 325687L. F. Abbott and M. B. Wise, "Wormholes and Global Symmetries," Nucl. Phys. B 325, 687 (1989).
Wormholes Made Without Massless Matter Fields. S R Coleman, K M Lee, Nucl. Phys. B. 329387S. R. Coleman and K. M. Lee, "Wormholes Made Without Massless Matter Fields," Nucl. Phys. B 329, 387 (1990).
P. Roy and O. U. Shanker, "Observable Neutrino Dirac Mass and Supergrand Unification," Phys. Rev. Lett. 52, 713 (1984), Erratum: Phys. Rev. Lett. 52, 2190 (1984).
Simplest Radiative Dirac Neutrino Mass Models. S Saad, arXiv:1902.07259Nucl. Phys. B. 943114636hep-phS. Saad, "Simplest Radiative Dirac Neutrino Mass Models," Nucl. Phys. B 943, 114636 (2019) [arXiv:1902.07259 [hep-ph]].
Verifiable radiative seesaw mechanism of neutrino mass and dark matter. E Ma, hep-ph/0601225Phys. Rev. D. 7377301E. Ma, "Verifiable radiative seesaw mechanism of neutrino mass and dark matter," Phys. Rev. D 73, 077301 (2006) [hep-ph/0601225].
B − L as the Fourth Color, Quark -Lepton Correspondence, and Natural Masslessness of Neutrinos Within a Generalized Ws Model. A Davidson, Phys. Rev. D. 20776A. Davidson, "B − L as the Fourth Color, Quark -Lepton Correspondence, and Natu- ral Masslessness of Neutrinos Within a Generalized Ws Model," Phys. Rev. D 20, 776 (1979);
Local B-L Symmetry of Electroweak Interactions, Majorana Neutrinos and Neutron Oscillations. R N Mohapatra, R E Marshak, Phys. Rev. Lett. 441643Phys. Rev. Lett.R. N. Mohapatra and R. E. Marshak, "Local B-L Symmetry of Electroweak Interactions, Majorana Neutrinos and Neutron Oscillations," Phys. Rev. Lett. 44, 1316 (1980) Erratum: [Phys. Rev. Lett. 44, 1643 (1980)];
Quark -Lepton Symmetry and B-L as the U(1) Generator of the Electroweak Symmetry Group. R E Marshak, R N Mohapatra, Phys. Lett. 91222R. E. Marshak and R. N. Moha- patra, "Quark -Lepton Symmetry and B-L as the U(1) Generator of the Electroweak Symmetry Group," Phys. Lett. 91B, 222 (1980);
Neutrino Masses and the Scale of B-L Violation. C Wetterich, Nucl. Phys. B. 187343C. Wetterich, "Neutrino Masses and the Scale of B-L Violation," Nucl. Phys. B 187, 343 (1981).
The B − L Scotogenic Models for Dirac Neutrino Masses. W Wang, R Wang, Z L Han, J Z Han, arXiv:1705.00414Eur. Phys. J. C. 7712hep-phW. Wang, R. Wang, Z. L. Han and J. Z. Han, "The B − L Scotogenic Models for Dirac Neutrino Masses," Eur. Phys. J. C 77, no. 12, 889 (2017) [arXiv:1705.00414 [hep-ph]].
Z Portal Dark Matter in B − L Scotogenic Dirac Model. Z L Han, W Wang, arXiv:1805.02025Eur. Phys. J. C. 7810hep-phZ. L. Han and W. Wang, "Z Portal Dark Matter in B − L Scotogenic Dirac Model," Eur. Phys. J. C 78, no. 10, 839 (2018) [arXiv:1805.02025 [hep-ph]].
Minimal radiative Dirac neutrino mass models. J Calle, D Restrepo, C E Yaguna, O Zapata, arXiv:1812.05523hep-phJ. Calle, D. Restrepo, C. E. Yaguna and O. Zapata, "Minimal radiative Dirac neutrino mass models," arXiv:1812.05523 [hep-ph].
Dark matter stability and Dirac neutrinos using only Standard Model symmetries. C Bonilla, S Chulia, R Cepedello, E Peinado, R Srivastava, arXiv:1812.01599hep-phC. Bonilla, S. Centelles Chulia, R. Cepedello, E. Peinado and R. Srivastava, "Dark matter stability and Dirac neutrinos using only Standard Model symmetries," arXiv:1812.01599 [hep-ph].
The role of residual symmetries in dark matter stability and the neutrino nature. C Bonilla, E Peinado, R Srivastava, arXiv:1903.01477hep-phC. Bonilla, E. Peinado and R. Srivastava, "The role of residual symmetries in dark matter stability and the neutrino nature," arXiv:1903.01477 [hep-ph].
. W Chao, M Gonderinger, M J Ramsey-Musolf, arXiv:1210.0491Phys. Rev. D. 86113017hep-phW. Chao, M. Gonderinger and M. J. Ramsey-Musolf, Phys. Rev. D 86, 113017 (2012) [arXiv:1210.0491 [hep-ph]].
A Resolution of the Flavor Problem of Two Higgs Doublet Models with an Extra U (1) H Symmetry for Higgs Flavor. P Ko, Y Omura, C Yu, arXiv:1204.4588Phys. Lett. B. 717202hep-phP. Ko, Y. Omura and C. Yu, "A Resolution of the Flavor Problem of Two Higgs Doublet Models with an Extra U (1) H Symmetry for Higgs Flavor," Phys. Lett. B 717, 202 (2012) [arXiv:1204.4588 [hep-ph]].
Two-loop Induced Majorana Neutrino Mass in a Radiatively Induced Quark and Lepton Mass Model. T Nomura, H Okada, arXiv:1609.01504Phys. Rev. D. 94993006hep-phT. Nomura and H. Okada, "Two-loop Induced Majorana Neutrino Mass in a Radia- tively Induced Quark and Lepton Mass Model," Phys. Rev. D 94, no. 9, 093006 (2016) [arXiv:1609.01504 [hep-ph]].
Loop suppressed light fermion masses with U (1) R gauge symmetry. T Nomura, H Okada, arXiv:1704.03382Phys. Rev. D. 96115016hep-phT. Nomura and H. Okada, "Loop suppressed light fermion masses with U (1) R gauge symmetry," Phys. Rev. D 96, no. 1, 015016 (2017) [arXiv:1704.03382 [hep-ph]].
Minimal realization of right-handed gauge symmetry. T Nomura, H Okada, arXiv:1707.00929Phys. Rev. D. 97115015hep-phT. Nomura and H. Okada, "Minimal realization of right-handed gauge symmetry," Phys. Rev. D 97, no. 1, 015015 (2018) [arXiv:1707.00929 [hep-ph]].
Phenomenology of the gauge symmetry for right-handed fermions. W Chao, arXiv:1707.07858Eur. Phys. J. C. 782103hep-phW. Chao, "Phenomenology of the gauge symmetry for right-handed fermions," Eur. Phys. J. C 78, no. 2, 103 (2018) [arXiv:1707.07858 [hep-ph]].
An inverse seesaw model with U (1) R gauge symmetry. T Nomura, H Okada, arXiv:1806.01714LHEP. 1210hep-phT. Nomura and H. Okada, "An inverse seesaw model with U (1) R gauge symmetry," LHEP 1, no. 2, 10 (2018) [arXiv:1806.01714 [hep-ph]].
Review of Particle Physics. M Tanabashi, Phys. Rev. D. 98330001Particle Data GroupM. Tanabashi et al. [Particle Data Group], "Review of Particle Physics," Phys. Rev. D 98, no. 3, 030001 (2018).
Model for small neutrino masses at the TeV scale. S Nasri, S Moussa, hep-ph/0106107Mod. Phys. Lett. A. 17771S. Nasri and S. Moussa, "Model for small neutrino masses at the TeV scale," Mod. Phys. Lett. A 17, 771 (2002) [hep-ph/0106107].
Two Loop Renormalization Group Equations in a General Quantum Field Theory. 1. Wave Function Renormalization. M E Machacek, M T Vaughn, Nucl. Phys. B. 22283M. E. Machacek and M. T. Vaughn, "Two Loop Renormalization Group Equations in a General Quantum Field Theory. 1. Wave Function Renormalization," Nucl. Phys. B 222, 83 (1983).
General formulae for f(1) -> f(2) gamma. L Lavoura, hep-ph/0302221Eur. Phys. J. C. 29191L. Lavoura, "General formulae for f(1) -> f(2) gamma," Eur. Phys. J. C 29, 191 (2003) [hep-ph/0302221].
K S Babu, P S B Dev, S Jana, A Thapa, arXiv:1907.09498Non-Standard Interactions in Radiative Neutrino Mass Models. hep-phK. S. Babu, P. S. B. Dev, S. Jana and A. Thapa, "Non-Standard Interactions in Ra- diative Neutrino Mass Models," arXiv:1907.09498 [hep-ph].
Search for the lepton flavour violating decay µ + → e + γ with the full dataset of the MEG experiment. A M Baldini, MEG CollaborationarXiv:1605.05081Eur. Phys. J. C. 768hep-exA. M. Baldini et al. [MEG Collaboration], "Search for the lepton flavour violating decay µ + → e + γ with the full dataset of the MEG experiment," Eur. Phys. J. C 76, no. 8, 434 (2016) [arXiv:1605.05081 [hep-ex]].
Searches for Lepton Flavor Violation in the Decays tau+--> e+-gamma and tau+--> mu+-gamma. B Aubert, BaBar CollaborationarXiv:0908.2381Phys. Rev. Lett. 10421802hep-exB. Aubert et al. [BaBar Collaboration], "Searches for Lepton Flavor Violation in the Decays tau+--> e+-gamma and tau+--> mu+-gamma," Phys. Rev. Lett. 104, 021802 (2010) [arXiv:0908.2381 [hep-ex]].
. A M Baldini, arXiv:1301.7225MEG Upgrade Proposal. physics.ins-detA. M. Baldini et al., "MEG Upgrade Proposal," arXiv:1301.7225 [physics.ins-det].
Results and prospects on lepton flavor violation at Belle/Belle II. K Hayasaka, J. Phys. Conf. Ser. 40812069Belle and Belle-II CollaborationsK. Hayasaka [Belle and Belle-II Collaborations], "Results and prospects on lepton flavor violation at Belle/Belle II," J. Phys. Conf. Ser. 408, 012069 (2013).
E. M. Dolle and S. Su, "The Inert Dark Matter," Phys. Rev. D 80, 055012 (2009) [arXiv:0906.1609 [hep-ph]];
L. Lopez Honorez and C. E. Yaguna, "The inert doublet model of dark matter revisited," JHEP 1009, 046 (2010) [arXiv:1003.3125 [hep-ph]];
A. Goudelis, B. Herrmann and O. Stål, "Dark matter in the Inert Doublet Model after the discovery of a Higgs-like boson at the LHC," JHEP 1309, 106 (2013) [arXiv:1303.3010 [hep-ph]].
Common Origin of Neutrino Mass, Dark Matter and Dirac Leptogenesis. D Borah, A Dasgupta, arXiv:1608.03872JCAP. 16121234hep-phD. Borah and A. Dasgupta, "Common Origin of Neutrino Mass, Dark Matter and Dirac Leptogenesis," JCAP 1612, no. 12, 034 (2016) [arXiv:1608.03872 [hep-ph]].
Neutrino Masses and Scalar Singlet Dark Matter. S Bhattacharya, S Jana, S Nandi, arXiv:1609.03274Phys. Rev. D. 95555003hep-phS. Bhattacharya, S. Jana and S. Nandi, "Neutrino Masses and Scalar Singlet Dark Matter," Phys. Rev. D 95, no. 5, 055003 (2017) [arXiv:1609.03274 [hep-ph]].
Radiative neutrino mass and Majorana dark matter within an inert Higgs doublet model. A Ahriche, A Jueid, S Nasri, arXiv:1710.03824Phys. Rev. D. 97995012hep-phA. Ahriche, A. Jueid and S. Nasri, "Radiative neutrino mass and Majorana dark mat- ter within an inert Higgs doublet model," Phys. Rev. D 97, no. 9, 095012 (2018) [arXiv:1710.03824 [hep-ph]].
Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. G Hinshaw, WMAP CollaborationarXiv:1212.5226Astrophys. J. Suppl. 208astro-ph.COG. Hinshaw et al. [WMAP Collaboration], "Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results," Astrophys. J. Suppl. 208, 19 (2013) [arXiv:1212.5226 [astro-ph.CO]].
Planck 2013 results. XVI. Cosmological parameters. P A R Ade, Planck CollaborationarXiv:1303.5076Astron. Astrophys. 571astro-ph.COP. A. R. Ade et al. [Planck Collaboration], "Planck 2013 results. XVI. Cosmological parameters," Astron. Astrophys. 571, A16 (2014) [arXiv:1303.5076 [astro-ph.CO]].
Simplified Dark Matter Models for the Galactic Center Gamma-Ray Excess. A Berlin, D Hooper, S D Mcdermott, arXiv:1404.0022Phys. Rev. D. 8911115022hep-phA. Berlin, D. Hooper and S. D. McDermott, "Simplified Dark Matter Models for the Galactic Center Gamma-Ray Excess," Phys. Rev. D 89, no. 11, 115022 (2014) [arXiv:1404.0022 [hep-ph]].
Results from a search for dark matter in the complete LUX exposure. D S Akerib, LUX CollaborationarXiv:1608.07648Phys. Rev. Lett. 118221303astro-ph.COD. S. Akerib et al. [LUX Collaboration], "Results from a search for dark matter in the complete LUX exposure," Phys. Rev. Lett. 118, no. 2, 021303 (2017) [arXiv:1608.07648 [astro-ph.CO]].
First Dark Matter Search Results from the XENON1T Experiment. E Aprile, XENON CollaborationarXiv:1705.06655Phys. Rev. Lett. 11918181301astro-ph.COE. Aprile et al. [XENON Collaboration], "First Dark Matter Search Results from the XENON1T Experiment," Phys. Rev. Lett. 119, no. 18, 181301 (2017) [arXiv:1705.06655 [astro-ph.CO]].
Dark Matter Results From 54-Ton-Day Exposure of PandaX-II Experiment. X Cui, PandaX-II CollaborationarXiv:1708.06917Phys. Rev. Lett. 11918181302astro-ph.COX. Cui et al. [PandaX-II Collaboration], "Dark Matter Results From 54-Ton-Day Exposure of PandaX-II Experiment," Phys. Rev. Lett. 119, no. 18, 181302 (2017) [arXiv:1708.06917 [astro-ph.CO]].
LEP, ALEPH, DELPHI, L3 and OPAL Collaborations, LEP Electroweak Working Group, SLD Electroweak Group and SLD Heavy Flavor Group, "A Combination of preliminary electroweak measurements and constraints on the standard model," hep-ex/0312023.
Search for new high-mass phenomena in the dilepton final state using 36 fb −1 of proton-proton collision data at √ s = 13 TeV with the ATLAS detector. M Aaboud, ATLAS CollaborationarXiv:1707.02424JHEP. 1710182hep-exM. Aaboud et al. [ATLAS Collaboration], "Search for new high-mass phenomena in the dilepton final state using 36 fb −1 of proton-proton collision data at √ s = 13 TeV with the ATLAS detector," JHEP 1710, 182 (2017) [arXiv:1707.02424 [hep-ex]].
M. Aaboud et al. [ATLAS Collaboration], "Search for new phenomena in dijet events using 37 fb⁻¹ of pp collision data collected at √s = 13 TeV with the ATLAS detector," Phys. Rev. D 96, no. 5, 052004 (2017) [arXiv:1703.09127 [hep-ex]].
FeynRules 2.0 -A complete toolbox for tree-level phenomenology. A Alloul, N D Christensen, C Degrande, C Duhr, B Fuks, arXiv:1310.1921Comput. Phys. Commun. 1852250hep-phA. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, "FeynRules 2.0 -A complete toolbox for tree-level phenomenology," Comput. Phys. Commun. 185, 2250 (2014) [arXiv:1310.1921 [hep-ph]].
The automated computation of tree-level and next-to-leading order differential cross-sections, and their matching to parton shower simulations. J , arXiv:1405.0301JHEP. 140779hep-phJ. Alwall et al., "The automated computation of tree-level and next-to-leading order differential cross-sections, and their matching to parton shower simulations," JHEP 1407, 079 (2014) [arXiv:1405.0301 [hep-ph]].
. R D Ball, NNPDF CollaborationarXiv:1308.0598Nucl. Phys. B. 877290hep-phR. D. Ball et al. [NNPDF Collaboration], Nucl. Phys. B 877, 290 (2013) [arXiv:1308.0598 [hep-ph]].
Constraining minimal anomaly free U(1) extensions of the Standard Model. A Ekstedt, R Enberg, G Ingelman, J Lofgren, T , arXiv:1605.04855JHEP. 161171hep-phA. Ekstedt, R. Enberg, G. Ingelman, J. Lofgren and T. Mandal, "Constraining min- imal anomaly free U(1) extensions of the Standard Model," JHEP 1611, 071 (2016) [arXiv:1605.04855 [hep-ph]].
Reappraisal of constraints on Z models from unitarity and direct searches at the LHC. T Bandyopadhyay, G Bhattacharyya, D Das, A Raychaudhuri, arXiv:1803.07989Phys. Rev. D. 98335027hep-phT. Bandyopadhyay, G. Bhattacharyya, D. Das and A. Raychaudhuri, "Reappraisal of constraints on Z models from unitarity and direct searches at the LHC," Phys. Rev. D 98, no. 3, 035027 (2018) [arXiv:1803.07989 [hep-ph]].
Discovering and profiling Z' bosons using asymmetry observables in top quark pair production with the lepton-plus-jets final state at the LHC. L Cerrito, D Millar, S Moretti, F Spano, arXiv:1609.05540hep-phL. Cerrito, D. Millar, S. Moretti and F. Spano, "Discovering and profiling Z' bosons using asymmetry observables in top quark pair production with the lepton-plus-jets final state at the LHC," arXiv:1609.05540 [hep-ph].
Signals of a 2 TeV W boson and a heavier Z boson. B A Dobrescu, P J Fox, arXiv:1511.02148JHEP. 160547hep-phB. A. Dobrescu and P. J. Fox, "Signals of a 2 TeV W boson and a heavier Z boson," JHEP 1605, 047 (2016) [arXiv:1511.02148 [hep-ph]].
Displaced vertex signature of type-I seesaw model. S Jana, N Okada, D Raut, arXiv:1804.06828Phys. Rev. D. 98335023hep-phS. Jana, N. Okada and D. Raut, "Displaced vertex signature of type-I seesaw model," Phys. Rev. D 98, no. 3, 035023 (2018) [arXiv:1804.06828 [hep-ph]].
Long-Lived Particles at the Energy Frontier: The MATHUSLA Physics Case. D Curtin, arXiv:1806.07396hep-phD. Curtin et al., "Long-Lived Particles at the Energy Frontier: The MATHUSLA Physics Case," arXiv:1806.07396 [hep-ph].
Displaced vertices as probes of sterile neutrino mixing at the LHC. G Cottin, J C Helo, M Hirsch, arXiv:1806.05191Phys. Rev. D. 98335012hep-phG. Cottin, J. C. Helo and M. Hirsch, "Displaced vertices as probes of sterile neutrino mixing at the LHC," Phys. Rev. D 98, no. 3, 035012 (2018) [arXiv:1806.05191 [hep-ph]].
A Letter of Intent for MATH-USLA: a dedicated displaced vertex detector above ATLAS or CMS. C Alpigiani, MATHUSLA CollaborationarXiv:1811.00927physics.ins-detC. Alpigiani et al. [MATHUSLA Collaboration], "A Letter of Intent for MATH- USLA: a dedicated displaced vertex detector above ATLAS or CMS.," arXiv:1811.00927 [physics.ins-det].
Inclusive Displaced Vertex Searches for Heavy Neutral Leptons at the LHC. A Abada, N Bernal, M Losada, X Marcano, arXiv:1807.10024JHEP. 190193hep-phA. Abada, N. Bernal, M. Losada and X. Marcano, "Inclusive Displaced Vertex Searches for Heavy Neutral Leptons at the LHC," JHEP 1901, 093 (2019) [arXiv:1807.10024 [hep-ph]].
Probing right-handed neutrinos at the LHeC and lepton colliders using fat jet signatures. A Das, S Jana, S Mandal, S Nandi, arXiv:1811.04291Phys. Rev. D. 99555030hep-phA. Das, S. Jana, S. Mandal and S. Nandi, "Probing right-handed neutrinos at the LHeC and lepton colliders using fat jet signatures," Phys. Rev. D 99, no. 5, 055030 (2019) [arXiv:1811.04291 [hep-ph]].
Signals of new gauge bosons at future e+ e-colliders. A Djouadi, A Leike, T Riemann, D Schaile, C Verzegnassi, Z. Phys. C. 56289A. Djouadi, A. Leike, T. Riemann, D. Schaile and C. Verzegnassi, "Signals of new gauge bosons at future e+ e-colliders," Z. Phys. C 56, 289 (1992).
Diagnostic power of future colliders for Z-prime couplings to quarks and leptons: e+ e-versus p p colliders. F , Del Aguila, M Cvetic, hep- ph/9312329Phys. Rev. D. 503158F. Del Aguila and M. Cvetic, "Diagnostic power of future colliders for Z-prime couplings to quarks and leptons: e+ e-versus p p colliders," Phys. Rev. D 50, 3158 (1994) [hep- ph/9312329].
Discovery and identification of extra gauge bosons. M Cvetic, S Godfrey, hep-ph/9504216Electroweak symmetry breaking and new physics at the TeV scale* 383-415. *Barklow, T.L.M. Cvetic and S. Godfrey, "Discovery and identification of extra gauge bosons," In *Barklow, T.L. (ed.) et al.: Electroweak symmetry breaking and new physics at the TeV scale* 383-415 [hep-ph/9504216].
Study of Z-prime couplings to leptons and quarks at NLC," eConf C 960625. S Riemann, hep-ph/9610513141S. Riemann, "Study of Z-prime couplings to leptons and quarks at NLC," eConf C 960625, NEW141 (1996) [hep-ph/9610513].
Z search in e + e − annihilation. A Leike, S Riemann, hep-ph/9607306Z. Phys. C. 75341A. Leike and S. Riemann, "Z search in e + e − annihilation," Z. Phys. C 75, 341 (1997) [hep-ph/9607306].
Extended gauge sectors at future colliders: Report of the new gauge boson subgroup. T G Rizzo, hep-ph/9612440C. 960625136T. G. Rizzo, "Extended gauge sectors at future colliders: Report of the new gauge boson subgroup," eConf C 960625, NEW136 (1996) [hep-ph/9612440].
Polarized observables to probe Z-prime at the e+ e-linear collider. A A Babich, A A Pankov, N Paver, hep-ph/9811328Phys. Lett. B. 452355A. A. Babich, A. A. Pankov and N. Paver, "Polarized observables to probe Z-prime at the e+ e-linear collider," Phys. Lett. B 452, 355 (1999) [hep-ph/9811328].
The Phenomenology of extra neutral gauge bosons. A Leike, hep-ph/9805494Phys. Rept. 317A. Leike, "The Phenomenology of extra neutral gauge bosons," Phys. Rept. 317, 143 (1999) [hep-ph/9805494].
Z-prime indication from new APV data in cesium and searches at linear colliders. R Casalbuoni, S Curtis, D Dominici, R Gatto, S Riemann, hep-ph/0001215R. Casalbuoni, S. De Curtis, D. Dominici, R. Gatto and S. Riemann, "Z-prime indica- tion from new APV data in cesium and searches at linear colliders," hep-ph/0001215.
Physics interplay of the LHC and the ILC. G Weiglein, LHC/LC Study Grouphep-ph/0410364Phys. Rept. 426G. Weiglein et al. [LHC/LC Study Group], "Physics interplay of the LHC and the ILC," Phys. Rept. 426, 47 (2006) [hep-ph/0410364].
Distinguishing between models with extra gauge bosons at the ILC. S Godfrey, P Kalyniak, A Tomkins, hep-ph/0511335S. Godfrey, P. Kalyniak and A. Tomkins, "Distinguishing between models with extra gauge bosons at the ILC," hep-ph/0511335.
. H Baer, arXiv:1306.6352The International Linear Collider Technical Design Report. 2Physics. hep-phH. Baer et al., "The International Linear Collider Technical Design Report -Volume 2: Physics," arXiv:1306.6352 [hep-ph].
S. Narita, "Measurement of the polarized forward-backward asymmetry of s quarks at SLD," SLAC-R-520.
Primordial nucleosynthesis constraints on Z properties. V Barger, P Langacker, H S Lee, hep-ph/0302066Phys. Rev. D. 6775009V. Barger, P. Langacker and H. S. Lee, "Primordial nucleosynthesis constraints on Z properties," Phys. Rev. D 67, 075009 (2003) [hep-ph/0302066].
Neutrino annihilation in hot plasma. K Enqvist, K Kainulainen, V Semikoz, Nucl. Phys. B. 374392K. Enqvist, K. Kainulainen and V. Semikoz, "Neutrino annihilation in hot plasma," Nucl. Phys. B 374, 392 (1992).
Calculation of the axion mass based on high-temperature lattice quantum chromodynamics. S Borsanyi, arXiv:1606.07494Nature. 5397627heplatS. Borsanyi et al., "Calculation of the axion mass based on high-temperature lattice quantum chromodynamics," Nature 539, no. 7627, 69 (2016) [arXiv:1606.07494 [hep- lat]].
N Aghanim, Planck CollaborationarXiv:1807.06209Planck 2018 results. VI. Cosmological parameters. astro-ph.CON. Aghanim et al. [Planck Collaboration], "Planck 2018 results. VI. Cosmological pa- rameters," arXiv:1807.06209 [astro-ph.CO].
Relic neutrino decoupling with flavour oscillations revisited. P F Salas, S Pastor, arXiv:1606.06986JCAP. 16070751hep-phP. F. de Salas and S. Pastor, "Relic neutrino decoupling with flavour oscillations revis- ited," JCAP 1607, no. 07, 051 (2016) [arXiv:1606.06986 [hep-ph]].
. K N Abazajian, CMB-S4 CollaborationarXiv:1610.02743CMB-S4 Science Book, First Editionastro-ph.COK. N. Abazajian et al. [CMB-S4 Collaboration], "CMB-S4 Science Book, First Edition," arXiv:1610.02743 [astro-ph.CO].
P. Fileviez Perez, C. Murgui and A. D. Plascencia, "Neutrino-Dark Matter Connections in Gauge Theories," arXiv:1905.06344 [hep-ph].
Observing Dirac neutrinos in the cosmic microwave background. K N Abazajian, J Heeck, arXiv:1908.03286hep-phK. N. Abazajian and J. Heeck, "Observing Dirac neutrinos in the cosmic microwave background," arXiv:1908.03286 [hep-ph].
From Dirac neutrino masses to baryonic and dark matter asymmetries. P H Gu, arXiv:1209.4579Nucl. Phys. B. 87238hep-phP. H. Gu, "From Dirac neutrino masses to baryonic and dark matter asymmetries," Nucl. Phys. B 872, 38 (2013) [arXiv:1209.4579 [hep-ph]].
Peccei-Quinn symmetry for Dirac seesaw and leptogenesis. P H Gu, arXiv:1603.05070JCAP. 1607074hep-phP. H. Gu, "Peccei-Quinn symmetry for Dirac seesaw and leptogenesis," JCAP 1607, no. 07, 004 (2016) [arXiv:1603.05070 [hep-ph]].
| [] |
[
"Entangled quantum currents in distant mesoscopic Josephson junctions",
"Entangled quantum currents in distant mesoscopic Josephson junctions"
] | [
"D I Tsomokos \nDepartment of Computing\nUniversity of Bradford\nBD7 1DPBradfordEngland\n",
"C C Chong \nInstitute of High Performance Computing\n1 Science Park Road117528Singapore\n",
"A Vourdas \nDepartment of Computing\nUniversity of Bradford\nBD7 1DPBradfordEngland\n"
] | [
"Department of Computing\nUniversity of Bradford\nBD7 1DPBradfordEngland",
"Institute of High Performance Computing\n1 Science Park Road117528Singapore",
"Department of Computing\nUniversity of Bradford\nBD7 1DPBradfordEngland"
] | [] | Two mesoscopic SQUID rings which are far from each other, are considered. A source of twomode nonclassical microwaves irradiates the two rings with correlated photons. The Josephson currents are in this case quantum mechanical operators, and their expectation values with respect to the density matrix of the microwaves, yield the experimentally observed currents. Classically correlated (separable) and quantum mechanically correlated (entangled) microwaves are considered, and their effect on the Josephson currents is quantified. Results for two different examples that involve microwaves in number states and coherent states are derived. It is shown that the quantum statistics of the tunnelling electron pairs through the Josephson junctions in the two rings, are correlated. | 10.1088/0953-8984/16/50/008 | [
"https://arxiv.org/pdf/cond-mat/0411439v1.pdf"
] | 17,309,644 | cond-mat/0411439 | 8c1f157099fe1a78ab284bdf7d0dfab0a0d82249 |
Entangled quantum currents in distant mesoscopic Josephson junctions
17 Nov 2004
D I Tsomokos
Department of Computing
University of Bradford
BD7 1DPBradfordEngland
C C Chong
Institute of High Performance Computing
1 Science Park Road117528Singapore
A Vourdas
Department of Computing
University of Bradford
BD7 1DPBradfordEngland
Entangled quantum currents in distant mesoscopic Josephson junctions
17 Nov 2004(Dated: November 1, 2004; final version)numbers: 8525Dq4250Dv8535Ds0367Mn
Two mesoscopic SQUID rings which are far from each other, are considered. A source of twomode nonclassical microwaves irradiates the two rings with correlated photons. The Josephson currents are in this case quantum mechanical operators, and their expectation values with respect to the density matrix of the microwaves, yield the experimentally observed currents. Classically correlated (separable) and quantum mechanically correlated (entangled) microwaves are considered, and their effect on the Josephson currents is quantified. Results for two different examples that involve microwaves in number states and coherent states are derived. It is shown that the quantum statistics of the tunnelling electron pairs through the Josephson junctions in the two rings, are correlated.
I. INTRODUCTION
Superconducting quantum interference devices (SQUIDs) exhibit quantum coherence at the macroscopic level 1 . This is a major research field within condensed matter, and has potential applications in the developing area of quantum information processing 2,3 . A lot of the work on superconducting rings investigates their interaction with classical electromagnetic fields.
In the last twenty years nonclassical electromagnetic fields at low temperatures (k B T ≪hω) have been studied extensively theoretically and experimentally 4 at both optical and microwave frequencies 5 . They are carefully prepared in a particular quantum state, which is described mathematically with a density matrix ρ. The interaction of SQUID rings with nonclassical microwaves has been studied in the literature 6,7 . In this case the full system, device and microwaves, is quantum mechanical and displays interesting quantum behaviour. For example, the quantum noise in the nonclassical microwaves affects the Josephson currents. Experimental work, which involves the interaction of a Josephson device with a single photon, has recently been reported 8 .
An important feature of two-mode nonclassical microwaves is entanglement. Entangled electromagnetic fields have been produced experimentally 9 . There is currently a lot of work on the classification of correlated two-mode electromagnetic fields into classically correlated (separable) and quantum mechanically correlated (entangled) 10 . In a previous publication 11 we have studied the effects of entangled electromagnetic fields on distant electron interference experiments. The interaction of entangled electromagnetic fields with two superconducting charge qubits (that are approximated by twolevel systems) has recently been studied 12 . In that work it has been shown that the entanglement is transferred from the photons to the superconducting charge qubits. Related work has also been reported 13 . In this paper we study the effects of entangled electromagnetic fields on the Josephson currents of distant SQUID rings.
We consider two mesoscopic SQUID rings, which are far from each other ( Fig. 1). They are irradiated with entangled microwaves, produced by a single source. In this case the phase differences across the Josephson junctions are quantum mechanical operators. Consequently the quantum currents, which are sinusoidal functions of the phase differences, are also operators and their expectation values with respect to the density matrix of the microwaves give the observed Josephson currents. It is shown that for entangled microwaves the currents in the two distant SQUID rings are correlated. We consider suitable examples of separable and entangled microwaves, which differ only by nondiagonal elements; and we show that the correlations between the induced Josephson currents are sensitive to these nondiagonal elements.
In Sec. II we consider a single SQUID ring and study its interaction with nonclassical microwaves. We assume the external field approximation, where the electromagnetic field created by the Josephson current (back reaction) is neglected. We also consider mesoscopic rings which are small in comparison to the wavelength of the microwaves. It is shown that under these assumptions the Josephson current is proportional to the imaginary part of the Weyl function of the nonclassical microwaves.
In Sec. III we analyze the experiment depicted in Fig. 1, where two distant SQUID rings are irradiated with entangled microwaves. We present examples of separable and entangled microwaves that involve number states (Sec. IV) and coherent states (Sec. V). In Sec. VI we present numerical calculations for these examples. In Sec. VII we conclude with a discussion of our results.
II. INTERACTION OF A SINGLE SQUID RING WITH NONCLASSICAL MICROWAVES
In this section we consider a single SQUID ring and study its interaction with both classical and nonclassical microwaves.
A. Classical microwaves
The current is I A = I 1 sin θ A , where θ A = 2eΦ A is the phase difference across the junction due to the total flux Φ A through the ring. In the external field approximation, Φ A is simply the externally applied flux, while the back reaction (i.e., the flux induced by the SQUID ring current) is neglected. In other words the flux LI A , where L is the self-inductance of the ring, is assumed to be much smaller than the external flux Φ A . We consider a magnetic flux with a linear and a sinusoidal component:
Φ A = V A t + φ A , φ A = A sin(ω 1 t).(1)
In this case the current is
I A = I 1 sin[ω A t + 2eA sin(ω 1 t)], ω A = 2eV A .(2)
B. Nonclassical microwaves
In this subsection we consider nonclassical electromagnetic fields, which are carefully prepared in a particular quantum state and are described by a density matrix ρ. In this case, not only the average values E , B of the electric and magnetic fields are known, but also the standard deviations ∆E, ∆B (and their higher moments).
A particular example is coherent versus squeezed microwaves. In both cases the average values ⟨E⟩, ⟨B⟩ are sinusoidal functions of time, that can be made equal for suitable values of the parameters. However the uncertainties are ΔE = ΔB = 2^{-1/2} for coherent microwaves; and ΔE = σ^{-1} 2^{-1/2}, ΔB = σ 2^{-1/2}, for squeezed microwaves (where σ is the squeezing parameter).
Another way of describing nonclassical electromagnetic fields is through the photon counting distribution P_N = ⟨N|ρ|N⟩. For example, in the case of coherent and squeezed microwaves the distribution P_N is Poissonian and sub-Poissonian, correspondingly. One of our aims in this paper is to study how the quantum noise in the nonclassical fields, quantified by ΔE, ΔB, or with the distribution P_N, affects the Josephson currents.
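As a concrete illustration of the photon counting distribution, here is a minimal numerical sketch (the mean photon number |α|^2 = 2 and the cutoff are illustrative choices, not taken from the text): for a coherent state |α⟩ one has P_N = e^{-|α|^2} |α|^{2N} / N!, which is Poissonian, so its mean and variance coincide.

import math

def coherent_PN(alpha_sq, N):
    # Photon counting distribution P_N = <N|rho|N> for a coherent state
    # with mean photon number |alpha|^2 (a Poissonian distribution).
    return math.exp(-alpha_sq) * alpha_sq**N / math.factorial(N)

alpha_sq = 2.0  # illustrative mean photon number
dist = [coherent_PN(alpha_sq, N) for N in range(40)]
mean = sum(N * p for N, p in enumerate(dist))
var = sum((N - mean)**2 * p for N, p in enumerate(dist))
print(mean, var)  # for a Poissonian distribution, mean == variance == |alpha|^2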
In quantized electromagnetic fields the vector potential A i and the electric field E i are dual quantum variables (operators). Strictly speaking the dual quantum variables should be local quantities, but we consider mesoscopic SQUID rings which are much smaller than the wavelength of the microwaves. Therefore we can integrate these quantities over the SQUID ring and obtain the magnetic flux and the electromotive force:
φ = ∮_C A_i dx_i ,  V_EMF = ∮_C E_i dx_i .  (3)
As explained above we work in the external field approximation and we neglect the back reaction flux from the electron pairs on the external microwaves. In this case the flux operator evolves as
φ̂(t) = (ξ/√2)[â† exp(iωt) + â exp(−iωt)],  (4)
where ξ is a parameter proportional to the area of the SQUID ring and the â†, â are the photon creation and annihilation operators (e.g., Refs. 4,6). In order to go beyond the external field approximation we need to consider the Hamiltonian
H = ω(â†â + 1/2) + H_SQUID + H_int,  (5)
where H_SQUID is the SQUID Hamiltonian and H_int is the interaction term between the SQUID and the microwaves. In this case the flux operator φ̂'(t) evolves as
φ̂'(t) = exp(iHt) φ̂(0) exp(−iHt) = φ̂(t) + · · ·  (6)
In this paper we work in the external field approximation and consider the flux operator of Eq. (4). Consequently the phase difference θ_A is the operator
θ̂_A = ω_A t + q[â† exp(iωt) + â exp(−iωt)],  q = √2 eξ,  (7)
and the current also becomes an operator,
Î_A = I_1 sin{ω_A t + q[â† exp(iωt) + â exp(−iωt)]}.  (8)
Expectation values of the current are calculated by taking its trace with respect to the density matrix ρ, which describes the nonclassical electromagnetic fields,
⟨Î_A⟩ = Tr(ρ Î_A) = I_1 Im[exp(iω_A t) W̃(λ_A)],  (9)
λ_A = iq exp(iω_1 t).  (10)
Here W̃(x) is the Weyl function 14 which is defined in terms of the displacement operator D(x) as
W̃(x) = Tr[ρ D(x)];  D(x) = exp(xâ† − x*â).  (11)
The tilde in the notation of the Weyl function indicates that it is the two-dimensional Fourier transform of the Wigner function.
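As a numerical check on this definition, the sketch below (a truncated Fock basis of dimension 60 and the sample values of x and N are illustrative assumptions) computes ⟨N|D(x)|N⟩ by matrix exponentiation and compares it with the standard closed form e^{-|x|^2/2} L_N(|x|^2), which is what produces the Laguerre polynomials appearing in Sec. IV below.

import numpy as np
from scipy.linalg import expm
from scipy.special import eval_laguerre

dim = 60  # Fock-space truncation; large enough for small |x|
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation operator matrix

def weyl_number_state(x, N):
    # <N| D(x) |N> computed in the truncated Fock basis.
    D = expm(x * a.conj().T - np.conj(x) * a)  # displacement operator
    return D[N, N]

x, N = 0.3 + 0.2j, 2
numeric = weyl_number_state(x, N)
analytic = np.exp(-abs(x)**2 / 2) * eval_laguerre(N, abs(x)**2)
print(numeric, analytic)  # should agree: e^{-|x|^2/2} L_N(|x|^2)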
In a similar way we can calculate the ⟨Î_A^2⟩ = Tr(ρ Î_A^2). The second (and higher) moments of the current describe the quantum statistics of the electron pairs tunnelling through the Josephson junctions. As explained earlier, nonclassical electromagnetic fields are characterized by the photon counting distribution P_N = ⟨N|ρ|N⟩. The statistics of the photons threading the ring affects the statistics of the tunnelling electron pairs, which is quantified with the ⟨Î_A^2⟩, ⟨Î_A^3⟩, etc.
III. INTERACTION OF TWO DISTANT SQUID RINGS WITH ENTANGLED MICROWAVES
In this section we consider two SQUID rings far apart from each other, which we refer to as A and B (Fig. 1). They are irradiated with microwaves which are produced by the same source and are correlated. Let ρ be the density matrix of the microwaves, and
ρ_A = Tr_B ρ,  ρ_B = Tr_A ρ,  (12)
the density matrices of the microwaves interacting with the two SQUID rings A, B, correspondingly. When the density matrix ρ is factorizable as ρ fact = ρ A ⊗ ρ B the two modes are not correlated. If it can be written as
ρ_sep = Σ_i p_i ρ_Ai ⊗ ρ_Bi,
where p i are probabilities, it is called separable and the two modes are classically correlated. Density matrices which cannot be written in one of these two forms are entangled (quantum mechanically correlated). There has been a lot of work on criteria which distinguish separable and entangled states 10 . The currents in the two SQUIDs are
⟨Î_A⟩ = I_1 Tr(ρ_A sin θ̂_A),  (13)
⟨Î_B⟩ = I_2 Tr(ρ_B sin θ̂_B).  (14)
The ⟨Î_A⟩ is written in terms of the Weyl function W̃(λ_A) in Eq. (9), and similarly for B one may obtain ⟨Î_B⟩ = I_2 Im[exp(iω_B t) W̃(λ_B)], where λ_B = iq exp(iω_2 t) and ω_B = 2eV_B.
The expectation value of the product of the two current operators is given by:
⟨Î_A Î_B⟩ = I_1 I_2 Tr(ρ sin θ̂_A sin θ̂_B).  (15)
We consider the ratio of the currents
R = ⟨Î_A Î_B⟩ / (⟨Î_A⟩ ⟨Î_B⟩).  (16)
For factorizable density matrices ρ_fact = ρ_A ⊗ ρ_B we easily see that R_fact = 1. For separable density matrices ρ_sep = Σ_i p_i ρ_Ai ⊗ ρ_Bi we get
R_sep = [Σ_i p_i ⟨Î_Ai⟩⟨Î_Bi⟩] / [(Σ_k p_k ⟨Î_Ak⟩)(Σ_l p_l ⟨Î_Bl⟩)].  (17)
We also calculate the second moments
⟨Î_A^2⟩ = I_1^2 Tr[ρ_A (sin θ̂_A)^2],  (18)
⟨Î_B^2⟩ = I_2^2 Tr[ρ_B (sin θ̂_B)^2].  (19)
As explained earlier, the statistics of the photons threading the ring affects the statistics of the tunnelling electron pairs, which is quantified with the ⟨Î_A Î_B⟩, ⟨Î_A^2⟩, ⟨Î_B^2⟩, etc. In the following sections we consider particular examples for the density matrix ρ of the nonclassical microwaves that interact with the two SQUID rings, and examine its effect on these quantities.
IV. MICROWAVES IN NUMBER STATES
We consider microwaves in the separable (mixed) state
ρ_sep = ½ (|N_1 N_2⟩⟨N_1 N_2| + |N_2 N_1⟩⟨N_2 N_1|),  (20)
where N_1 ≠ N_2. We also consider microwaves in the entangled state |s⟩ = 2^{-1/2}(|N_1 N_2⟩ + |N_2 N_1⟩), which is a pure state. The density matrix of |s⟩ is
ρ_ent = ρ_sep + ½ (|N_1 N_2⟩⟨N_2 N_1| + |N_2 N_1⟩⟨N_1 N_2|),  (21)
where the ρ sep is given by Eq. (20). It is seen that the ρ ent and the ρ sep differ only by the above nondiagonal elements, and below we calculate their effect on the Josephson currents. We note that it is possible to have 'interpolating' density matrices of the form
ρ_p = p ρ_sep + (1 − p) ρ_ent  (22)
    = ρ_sep + [(1 − p)/2] (|N_1 N_2⟩⟨N_2 N_1| + |N_2 N_1⟩⟨N_1 N_2|)
where 0 ≤ p ≤ 1. Below we present results for the two extreme cases of ρ sep , where the nondiagonal terms make no contribution; and for the ρ ent , where the nondiagonal terms make maximal contribution. We also present numerical results for the case of ρ p . In this example, the reduced density matrices are the same for both the separable and entangled states:
ρ_sep,A = ρ_ent,A = ρ_sep,B = ρ_ent,B  (23)
    = ½ (|N_1⟩⟨N_1| + |N_2⟩⟨N_2|).
Consequently in this example ⟨Î_A⟩_sep = ⟨Î_A⟩_ent, and also ⟨Î_B⟩_sep = ⟨Î_B⟩_ent.
A. Classically correlated photons
For the density matrix ρ sep of Eq. (20) we find
⟨Î_A⟩ = I_1 C sin(ω_A t);  (24)
⟨Î_B⟩ = I_2 C sin(ω_B t);  (25)
C = ½ exp(−q^2/2) [L_{N_1}(q^2) + L_{N_2}(q^2)],  (26)
where the L_n^α(x) are Laguerre polynomials (in the case of Eq. (26) we have α = 0). The currents ⟨Î_A⟩, ⟨Î_B⟩ are in this case independent of the microwave frequencies ω_1, ω_2.
The expectation value of the product of the two currents [Eq. (15)] is
⟨Î_A Î_B⟩_sep = I_1 I_2 C_1 sin(ω_A t) sin(ω_B t),  (27)
C_1 = exp(−q^2) L_{N_1}(q^2) L_{N_2}(q^2).  (28)
Consequently the ratio R of Eq. (16) is
R_sep = C_1 / C^2 = 4 L_{N_1}(q^2) L_{N_2}(q^2) / [L_{N_1}(q^2) + L_{N_2}(q^2)]^2.  (29)
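For instance, one can tabulate this ratio directly. A minimal sketch (the sampled values of q are illustrative assumptions; note that the denominator can vanish at isolated values of q, where the ratio is undefined):

import numpy as np
from scipy.special import eval_laguerre

def R_sep(q, N1, N2):
    # Time-independent ratio of Eq. (29) for number-state microwaves.
    L1, L2 = eval_laguerre(N1, q**2), eval_laguerre(N2, q**2)
    return 4 * L1 * L2 / (L1 + L2)**2

for q in (0.1, 0.5, 1.0):  # illustrative coupling values
    print(q, R_sep(q, N1=2, N2=0))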
In this example the R sep is time-independent. The moments of the currents, defined by Eqs. (18), (19), are also calculated:
⟨Î_A^2⟩ = (I_1^2/2) [1 − C_2 cos(2ω_A t)],  (30)
⟨Î_B^2⟩ = (I_2^2/2) [1 − C_2 cos(2ω_B t)],  (31)
C_2 = ½ exp(−2q^2) [L_{N_1}(4q^2) + L_{N_2}(4q^2)].  (32)

B. Quantum mechanically correlated photons

For the case of ρ_ent the reduced density matrices ρ_A, ρ_B are those given by Eq. (23), and consequently the ⟨Î_A⟩, ⟨Î_B⟩ are the same as in Eqs. (24), (25); and the ⟨Î_A^2⟩, ⟨Î_B^2⟩ are the same as in Eqs. (30), (31). However in this case the ⟨Î_A Î_B⟩ is
⟨Î_A Î_B⟩_ent = ⟨Î_A Î_B⟩_sep + I_cross,  (33)
where
I_cross = −I_1 I_2 C_3 cos[(N_1 − N_2)(ω_1 − ω_2)t] × [cos(ω_A t + ω_B t) − (−1)^{N_1−N_2} cos(ω_A t − ω_B t)],  (34)
C_3 = ½ exp(−q^2) L_{N_1}^{N_2−N_1}(q^2) L_{N_2}^{N_1−N_2}(q^2).  (35)
The term I_cross is induced by the nondiagonal elements of ρ_ent, and depends on the photon frequencies ω_1, ω_2. This term quantifies the difference between the effect of separable and entangled microwaves on the Josephson currents. We note that the nondiagonal terms of ρ_ent [Eq. (21)] are small and consequently they are very sensitive to the back reaction. Therefore our results which neglect the back reaction are relevant to experiments with small Josephson currents. In other words it is required that the fluxes L_A I_A and L_B I_B are much smaller than the external flux. The ratio R of Eq. (16) can be simplified in two distinct expressions according to whether the difference N_1 − N_2 is even or odd. In the case N_1 − N_2 = 2k, the ratio is
R_ent^{(2k)} = R_sep + [4 L_{N_1}^{−2k}(q^2) L_{N_2}^{2k}(q^2) / (L_{N_1}(q^2) + L_{N_2}(q^2))^2] cos(Ωt),  (36)
where
Ω = (N_1 − N_2)(ω_1 − ω_2).  (37)
It is seen that the R_ent^{(2k)} oscillates around the R_sep with frequency Ω given by Eq. (37). If there is no detuning between the nonclassical electromagnetic fields, i.e. ω_1 = ω_2, then R_ent^{(2k)} is constant, although it is still R_ent ≠ R_sep. In the case N_1 − N_2 = 2k + 1 the ratio is
R_ent^{(2k+1)} = R_sep − [4 L_{N_1}^{−2k−1}(q^2) L_{N_2}^{2k+1}(q^2) / (L_{N_1}(q^2) + L_{N_2}(q^2))^2] cos(Ωt) tan(ω_A t) tan(ω_B t).  (38)
In both cases the R_ent is time-dependent and it is a function of the photon frequencies ω_1, ω_2, in contrast to the case of R_sep (which is time-independent).
V. MICROWAVES IN COHERENT STATES
We consider microwaves in the classically correlated state
ρ_sep = ½ (|A_1 A_2⟩⟨A_1 A_2| + |A_2 A_1⟩⟨A_2 A_1|).  (39)
The |A_1⟩, |A_2⟩ are microwave coherent states (eigenstates of the annihilation operators). We also consider the entangled state |u⟩ = N(|A_1 A_2⟩ + |A_2 A_1⟩), with density matrix
ρ_ent = 2N^2 ρ_sep + N^2 (|A_1 A_2⟩⟨A_2 A_1| + |A_2 A_1⟩⟨A_1 A_2|),  (40)
where the normalization constant is given by
N = [2 + 2 exp(−|A_1 − A_2|^2)]^{−1/2}.  (41)
A. Classically correlated photons
For microwaves in the separable state of Eq. (39) the reduced density matrices are
ρ_sep,A = ρ_sep,B = ½ (|A_1⟩⟨A_1| + |A_2⟩⟨A_2|),  (42)
and hence the currents in A and B are
⟨Î_A⟩_sep = (I_1/2) exp(−q^2/2) {sin[ω_A t + 2q|A_1| cos(ω_1 t − θ_1)] + sin[ω_A t + 2q|A_2| cos(ω_1 t − θ_2)]},  (43)
⟨Î_B⟩_sep = (I_2/2) exp(−q^2/2) {sin[ω_B t + 2q|A_1| cos(ω_2 t − θ_1)] + sin[ω_B t + 2q|A_2| cos(ω_2 t − θ_2)]},  (44)
where θ 1 = arg(A 1 ), and θ 2 = arg(A 2 ). We have also calculated numerically the ratio R sep .
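Equation (43) can be evaluated directly. A minimal sketch (the parameter values are illustrative assumptions; in particular q is treated here as a free coupling, since e and ξ enter only through q = √2 eξ):

import numpy as np

def I_A_sep(t, I1, q, A1, A2, omega_A, omega_1, th1=0.0, th2=0.0):
    # <I_A>_sep of Eq. (43) for microwaves in the separable coherent state.
    pref = 0.5 * I1 * np.exp(-q**2 / 2)
    return pref * (np.sin(omega_A * t + 2*q*abs(A1)*np.cos(omega_1*t - th1))
                 + np.sin(omega_A * t + 2*q*abs(A2)*np.cos(omega_1*t - th2)))

t = np.linspace(0.0, 2.0e5, 1000)  # illustrative time grid
print(I_A_sep(t, I1=1.0, q=1.0, A1=np.sqrt(2), A2=0.0,
              omega_A=1.2e-4, omega_1=1.2e-4)[:5])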
B. Quantum mechanically correlated photons
For microwaves in the entangled state of Eq. (40) the reduced density matrices are
ρ_ent,A = ρ_ent,B = N^2 (|A_1⟩⟨A_1| + |A_2⟩⟨A_2| + τ|A_1⟩⟨A_2| + τ*|A_2⟩⟨A_1|),  (45)
where
τ = ⟨A_1|A_2⟩ = exp(−|A_1|^2/2 − |A_2|^2/2 + A_1* A_2).  (46)
The current in A is
⟨Î_A⟩_ent = 2N^2 ⟨Î_A⟩_sep + N^2 E F_1 exp(−q^2/2) I_1,  (47)
where
E = exp[−|A_1|^2 − |A_2|^2 + 2|A_1 A_2| cos(θ_1 − θ_2)],  (48)
F_1 = {exp[q|A_1|S_{A,1}(t) − q|A_2|S_{A,2}(t)] + exp[−q|A_1|S_{A,1}(t) + q|A_2|S_{A,2}(t)]} × sin[ω_A t + q|A_1|C_{A,1}(t) + q|A_2|C_{A,2}(t)],  (49)
and
S_{A,1} = sin(ω_1 t − θ_1), S_{A,2} = sin(ω_1 t − θ_2), C_{A,1} = cos(ω_1 t − θ_1), C_{A,2} = cos(ω_1 t − θ_2).  (50)
The current in B is
⟨Î_B⟩_ent = 2N^2 ⟨Î_B⟩_sep + N^2 E F_2 exp(−q^2/2) I_2,  (51)
where
F_2 = {exp[q|A_1|S_{B,1}(t) − q|A_2|S_{B,2}(t)] + exp[−q|A_1|S_{B,1}(t) + q|A_2|S_{B,2}(t)]} × sin[ω_B t + q|A_1|C_{B,1}(t) + q|A_2|C_{B,2}(t)],  (52)
and
S_{B,1} = sin(ω_2 t − θ_1), S_{B,2} = sin(ω_2 t − θ_2), C_{B,1} = cos(ω_2 t − θ_1), C_{B,2} = cos(ω_2 t − θ_2).  (53)
We have also calculated numerically the ratio R ent . As we already explained, the nondiagonal terms in ρ ent are very sensitive to back reaction and therefore these results are relevant to experiments with small Josephson currents. In other words it is required that the fluxes L A I A and L B I B are much smaller than the external flux.
VI. NUMERICAL RESULTS
In all numerical results of Figs. 2 to 6 the microwave frequencies are ω_1 = 1.2×10^{-4}, ω_2 = 10^{-4}, in units where k_B = ħ = c = 1. The critical currents are I_1 = I_2 = 1. The other parameters are ξ = 1, ω_A = ω_1, ω_B = ω_2, N_1 = 2, N_2 = 0, and the arguments of the coherent eigenstates are θ_1 = θ_2 = 0. For a meaningful comparison between microwaves in number states and microwaves in coherent states, we take |A_1|^2 = N_1 and |A_2|^2 = N_2, so that the average number of photons in the coherent states is equal to the number of photons in the number states.
In Fig. 2 we plot R sep against (ω 1 − ω 2 )t for currents induced by microwaves in the number state of Eq. (20) with N 1 = 2, N 2 = 0 (line of circles), and the coherent state of Eq. (39) with A 1 = √ 2, A 2 = 0 (solid line). It is seen that two different microwave states with the same average number of photons give different results on the quantum statistics of the electron pairs.
In Fig. 3 we plot R sep − R ent against (ω 1 − ω 2 )t for currents induced by microwaves in (a) the number states of Eqs. (20) and (21) with N 1 = 2, N 2 = 0, and (b) the coherent states of Eqs. (39) and (40) with
A 1 = √ 2, A 2 = 0.
It is seen that the separable and entangled states, which differ only by nondiagonal elements, give different results. As expected, the difference (which shows the effect of the nondiagonal elements) is small, but it is nonzero.
In Fig. 4 we plot (a) ⟨Î_A⟩_sep − ⟨Î_A⟩_ent, and (b) ⟨Î_A^2⟩_sep − ⟨Î_A^2⟩_ent, against (ω_1 − ω_2)t for microwaves in the coherent state ρ_A,sep of Eq. (42) and ρ_A,ent of Eq. (45) with A_1 = √2, A_2 = 0. For coherent states ρ_ent,A is not equal to ρ_sep,A [cf. Eqs. (45), (42)] and consequently the corresponding currents are different. For number states, the currents corresponding to ρ_ent,A and ρ_sep,A are the same because ρ_ent,A = ρ_sep,A [cf. Eq. (23)]. In Fig. 5 we plot ⟨Î_A Î_B⟩_sep − ⟨Î_A Î_B⟩_ent for (a) number states of Eqs. (20) and (21) with N_1 = 2, N_2 = 0, and (b) coherent states of Eqs. (39) and (40) with A_1 = √2, A_2 = 0, against (ω_1 − ω_2)t. In this figure also, we get different results due to the nondiagonal elements in the entangled state.
In Fig. 6 we plot the ratio R p of Eq. (16) against p ∈ [0, 1] for currents that are induced by the interpolating density matrix ρ p of Eq. (22) for number states with N 1 = 2, N 2 = 0. The time t has been fixed so that (ω 1 − ω 2 )t = π/2.
VII. DISCUSSION
We have considered the interaction of SQUID rings with nonclassical microwaves. We have assumed the external field approximation, where the electromagnetic field created by the Josephson currents (back reaction) is neglected. We have also considered small rings in com-parison to the wavelength of the microwaves and taken as dual quantum variables the magnetic flux and electromotive force of Eq. (3).
The Josephson current is an operator and its expectation value with respect to the density matrix of the nonclassical microwaves is the observed current. We have shown that the expectation value of the current is proportional to the imaginary part of the Weyl function [Eq. (9)]. This shows clearly how the full density matrix of the microwaves affects the Josephson current. The higher moments of the current Î M A can also be calculated and used to quantify the statistics of the tunnelling electron pairs. It has been shown that the statistics of the irradiating photons determine the tunnelling statistics of the electron pairs.
We have also considered the interaction of two distant SQUID rings A and B with two-mode nonclassical microwaves, which are produced by the same source. It has been shown that classically correlated (separable) and quantum mechanically correlated (entangled) photons induce different Josephson currents and different tunnelling statistics in the two devices. The results show that the entangled photons produce entangled Josephson currents in the distant SQUID rings. This can have applications in the general area of quantum information processing.
The work can be extended in various directions. The first is to take into account the back reaction and include the extra terms which we have neglected in Eq. (6). This can be done numerically. The second direction is the study of Bell-like inequalities for the Josephson currents; which are violated when the currents are entangled. Another direction is the potential use of the system as a detector of entangled photons. There is a lot of work on the use of mesoscopic devices as detectors 15 . Application of the present work in this direction requires further work, which could lead to the development of a detection system for entangled photons based on two distant SQUID rings.
* Corresponding author: [email protected]
FIG. 1: Two distant mesoscopic SQUID rings A and B are irradiated with nonclassical microwaves of frequencies ω_1 and ω_2, correspondingly. The microwaves are produced by the source S_EM and are correlated. Classical magnetic fluxes V_A t and V_B t are also threading the two rings A and B, correspondingly.
FIG. 2: R_sep against (ω_1 − ω_2)t for the number state of Eq. (20) with N_1 = 2, N_2 = 0 (line of circles), and the coherent state of Eq. (39) with A_1 = √2, A_2 = 0 (solid line). The photon frequencies are ω_1 = 1.2 × 10^{-4} and ω_2 = 10^{-4}, in units where k_B = ħ = c = 1.
FIG. 3: R_sep − R_ent against (ω_1 − ω_2)t for (a) number states of Eqs. (20) and (21) with N_1 = 2, N_2 = 0, and (b) coherent states of Eqs. (39) and (40) with A_1 = √2, A_2 = 0. The photon frequencies are ω_1 = 1.2 × 10^{-4} and ω_2 = 10^{-4}, in units where k_B = ħ = c = 1.
FIG. 4: (a) ⟨Î_A⟩_sep − ⟨Î_A⟩_ent, and (b) ⟨Î_A^2⟩_sep − ⟨Î_A^2⟩_ent, against (ω_1 − ω_2)t for the coherent state ρ_sep,A of Eq. (42) and ρ_ent,A of Eq. (45) with A_1 = √2, A_2 = 0. The photon frequencies are ω_1 = 1.2 × 10^{-4} and ω_2 = 10^{-4}, in units where k_B = ħ = c = 1.
FIG. 5: ⟨Î_A Î_B⟩_sep − ⟨Î_A Î_B⟩_ent for (a) number states of Eqs. (20) and (21) with N_1 = 2, N_2 = 0, and (b) coherent states of Eqs. (39) and (40) with A_1 = √2, A_2 = 0, against (ω_1 − ω_2)t. The photon frequencies are ω_1 = 1.2 × 10^{-4} and ω_2 = 10^{-4}, in units where k_B = ħ = c = 1.
FIG. 6: R_p against p for currents that are induced by the interpolating density matrix ρ_p of Eq. (22) for number states with N_1 = 2, N_2 = 0. The time t has been fixed so that (ω_1 − ω_2)t = π/2. The photon frequencies are ω_1 = 1.2 × 10^{-4} and ω_2 = 10^{-4}, in units where k_B = ħ = c = 1.
[1] N. Byers and C.N. Yang, Phys. Rev. Lett. 7, 46 (1961); F. Bloch, Phys. Rev. B 2, 109 (1970); A. Barone and G. Paterno, Physics and Applications of the Josephson Effect (Wiley, New York, 1982); M. Tinkham, Introduction to Superconductivity (McGraw-Hill, New York, 1996).
[2] Y. Makhlin, G. Schön, and A. Shnirman, Rev. Mod. Phys. 73, 357 (2001); M.A. Kastner, ibid. 64, 849 (1992); G. Schön and A.D. Zaikin, Phys. Rep. 198, 237 (1990).
[3] I. Chiorescu, Y. Nakamura, C. Harmans, and J. Mooij, Science 299, 1869 (2003); Y. Nakamura, Y.A. Pashkin, and J.S. Tsai, Nature 398, 786 (1999); C.H. van der Wal, A.C.J. ter Haar, F.K. Wilhem, R.N. Schouten, C.J.P.M. Harmans, T.P. Orlando, S. Lloyd, J.E. Mooij, Science 290, 773 (2000); D. Vion, A. Aassime, A. Cottet, P. Joyez, H. Pothier, C. Urbina, D. Esteve, and M.H. Devoret, Science 296, 886 (2002).
[4] R. Loudon and P.L. Knight, J. Mod. Optics 34, 709 (1987); R. Loudon, The Quantum Theory of Light (Oxford Univ. Press, Oxford, 2000); D.F. Walls and G. Milburn, Quantum Optics (Springer, Berlin, 1994).
[5] B. Yurke, P.G. Kaminsky, R.E. Miller, E.A. Whittaker, A.D. Smith, A.H. Silver, R.W. Simon, Phys. Rev. Lett. 60, 764 (1988); B. Yurke, L.R. Corruccini, P.G. Kaminsky, L.W. Rupp, A.D. Smith, A.H. Silver, R.W. Simon, E.A. Whittaker, Phys. Rev. A 39, 2519 (1989); P. Bertet, S. Osnaghi, P. Milman, A. Auffeves, P. Maioli, M. Brune, J.M. Raimond, S. Haroche, Phys. Rev. Lett. 88, 143601 (2002); S. Haroche, Phil. Trans. R. Soc. Lond. A 361, 1339 (2003).
[6] A. Vourdas, Phys. Rev. B 49, 12 040 (1994); Z. Phys. B 100, 455 (1996); A. Vourdas and T.P. Spiller, ibid. 102, 43 (1997); A.A. Odintsov and A. Vourdas, Europhys. Lett. 34, 385 (1996).
[7] L.M. Kuang, Y. Wang, and M.L. Ge, Phys. Rev. B 53, 11 764 (1996); J. Zou, B. Shao, and X.S. Xing, ibid. 56, 14 116 (1997); Phys. Lett. A 231, 123 (1997); Z. Phys. B 104, 439 (1997); M.J. Everitt, P. Stiffell, T.D. Clark, A. Vourdas, J.F. Ralph, H. Prance, R.J. Prance, Phys. Rev. B 63, 144 530 (2001); W. Al-Saidi and D. Stroud, ibid. 65, 014512 (2002).
[8] A. Wallraff, D.I. Schuster, A. Blais, L. Frunzio, R.S. Huang, J. Majer, S. Kumar, S.M. Girvin, and R.J. Schoelkopf, Nature 431, 162 (2004).
[9] A. Aspect, P. Grangier, G. Roger, Phys. Rev. Lett. 47, 460 (1981); Z.Y. Ou and L. Mandel, Phys. Rev. Lett. 61, 50 (1988); P.G. Kwiat, K. Mattle, H. Weinfurter, A. Zeilinger, A.V. Sergienko, Y. Shih, Phys. Rev. Lett. 75, 4337 (1995); G.D. Giuseppe, M. Atatüre, M.D. Shaw, A.V. Sergienko, B.E.A. Saleh, M.C. Teich, Phys. Rev. A 66, 013801 (2002); J.M. Raimond, M. Brune, S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
[10] R.F. Werner, Phys. Rev. A 40, 4277 (1989); R. Horodecki and M. Horodecki, ibid. 54, 1838 (1996); A. Peres, Phys. Rev. Lett. 77, 1413 (1996); V. Vedral, M.B. Plenio, M.A. Rippin, P.L. Knight, ibid. 78, 2275 (1997); V. Vedral, Rev. Mod. Phys. 74, 197 (2002).
[11] D.I. Tsomokos, C.C. Chong, and A. Vourdas, Phys. Rev. A 69, 013810 (2004).
[12] M. Paternostro, G. Falci, M. Kim, and G.M. Palma, Phys. Rev. B 69, 214502 (2004).
[13] Z. Kis and E. Paspalakis, Phys. Rev. B 69, 024510 (2004).
[14] See, for example, S. Chountasis and A. Vourdas, Phys. Rev. A 58, 848 (1998) and references therein.
[15] A.N. Korotkov and D.V. Averin, Phys. Rev. B 64, 165310 (2001); S. Pilgram and M. Büttiker, Phys. Rev. Lett. 89, 200401 (2002); A.A. Clerk, S.M. Girvin, A.K. Nguyen, A.D. Stone, Phys. Rev. Lett. 89, 176804 (2002); A.A. Clerk, S.M. Girvin, A.D. Stone, Phys. Rev. B 67, 165324 (2003); R. Ruskov and A.N. Korotkov, Phys. Rev. B 67, 241305 (2003); O. Buisson, F. Balestro, J.P. Pekola, F.W.J. Hekking, Phys. Rev. Lett. 90, 238304 (2003).
| [] |
[
"A New Game Equivalence and its Modal Logic",
"A New Game Equivalence and its Modal Logic"
] | [
"Johan Van Benthem [email protected] ",
"Nick Bezhanishvili [email protected] ",
"Sebastian Enqvist [email protected] ",
"\nInstitute for Logic, Language and Computation\nDepartment of Philosophy\nUniversity of Amsterdam\nNetherlands\n",
"\nChangjiang Scholar Program\nStanford University\nUSA\n",
"\nInstitute for Logic, Language and Computation\nTsinghua University\nChina\n",
"\nDepartment for Philosophy\nUniversity of Amsterdam\nNetherlands\n",
"\nStockholm University\nSweden\n"
] | [
"Institute for Logic, Language and Computation\nDepartment of Philosophy\nUniversity of Amsterdam\nNetherlands",
"Changjiang Scholar Program\nStanford University\nUSA",
"Institute for Logic, Language and Computation\nTsinghua University\nChina",
"Department for Philosophy\nUniversity of Amsterdam\nNetherlands",
"Stockholm University\nSweden"
] | [
"TARK 2017 EPTCS 251"
] | We revisit the crucial issue of natural game equivalences, and semantics of game logics based on these. We present reasons for investigating finer concepts of game equivalence than equality of standard powers, though staying short of modal bisimulation. Concretely, we propose a more finegrained notion of equality of 'basic powers' which record what players can force plus what they leave to others to do, a crucial feature of interaction. This notion is closer to game-theoretic strategic form, as we explain in detail, while remaining amenable to logical analysis. We determine the properties of basic powers via a new representation theorem, find a matching 'instantial neighborhood game logic', and show how our analysis can be extended to a new game algebra and dynamic game logic. | 10.4204/eptcs.251.5 | [
"https://arxiv.org/pdf/1707.08737v1.pdf"
] | 29,129,060 | 1707.08737 | 69a9225cb20421af9f8fd02bd9284416facd8bf7 |
A New Game Equivalence and its Modal Logic
2017
Johan Van Benthem [email protected]
Nick Bezhanishvili [email protected]
Sebastian Enqvist [email protected]
Institute for Logic, Language and Computation
Department of Philosophy
University of Amsterdam
Netherlands
Changjiang Scholar Program
Stanford University
USA
Institute for Logic, Language and Computation
Tsinghua University
China
Department for Philosophy
University of Amsterdam
Netherlands
Stockholm University
Sweden
A New Game Equivalence and its Modal Logic
TARK 2017 EPTCS 251
201710.4204/EPTCS.251.5
We revisit the crucial issue of natural game equivalences, and semantics of game logics based on these. We present reasons for investigating finer concepts of game equivalence than equality of standard powers, though staying short of modal bisimulation. Concretely, we propose a more finegrained notion of equality of 'basic powers' which record what players can force plus what they leave to others to do, a crucial feature of interaction. This notion is closer to game-theoretic strategic form, as we explain in detail, while remaining amenable to logical analysis. We determine the properties of basic powers via a new representation theorem, find a matching 'instantial neighborhood game logic', and show how our analysis can be extended to a new game algebra and dynamic game logic.
Introduction
Games are a basic model for interactive agency, but how much structure do we want to consider? Game theory offers strategic form games and extensive games, which represent two levels of structure, less or more detailed. Logic of games has also looked at other natural invariances between representations of games, such as equivalence of powers for players. As in other areas of mathematics, the search for natural invariances continues, and in this paper we offer a new notion bridging between game theory and logic: strong power equivalence, that uses powers encoding a sort of qualitative equilibria. We determine its properties in a new representation theorem for the "basic powers" in a game, show that it has a natural associated logic, and that it supports an interesting new game algebra where the methodological principle of compositionality eventually forces us to change from functional to relational strategies. Besides the representation theorem for basic powers, the main technical contribution of the paper is a completeness theorem for the new game logic that we define. The proof uses a technique developed in [3], but requires a non-trivial adaptation due to the presence of extra frame constraints.
We believe that our proposed game equivalence is new, but even so, it fits with a body of earlier work. Our approach is partly inspired by the extensive computational literature on process equivalences, ranging from coarser trace equivalence to more fine-grained notions of bisimulation [10]. Even more central in our approach was the by now standard notion of power equivalence, implicit in the game algebra of Parikh [13], which also links with the set-theoretic forms for games presented in [4]. Another obvious precursor inside game theory is the celebrated transformation analysis of equivalent games with imperfect information by Thompson [14] (refined by Elmes and Reny in [6]), which is close to power equivalence. But game theory also has comparative discussions of the information available in extensive forms and in strategic normal forms [7], a style of analysis that remains to be connected to our representation theorems and logics for different levels of describing games.
Powers
Powers and power equivalence
We begin by reviewing a standard logical notion of game equivalence in terms of powers of the players. For a standard overview of the basic concepts of game theory, see [12]. For more on equivalence of extensive games, see [4,6,14].
Definition 1.
A tree T is a prefix closed subset of N * , subject to the condition that if w · j ∈ T and i < j then w · i ∈ T as well. The empty word ε is the root of the tree.
Definition 2. An extensive game G for a finite set of players A with outcomes in the set O is a tuple (T, t, o, Π) where T is a finite tree, t a map from T to A, o a map from branches of T to O, and Π a partition of T subject to the following condition: for any pair w, v within the same partition cell of Π, w and v have the same number of children in T, and furthermore t(w) = t(v). If all partition cells of Π are singletons we call G a game of perfect information, and we omit Π.
Maximal branches of T will also be called full matches, and prefixes of maximal branches are called partial matches.
A strategy for player a ∈ A is a map σ : t −1 [a] → N where w · σ (w) is a child of w for each w with t(w) = a, and σ (w) = σ (w ′ ) whenever w, w ′ are in the same partition cell in Π. A strategy profile is a tuple (σ a ) a∈A of one strategy for each player in A. A strategy profile p completely determines a full match, and hence a leaf of the game tree, so we can speak of the outcome of p, denoting it by o(p). Generally, we say that a full match m of G is guided by the strategy σ for a if for every prefix w of m such that t(w) = a, σ (w) is also a prefix of m. Match(σ ) is the set of σ -guided matches.
We denote the set of games for players A with outcomes in O as G(A, O). For two-player games we call the players by A (Alice) and B (Bob). We set Ā = B and B̄ = A.
Note that we have not attributed payoffs to matches in a game or preferences over the outcomes, but rather (and more generally) simply outcomes from some fixed chosen set. In this sense we are dealing with game forms rather than proper games. We return to the issue of preferences in Section 7.1.
Definition 3. Let G = (T ,t, o, Π) be a game with outcomes in O. A set P ⊆ O is a power of player a ∈ A in the game G if there is a strategy σ for a in G such that o(m) ∈ P for every σ -guided match m.
Given a player a ∈ A we let P a (G ) denote the set of powers of a in G .
Two games G 1 , G 2 ∈ G(A, O) are power equivalent if for all a ∈ A: P a (G 1 ) = P a (G 2 ). We denote this by G 1 ∼ G 2 . If P a (G 1 ) = P a (G 2 ) for some specific a ∈ A, we write G 1 ∼ a G 2 .
Every game G in G(A, O) gives rise to a tuple (P a (G )) a∈A of subsets of O, which represents a crucial aspect of social scenarios: the abilities of participants to force outcomes.
Powers in two-player games, our focus in what follows, are characterized by three formal properties, for a set of outcomes O, and a pair F A , F B of families of subsets of O:
Non-emptiness: F_A and F_B are both non-empty.
Monotonicity: If P ∈ F_A (P ∈ F_B) and P ⊆ Q ⊆ O, then Q ∈ F_A (Q ∈ F_B).
Consistency: If P ∈ F_A and Q ∈ F_B, then P ∩ Q ≠ ∅.

Theorem 1. Suppose F_A, F_B ⊆ P(O). Then the pair (F_A, F_B) satisfies the Non-emptiness, Monotonicity and Consistency conditions if, and only if, there exists a game G such that F_A = P_A(G) and F_B = P_B(G).

A representation theorem also holds for perfect information games, with this additional property:

Determinacy: For all sets P ⊆ O, either P ∈ F_A or O \ P ∈ F_B.

Theorem 2. Suppose F_A, F_B ⊆ P(O). Then the pair (F_A, F_B) satisfies the Non-emptiness, Monotonicity, Consistency and Determinacy conditions if, and only if, there exists a game of perfect information G such that F_A = P_A(G) and F_B = P_B(G).
For proofs of these results we refer to [2].
Neighborhood logic and bisimilarity
The use of neighborhood semantics to interpret a propositional dynamic logic of powers in determined games dates back to [13]. Here, we review a modal logic GL for powers in two-player games from [2], which drops the game constructions (for these, see Section 6) as well as determinacy, referring explicitly to powers of separate players in the syntax. Given a set of propositional variables Prop, the syntax of GL is given by the following grammar:
ϕ := p ∈ Prop | ϕ ∧ ϕ | ¬ϕ | [A]ϕ | [B]ϕ
The semantics for this logic uses neighborhood models that assign each player a neighborhood relation representing the powers of that player relative to each world. Of course, we must impose suitable constraints to ensure that these actually behave as powers of players in some game. The representation result Theorem 1 tells us what these should be:

Definition 4. A game frame is a triple (W, R_A, R_B) such that W is a set of worlds, R_A, R_B ⊆ W × P(W), and for each u ∈ W and P ∈ {A, B}:
Non-emptiness: there are Z, Z' ⊆ W such that (u, Z) ∈ R_A and (u, Z') ∈ R_B;
Monotonicity: if (u, Z) ∈ R_P and Z ⊆ Z' ⊆ W, then (u, Z') ∈ R_P;
Consistency: if (u, Z) ∈ R_A and (u, Z') ∈ R_B, then Z ∩ Z' ≠ ∅.
A game model M = (W, R_A, R_B, V) is a game frame together with a valuation V assigning a set of worlds to each propositional variable.

We define the interpretations of all formulas in a game model M = (W, R_A, R_B, V) as follows: (a) [[p]] = V(p), (b) [[ϕ ∧ ψ]] = [[ϕ]] ∩ [[ψ]], (c) [[¬ϕ]] = W \ [[ϕ]], (d) [[[P]ϕ]] = {u ∈ W | (u, Z) ∈ R_P for some Z ⊆ [[ϕ]]}.

Game models come with a natural notion of bisimulation:

Definition 5. Let M = (W, R, V), M' = (W', R', V') be game models. A relation B ⊆ W × W' is a power bisimulation if, for all uBu' and P ∈ {A, B}:
Forth: For all Z such that uR_P Z, there exists a Z' such that u'(R'_P)Z' and the following condition holds:
Forth-Back: For all v' ∈ Z' there is some v ∈ Z such that vBv'.
Back: For all Z' such that u'(R'_P)Z' there is some Z such that uR_P Z and the following condition holds:
Back-Forth: For all v ∈ Z there is some v' ∈ Z' such that vBv'.
We say that pointed game models M, w and N, v are power bisimilar, written M, w ←→ N, v, if there is a power bisimulation B between M and N such that wBv.
All formulas of GL are invariant for power bisimilarity:
Proposition 1. If M, w ←→ N, v then M, w ⊨ ϕ iff N, v ⊨ ϕ, for each formula ϕ of GL.
The logic GL can be axiomatized by a simple extension of monotone (multi-)modal logic. Here is a version using axiom schemata and a rule of replacement of equivalents:
Axioms for GL:
Non-Em: [P]⊤
Mon: [P]ϕ → [P](ϕ ∨ ψ)
Cons: [P]ϕ → ¬[P̄]¬ϕ
Proof rules:
MP: from ϕ → ψ and ϕ, infer ψ
RE: from ϕ ↔ ψ and θ, infer θ[ϕ/ψ]
where θ[ϕ/ψ] is the result of substituting some occurrences of the formula ψ by ϕ in θ. We denote this system of axioms by GL and write GL ⊢ ϕ to say that the formula ϕ is provable in this axiom system.
Theorem 3. The logic GL is sound and complete for validity on game frames.
The completeness proof is an exercise involving a straightforward canonical model construction, which we omit. Furthermore, GL is decidable and has the finite model property.
Rethinking powers and game equivalence

From powers to strategic equivalence

Power equivalence, while a natural and simple notion of game equivalence, is relatively coarse. In particular, it misses much of the interactive nature of games. To illustrate what we mean by this, here is an example from [2].
Consider the two games depicted in Figure 1. In both games, each player can perform two actions "left" and "right", and there are three possible outcomes 1, 2, 3. If Alice moves left, then the outcome is 1 regardless of the action chosen by Bob, but if Alice moves right, the outcome depends on the actions of Bob: if Bob moves left, the outcome is 2, otherwise 3. The difference lies in which player moves first. In the figure, the game where Alice chooses first is depicted to the left, and the game where Bob chooses first to the right. Figure 1: Two power equivalent games.
It is easy to see that each player has the same powers in both games, and this is the basis for standard game logics (the games represent two sides of a standard propositional distribution law). But the interaction of the players looks different: in the right game, A has an obvious strategy for which the possible outcomes are precisely 1 and 2. But in the left game, the only way that A can exclude the outcome 3 is to go left at the start of the game, making 1 the only possible outcome. Thus, it is doubtful if one should see these games as equivalent.
Another way of phrasing the difference is this. The two games differ if we think of powers more 'socially' as what a player is going to force while at the same time recording which choices are left intentionally to the other player. That is, both players have a say, and the notion of power becomes oriented toward both players, more in the spirit of game-theoretic equilibrium. This intuition can be made a bit more precise if we bring in a standard game-theoretic device. Let us display the strategic forms of the two games, with rows corresponding to strategies for A and columns strategies of B:
Left game (A moves first):
1 1
2 3

Right game (B moves first):
1 1
2 1
1 3
2 3
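These matrices make both notions easy to compute. A minimal sketch (the function names are ours, not from the text) reads the exact outcome sets of rows and columns — the basic powers of Definition 7 below — and their upward closures, the ordinary powers, confirming that the two games have the same powers while A's strategies realize different outcome sets:

from itertools import combinations

def basic_powers(M):
    # Exact outcome sets of the rows (player A) and columns (player B).
    return ({frozenset(r) for r in M}, {frozenset(c) for c in zip(*M)})

def up_close(family, outcomes):
    # Upward closure: the ordinary powers generated by the basic ones.
    closed = set()
    for P in family:
        rest = list(outcomes - P)
        for k in range(len(rest) + 1):
            for extra in combinations(rest, k):
                closed.add(frozenset(P | set(extra)))
    return closed

def show(family):
    return sorted(tuple(sorted(P)) for P in family)

left = [[1, 1], [2, 3]]                   # A moves first
right = [[1, 1], [2, 1], [1, 3], [2, 3]]  # B moves first
O = {1, 2, 3}
for M in (left, right):
    rA, cB = basic_powers(M)
    print(show(up_close(rA, O)), show(up_close(cB, O)))  # same for both games
    print(show(rA), show(cB))  # A's exact outcome sets differ between the games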
Looking at the yields of columns and rows, the above difference is clear. Here is a finer notion of game equivalence, inspired by the 'matrix logic' of [1]:

Definition 6. Let G_1 and G_2 be two-player games over the set of outcomes O. A strategy profile bisimulation between these two games is a relation R ⊆ P_1 × P_2, where P_1 is the set of strategy profiles of G_1 and P_2 is the set of strategy profiles of G_2, such that if (σ_1, τ_1)R(σ_2, τ_2), then:
Atomic: o_1(σ_1, τ_1) = o_2(σ_2, τ_2),
Forth(A): For all strategies σ'_1 for A in G_1 there is some strategy σ'_2 in G_2 with (σ'_1, τ_1)R(σ'_2, τ_2),
Back(A): For all strategies σ'_2 for A in G_2 there is some strategy σ'_1 in G_1 with (σ'_1, τ_1)R(σ'_2, τ_2),
Forth(B): For all strategies τ'_1 for B in G_1 there is some strategy τ'_2 in G_2 with (σ_1, τ'_1)R(σ_2, τ'_2),
Back(B): For all strategies τ'_2 for B in G_2 there is some strategy τ'_1 in G_1 with (σ_1, τ'_1)R(σ_2, τ'_2).
We call G_1 and G_2 strategic form equivalent if there is a strategy profile bisimulation R between them, relating every profile in P_1 to some profile in P_2, and vice versa.
This equivalence concept is more fine-grained than power equivalence. In particular, the power equivalent games displayed in Figure 1 are not strategic form equivalent, as can be seen by inspecting their matrix forms. However, this approach sacrifices much of the logical simplicity of power equivalence. We therefore proceed to modify the notion of power itself in line with the above strategic form perspective.
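Definition 6 can be checked mechanically by a greatest-fixpoint computation: start from all outcome-matching profile pairs and repeatedly discard pairs that violate a Forth or Back clause. A sketch (the naming is ours; the matrices are those displayed above):

from itertools import product

def sf_equivalent(M1, M2):
    # Decide strategic form equivalence (Definition 6) by computing the
    # greatest strategy profile bisimulation as a fixpoint.
    P1 = list(product(range(len(M1)), range(len(M1[0]))))
    P2 = list(product(range(len(M2)), range(len(M2[0]))))
    R = {(p, q) for p in P1 for q in P2 if M1[p[0]][p[1]] == M2[q[0]][q[1]]}
    changed = True
    while changed:
        changed = False
        for (s1, t1), (s2, t2) in list(R):
            okA = (all(any(((a1, t1), (a2, t2)) in R for a2 in range(len(M2)))
                       for a1 in range(len(M1)))
               and all(any(((a1, t1), (a2, t2)) in R for a1 in range(len(M1)))
                       for a2 in range(len(M2))))
            okB = (all(any(((s1, b1), (s2, b2)) in R for b2 in range(len(M2[0])))
                       for b1 in range(len(M1[0])))
               and all(any(((s1, b1), (s2, b2)) in R for b1 in range(len(M1[0])))
                       for b2 in range(len(M2[0]))))
            if not (okA and okB):
                R.discard(((s1, t1), (s2, t2)))
                changed = True
    return (all(any((p, q) in R for q in P2) for p in P1)
        and all(any((p, q) in R for p in P1) for q in P2))

left = [[1, 1], [2, 3]]
right = [[1, 1], [2, 1], [1, 3], [2, 3]]
print(sf_equivalent(left, right))  # False: the Figure 1 games differ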
Basic powers and strong equivalence
Our proposed new game equivalence works as follows.
Definition 7. Let G be any game in G(A, O), let a ∈ A. A power P ⊆ O is said to be a basic power for a in G if there is a strategy σ for a in G such that P = {o(m) | m ∈ Match(σ )}. The set of all basic powers of a in G is denoted by B a (G ).
Definition 8. Two games G 1 and G 2 are strongly power equivalent, written
G 1 ≃ G 2 , iff B a (G 1 ) = B a (G 2 ) for all a ∈ A. We write G 1 ≃ a G 2 to say that B a (G 1 ) = B a (G 2 ).
Strong power equivalence is more fine-grained than power equivalence: the two games in Figure 1 are not strongly power equivalent. It also retains a connection to strategic forms.
Proposition 2. Any two strategic form equivalent games are strongly power equivalent, and any two strongly power equivalent games are power equivalent.
All inclusions are strict here. Here are two games that are strongly power equivalent but not strategic form equivalent, displayed in strategic form with outcome set {0, 1}:

Left game:
0 1 0
0 0 0

Right game:
1 0 0
1 1 0
0 0 0
In the matrix on the right, the profile in the middle upper square is not bisimilar with any profile on the left.
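To verify the claim computationally — a minimal sketch, assuming the matrices exactly as displayed above — the row and column outcome sets coincide (strong power equivalence), while the profile-bisimulation test sketched earlier returns False for this pair:

def basic_powers(M):
    return ({frozenset(r) for r in M}, {frozenset(c) for c in zip(*M)})

left = [[0, 1, 0], [0, 0, 0]]
right = [[1, 0, 0], [1, 1, 0], [0, 0, 0]]
print(basic_powers(left) == basic_powers(right))  # True: same basic powers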
As a prelude to our later logical analysis, we generalize strong power equivalence to games with different outcomes. Given
G_1 ∈ G(A, O_1) and G_2 ∈ G(A, O_2), R ⊆ O_1 × O_2 is a strategy bisimulation between G_1 and G_2 if, for all a ∈ A:
Forth: For all Z_1 ∈ B_a(G_1), there exists Z_2 ∈ B_a(G_2) such that Z_1 R̃ Z_2,
Back: For all Z_2 ∈ B_a(G_2), there exists Z_1 ∈ B_a(G_1) such that Z_1 R̃ Z_2,
where R̃ is the Egli-Milner lifting of R. I.e., Z R̃ Z' if, for all x ∈ Z, there is x' ∈ Z' with xRx', and vice versa. It is clear that strong power equivalence is a special case of this.
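The lifting itself is a two-line predicate. A sketch (the naming is ours):

def egli_milner(R, Z1, Z2):
    # Egli-Milner lifting of a relation R <= O1 x O2 to sets: every x in Z1
    # has an R-successor in Z2, and every y in Z2 has an R-predecessor in Z1.
    return (all(any((x, y) in R for y in Z2) for x in Z1)
        and all(any((x, y) in R for x in Z1) for y in Z2))

R = {(1, 'a'), (2, 'b'), (3, 'b')}
print(egli_milner(R, {1, 2}, {'a', 'b'}))  # True
print(egli_milner(R, {1}, {'a', 'b'}))     # False: 'b' has no predecessor in {1}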
Having proposed our new notion of game equivalence, we now determine its basic properties. This is the content of the following representation theorem for basic powers. Obviously, the earlier monotonicity condition has to be dropped, since it typically fails on our new reading of powers as also offering choices to the other player. On the other hand, this role for the other player also validates a new condition that did not hold before. Consider any pair F A , F B ⊆ P(O):
Instantiatedness: Given P ∈ F_A (P ∈ F_B): for any x ∈ P, there is some P' ∈ F_B (P' ∈ F_A) with x ∈ P'.

Theorem 4. Suppose F_A, F_B ⊆ P(O). Then the pair (F_A, F_B) satisfies the Non-emptiness, Instantiatedness and Consistency conditions if, and only if, there exists a game G such that F_A = B_A(G) and F_B = B_B(G).
We can give a more compact statement of these conditions in terms of the above Egli-Milner lifting, applied to the membership relation between outcomes and sets of outcomes. Writing ∈̃ for this lifted relation, the Instantiatedness and Consistency conditions together become: for all P ∈ F_A, we have P ∈̃ F_B, and for all P ∈ F_B, we have P ∈̃ F_A.
In order to prove the theorem, it will be convenient to work with games in strategic form: Definition 9. A strategic form two-player game with outcomes in O is a tuple (Σ A , Σ B , o) such that Σ A and Σ B are non-empty sets (interpreted as strategy sets for each player) and o :
Σ A × Σ B → O is the outcome map. An element of Σ A × Σ B is called a strategy profile.
We can define powers and instantiated powers for strategic form games in the expected manner. In particular an instantiated power for Player A is a subset P of O such that, for some strategy σ ∈ Σ A :
P = {u ∈ O | o(σ , σ ′ ) = u for some σ ′ ∈ Σ B }
and dually for B. Since every extensive game has a strategic normal form, and conversely every strategic form game is the strategic normal form of some extensive game of imperfect information, we can work with strategic and extensive games interchangeably (see for example [12]).
It is straightforward to check that the conditions Non-emptiness, Consistency and Instantiatedness hold for the instantiated powers of any game. For the converse, let F A , F B ⊆ P(O) be given, and suppose all three conditions hold for the pair (F A , F B ). We shall construct a game G such that the instantiated powers of each player P in G coincides with the set F P . We construct G as a strategic form game, as follows:
• The set Σ B of strategies for B is just the set F B × O × {0, 1}.
• The set Σ A of strategies for A is defined as the collection of all maps c : Σ B → O such that:
-c(Z, u, j) ∈ Z for all Z ∈ F B , u ∈ O and j ∈ {0, 1}, and -the image c[Σ B ] of the set Σ B under the map c is a member of F A .
• The outcome map o sends a strategy profile (c, (Z, u, j)) ∈ Σ A × Σ B to c(Z, u, j).
The set Σ B is non-empty by the Non-emptiness condition, and it will follow from Claim 1 below that Σ A is also non-empty. The appearance of the set O × {0, 1} in this construction is merely a way to create "enough copies" of each set in F B to make sure that certain suitable strategies for A can be defined. In particular, it allows us to establish the following claim:
Claim 1. For every set Z ∈ F A , there exists a strategy c for A in G such that c[F B × O × {0, 1}] = Z. Proof of Claim 1. Suppose Z ∈ F A . For every u ∈ Z, there exists some Z ′ ∈ F B with u ∈ Z ′ , by the Instantiatedness property. So we can define a choice function g : Z → F B such that for each u ∈ Z we have u ∈ g(u)
. We can modify this g to obtain a map
g ′ : Z → F B × O × {0, 1}
by mapping u ∈ Z to the triple (g(u), u, 0). We can now define the strategy c as follows: given a triple (g(u), u, 0) in the image of Z under the map g ′ , we set c(g(u), u, 0) = u. For every triple (Z ′ , u ′ , k) not in the image of g ′ , we set c(Z ′ , u ′ , k) to be some arbitrary element of Z ∩ Z ′ , which exists by the Consistency condition. Clearly we get that the image of the set F B × O × {0, 1} under the map c is equal to Z. Furthermore, since c(Z ′ , u ′ , k) ∈ Z ′ for each triple (Z ′ , u ′ , k), we get that c is a legitimate strategy for A in G .
A second claim that we will need is the following:

Claim 2. Let Z ∈ F_B, u ∈ Z, and let v ∈ O and k ∈ {0, 1}. Then there exists a strategy c for A in G such that c(Z, v, k) = u.

Proof of Claim 2. This is a variation of the proof of Claim 1: by Instantiatedness pick some P ∈ F_A with u ∈ P, and define the map g' as before but sending each x ∈ P to the triple (g(x), x, 1 − k), so that the triple (Z, v, k) is not in its image. We may then set c(Z, v, k) = u (which is legitimate since u ∈ Z and u ∈ P), and assign to every other triple (Z', u', j) outside the image of g' an arbitrary element of P ∩ Z'. The image of c is then P ∈ F_A, so c is a strategy for A.

We now show that every instantiated power for either player P is an element of F_P, and vice versa. We have four different inclusions to prove, two for each player.

Suppose that Z ∈ F_A. By Claim 1 there exists a strategy c for A with c[F_B × O × {0, 1}] = Z, so the outcomes consistent with c are precisely the members of Z, and Z is an instantiated power for A in G. Conversely, suppose that Z is an instantiated power for A in G. Then there exists a strategy c for A such that the possible outcomes consistent with c are precisely the members of Z. It follows that c[F_B × O × {0, 1}] = Z. But since the strategies for A are subject to the constraint that c[F_B × O × {0, 1}] ∈ F_A, it follows that we must have Z ∈ F_A.

To prove the inclusions for B we need the following claim:

Claim 3. Let (Z, u, k) ∈ Σ_B, and let Z' be the set of outcomes consistent with the strategy (Z, u, k). Then Z = Z'.

Proof of Claim 3. Every outcome u' ∈ Z' consistent with (Z, u, k) is of the form c(Z, u, k) for some strategy c ∈ Σ_A, which means that u' = c(Z, u, k) ∈ Z. Conversely, if u' ∈ Z then by Claim 2 there exists a strategy c for A such that c(Z, u, k) = u'. So u' is an outcome that is consistent with the strategy (Z, u, k), hence u' ∈ Z'. So we get Z = Z', and the proof is finished.
We can now easily prove both of the inclusions for B: if Z ∈ F B , then (Z, u, 0) is a legitimate strategy for B for any arbitrarily chosen u ∈ O, and it now follows directly from Claim 3 that Z is an instantiated power for B. Conversely, if Z is an instantiated power for B in G then there is some strategy (Z ′ , u, k) for B witnessing this. By Claim 3 we get Z = Z ′ , and since (Z ′ , u, k) is a strategy for B we have Z ′ ∈ F B . So Z ∈ F B , and we are done.
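The construction in this proof is effective and can be replayed on small instances. The sketch below (the example pair (F_A, F_B) is an illustrative choice satisfying the three conditions) builds Σ_B = F_B × O × {0, 1} and Σ_A by brute-force enumeration, then confirms that the instantiated powers recovered from the game coincide with F_A and F_B, as Theorem 4 asserts:

from itertools import product

def build_game(FA, FB, O):
    # The strategic-form game from the proof of Theorem 4 (small instances only).
    SB = [(Z, u, j) for Z in FB for u in O for j in (0, 1)]
    # All maps c : SB -> O with c(Z,u,j) in Z and with image of c in FA.
    SA = [c for c in product(*[sorted(Z) for (Z, _, _) in SB])
          if frozenset(c) in FA]
    outcome = lambda c, k: c[k]  # k indexes a strategy of B in SB
    return SA, SB, outcome

O = frozenset({0, 1, 2})
FA = {frozenset({0, 1}), frozenset({2})}   # satisfies the three conditions
FB = {frozenset({0, 2}), frozenset({1, 2})}
SA, SB, o = build_game(FA, FB, O)
powsA = {frozenset(o(c, k) for k in range(len(SB))) for c in SA}
powsB = {frozenset(o(c, k) for c in SA) for k in range(len(SB))}
print(powsA == FA, powsB == FB)  # both True by Theorem 4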
From this proof, we can also read off the following result:
Theorem 5. Our properties of basic powers also capture the powers computed from rows and columns of matrix games.
At present we do not have a representation theorem for basic powers in the special case of perfect information games. It is easy to find additional conditions on powers that hold in this setting, but we have not yet found a complete set.
The logic of basic powers: instantial game logic
What is the game logic that goes with strong power equivalence? Our earlier notion of strategy bisimulation points the way. It resembles the instantial neighborhood bisimulations introduced in [?] as the invariance underlying instantial neighborhood logic. Accordingly, we now introduce a logic for games at this level of structure. The syntax of instantial game logic IGL is given by the following grammar:
ϕ := p ∈ Prop | ϕ ∧ ϕ | ¬ϕ | [A](Ψ; ϕ) | [B](Ψ; ϕ)
where Ψ ranges over finite sets of formulas of IGL. We sometimes write [P](ψ₁, ..., ψₙ; ϕ) rather than [P]({ψ₁, ..., ψₙ}; ϕ), [P](ψ; ϕ) for [P]({ψ}; ϕ), and [P]ϕ for [P](∅; ϕ).

The semantics of IGL, as for GL, uses neighborhood models. However, the constraints are different, since we are now dealing with basic powers. These constraints come from our representation result for basic powers, Theorem 4.

Definition 10. An instantial game frame (W, R_A, R_B) is a triple with W a set, and R_P ⊆ W × PW for each player P ∈ {A, B}, where for all u ∈ W, the pair (R_A[u], R_B[u]) satisfies Non-emptiness, Instantiatedness and Consistency. Instantial game models then add a valuation for propositional variables.

The key clause in the truth definition in instantial game models M = (W, R_A, R_B, V) runs as follows:
u ∈ [[[P](ψ₁, ..., ψₖ; ϕ)]] iff there is some Z ⊆ W such that (u, Z) ∈ R_P, Z ⊆ [[ϕ]], and Z ∩ [[ψᵢ]] ≠ ∅ for i ∈ {1, ..., k}.
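As a small worked instance (our own illustration, not taken from the text): let W = {w, v₁, v₂}, let R_A[w] = R_B[w] = {{v₁, v₂}} and R_A[vᵢ] = R_B[vᵢ] = {{vᵢ}} for i ∈ {1, 2}, and let V(p) = {v₁} and V(q) = {v₁, v₂}. Each pair (R_A[u], R_B[u]) satisfies Non-emptiness, Instantiatedness and Consistency, so this is an instantial game model. Then

w ∈ [[[A](p; q)]], witnessed by Z = {v₁, v₂}, since Z ⊆ [[q]] and Z ∩ [[p]] = {v₁} ≠ ∅,

while w ∉ [[[A](q; p)]], since the only Z with (w, Z) ∈ R_A is {v₁, v₂}, which is not included in [[p]].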
If we interpret formulas as 'outcomes', then we see why IGL is a natural language for basic powers: the formula [P](Ψ; ⋁Ψ) says that Ψ is a basic power for the player P, while the weaker formula [P]⋁Ψ says that Ψ is simply a power.
Instantial models come with a notion of bisimulation which stands to strong power equivalence as standard neighborhood bisimulations stands to power equivalence: Definition 11. Let M = (W, R,V ) and M ′ = (W ′ , R ′ ,V ′ ) be any neighborhood models. The relation B ⊆ W × W ′ is said to be an instantial neighborhood bisimulation if, for all uBu ′ and P ∈ {A, B}, we have:
Forth For all Z such that uR P Z, there is some Z ′ such that u ′ (R ′ P )Z ′ and the following conditions hold:
Forth-Back For all v ′ ∈ Z ′ there is some v ∈ Z such that vBv ′ . Forth-Forth For all v ∈ Z there is some v ′ ∈ Z ′ such that vBv ′ .
Back For all Z′ such that u′(R′_P)Z′ there is some Z such that uR_P Z and the following conditions hold:
Back-Forth For all v ∈ Z there is some v ′ ∈ Z ′ such that vBv ′ . Back-Back For all v ′ ∈ Z ′ there is some v ∈ Z such that vBv ′ .
Pointed instantial game models M, w and N, v are instantial neighborhood bisimilar, M, w ←→ N, v, if some instantial neighborhood bisimulation B between M and N has wBv.
Formulas of IGL are invariant for instantial bisimilarity:
Proposition 3. If M, w ←→ N, v then M, w ϕ iff N, v ϕ, for each formula ϕ of IGL.
More can be said about the model theory of instantial neighborhood simulation, but the present facts suffice here.
Axiomatizing IGL
In this section we axiomatize the valid formulas of IGL, thus pinning down the modal logic of basic powers. Our system is a gentle modification of instantial neighborhood logic.
IGL axioms.
Mon [P](ψ₁, ..., ψₙ; ϕ) → [P](ψ₁ ∨ α₁, ..., ψₙ ∨ αₙ; ϕ ∨ β)

Weak [P](Ψ; ϕ) → [P](Ψ′; ϕ) for Ψ′ ⊆ Ψ

Un [P](ψ₁, ..., ψₙ; ϕ) → [P](ψ₁ ∧ ϕ, ..., ψₙ ∧ ϕ; ϕ)

Lem [P](Ψ; ϕ) → [P](Ψ ∪ {γ}; ϕ) ∨ [P](Ψ; ϕ ∧ ¬γ)

Bot ¬[P](⊥; ϕ)

Axioms for frame constraints.

Non-Em [P]⊤

Inst [A](ψ; ⊤) ↔ [B](ψ; ⊤)

Cons [P]ϕ → ¬[P]¬ϕ
Proof rules.
MP: from ϕ → ψ and ϕ, infer ψ. RE: from ϕ ↔ ψ, infer θ ↔ θ[ϕ/ψ].
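As a tiny sample derivation in this system (ours, for illustration): by Weak with Ψ′ = ∅ we have ⊢ [P](Ψ; ϕ) → [P](∅; ϕ), i.e. ⊢ [P](Ψ; ϕ) → [P]ϕ; chaining this with Cons by propositional reasoning and MP yields ⊢ [P](Ψ; ϕ) → ¬[P]¬ϕ. So having a basic power for ϕ already excludes having a power for ¬ϕ.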
We denote this system of axioms by IGL and write IGL ⊢ ϕ to say that the formula ϕ is provable in this axiom system. Theorem 6. The system IGL is sound and complete for validity over instantial game models.
The soundness part of this result is checked case by case, and the easy argument is omitted. The completeness proof proceeds via a normal form argument, following an idea in [3]. The adaptation to the present setting is not trivial, however, since we have to deal with the new frame constraints of Non-emptiness, Consistency and Instantiatedness. The main contribution here is thus to prove that the model construction satisfies these constraints. We outline the key parts of the proof below.
Definition 12. The modal depth of a formula is defined inductively by:
- d(p) = 0
- d(¬ϕ) = d(ϕ)
- d(ϕ ∧ ψ) = max(d(ϕ), d(ψ))
- d([P](Γ; ϕ)) = max(d[Γ ∪ {ϕ}]) + 1
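For instance (our own example): d([A](p; q ∧ ¬r)) = max{d(p), d(q ∧ ¬r)} + 1 = 0 + 1 = 1, and d([B]({[A](p; q)}; r)) = max{d([A](p; q)), d(r)} + 1 = max{1, 0} + 1 = 2.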
Definition 13. Given a finite set of propositional variables Q, a formula ϕ is said to be a Q-formula if all propositional variables appearing in ϕ belong to Q.
Given k ∈ ω and a finite set Q of propositional variables, a (Q, k)-description is a consistent Q-formula ϕ of modal depth ≤ k, such that for any Q-formula θ of depth ≤ k, we have ϕ ⊢ θ or ϕ ⊢ ¬θ.
Note that there are at most finitely many Q-formulas of depth ≤ k, given that Q is finite. We omit the (standard) argument for this.
The key lemma for the completeness proof is the following:
Lemma 1. Let Q be a finite set of propositional variables and k ∈ ω. Every consistent formula of the form [P](Γ; ϕ), where all formulas in Γ ∪ {ϕ} are Q-formulas of modal depth ≤ k, is provably equivalent to a disjunction

⋁_{i∈I} [P](Θᵢ; ⋁Θᵢ)

where I is a finite set and for each i ∈ I, Θᵢ is a finite set of (Q, k)-descriptions, such that:

- every member of Θᵢ provably entails ϕ, and
- every member of Γ is provably entailed by some member of Θᵢ.
For a proof of this lemma, see [3]. Now fix a finite set of propositional variables Q. Given a Q-formula ϕ let ϕ denote the equivalence class of the formula under provable equivalence. For a finite set of formulas Γ set
Γ = { ϕ | ϕ ∈ Γ}
We construct a neighborhood model M = (W, R,V ) as follows:
• W = {( ϕ, k) | ϕ is a (Q, k)-description and k < ω}
• For a player P let R P be the union of the sets
{(( ϕ, k + 1), Γ × {k}) ∈ W × P(W ) | ϕ ⊢ [P]( Γ; Γ)} and {(( ϕ, 0),W ) | ( ϕ, 0) ∈ W } • Finally, for any propositional variable p, set V (p) = { ϕ | ϕ ⊢ p} if p ∈ Q, V (p) = / 0 otherwise.
Note that this is well defined, i.e. whether (ϕ̄, Γ̄) ∈ R_P is independent of the choice of witnesses ϕ, Γ of the equivalence classes. The following lemma can be proved exactly as in [3], and we refer to that paper for the details:
Lemma 2 (Truth lemma). Let M be constructed as above, and let ψ be any basic formula of modal depth ≤ k whose propositional variables all belong to Q and such that all game terms appearing in ψ belong to τ. Then for every (Q, τ, k)-description ϕ, we have:
M, ( ϕ, k) ψ iff ϕ ⊢ ψ
The addition we need to make here is the following lemma:
Lemma 3. The structure M constructed above is a game model, i.e. it satisfies the Non-emptiness, Consistency and Instantiatedness constraints.
Proof. First, note that all the conditions hold for the image of each relation on an element of W of the form ( ϕ, 0). So we can focus on the images of relations of the form R P on states of the form ( ϕ, k + 1) for some k. The Non-emptiness condition is proved straightforwardly using the axiom (Non-Em), we leave this to the reader. For Instantiatedness, suppose that
(( ϕ, k + 1), Θ × {k}) ∈ R A By definition, we get ϕ ⊢ [A](Θ; Θ). Pick an element ( θ , k) ∈ Θ × {k}. By (Weak) and (Mon) we get [A](Θ; Θ) ⊢ [A](θ ; ⊤), so ϕ ⊢ [A](θ ; ⊤)
. By the axiom (Inst) we get ϕ ⊢ [B](θ ; ⊤) as well. Since ϕ is a (Q, k + 1)-description, we can derive from Lemma 1 that there is some set Ψ of (Q, k)-descriptions such that ϕ ⊢ [B](Ψ; Ψ), and such that there exists some ψ ∈ Ψ with ψ ⊢ θ . But since ψ, θ are both (Q, k)-descriptions, clearly this means that θ = ψ, so ( θ , k) = ( ψ, k). But this means that we get (( ϕ, k + 1), Ψ × {k}) ∈ R B and ( θ , k) ∈ Ψ × {k} as required. The converse direction is proved in the same manner.
For the Consistency condition, suppose that (( ϕ, k +1), Θ×{k}) ∈ R A and (( ϕ, k +1), Θ ′ ×{k}) ∈ R B . It is straightforward to prove, using that Θ and Θ ′ are both sets of (Q, k)-descriptions, that if Θ × {k} does not intersect Θ ′ × {k} then in fact Θ ′ → ¬ Θ. But we have ϕ ⊢ [A](Θ; Θ), hence ϕ ⊢ [A] Θ by the axiom schema (Weak). Furthermore we have:
ϕ ⊢ [B](Θ ′ ; Θ ′ ) ⊢ [B] Θ ′ ⊢ [B]¬ Θ But then ϕ ⊢ [A] Θ∧[B]
¬ Θ, and it follows from the axiom schema (Cons) that ϕ cannot be consistent, which contradicts our assumption that ϕ was a (Q, k + 1)-description.
Combining Lemmas 3 and 2 with the easy observation that any consistent basic formula of depth ≤ k, variables in Q and atomic games among τ is provably entailed by some (Q, k)-description (this follows from Lindenbaum's lemma together with the observation that there are at most finitely many formulas of depth ≤ k, variables in Q and game terms among τ up to provable equivalence, so we can take a conjunction of all representatives of each equivalence class of such formulas belonging to a given maximal consistent set), we obtain Theorem 6.
As a corollary to this proof, we get:
Theorem 7. The logic IGL is decidable and has the effective finite model property.
IGL is a high-level logic of basic powers in social interaction. The reader may find it of interest to see what the above axioms say when read as statements about games.
Adding game operations
Our third contribution in this paper concerns the addition of structure to games, in the form of natural game operations.
Game algebra of strong powers
In this section we use some basic concepts of universal algebra, see for example [5]. For simplicity, we restrict attention to finite games, so that G({A, B}, O) is now the set of finite games with outcomes in O. Thus the outcome map of a game G can be viewed as a map o from the leaves in G into O.
Consider a set of games on a fixed set of outcomes O. We define operations in a standard manner, with binary +, × corresponding to choice for A, B respectively, and a unary operation − for game dual ('role switch'). The game G 1 + G 2 (G 1 × G 2 ) is defined as follows: let G 1 = (T 1 ,t 1 , o 1 , Π 1 ) and let G 2 = (T 2 ,t 2 , o 2 , Π 2 ). We first construct the tree T ′ by adding a new root r with two successors, and the left successor is the root of a subtree isomorphic with T 1 via a fixed isomorphism i 1 , the right successor is the root of a subtree isomorphic with T 2 via a fixed isomorphism i 2 . The turn function t ′ is defined by setting t ′ (r) = A (t ′ (r) = B). For a node u in the subtree corresponding to the left successor of the root r we set t ′ (u) = t 1 (i 1 (u)) and similarly for a node u in the subtree corresponding to the right successor of the root r we set t ′ (u) = t 2 (i 2 (u)). The outcome map o ′ is defined by setting o ′ (l) = o 1 (i 1 (l)) for a leaf in the subtree corresponding to the left successor of r, and o ′ (l) = o 2 (i 2 (l)) for a leaf in the subtree corresponding to the right successor of r. We define a partition Π ′ by setting
Π ′ = {{r}} ∪ {i −1 1 [Z] | Z ∈ Π 1 } ∪ {i −1 2 [Z] | Z ∈ Π 2 }. The game G 1 + G 2 (G 1 × G 2 ) is then defined as (T ′ ,t ′ , o ′ , Π ′ ).
The construction of −Gᵢ is much simpler: it merely changes the turn assignment by switching players at each position, otherwise keeping everything the same.

Proposition 4. Strong power equivalence is a congruence on the algebra ⟨G({A, B}, O), +, ×, −⟩.

This motivates the following definition:

Definition 14. The strong algebra of games G (with outcomes O) is the quotient ⟨G({A, B}, O), +, ×, −⟩ / ≃.

The equational theory of the algebra G is of special interest here, as it can be viewed as a new weaker propositional logic, where distributivity fails, witness our example in Figure 1. By contrast, for standard power equivalence, this algebra is known to be a distributive de Morgan algebra.
But with strong power equivalence, much more basic principles than distributivity fail. For instance, the operations × and + are not idempotent: the following equations fail:
x × x = x x + x = x
For the first failure, take a two-player game G in which A has the first move, and simply chooses between two outcomes 0, 1. Then {0, 1} is a basic power of A in G × G , but not in G . Still, many laws known from game algebra do go through.
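To spell this out (our elaboration of the counterexample just given): in G × G player B owns the root, and A moves in each of the two copies of G, so a strategy for A in G × G amounts to a pair (a_ℓ, a_r) ∈ {0, 1} × {0, 1} fixing A's choice in the left and the right copy. The outcomes consistent with the strategy (0, 1) are exactly {0, 1}, since B may enter either copy; hence {0, 1} is a basic power of A in G × G. In G itself, each strategy of A forces a single outcome, so the basic powers of A in G are {0} and {1} only, and the equation x × x = x fails.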
Proposition 5. The following equations hold in G:
Associativity x + (y + z) = (x + y) + z, x × (y × z) = (x × y) × z Commutativity x + y = y + x, x × y = y × x Double Negation − − x = x De Morgan −(x + y) = −x × −y, −(x × y) = −x + −y
Dynamic game logic for basic powers
Our game algebra still misses one important operation, namely, sequential composition. For this operation to make sense, we need to take a dynamic view of games as state-transforming processes, in the style of dynamic game logic (cf. [13], [9], [2]).
Definition 15. A dynamic two-player game over a set X (of "states") is a map g : X → G({A, B}, X), assigning a game with outcome set X to each state in X. We denote the set of dynamic two-player games over X by D({A, B}, X).
The operations +, × and − are naturally lifted to dynamic games by defining them component-wise in an obvious manner, and so are the relations ∼, ≃ of power equivalence and strong power equivalence. Now, given dynamic games g 1 , g 2 : X → G({A, B}, X ), we can define the sequential composition g 1 • g 2 by letting g 1 • g 2 (u) be constructed by replacing each leaf l in g 1 (u) by a copy of the game tree g 2 (o 1 (l)), where o 1 is the outcome map associated with g 1 (u).
In this way, we get an extended game algebra. For power equivalence, the complete algebra of the propositional operations plus sequential composition has been axiomatized in [8,15]. For basic powers and strong equivalence in our new sense, however, this is an open problem.
However, we now run into some unexpected trouble:

Proposition 6. Let O = {x, y}. Then the relation of strong power equivalence over D({A, B}, O) is not a congruence with respect to sequential game composition.

To see why this is so, consider the two perfect information games displayed in Figure 2, which have an obvious instantial neighborhood bisimulation between them. The games are not strongly power equivalent, since player B has a basic power {x, y} in the game to the right, but not in the one to the left. But both games can clearly be obtained as the sequential composition of strongly power equivalent games.

[Figure 2: on the left, A moves to a single B-node with leaves x, y; on the right, A chooses between two B-nodes, each with leaves x, y. Caption: A threat to compositionality: instantial bisimulation does not preserve basic powers.]
This failure might seem a serious challenge to the compositional methodology of dynamic game logic. But when we analyze what goes wrong in the example, the reason is the functional character of strategies. They force a unique choice at each turn, making B too specific in the game on the left.
To remedy this situation, we suggest to widen the notion of a strategy to allow non-determinism, so that strategies may constrain the moves of a player, but not determine them uniquely. This is not altogether foreign in game theory: in fact, mixed strategies can be interpreted in a similar way, and the same move has also been defended for the broader notion of a 'plan' in [2].
Definition 16. A relational strategy for player P in a game G = (T, t, o, Π) is a binary relation σ over T such that:

• σ[u] ≠ ∅ whenever u ∈ t⁻¹[P], and
• σ[u] = σ[v] if u, v are in the same partition cell in Π.
The set Match(σ ) of non-deterministic matches guided by a strategy σ is defined in the obvious way. We say that P ⊆ O is a basic relational power of P if there is a relational strategy σ for P in G such that P = {o(m) | m ∈ Match(σ )}. The set of basic relational powers of P is denoted by R P (G ). We say that G 1 , G 2 are semi-strongly power equivalent if R P (G 1 ) = R P (G 2 ), for each P ∈ {A, B}. Finally, we write G 1 ≡ G 2 when G 1 , G 2 are semi-strongly power equivalent.
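To see the gain concretely (our reading of the example of Figure 2): in the left-hand game, where B owns a single node u with successors leading to x and y, the relational strategy σ with σ[u] = {x, y} is legitimate, and Match(σ) contains matches ending in x as well as in y. Hence {x, y} ∈ R_B of the left-hand game, and the basic-power mismatch that broke congruence above disappears.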
The relation of semi-strong power equivalence can be lifted to an equivalence relation between dynamic games in the same component-wise manner as before.
The move to relational strategies does not trivialize the new equivalence concept proposed in this paper. The two games in our running example of Figure 1 are not semi-strongly power equivalent, so this slightly coarser equivalence notion is still fine enough to make the distinction that we wanted. Furthermore, we get the result we are after:

Proposition 7. The relation ≡ of semi-strong power equivalence over D({A, B}, O) is a congruence with respect to the operations +, ×, − plus sequential game composition •.

Definition 17. The strong dynamic game algebra is the quotient ⟨D({A, B}, O), +, ×, −, •⟩ / ≡.

In this game algebra, we get a number of interesting valid equations known from the game algebra of power equivalence:

Proposition 8. The following equations hold in D:

Associativity x • (y • z) = (x • y) • z
Dualization −(x • y) = (−x) • (−y)
Left Distribution (x + y) • z = (x • z) + (y • z)
Note also that some equations that were not valid in the functional setting now become valid, such as idempotence: x+x = x. In this way the algebra is closer to the algebra of games under power equivalence, but does not collapse to it: the distribution law x × (y + z) = (x × y) + (x × z) still fails, for example.
Relational strategies improve our game algebra. At the same time, our earlier results go through with suitable modifications. In particular, our representation theorem for basic powers is easily amended to capture relational basic powers. Consider the following condition on pairs F_A, F_B ⊆ P(O):

Union Closure For any non-empty family of relational basic powers M ⊆ F_A (M ⊆ F_B), we have that ⋃M ∈ F_A (⋃M ∈ F_B).

Theorem 8. Let F_A, F_B be two families of subsets of O. Then F_A, F_B satisfy Non-emptiness, Consistency, Instantiatedness and Union Closure if, and only if, there is a game G such that F_A = R_A(G) and F_B = R_B(G).
Finally, our basic game logic can also be extended with game terms to capture this algebraic reasoning, in the style of dynamic game logic, [13], [2]. We will then have instantial modalities describing basic powers of player i in game G:
[G, i](Ψ; ϕ)
This formalism can be interpreted on our instantial game models, when we provide these with worlddependent basic power relations for all players. The crucial point here is that, with the earlier obstacle to compositionally overcome, we can define the power relation for a product game G 1 • G 2 in the following inductive manner.
(u, Z) ∈ R^P_{G₁•G₂} iff Z = ⋃F, for some family F ⊆ PW and some Y ⊆ W with (u, Y) ∈ R^P_{G₁} and (Y, F) ∈ R^P_{G₂}.
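A minimal sketch of how this recursion computes (ours; we read (Y, F) ∈ R^P_{G₂} as saying that F = {Z_y : y ∈ Y} for some choice of Z_y with (y, Z_y) ∈ R^P_{G₂}): suppose (u, {v₁, v₂}) ∈ R^P_{G₁} and (v₁, {x}), (v₁, {y}), (v₂, {z}) ∈ R^P_{G₂}. Choosing Z_{v₁} = {x} and Z_{v₂} = {z} gives Y = {v₁, v₂} and F = {{x}, {z}}, hence (u, {x, z}) ∈ R^P_{G₁•G₂}; choosing Z_{v₁} = {y} instead yields (u, {y, z}) ∈ R^P_{G₁•G₂}. This mirrors composing a strategy in G₁ pointwise with strategies in G₂ at the intermediate states.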
This is the basis for obvious recursion axioms for the game operations that lead to the following result.
Theorem 9. The dynamic game logic of relational basic powers is completely axiomatizable.
This logic has interesting further properties that deviate from known systems, especially in its axiomatization of game iteration, which we will treat in a separate publication.
Further directions
In this final section we briefly consider some directions for future reseach, in particular:
• Enriching the logic with epistemic modalities to reason about imperfect information.
• Incorporating preference into the framework of basic powers and instantial game logic.
• Placing instantial game logic and strong power equivalence in a wider perspective on the many possible game logics and notions of game equivalence.
Imperfect information
In this paper, we have worked with imperfect information games from the start. This raises some issues of intuitive interpretation. Imperfect information in games can arise for quite different reasons: players' limited powers of observation of moves, but also players' uncertainty about the strategies of other players. All this brings in what players know about the game, and a richer game logic reflecting this would have to incorporate epistemic modalities. Moreover, imperfect information sits somewhat uneasily with our game algebra, since information sets can cross between subgames, disrupting the obvious compositional structure. We have side-stepped this issue in the definition for our game operations, but at the price of dealing with only a special class of imperfect information games. Clearly, a lot remains to be clarified.
Preference
This paper has studied 'game forms' with abstract outcomes without any specified preference ordering. A natural next step is to consider proper games in which the players have preference orders over the set of outcomes. This is necessary to connect the game equivalences we have considered here with standard game theoretic concepts such as Nash equilibrium or solution methods like the elimination of dominated strategies. Adding modalities for preference is a well-known device in game logics, so we could also do this. But preference does raise questions for our perspective. For instance, the games in Figure 1 have different Backward Induction solutions with preference 2 < 1 < 3 for B, 1 < 3 < 2 for A. We may have to redefine our basic powers in the presence of preference, considering only preference-optimal sets for a player. But solution methods like Backward Induction also incorporate one particular view of rationality: they make an assumption about agents, rather than being part of the neutral mathematics of the game. Perhaps we need to study game equivalences parametrized to particular types of player.
A multitude of game logics
This paper does not claim that there is one best level for viewing games. Extensive form, standard powers, or strategic form all have their virtues, and we have merely claimed that there is room for one more natural new option. All these levels come with their own logical languages matching the invariance relation, [2]: relational modal logic for extensive games, modal neighborhood logic for standard powers, instantial neighborhood logic for our basic powers, and multi-modal logics accessing the different dimensions of matrix games.
This raises a systematic question. How are all these different logics related, given natural transformations from one level of game structure to another? For instance, on finite games, the modal logic of forcing powers can be translated into a µ-calculus on the underlying extensive games, and the same is true of our instantial modalities for basic powers, using the observations in Section 6. But modalities for strategic form games are not easily compared with our forcing modalities: moving across rows or columns means considering alternative strategies for players, something that would require a serious extension of our forcing language. In addition, matrix logics have surprising features that have no counterpart in our forcing logics, such as the undecidability of the full system for three players, which reflect the undecidability of the product logic S5 × S5 × S5 [11].
We believe that systematizing the total picture of game logics and their interrelations holds great interest, but we must leave this for further investigation.
For ease of reference, we recall the following notions and results on standard powers.

Theorem 1. The families F_A, F_B ⊆ P(O) satisfy the Non-emptiness, Monotonicity and Consistency properties if, and only if, there exists a game G ∈ G({A, B}, O) such that F_A = P_A(G) and F_B = P_B(G).

Theorem 2. The families F_A, F_B ⊆ P(O) satisfy the Non-emptiness, Monotonicity, Consistency and Determinacy properties if, and only if, there exists a perfect information game G ∈ G({A, B}, O) such that F_A = P_A(G) and F_B = P_B(G).

A game frame is a triple (W, R_A, R_B) where W is a set and R_P ⊆ W × PW for each player P ∈ {A, B}, and such that for all u ∈ W the pair R_A[u], R_B[u] satisfies the Non-emptiness, Monotonicity and Consistency conditions (with W viewed as the set of outcomes O, so that R_A[u], R_B[u] make up two families of sets of outcomes). A game model is a game frame together with a valuation V : Prop → PW. The truth definition is standard and, crucially, for the modality: (d) [[[P]ϕ]] = R_P⁻¹[[[ϕ]]]. Note that the Monotonicity condition makes this equivalent to: u ∈ [[[P]ϕ]] iff there exists Z ⊆ [[ϕ]] with u R_P Z. We write M, v ⊩ ϕ for v ∈ [[ϕ]], and ⊩ ϕ ('ϕ is valid') if, for every game model M and v ∈ W, we have M, v ⊩ ϕ.

Let M = (W, R, V) and M′ = (W′, R′, V′) be game models. The relation B ⊆ W × W′ is said to be a power bisimulation if, for all uBu′ and each P ∈ {A, B}, the corresponding Forth and Back conditions hold.
References

[1] J. van Benthem, E. Pacuit & O. Roy (2011): Toward a theory of play: A logical perspective on games and interaction. Games 2(1), pp. 52-86, doi:10.3390/g2010052.
[2] J. van Benthem (2014): Logic in Games. MIT Press, Cambridge, MA.
[3] J. van Benthem, N. Bezhanishvili, S. Enqvist & J. Yu (2017): Instantial Neighborhood Logic. The Review of Symbolic Logic 10(1), pp. 116-144, doi:10.1017/S1755020316000447.
[4] G. Bonanno (1992): Set-theoretic equivalence of extensive-form games. International Journal of Game Theory 20(4), pp. 429-447, doi:10.1007/BF01271135.
[5] S. Burris & H. Sankappanavar (1981): A Course in Universal Algebra. Springer, doi:10.1007/978-1-4613-8130-3.
[6] S. Elmes & P. J. Reny (1994): On the strategic equivalence of extensive form games. Journal of Economic Theory 62(1), pp. 1-23, doi:10.1006/jeth.1994.1001.
[7] G. J. Mailath, L. Samuelson & J. M. Swinkels (1994): Normal form structures in extensive form games. Journal of Economic Theory 64(2), pp. 325-371, doi:10.1006/jeth.1994.1072.
[8] V. Goranko (2003): The basic algebra of game equivalences. Studia Logica 75(2), pp. 221-238, doi:10.1023/A:1027311011342.
[9] W. van der Hoek & M. Pauly (2007): Modal logic for games and information. In P. Blackburn, J. van Benthem & F. Wolter, editors: Handbook of Modal Logic, Studies in Logic and Practical Reasoning 3, Elsevier, pp. 1077-1148, doi:10.1016/S1570-2464(07)80023-1.
[10] J. A. Bergstra, A. Ponse & S. A. Smolka (2001): Handbook of Process Algebra. Elsevier.
[11] R. Maddux (1980): The Equational Theory of CA₃ is Undecidable. J. Symb. Log. 45(2), pp. 311-316, doi:10.2307/2273191.
[12] M. Osborne & A. Rubinstein (1994): A Course in Game Theory. MIT Press.
[13] R. Parikh (1985): The logic of games and its applications. In: Topics in the Theory of Computation (Borgholm, 1983), North-Holland Mathematics Studies 102, North-Holland, Amsterdam, pp. 111-139, doi:10.1016/S0304-0208(08)73078-0.
[14] F. Thompson (1997): Equivalence of games in extensive form. In: Classics in Game Theory 36.
[15] Y. Venema (2003): Representation of game algebras. Studia Logica 75(2), pp. 239-256, doi:10.1023/A:1027363028181.
WEAK HARNACK INEQUALITY FOR A MIXED LOCAL AND NONLOCAL PARABOLIC EQUATION

PRASHANTA GARAIN AND JUHA KINNUNEN

31 May 2021. arXiv:2105.15016. doi:10.1016/j.jde.2023.02.049.
Abstract. This article proves a weak Harnack inequality with a tail term for sign changing supersolutions of a mixed local and nonlocal parabolic equation. Our argument is purely analytic. It is based on energy estimates and the Moser iteration technique. Instead of the parabolic John-Nirenberg lemma, we adapt a lemma of Bombieri to the mixed local and nonlocal parabolic case. To this end, we prove an appropriate reverse Hölder inequality and a logarithmic estimate for weak supersolutions.

2010 Mathematics Subject Classification. 35R11, 35K05, 35B65, 47G20, 35D30.
1. Introduction
In this article, we establish a weak Harnack inequality for the mixed local and nonlocal parabolic Laplace equation
(1.1) ∂ t u + Lu(x, t) = ∆u(x, t) in Ω × (0, T ),
where T > 0, Ω ⊂ R^N with N ≥ 2 is a bounded domain (i.e. a bounded, open and connected set) and L is an integro-differential operator of the form

(1.2) Lu(x, t) = P.V. ∫_{R^N} (u(x, t) − u(y, t)) K(x, y, t) dy,

where P.V. denotes the principal value and K is a symmetric kernel in x and y such that for some 0 < s < 1 and Λ ≥ 1, we have

(1.3) Λ^{−1} / |x − y|^{N+2s} ≤ K(x, y, t) ≤ Λ / |x − y|^{N+2s}
uniformly in t ∈ (0, T). If K(x, y, t) = C / |x − y|^{N+2s} for some constant C, then L reduces to the well known fractional Laplace operator (−∆)^s and (1.1) becomes the mixed local and nonlocal fractional heat equation

(1.4) ∂_t u + (−∆)^s u = ∆u.
This kind of evolution equation arises in the study of Lévy processes, image processing etc., see Dipierro-Valdinoci [23] and the references therein for more details on the physical interpretation.

In the elliptic case, Foondun [27] obtained Harnack and local Hölder continuity estimates for the mixed local and nonlocal problem

(1.5) −∆u + (−∆)^s u = 0.
Chen-Kim-Song-Vondraček in [14] have proved Harnack estimates for (1.5) by a different approach. In addition to symmetry results and strong maximum principles, several other qualitative properties of solutions of (1.5) have recently been studied by Biagi-Dipierro-Valdinoci-Vecchi [3,4], Dipierro-Proietti Lippi-Valdinoci [20,21] and Dipierro-Ros-Oton-Serra-Valdinoci [22]. For a nonlinear version of (1.5) with the p-Laplace equation, Harnack inequality, local Hölder continuity and other regularity results are discussed in Garain-Kinnunen [28]. In the parabolic case, Barlow-Bass-Chen-Kassmann [2] have obtained a Harnack inequality for (1.4). Chen-Kumagai [15] have also proved a Harnack inequality and local Hölder continuity. For the fractional heat equation ∂_t u + (−∆)^s u = 0, a weak Harnack inequality for globally nonnegative solutions is established by Felsinger-Kassmann [26], see also Bonforte-Sire-Vázquez [7], Caffarelli-Chan-Vasseur [12], Chaker-Kassmann [13], Kassmann-Schwab [31] and Kim [32] for related results.

The main purpose of this article is to provide a weak Harnack inequality for (1.1) (Theorem 2.8). To the best of our knowledge, a weak Harnack inequality is unknown even for the prototype equation (1.4). Our main result is stated for sign changing weak supersolutions of (1.1). For sign changing solutions of nonlocal problems, in both the elliptic and parabolic context, an extra quantity referred to as "Tail" or "Parabolic Tail" generally appears in the Harnack estimates. This phenomenon was first observed by Kassmann in [30] for the fractional Laplace equation (−∆)^s u = 0 and further extended by Di Castro-Kuusi-Palatucci [16,17] and Brasco-Lindgren-Schikorra [8,9] to the fractional p-Laplace equation. For the parabolic nonlocal case, see Strömqvist [39], Brasco-Lindgren-Strömqvist [10], Banerjee-Garain-Kinnunen [1], Ding-Zhang-Zhou [19] and the references therein. For mixed local and nonlocal elliptic equations, a new tail quantity appears that captures both the local and nonlocal behavior of the mixed operator, as observed in [28]. For the mixed local and nonlocal parabolic problem (1.1), we introduce an appropriate tail term (Definition 2.7) which captures both the local and nonlocal behavior of the mixed equation.
In contrast to the probabilistic approach in [2,15], we prove the weak Harnack estimate (Theorem 2.8) by analytic techniques. More precisely, we employ the approach of Moser [36] based on a lemma of Bombieri-Giusti [6], thereby avoiding the technically demanding parabolic John-Nirenberg lemma, see Moser [37] and Fabes-Garofalo [25]. We discuss energy estimates for negative and positive powers of weak supersolutions of (1.1) (Lemma 3.1 and Lemma 3.2). These energy estimates, together with the Sobolev inequality and the Moser iteration technique, enable us to estimate the supremum of negative powers of a weak supersolution of (1.1) (Lemma 4.1) and to prove the reverse Hölder inequality (Lemma 4.2). Finally, a logarithmic estimate (Lemma 4.3) for weak supersolutions of (1.1) is deduced, which allows us to apply the Bombieri lemma (Lemma 2.13) to establish our main result (Theorem 2.8).
2. Preliminaries and main results
We use the following notation throughout. We denote the positive and negative parts of a ∈ R by a + = max{a, 0} and a − = max{−a, 0}, respectively. The Lebesgue outer measure of a set S is denoted by |S|. The barred integral sign denotes the corresponding integral average. We write C to denote a constant which may vary from line to line or even in the same line. If C depends on r 1 , r 2 , . . . , r k , we denote C = C(r 1 , r 2 , . . . , r k ).
We recall some known results for the fractional Sobolev spaces, see Di Nezza-Palatucci-Valdinoci [18] for more details. Definition 2.1. Let 0 < s < 1 and assume that Ω ⊂ R N is an open and connected subset of R N . The fractional Sobolev space W s,2 (Ω) is defined by
W^{s,2}(Ω) = { u ∈ L²(Ω) : |u(x) − u(y)| / |x − y|^{N/2 + s} ∈ L²(Ω × Ω) }
and it is endowed with the norm
‖u‖_{W^{s,2}(Ω)} = ( ∫_Ω |u(x)|² dx + ∫_Ω ∫_Ω |u(x) − u(y)|² / |x − y|^{N+2s} dx dy )^{1/2}.
The fractional Sobolev space with zero boundary values is defined by
W^{s,2}_0(Ω) = { u ∈ W^{s,2}(R^N) : u = 0 in R^N \ Ω }.
Both W s,2 (Ω) and W s,2 0 (Ω) are reflexive Banach spaces, see [18]. We denote the classical Sobolev space by W 1,2 (Ω). The parabolic Sobolev space L 2 (0, T ; W 1,2 (Ω)), T > 0, consists of measurable functions u on Ω × (0, T ) such that
(2.1) ‖u‖_{L²(0,T;W^{1,2}(Ω))} = ( ∫_0^T ‖u(·, t)‖²_{W^{1,2}(Ω)} dt )^{1/2} < ∞.
The space L 2 loc (0, T ; W 1,2 loc (Ω)) is defined by requiring the conditions above for every
Ω ′ ×[t 1 , t 2 ] ⋐ Ω×(0, T ). Here Ω ′ ×[t 1 , t 2 ] ⋐ Ω×(0, T ) denotes that Ω ′ ×[t 1 , t 2 ] is a compact subset of Ω×(0, T ).
The next result asserts that the classical Sobolev space is continuously embedded in the fractional Sobolev space, see [18,Proposition 2.2]. The argument applies an extension property of Ω so that we can extend functions from W 1,2 (Ω) to W 1,2 (R N ) and that the extension operator is bounded.
Lemma 2.2.
Let Ω be a smooth bounded domain in R N and 0 < s < 1. There exists a positive constant C = C(Ω, N, s) such that ||u|| W s,2 (Ω) ≤ C||u|| W 1,2 (Ω) for every u ∈ W 1,2 (Ω).
The following result for the fractional Sobolev spaces with zero boundary value follows from [11, Lemma 2.1]. The main difference compared to Lemma 2.2 is that the result holds for any bounded domain, since for the Sobolev spaces with zero boundary value, we always have a zero extension to the complement.
Lemma 2.3.
Let Ω be a bounded domain in R N and 0 < s < 1. There exists a positive constant
C = C(N, s, Ω) such that

∫_{R^N} ∫_{R^N} |u(x) − u(y)|² / |x − y|^{N+2s} dx dy ≤ C ∫_Ω |∇u|² dx
for every u ∈ W 1,2 0 (Ω). Here we consider the zero extension of u to the complement of Ω. The notion of weak supersolutions for (1.1) is defined as follows.
Definition 2.4. A function u ∈ L^∞(0, T; L^∞(R^N)) is a weak supersolution of the problem (1.1), if u ∈ C(0, T; L²_{loc}(Ω)) ∩ L²(0, T; W^{1,2}_{loc}(Ω)) and for every Ω′ × [t₁, t₂] ⋐ Ω × (0, T) and every nonnegative test function φ ∈ W^{1,2}_{loc}(0, T; L²(Ω′)) ∩ L²_{loc}(0, T; W^{1,2}_0(Ω′)), we have

(2.2)
∫_{Ω′} u(x, t₂)φ(x, t₂) dx − ∫_{Ω′} u(x, t₁)φ(x, t₁) dx − ∫_{t₁}^{t₂} ∫_{Ω′} u(x, t) ∂_t φ(x, t) dx dt
+ ∫_{t₁}^{t₂} ∫_{Ω′} ∇u·∇φ dx dt + ∫_{t₁}^{t₂} ∫_{R^N} ∫_{R^N} A u(x, y, t) (φ(x, t) − φ(y, t)) dμ dt ≥ 0,

where A u(x, y, t) = u(x, t) − u(y, t) and dμ = K(x, y, t) dx dy.
Remark 2.5. By Lemma 2.2 and Lemma 2.3, we observe that Definition 2.4 is well stated. Note that if u is a weak supersolution of (1.1), so is u + c for any scalar c.
Remark 2.6. Below we obtain energy estimates where the test functions depend on the supersolution itself. The admissibility of these test functions can be justified by using the mollification in time defined for f ∈ L 1 (Ω × (0, T )) by
(2.3) f_h(x, t) := (1/h) ∫_0^t e^{(s−t)/h} f(x, s) ds.
See [5,34] for more details on f h .
Next, we define the parabolic tail which appears in estimates throughout the article.
Definition 2.7. Let x 0 ∈ R N , t 1 , t 2 ∈ (0, T ) and r > 0. The parabolic tail of a weak supersolution u of (1.1) (Definition 2.4) is
(2.4) Tail_∞(u; x₀, r, t₁, t₂) = r² ess sup_{t₁<t<t₂} ∫_{R^N \ B_r(x₀)} |u(y, t)| / |y − x₀|^{N+2s} dy.
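To get a feeling for the scaling of this quantity, consider the following elementary computation (our own illustration, not part of the original text): if u ≡ c is constant outside B_r(x₀), then, integrating in polar coordinates,

Tail_∞(c; x₀, r, t₁, t₂) = |c| r² ∫_{R^N \ B_r(x₀)} |y − x₀|^{−N−2s} dy = |c| r² |S^{N−1}| ∫_r^∞ ρ^{−1−2s} dρ = (|S^{N−1}| / (2s)) |c| r^{2−2s},

where |S^{N−1}| denotes the surface measure of the unit sphere. In particular, the tail of a bounded function is finite and vanishes as r → 0.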
Now we are ready to state our main result, which asserts that a weak Harnack inequality holds for weak supersolutions of (1.1).
Theorem 2.8. Assume that u is a weak supersolution of (1.1) such that u ≥ 0 in B_R(x₀) × (t₀ − r², t₀ + r²) ⊂ Ω × (0, T). Let 0 < r ≤ 1, r < R/2 and

T = (r/R)² Tail_∞(u₋; x₀, R, t₀ − r², t₀ + r²),

where Tail_∞ is defined by (2.4). Then for any 0 < q < 2 − 2/κ, where κ > 2 is given by (2.10), there exists a positive constant C = C(N, s, Λ, q) such that

(2.5) ( ⨍_{V⁻(r/2)} (u + T)^q dx dt )^{1/q} ≤ C ( ess inf_{V⁺(r/2)} u + T ),

where V⁻(r/2) = B_{r/2}(x₀) × (t₀ − r², t₀ − (3/4)r²) and V⁺(r/2) = B_{r/2}(x₀) × (t₀ + (3/4)r², t₀ + r²).
Corollary 2.9. If u ≥ 0 in R^N × (t₀ − r², t₀ + r²) in Theorem 2.8, then (2.5) reduces to

(2.6) ( ⨍_{V⁻(r/2)} u^q dx dt )^{1/q} ≤ C ess inf_{V⁺(r/2)} u.
We state some useful results that are needed to prove our main result (Theorem 2.8). The following inequalities follow from [26, Lemma 3.3].

Lemma 2.10. Let a, b > 0 and τ₁, τ₂ ≥ 0.
(i) For every ǫ > 1, there exists a constant C(ǫ) > 0 such that

(2.7) (b − a)(τ₁^{ǫ+1} a^{−ǫ} − τ₂^{ǫ+1} b^{−ǫ}) ≥ (τ₁ τ₂ / (ǫ − 1)) ( (b/τ₂)^{(1−ǫ)/2} − (a/τ₁)^{(1−ǫ)/2} )² − C(ǫ)(τ₂ − τ₁)² ( (b/τ₂)^{1−ǫ} + (a/τ₁)^{1−ǫ} ).

(ii) For every ǫ ∈ (0, 1), there exist constants ζ(ǫ) = 4ǫ/(1−ǫ), ζ₁(ǫ) = ζ(ǫ)/6 and ζ₂(ǫ) = ζ(ǫ) + 9/ǫ such that

(2.8) (b − a)(τ₁² a^{−ǫ} − τ₂² b^{−ǫ}) ≥ ζ₁(ǫ) ( τ₂ b^{(1−ǫ)/2} − τ₁ a^{(1−ǫ)/2} )² − ζ₂(ǫ)(τ₂ − τ₁)² ( b^{1−ǫ} + a^{1−ǫ} ).
Next, we state a weighted Poincaré inequality that follows from [24, Corollary 3] by a change of variables. See also [38,Theorem 5.3.4]. This plays an important role in the logarithmic estimate for supersolutions (Lemma 4.3).
Lemma 2.11. Let φ : B r (x 0 ) → [0, ∞) be a radially decreasing function. There exists a positive constant C = C(N, φ) such that
(2.9) ⨍_{B_r(x₀)} |u − u_φ|² φ dx ≤ C r² ⨍_{B_r(x₀)} |∇u|² φ dx

for every u ∈ W^{1,2}(B_r(x₀)), where u_φ = ( ∫_{B_r(x₀)} u φ dx ) / ( ∫_{B_r(x₀)} φ dx ).
The following version of the Gagliardo-Nirenberg-Sobolev inequality will be useful for us, see [35, Corollary 1.57].

Lemma 2.12. Let Ω ⊂ R^N be a bounded domain and let κ > 2 be as in (2.10). There exists a positive constant C = C(N) such that

(2.11) ( ∫_Ω |u|^κ dx )^{1/κ} ≤ C |Ω|^{1/N − 1/2 + 1/κ} ( ∫_Ω |∇u|² dx )^{1/2}

for every u ∈ W^{1,2}_0(Ω).
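As a quick consistency check (our own computation, not in the source), the exponent of |Ω| in (2.11) is forced by scaling: for Ω = B_r and u_r(x) = u(x/r) with u ∈ W^{1,2}_0(B_1), both sides scale in the same way, since

( ∫_{B_r} |u_r|^κ dx )^{1/κ} = r^{N/κ} ( ∫_{B_1} |u|^κ dx )^{1/κ}

and

|B_r|^{1/N − 1/2 + 1/κ} ( ∫_{B_r} |∇u_r|² dx )^{1/2} = c(N) r^{1 − N/2 + N/κ} · r^{N/2 − 1} ( ∫_{B_1} |∇u|² dx )^{1/2} = c(N) r^{N/κ} ( ∫_{B_1} |∇u|² dx )^{1/2}.

Hence (2.11) is scale invariant and it suffices to know it on a fixed ball.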
for every u ∈ W 1,2 0 (Ω). Our final auxiliary result is a lemma of Bombieri-Giusti [6], which can be proved with the same arguments as in the proof of [ Lemma 2.13. Assume that ν is a Borel measure on R N +1 and let θ, A and γ be positive
constants, 0 < δ < 1 and 0 < α ≤ ∞. Let U (σ) be bounded measurable sets with U (σ ′ ) ⊂ U (σ) for 0 < δ ≤ σ ′ < σ ≤ 1.
Let f be a positive measurable function on U (1) which satisfies the reverse Hölder inequality
U (σ ′ ) f α dν 1 α ≤ A (σ − σ ′ ) θ U (σ) f β dν 1 β , with 0 < β < min{1, α}. Further assume that f satisfies |{x ∈ U (1) : log f > λ}| ≤ A|U (δ)| λ γ for all λ > 0. Then U (δ) f α dν 1 α ≤ C,
for some constant C = C(θ, δ, α, γ, A) > 0.
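To indicate how the lemma will be used (this roadmap is ours; the detailed verification is carried out below and in the proof of Theorem 2.8): one applies Lemma 2.13 with dν = dx dt and nested cylinders, taking

U(σ) = U⁻(σr) with f = (u + T)^{−1}, for which Lemma 4.1 provides the reverse Hölder inequality,

and U(σ) = U⁺(σr) with f = u + T, for which Lemma 4.2 provides it; the logarithmic estimate for weak supersolutions (Lemma 4.3, mentioned in the introduction) then supplies the measure bound on the superlevel sets {log f > λ}.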
3. Energy estimates
In this section, we establish energy estimates for weak supersolutions of (1.1). The first one is the following lemma that helps us to estimate the supremum of the inverse of weak supersolutions.
Lemma 3.1. Assume that u is a weak supersolution of (1.1) such that u ≥ 0 in B R (x 0 ) × (τ 1 − τ, τ 2 ) ⊂ Ω × (0, T ).
Let 0 < r ≤ 1 be such that r < R and denote v = u + l, l > 0. Then for any m > 0 there exists a positive constant C(m) ≈ 1 + m such that
(3.1)
∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} |∇v^{−m/2}|² ψ(x)^{m+2} η(t) dx dt
≤ (m² C(m+1)/(m+1)) ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} ∫_{B_r(x₀)} (ψ(x) − ψ(y))² ( (v(x,t)/ψ(x))^{−m} + (v(y,t)/ψ(y))^{−m} ) η(t) dμ dt
+ (m²/(m+1)) 2Λ ess sup_{x ∈ supp ψ} ∫_{R^N \ B_r(x₀)} dy/|x−y|^{N+2s} ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v(x,t)^{−m} ψ(x)^{m+2} η(t) dx dt
+ (2Λ/l) ess sup_{τ₁−τ<t<τ₂, x ∈ supp ψ} ∫_{R^N \ B_R(x₀)} u₋(y,t)/|x−y|^{N+2s} dy ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v(x,t)^{−m} ψ(x)^{m+2} η(t) dx dt
+ (m²(m+2)²/(m+1)²) ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} |∇ψ|² v(x,t)^{−m} ψ(x)^m η(t) dx dt
+ (m/(m+1)) ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v(x,t)^{−m} ψ(x)^{m+2} |∂_t η(t)| dx dt

and

(3.2)
ess sup_{τ₁<t<τ₂} ∫_{B_r(x₀)} v(x,t)^{−m} ψ(x)^{m+2} dx
≤ m C(m+1) ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} ∫_{B_r(x₀)} (ψ(x) − ψ(y))² ( (v(x,t)/ψ(x))^{−m} + (v(y,t)/ψ(y))^{−m} ) η(t) dμ dt
+ m 2Λ ess sup_{x ∈ supp ψ} ∫_{R^N \ B_r(x₀)} dy/|x−y|^{N+2s} ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v^{−m} ψ^{m+2} η dx dt
+ (2Λ/l) ess sup_{τ₁−τ<t<τ₂, x ∈ supp ψ} ∫_{R^N \ B_R(x₀)} u₋(y,t)/|x−y|^{N+2s} dy ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v^{−m} ψ^{m+2} η dx dt
+ (m(m+2)²/(m+1)) ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} |∇ψ|² v^{−m} ψ^m η dx dt
+ ∫_{τ₁−τ}^{τ₂} ∫_{B_r(x₀)} v^{−m} ψ^{m+2} |∂_t η| dx dt

hold for every nonnegative ψ ∈ C_c^∞(B_r(x₀)) and every η ∈ C^∞(R) such that η(t) ≡ 0 if t ≤ τ₁ − τ and η(t) ≡ 1 if t ≥ τ₁.

Proof. Let ǫ > 1 and ψ ∈ C_c^∞(B_r(x₀)). Let t₁ = τ₁ − τ, t₂ ∈ (τ₁, τ₂) and η ∈ C^∞(t₁, t₂) be such that η(t₁) = 0 and η(t) = 1 for all t ≥ t₂. Note that v = u + l is again a weak supersolution of (1.1) and we observe that

φ(x, t) = v(x, t)^{−ǫ} ψ(x)^{ǫ+1} η(t)
τ 1 <t<τ 2ˆBr (x 0 ) v(x, t) −m ψ(x) m+2 dx ≤ mC(m + 1)ˆτ 2 τ 1 −τˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) ψ(x) −m + v(y, t) ψ(y) −m η(t) dµ dt + m 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆτ 2 τ 1 −τˆBr(x 0 ) v(x, t) −m ψ(x) m+2 η(t) dx dt + 2Λ l ess sup τ 1 −τ <t<τ 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆτ 2 τ 1 −τˆBr(x 0 ) v(x, t) −m ψ(x, t) m+2 η(t) dx dt + m(m + 2) 2 (m + 1)ˆτ 2 τ 1 −τˆBr(x 0 ) |∇ψ| 2 v(x, t) −m ψ(x) m η(t) dx dt +ˆτ 2 τ 1 −τˆBr(x 0 ) v(x, t) −m ψ(x) m+2 |∂ t η(t)| dx dt (3.2) hold for every nonnegative ψ ∈ C ∞ c (B r (x 0 )) and η ∈ C ∞ (R) such that η(t) ≡ 0 if t ≤ τ 1 − τ and η(t) ≡ 1 if t ≥ τ 2 . Proof. Let ǫ > 1 and ψ ∈ C ∞ c (B r (x 0 )). Let t 1 = τ 1 − τ, t 2 ∈ (τ 1 , τ 2 ) and η ∈ C ∞ (t 1 , t 2 ) be such that η(t 1 ) = 0 and η(t) = 1 for all t ≥ t 2 . Note that v = u + l is again a weak supersolution of (1.1) and we observe that φ(x, t) = v(x, t) −ǫ ψ(x) ǫ+1 η(t)
is an admissible test function in (2.2). Indeed, following [5,34], since v is a weak supersolution of (1.1), noting the definition of (·) h from (2.3), we have
(3.3) lim h→0 (I h + L h + A h ) ≥ 0, where I h =ˆt 2 t 1ˆBr (x 0 ) ∂ t v h φ dx dt, L h =ˆt 2 t 1ˆBr (x 0 ) ∇v(x, t) h ∇φ dx dt and A h =ˆt 2 t 1ˆR NˆRN (v(x, t) − v(y, t) K(x, y) h φ(x, t) − φ(y, t) dx dy dt.
Estimate of I h : We observe that
I h =ˆt 2 t 1ˆBr (x 0 ) ∂ t v h v(x, t) −ǫ ψ(x) ǫ+1 η(t) dx dt =ˆt 2 t 1ˆBr (x 0 ) ∂ t v h v(x, t) −ǫ − v h (x, t) −ǫ ψ(x) ǫ+1 η(t) dx dt +ˆt 2 t 1ˆBr (x 0 ) ∂ t v h v h (x, t) −ǫ ψ(x) ǫ+1 η(t) dx dt.
Since
∂ t v h = v − v h h , we have ∂ t v h v −ǫ − v −ǫ h ≤ 0, over the set B r (x 0 ) × (t 1 , t 2 )
. Therefore, the first integral in the above estimate of I h is nonpositive. As a consequence, we obtain
I h ≤ˆt 2 t 1ˆBr (x 0 ) ∂ t v h v h (x, t) −ǫ ψ(x) ǫ+1 η(t) dx dt = 1 1 − ǫˆt 2 t 1ˆBr (x 0 ) ∂ t v 1−ǫ h ψ(x) ǫ+1 η(t) dx dt.
(3.4)
Now passing the limit as h → 0, we have
(3.5) lim h→0 I h ≤ I 0 ,
where
I 0 := − 1 ǫ − 1ˆB r (x 0 ) ψ(x) ǫ+1 v(x, t 2 ) 1−ǫ dx + 1 ǫ − 1ˆt 2 t 1ˆBr (x 0 ) ψ(x) ǫ+1 v(x, t) 1−ǫ ∂ t η(t) dx dt. (3.6) Estimate of L h : Since ∇v ∈ L 2 (B r (x 0 ) × (t 1 , t 2 )), by [34, Lemma 2.2], we have (3.7) L := lim h→0 L h =ˆt 2 t 1ˆBr (x 0 ) ∇v∇φ dx dt.
Estimate of A h : Following the same arguments as in the proof of [1, Lemma 3.1], we obtain
(3.8) A := lim h→0 A h =ˆt 2 t 1ˆR NˆRN v(x, t) − v(y, t) φ(x, t) − φ(y, t) dµ dt.
Therefore, using (3.5), (3.7), (3.8) in (3.3), we have
0 ≤ I 0 + A + L = I 0 +ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) φ(x, t) − φ(y, t) dµ dt + 2ˆt 2 t 1ˆR N \Br(x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) φ(x, t) dµ dt +ˆt 2 t 1ˆBr (x 0 )
∇v∇φ dx dt
:= I 0 + I 1 + I 2 + I 3 .
(3.9)
Estimate of I 1 : Using [(2.7), Lemma 2.10] for C(ǫ) = max 4, 6ǫ−5 2 , we have
I 1 =ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) v(x, t) −ǫ ψ(x) ǫ+1 − v(y, t) −ǫ ψ(y) ǫ+1 η(t) dµ dt ≤ −ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x)ψ(y) ǫ − 1 v(x, t) ψ(x) 1−ǫ 2 − v(y, t) ψ(y) 1−ǫ 2 2 η(t) dµ dt + C(ǫ)ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) ψ(x) 1−ǫ + v(y, t) ψ(y) 1−ǫ η(t) dµ dt.
(3.10)
Estimate of I 2 : Since l > 0 and u ≥ 0 in B R (x 0 ) × (t 1 , t 2 ), we have v ≥ l in B R (x 0 ) × (t 1 , t 2 ). Further, for any x ∈ B R (x 0 ), y ∈ R N and t ∈ (t 1 , t 2 ), we have v(x, t) − v(y, t) ≤ v(x, t) + u − (y, t) and v(x, t) −ǫ = v(x, t) 1−ǫ v(x, t) −1 ≤ l −1 v(x, t) 1−ǫ .
Using these estimates and the fact that
u ≥ 0 in B R (x 0 ) × (t 1 , t 2 ), using that u(y, t) − vanishes in B R (x 0 ) × (t 1 , t 2 ), we have I 2 = 2ˆt 2 t 1ˆR N \Br(x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) v(x, t) −ǫ ψ(x) ǫ+1 η(t) dµ dt ≤ 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x) ǫ+1 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x) ǫ+1 η(t) dx dt. (3.11) Estimate of I 3 : We have I 3 =ˆt 2 t 1ˆBr (x 0 ) ∇v∇ v(x, t) −ǫ ψ(x) ǫ+1 η(t) dx dt = −ǫˆt 2 t 1ˆBr (x 0 ) v(x, t) −ǫ−1 |∇v| 2 ψ(x) ǫ+1 η(t) dx dt + (ǫ + 1)ˆt 2 t 1ˆBr (x 0 ) ψ(x) ǫ v(x, t) −ǫ ∇v∇ψη(t) dx dt. (3.12)
Using Young's inequality (3.13) ab
≤ δa 2 + 1 4δ b 2 ,
for any a, b ≥ 0 and δ > 0, we have
ψ ǫ v −ǫ |∇v||∇ψ| = v −ǫ−1 2 ψ ǫ+1 2 |∇v| |∇ψ|v 1−ǫ 2 ψ ǫ−1 2 ≤ δv −ǫ−1 ψ ǫ+1 |∇v| 2 + 1 4δ |∇ψ| 2 v 1−ǫ ψ ǫ−1 . (3.14)
Choosing δ = ǫ 2(ǫ+1) and employing the estimate (3.14) in (3.12) we obtain
I 3 ≤ − ǫ 2ˆt 2 t 1ˆBr (x 0 ) v(x, t) −ǫ−1 |∇v| 2 ψ(x) ǫ+1 η(t) dx dt + (ǫ + 1) 2 2ǫˆt 2 t 1ˆBr (x 0 ) |∇ψ| 2 v 1−ǫ ψ ǫ−1 dx dt ≤ − 2ǫ (ǫ − 1) 2ˆt 2 t 1ˆBr (x 0 ) ∇v 1−ǫ 2 2 ψ(x) ǫ+1 η(t) dx dt + (ǫ + 1) 2 2ǫˆt 2 t 1ˆBr (x 0 ) |∇ψ| 2 v 1−ǫ ψ ǫ−1 dx dt,(3.15)
where we have also used the fact that
v −ǫ−1 |∇v| 2 = 4 (ǫ − 1) 2 ∇v 1−ǫ 2
By combining the estimates (3.6), (3.10), (3.11) and (3.15) in (3.9), we obtain
t 2 t 1ˆBr (x 0 ) ∇v 1−ǫ 2 2 ψ(x) ǫ+1 η(t) dx dt + ǫ − 1 2ǫˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x)ψ(y) v(x, t) ψ(x) 1−ǫ 2 − v(y, t) ψ(y) 1−ǫ 2 2 η(t) dµ dt + ǫ − 1 2ǫˆB r (x 0 ) ψ(x) ǫ+1 v(x, t 2 ) 1−ǫ dx ≤ C(ǫ)(ǫ − 1) 2 2ǫˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) ψ(x) 1−ǫ + v(y, t) ψ(y) 1−ǫ η(t) dµ dt + (ǫ − 1) 2 2ǫ 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + ǫ 2 − 1 2 4ǫ 2ˆt 2 t 1ˆBr (x 0 ) |∇ψ| 2 v(x, t) 1−ǫ ψ(x) ǫ−1 dx dt + ǫ − 1 2ǫˆt 2 t 1ˆBr (x 0 ) ψ(x) ǫ+1 v(x, t) 1−ǫ |∂ t η(t)| dx dt.
(3.16)
Letting t 2 → τ 2 in (3.16), we havê
τ 2 t 1ˆBr (x 0 ) |∇v 1−ǫ 2 | 2 ψ(x) ǫ+1 η(t) dx dt ≤ C(ǫ)(ǫ − 1) 2 2ǫˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) ψ(x) 1−ǫ + v(y, t) ψ(y) 1−ǫ η(t) dµ dt + (ǫ − 1) 2 2ǫ 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + ǫ 2 − 1 2 4ǫ 2ˆt 2 t 1ˆBr (x 0 ) |∇ψ| 2 v(x, t) 1−ǫ ψ(x) ǫ−1 dx dt + ǫ − 1 2ǫˆt
we obtain from (3.16) ess sup The following energy estimate is useful to obtain reverse Hölder inequality for weak supersolutions.
t 1 <t<τ 2ˆBr (x 0 ) ψ(x) ǫ+1 v(x, t) 1−ǫ dx ≤ C(ǫ)(ǫ − 1)ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) ψ(x) 1−ǫ + v(y, t) ψ(y) 1−ǫ η(t) dµ dt + (ǫ − 1) 2Λ ess sup x∈suppψˆR N \Br (x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) ǫ+1 η(t) dx dt + (ǫ − 1)(ǫ + 1) 2 2ǫˆt 2 t 1ˆBr (x 0 ) |∇ψ| 2 v(x, t) 1−ǫ ψ(x) ǫ−1 dx dt +ˆt 2 t 1ˆBr (x 0 ) ψ(x) ǫ+1 v(x, t) 1−ǫ |∂ t η(t)| dx dt.
Lemma 3.2. Assume that u is a weak supersolution of (1.1) such that u ≥ 0 in B R (x 0 ) × (τ 1 , τ 2 + τ ) ⊂ Ω × (0, T ). Let 0 < r ≤ 1 be such that r < R and denote v = u + l, l > 0. Then for any 0 < α < 1, we havê
(3.19)
∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} |∇v^{α/2}|² ψ² η dx dt
≤ (α²/(1−α)) ( ζ₂(1−α) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} ∫_{B_r(x₀)} (ψ(x) − ψ(y))² (v(x,t)^α + v(y,t)^α) η(t) dμ dt
+ 2Λ ess sup_{x ∈ supp ψ} ∫_{R^N \ B_r(x₀)} dy/|x−y|^{N+2s} ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² η(t) dx dt
+ (2Λ/l) ess sup_{τ₁<t<τ₂+τ, x ∈ supp ψ} ∫_{R^N \ B_R(x₀)} u₋(y,t)/|x−y|^{N+2s} dy ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² η(t) dx dt
+ (2/(1−α)) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v^α |∇ψ|² η dx dt
+ (1/α) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² |∂_t η(t)| dx dt )

and

(3.20)
ess sup_{τ₁<t<τ₂} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² dx
≤ 2α ( ζ₂(1−α) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} ∫_{B_r(x₀)} (ψ(x) − ψ(y))² (v(x,t)^α + v(y,t)^α) η(t) dμ dt
+ 2Λ ess sup_{x ∈ supp ψ} ∫_{R^N \ B_r(x₀)} dy/|x−y|^{N+2s} ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² η(t) dx dt
+ (2Λ/l) ess sup_{τ₁<t<τ₂+τ, x ∈ supp ψ} ∫_{R^N \ B_R(x₀)} u₋(y,t)/|x−y|^{N+2s} dy ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² η(t) dx dt
+ (2/(1−α)) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v^α |∇ψ|² η dx dt
+ (1/α) ∫_{τ₁}^{τ₂+τ} ∫_{B_r(x₀)} v(x,t)^α ψ(x)² |∂_t η(t)| dx dt ),

where ζ₂(α) = ζ(α) + 9/α with ζ(α) = 4α/(1−α), and where ψ ∈ C_c^∞(B_r(x₀)) and η ∈ C^∞(R) are nonnegative with η(t) = 1 if τ₁ ≤ t ≤ τ₂ and η(t) = 0 if t ≥ τ₂ + τ.
Proof. Let ǫ ∈ (0, 1) and ψ ∈ C ∞ c (B r (x 0 )). Assume that t 1 ∈ (τ 1 , τ 2 ), t 2 = τ 2 + l and η ∈ C ∞ (t 1 , t 2 ) such that η(t) = 1 for all t ≤ t 1 and η(t 2 ) = 0. Since v = u + l is again a weak supersolution of (1.1), choosing
φ(x, t) = v(x, t) −ǫ ψ(x) 2 η(t)
as a test function in (2.2) (which is again justified by mollifying in time as in the proof of Lemma 3.1), we obtain
0 ≤ J 0 + J 1 + J 2 + J 3 , (3.21) where (3.22) J 0 = − 1 1 − ǫˆB r (x 0 ) v 1−ǫ (x, t 1 )ψ(x) 2 dx − 1 1 − ǫˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x) 2 ∂ t η(t) dx dt, J 1 =ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) φ(x, t) − φ(y, t) η(t) dµ dt, J 2 = 2ˆt 2 t 1ˆR N \Br(x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) φ(x, t) dµ dt and J 3 =ˆt 2 t 1ˆBr (x 0 ) ∇v∇φ dx dt.
Estimate of J 1 : Using [(2.8), Lemma 2.10], for ζ(ǫ) = 4ǫ 1−ǫ , ζ 1 (ǫ) = ζ(ǫ) 6 and ζ 2 (ǫ) = ζ(ǫ) + 9 ǫ , we have
J 1 =ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) v(x, t) −ǫ ψ(x) 2 − v(y, t) −ǫ ψ(y) 2 η(t) dµ dt ≤ −ζ 1 (ǫ)ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x)v(x, t) 1−ǫ 2 − ψ(y)v(y, t) 1−ǫ 2 2 η(t) dµ dt + ζ 2 (ǫ)ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) 1−ǫ + v(y, t) 1−ǫ η(t) dµ dt.
(3.23)
Estimate of J 2 : Since l > 0 and u ≥ 0 in B R (x 0 ) × (t 1 , t 2 ), we have v ≥ l in B R (x 0 ) × (t 1 , t 2 ) and following the same arguments as in the proof of the estimate (3.11), we have
J 2 = 2ˆt 2 t 1ˆR N \Br (x 0 )ˆBr(x 0 ) v(x, t) − v(y, t) v(x, t) −ǫ ψ(x) 2 η(t) dµ dt ≤ 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) 2 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) 2 η(t) dx dt.
(3.24)
Estimate of J 3 : We observe that Using Young's inequality (3.13) for δ = ǫ 4 , we obtain
J 3 =ˆt 2 t 1ˆBr (x 0 ) ∇v∇ v −ǫ ψ 2 η(t) dx dt = −ǫˆt 2 t 1ˆBr (x 0 ) v −ǫ−1 |∇v| 2 ψ 2 η dx dt + 2ˆt 2 t 1ˆBr (x 0 ) ψηv −ǫ ∇v∇ψ dx dt.(3.26) 2ψv −ǫ |∇v||∇ψ| = √ 2ψ|∇v|v −ǫ−1 2 √ 2v 1−ǫ 2 |∇ψ| ≤ ǫ 2 ψ 2 |∇v| 2 v −ǫ−1 + 2 ǫ v 1−ǫ |∇ψ| 2 .
Hence using (3.26) in (3.25), we have Define α = 1 − ǫ. Then, letting t 1 → τ 1 in (3.28) we obtain (3.19). Next, choosing t 1 ∈ (τ 1 , τ 2 ) such thatˆB
J 3 ≤ − ǫ 2ˆt 2 t 1ˆBr (x 0 ) v −ǫ−1 |∇v| 2 ψ 2 η dx dt + 2 ǫˆt 2 t 1ˆBr (x 0 ) v 1−ǫ |∇ψ| 2 η dx dt = − 2ǫ (1 − ǫ) 2ˆt 2 t 1ˆBr (x 0 ) ∇v 1−ǫ 2 2 ψ 2 η dx dt + 2 ǫˆt 2 t 1ˆBr (x 0 ) v 1−ǫ |∇ψ| 2 η dx dt,t 2 t 1ˆBr (x 0 ) |∇v 1−ǫ 2 | 2 ψ 2 η dx dt + (1 − ǫ) 2 ζ 1 (ǫ) 2ǫˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x)v(x, t) 1−ǫ 2 − ψ(y)v(y, t) 1−ǫ 2 2 η(t) dµ dt + 1 − ǫ 2ǫˆB r (x 0 ) v(x, t 1 ) 1−ǫ ψ(x) 2 dx dt ≤ (1 − ǫ) 2 2ǫ ζ 2 (ǫ)ˆt 2 t 1ˆBr (x 0 )ˆBr(x 0 ) ψ(x) − ψ(y) 2 v(x, t) 1−ǫ + v(y, t) 1−ǫ η(t) dµ dt + 2Λ ess sup x∈suppψˆR N \Br(x 0 ) dy |x − y| N +2sˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) 2 η(t) dx dt + 2Λ l ess sup t 1 <t<t 2 , x∈suppψˆR N \B R (x 0 ) u − (y, t) |x − y| N +2s dyˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x, t) 2 η(t) dx dt + 2 ǫˆt 2 t 1ˆBr (x 0 ) v 1−ǫ |∇ψ| 2 η dxdt + 1 1 − ǫˆt 2 t 1ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x) 2 |∂ t η(t)| dx dt .r (x 0 ) v(x, t 1 ) 1−ǫ ψ(x) 2 dx ≥ ess sup τ 1 <t<τ 2ˆBr (x 0 ) v(x, t) 1−ǫ ψ(x) 2 dx,
and using (3.28), the estimate (3.20) follows.
4. Estimates for weak supersolutions
We first establish a supremum estimate for negative powers of weak supersolutions of (1.1).
Lemma 4.1. Assume that u is a weak supersolution of (1.1) such that u ≥ 0 in B_R(x₀) × (t₀ − r², t₀) ⊂ Ω × (0, T). Let 0 < r ≤ 1 be such that r < R/2. Let d > 0 and v = u + l, with

(4.1) l ≥ (r/R)² Tail_∞(u₋; x₀, R, t₀ − r², t₀) + d,

where Tail_∞ is defined by (2.4). Then for any 0 < β < 1 there exist constants C = C(N, s, Λ) and θ = θ(κ) > 0, with κ given by (2.10), such that

(4.2) ess sup_{U⁻(σ′r)} v^{−1} ≤ ( C/(σ − σ′)^θ )^{1/β} ( ⨍_{U⁻(σr)} v^{−β} dx dt )^{1/β}

for every 1/2 ≤ σ′ < σ ≤ 1, where U⁻(ηr) = B_{ηr}(x₀) × (t₀ − (ηr)², t₀) for 1/2 ≤ η ≤ 1.
Proof. Let us divide the interval (σ′, σ) into k parts by setting

σ₀ = σ, σ_k = σ′, σ_j = σ − (σ − σ′)(1 − γ^{−j}), j = 1, …, k − 1,

where γ = 2 − 2/κ and κ is given by (2.10). Let r_j = σ_j r, B_j = B_{r_j}(x₀), Γ_j = (t₀ − r_j², t₀) and Q_j = U⁻(r_j) = B_j × Γ_j. Let ψ_j ∈ C_c^∞(B_j) and η_j ∈ C^∞(R) be such that 0 ≤ ψ_j ≤ 1 in B_j, ψ_j ≡ 1 in B_{j+1}, 0 ≤ η_j ≤ 1 in Γ_j, η_j(t) = 1 for every t ≥ t₀ − r_{j+1}², η_j(t) = 0 for every t ≤ t₀ − r_j², dist(supp ψ_j, R^N \ B_j) ≥ 2^{−j−1} r,

|∇ψ_j| ≤ 8γ^j/(r(σ − σ′)) in B_j and |∂η_j/∂t| ≤ 8 ( γ^j/(r(σ − σ′)) )² in Γ_j.
Let ǫ ≥ 1 and m = 1 + ǫ. Then, for w = v^{−1}, using Lemma 2.12 along with Hölder's inequality with exponents t = κ/(κ−2) and t′ = κ/2, for some positive constant C = C(N), we have

(4.3)
⨍_{Q_{j+1}} w^{γm} dx dt = ⨍_{Γ_{j+1}} ⨍_{B_{j+1}} w^{γm} dx dt = ⨍_{Γ_{j+1}} ⨍_{B_{j+1}} w^{m/t + mκ/(2t′)} dx dt
≤ ⨍_{Γ_{j+1}} (1/|B_{j+1}|) ( ∫_{B_{j+1}} w^m dx )^{1/t} ( ∫_{B_{j+1}} w^{mκ/2} dx )^{1/t′} dt
≤ ⨍_{Γ_{j+1}} (1/|B_{j+1}|) ( ess sup_{Γ_{j+1}} ∫_{B_j} w^m ψ_j^{m+2} dx )^{1/t} ( ∫_{B_j} w^{mκ/2} ψ_j^{(m+2)κ/2} η_j(t)^{κ/2} dx )^{1/t′} dt
= ( |Γ_j| |B_j| / (|Γ_{j+1}| |B_{j+1}|) ) ( ess sup_{Γ_{j+1}} ⨍_{B_j} w^m ψ_j^{m+2} dx )^{1/t} ⨍_{Γ_j} ( ⨍_{B_j} w^{mκ/2} ψ_j^{(m+2)κ/2} η_j(t)^{κ/2} dx )^{1/t′} dt
≤ C ( ess sup_{Γ_{j+1}} ⨍_{B_j} w^m ψ_j^{m+2} dx )^{1/t} r² ⨍_{Γ_j} ⨍_{B_j} |∇( w^{m/2} ψ_j^{(m+2)/2} η_j^{1/2} )|² dx dt
≤ (C/r_j^{γN}) ( ess sup_{Γ_{j+1}} ∫_{B_j} w^m ψ_j^{m+2} dx )^{1/t} ∫_{Γ_j} ∫_{B_j} ( |∇w^{m/2}|² ψ_j^{m+2} η_j(t) + m² w^m |∇ψ_j|² ) dx dt.

Write J = ess sup_{Γ_{j+1}} ∫_{B_j} w^m ψ_j^{m+2} dx, I = ∫_{Γ_j} ∫_{B_j} |∇w^{m/2}|² ψ_j^{m+2} η_j dx dt and K = m² ∫_{Γ_j} ∫_{B_j} w^m |∇ψ_j|² dx dt for the three quantities appearing on the right-hand side of (4.3). Setting r = r_j, τ₁ = t₀ − r_{j+1}², τ₂ = t₀ and τ = r_j² − r_{j+1}² in Lemma 3.1, for some constant C = C(Λ) > 0, we have

(4.4)
I, J ≤ Cm⁴ ( ∫_{Γ_j} ∫_{B_j} ∫_{B_j} (ψ_j(x) − ψ_j(y))² ( w(x,t)^m ψ_j(x)^m + w(y,t)^m ψ_j(y)^m ) η_j(t) dμ dt
+ 2 ess sup_{x ∈ supp ψ_j} ∫_{R^N \ B_j} dy/|x−y|^{N+2s} ∫_{Γ_j} ∫_{B_j} w(x,t)^m ψ_j(x)^{m+2} η_j(t) dx dt
+ (2/l) ess sup_{t ∈ Γ_j, x ∈ supp ψ_j} ∫_{R^N \ B_R(x₀)} u₋(y,t)/|x−y|^{N+2s} dy ∫_{Γ_j} ∫_{B_j} w(x,t)^m ψ_j(x)^{m+2} η_j(t) dx dt
+ ∫_{Γ_j} ∫_{B_j} |∇ψ_j|² w(x,t)^m ψ_j(x)^m η_j(t) dx dt
+ ∫_{Γ_j} ∫_{B_j} w(x,t)^m ψ_j(x)^{m+2} |∂_t η_j(t)| dx dt )
=: I₁ + I₂ + I₃ + I₄ + I₅.

Estimate of I₁: Using the properties of ψ_j, η_j and the fact that 0 < r ≤ 1, we have

(4.5)
I₁ = Cm⁴ ∫_{Γ_j} ∫_{B_j} ∫_{B_j} (ψ_j(x) − ψ_j(y))² ( w(x,t)^m ψ_j(x)^m + w(y,t)^m ψ_j(y)^m ) η_j(t) dμ dt
≤ Cm⁴ (γ^{2j}/(r²(σ − σ′)²)) ess sup_{x ∈ B_j} ∫_{B_j} (|x−y|²/|x−y|^{N+2s}) dy ∫_{Γ_j} ∫_{B_j} w(x,t)^m dx dt
≤ Cm⁴ (γ^{2j}/(r_j^{2s}(σ − σ′)²)) ∫_{Γ_j} ∫_{B_j} w(x,t)^m dx dt
≤ Cm⁴ (γ^{2j}/(r_j²(σ − σ′)²)) ∫_{Γ_j} ∫_{B_j} w(x,t)^m dx dt,

for some constant C = C(N, s, Λ) > 0.

Estimate of I₂: Without loss of generality, we may assume that x₀ = 0. Then, by the properties of ψ_j and η_j, for any x ∈ supp ψ_j and y ∈ R^N \ B_j, we have

1/|x−y| = (1/|y|) (|y|/|x−y|) ≤ (1/|y|) (1 + |x|/|x−y|) ≤ (1/|y|) (1 + r/(2^{−j−1} r)) ≤ 2^{j+2}/|y|.
This implies
\[
(4.6)\quad
\begin{aligned}
I_2 &= Cm^4\operatorname*{ess\,sup}_{x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_j}\frac{dy}{|x-y|^{N+2s}}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\psi_j(x)^{m+2}\eta_j(t)\,dx\,dt \\
&\le C2^{N+2s+2}m^4\,2^{j(N+2s)}\operatorname*{ess\,sup}_{x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_j}\frac{dy}{|y|^{N+2s}}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt \\
&= \frac{Cm^4\,2^{j(N+2s)}}{r_j^{2s}}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt
\le \frac{Cm^4\,2^{j(N+2s)}}{r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\end{aligned}
\]
for some constant $C=C(N,s,\Lambda)>0$. Estimate of $I_3$: Without loss of generality, we again assume that $x_0=0$. Let $x\in\operatorname{supp}\psi_j$ and $y\in\mathbb{R}^N\setminus B_R(x_0)$; then
\[
\frac{1}{|x-y|} \le \frac{1}{|y|}\Bigl(1+\frac{r}{R-r}\Bigr) \le \frac{2}{|y|}.
\]
By (4.1), we have
\[
(4.7)\quad
\begin{aligned}
I_3 &= \frac{Cm^4}{l}\operatorname*{ess\,sup}_{t\in\Gamma_j,\;x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_R(x_0)}\frac{u^-(y,t)}{|x-y|^{N+2s}}\,dy\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\psi_j(x)^{m+2}\eta_j(t)\,dx\,dt \\
&\le \frac{Cm^4}{l}\,R^{-2}\operatorname{Tail}_\infty\bigl(u^-;0,R,t_0-r^2,t_0\bigr)\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\psi_j(x)^{m+2}\eta_j(t)\,dx\,dt \\
&\le \frac{Cm^4}{r^2(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\end{aligned}
\]
for some constant $C=C(N,s,\Lambda)>0$. Estimate of $I_4$: By the properties of $\psi_j$ and $\eta_j$, we obtain
\[
(4.8)\quad
I_4 = Cm^4\int_{\Gamma_j}\int_{B_j}|\nabla\psi_j|^2 w(x,t)^{m}\psi_j(x)^{m}\eta_j(t)\,dx\,dt
\le \frac{Cm^4\gamma^{2j}}{r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\]
for some constant $C=C(N,s,\Lambda)>0$. Estimate of $I_5$: By the properties of $\psi_j$ and $\eta_j$, we have
\[
(4.9)\quad
I_5 = Cm^4\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\psi_j(x)^{m+2}\,|\partial_t\eta_j(t)|\,dx\,dt
\le \frac{Cm^4\gamma^{2j}}{r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\]
for some constant $C=C(N,s,\Lambda)>0$. Plugging the estimates (4.5), (4.6), (4.7), (4.8) and (4.9) into (4.4), since $\gamma<2$, we obtain
\[
(4.10)\quad
I,\,J \le \frac{Cm^4\,2^{j(N+4)}}{r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\]
for some positive constant $C=C(N,s,\Lambda)$. Again, using the properties of $\psi_j$, we obtain
\[
(4.11)\quad
K \le \frac{Cm^4\,2^{2j}}{r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{m}\,dx\,dt,
\]
for some positive constant $C=C(N,s,\Lambda)$. Employing the estimates (4.10) and (4.11) in (4.3), for $m=1+\epsilon$, $\epsilon\ge1$ and $\gamma=2-\frac2\kappa$, we have
\[
(4.12)\quad
\fint_{Q_{j+1}} w^{\gamma m}\,dx\,dt \le \left(\frac{C\,m^4\,2^{j(N+4)}}{(\sigma-\sigma')^2}\,\fint_{Q_j} w^{m}\,dx\,dt\right)^{\gamma},
\]
for some positive constant $C=C(N,s,\Lambda)$. Now, we use Moser's iteration technique to prove the estimate (4.2). Let $m_j=2\gamma^{j}$, $j=0,1,2,\dots$. By iterating (4.12), we have
\[
(4.13)\quad
\left(\fint_{Q_0} w^{2}\,dx\,dt\right)^{\frac12}
\ge \frac{(\sigma-\sigma')^{1+\gamma^{-1}+\gamma^{-2}+\cdots+\gamma^{1-k}}}
{C^{1+\gamma^{-1}+\gamma^{-2}+\cdots+\gamma^{1-k}}\;2^{(N+8)(\gamma^{-1}+\gamma^{-2}+\cdots+\gamma^{1-k})}\;\gamma^{2(\gamma^{-1}+2\gamma^{-2}+\cdots+(k-1)\gamma^{1-k})}}
\left(\fint_{Q_k} w^{m_k}\,dx\,dt\right)^{\frac1{m_k}}.
\]
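To illustrate how the exponent sums in (4.13) accumulate, we record the first step of the iteration in our notation: applying (4.12) with $m=m_0=2$ (so $j=0$) and raising both sides to the power $\frac1{m_1}=\frac1{\gamma m_0}$ gives
\[
\left(\fint_{Q_1} w^{m_1}\,dx\,dt\right)^{\frac1{m_1}}
\le \left(\frac{C\,m_0^{4}}{(\sigma-\sigma')^{2}}\right)^{\frac1{m_0}}\left(\fint_{Q_0} w^{m_0}\,dx\,dt\right)^{\frac1{m_0}};
\]
repeating with $m_1,m_2,\dots$ multiplies the prefactors, which produces the geometric sums $\gamma^{-1}+\gamma^{-2}+\cdots$ appearing in (4.13).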
Letting $k\to\infty$ in (4.13), the result follows for $\beta=2$. If $0<\beta<1$, by Young's inequality,
\[
(4.14)\quad
\begin{aligned}
\operatorname*{ess\,sup}_{U^-(\sigma'r)} w
&\le \left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac12}\left(\fint_{U^-(\sigma r)} w^{2}\,dx\,dt\right)^{\frac12} \\
&\le \left(\frac{2-\beta}{4}\operatorname*{ess\,sup}_{U^-(\sigma r)} w\right)^{\frac{2-\beta}{2}}\left(\frac{4}{2-\beta}\right)^{\frac{2-\beta}{2}}\left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac12}\left(\fint_{U^-(\sigma r)} w^{\beta}\,dx\,dt\right)^{\frac12} \\
&\le \frac12\operatorname*{ess\,sup}_{U^-(\sigma r)} w + (2-\beta)^{\beta-2}\left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac1\beta}\left(\fint_{U^-(\sigma r)} w^{\beta}\,dx\,dt\right)^{\frac1\beta} \\
&\le \frac12\operatorname*{ess\,sup}_{U^-(\sigma r)} w + \left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac1\beta}\left(\fint_{U^-(\sigma r)} w^{\beta}\,dx\,dt\right)^{\frac1\beta}.
\end{aligned}
\]
The result now follows by an iteration argument similar to [29, Lemma 5.1].
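For the reader's convenience, we recall the standard absorption lemma invoked here, stated in one common form: if $f$ is nonnegative and bounded on $[\frac12,1]$ and
\[
f(\sigma')\le\frac12\,f(\sigma)+\frac{A}{(\sigma-\sigma')^{\theta}}\qquad\text{for all }\tfrac12\le\sigma'<\sigma\le1,
\]
then $f(\frac12)\le c(\theta)\,A$. It is applied with $f(\sigma)=\operatorname{ess\,sup}_{U^-(\sigma r)}w$ and $A=C^{\frac1\beta}\bigl(\fint_{U^-(r)}w^{\beta}\,dx\,dt\bigr)^{\frac1\beta}$, up to a bounded factor coming from replacing the average over $U^-(\sigma r)$ by the average over $U^-(r)$.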
We obtain a reverse Hölder inequality for weak supersolutions of the problem (1.1).
Lemma 4.2. Assume that $u$ is a weak supersolution of (1.1) such that $u\ge0$ in $B_R(x_0)\times(t_0,t_0+r^2)\subset\Omega\times(0,T)$. Let $0<r\le1$ be such that $r<\frac R2$, $d>0$ and $w=u+l$, with
\[
(4.15)\quad l \ge \Bigl(\frac rR\Bigr)^{2}\operatorname{Tail}_\infty\bigl(u^-;x_0,R,t_0,t_0+r^2\bigr)+d,
\]
where $\operatorname{Tail}_\infty$ is defined by (2.4). Let $\gamma=2-\frac2\kappa$, where $\kappa>2$ is given by (2.10). Then there exist positive constants $C=C(N,s,\Lambda,q)$ and $\theta=\theta(\kappa)$ such that
\[
(4.16)\quad
\left(\fint_{U^+(\sigma'r)} w^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}}
\le \left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac1q}\left(\fint_{U^+(\sigma r)} w^{q}\,dx\,dt\right)^{\frac1q},
\]
for all $\frac12\le\sigma'<\sigma\le1$ and $0<q<\bar q<\gamma$, where $U^+(\eta r)=B_{\eta r}(x_0)\times(t_0,\,t_0+(\eta r)^2)$, $\frac12\le\eta\le1$.
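Purely as an illustration of the admissible range (the actual value of $\kappa$ is fixed by (2.10) and is not restated here): if, say, $\kappa=4$, then
\[
\gamma = 2-\frac24 = \frac32,
\]
and (4.16) applies to any pair of exponents with $0<q<\bar q<\frac32$, for example $q=1$ and $\bar q=\frac54$.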
Proof. We divide the interval $(\sigma',\sigma)$ into $k$ parts by setting
\[
\sigma_0=\sigma,\quad \sigma_k=\sigma',\quad \sigma_j=\sigma-(\sigma-\sigma')\,\frac{1-\gamma^{-j}}{1-\gamma^{-k}},\quad j=1,\dots,k-1.
\]
We shall choose $k$ below. Denote $r_j=\sigma_j r$, $B_j=B_{r_j}(x_0)$, $\Gamma_j=(t_0,\,t_0+r_j^2)$ and $Q_j=U^+(r_j)=B_j\times\Gamma_j$. We choose $\psi_j\in C_c^\infty(B_j)$ and $\eta_j\in C^\infty(\mathbb{R})$ such that $0\le\psi_j\le1$ in $B_j$, $\psi_j\equiv1$ in $B_{j+1}$, $0\le\eta_j\le1$ in $\Gamma_j$, $\eta_j(t)=1$ for every $t\le t_0+r_{j+1}^2$, $\eta_j(t)=0$ for every $t\ge t_0+r_j^2$, $\operatorname{dist}\bigl(\operatorname{supp}\psi_j,\mathbb{R}^N\setminus B_j\bigr)\ge2^{-j-1}r$,
\[
|\nabla\psi_j|\le\frac{8\gamma^{j}}{r(\sigma-\sigma')}
\qquad\text{and}\qquad
\Bigl|\frac{\partial\eta_j}{\partial t}\Bigr|\le8\Bigl(\frac{\gamma^{j}}{r(\sigma-\sigma')}\Bigr)^{2}\ \text{in }Q_j.
\]
Let $0<\epsilon<1$ and $\alpha=1-\epsilon$. Using Lemma 2.12 along with Hölder's inequality with exponents $t=\frac{\kappa}{\kappa-2}$ and $t'=\frac\kappa2$, for some positive constant $C=C(N)$, we have
\[
(4.17)\quad
\begin{aligned}
\fint_{Q_{j+1}} w^{\gamma\alpha}\,dx\,dt &= \fint_{\Gamma_{j+1}}\fint_{B_{j+1}} w^{\gamma\alpha}\,dx\,dt = \fint_{\Gamma_{j+1}}\fint_{B_{j+1}} w^{\frac\alpha t+\frac{\alpha\kappa}{2t'}}\,dx\,dt \\
&\le \fint_{\Gamma_{j+1}}\frac{1}{|B_{j+1}|}\left(\int_{B_{j+1}} w^{\alpha}\,dx\right)^{\frac1t}\left(\int_{B_{j+1}} w^{\frac{\alpha\kappa}2}\,dx\right)^{\frac1{t'}}dt \\
&\le \fint_{\Gamma_{j+1}}\frac{1}{|B_{j+1}|}\left(\operatorname*{ess\,sup}_{\Gamma_{j+1}}\int_{B_j} w^{\alpha}\psi_j^2\,dx\right)^{\frac1t}\left(\int_{B_j} w^{\frac{\alpha\kappa}2}\psi_j^{\frac{(\alpha+2)\kappa}2}\eta_j(t)^{\frac\kappa2}\,dx\right)^{\frac1{t'}}dt \\
&= \frac{|\Gamma_j||B_j|}{|\Gamma_{j+1}||B_{j+1}|}\left(\operatorname*{ess\,sup}_{\Gamma_{j+1}}\fint_{B_j} w^{\alpha}\psi_j^2\,dx\right)^{\frac1t}\fint_{\Gamma_j}\left(\fint_{B_j} w^{\frac{\alpha\kappa}2}\psi_j^{\frac{(\alpha+2)\kappa}2}\eta_j(t)^{\frac\kappa2}\,dx\right)^{\frac1{t'}}dt \\
&\le C\left(\operatorname*{ess\,sup}_{\Gamma_{j+1}}\fint_{B_j} w^{\alpha}\psi_j^2\,dx\right)^{\frac1t} r^2\,\fint_{\Gamma_j}\fint_{B_j}\Bigl|\nabla\bigl(w^{\frac\alpha2}\psi_j^{\frac{\alpha+2}2}\eta_j^{\frac12}\bigr)\Bigr|^2\,dx\,dt \\
&\le \frac{C}{r_j^{\gamma N}}\left(\operatorname*{ess\,sup}_{\Gamma_{j+1}}\int_{B_j} w^{\alpha}\psi_j^2\,dx\right)^{\frac1t}\int_{\Gamma_j}\int_{B_j}\Bigl(\bigl|\nabla w^{\frac\alpha2}\bigr|^2\psi_j(x)^{2}\eta_j(t)+\alpha^2 w^{\alpha}|\nabla\psi_j|^2\Bigr)\,dx\,dt.
\end{aligned}
\]
Setting $r=r_j$, $\tau_1=t_0$, $\tau_2=t_0+r_{j+1}^2$ and $\tau=r_j^2-r_{j+1}^2$ in Lemma 3.1, for some constant $C=C(\Lambda)>0$, we have
\[
(4.18)\quad
\begin{aligned}
I,\,J \le \frac{C}{\epsilon^2}\Biggl(&\int_{\Gamma_j}\int_{B_j}\int_{B_j}|\psi_j(x)-\psi_j(y)|^2\bigl(w(x,t)^{\alpha}+w(y,t)^{\alpha}\bigr)\eta_j(t)\,d\mu\,dt \\
&+2\operatorname*{ess\,sup}_{x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_j}\frac{dy}{|x-y|^{N+2s}}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\eta_j(t)\,dx\,dt \\
&+\frac2l\operatorname*{ess\,sup}_{t\in\Gamma_j,\;x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_R(x_0)}\frac{u^-(y,t)}{|x-y|^{N+2s}}\,dy\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\eta_j(t)\,dx\,dt \\
&+\int_{\Gamma_j}\int_{B_j} w^{\alpha}|\nabla\psi_j|^2\eta_j\,dx\,dt
+\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\,|\partial_t\eta_j(t)|\,dx\,dt\Biggr) \\
=\ & I_1+I_2+I_3+I_4+I_5,
\end{aligned}
\]
where now
\[
K=\alpha^2\int_{\Gamma_j}\int_{B_j} w^{\alpha}|\nabla\psi_j|^2\,dx\,dt.
\]
Proceeding as in the proof of Lemma 4.1, we obtain
\[
(4.19)\quad
I_1 = \frac{C}{\epsilon^2}\int_{\Gamma_j}\int_{B_j}\int_{B_j}|\psi_j(x)-\psi_j(y)|^2\bigl(w(x,t)^{\alpha}+w(y,t)^{\alpha}\bigr)\eta_j(t)\,d\mu\,dt
\le \frac{C\gamma^{2j}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt
\]
and
\[
(4.20)\quad
I_2 = \frac{C}{\epsilon^2}\operatorname*{ess\,sup}_{x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_j}\frac{dy}{|x-y|^{N+2s}}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\eta_j(t)\,dx\,dt
\le \frac{C\,2^{j(N+2s)}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt,
\]
for some constant $C=C(N,s,\Lambda)>0$. Again arguing as in the estimate of $I_3$ in the proof of Lemma 4.1 and noting (4.15), we have
\[
(4.21)\quad
I_3 = \frac{C}{l\,\epsilon^2}\operatorname*{ess\,sup}_{t\in\Gamma_j,\;x\in\operatorname{supp}\psi_j}\int_{\mathbb{R}^N\setminus B_R(x_0)}\frac{u^-(y,t)}{|x-y|^{N+2s}}\,dy\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\eta_j(t)\,dx\,dt
\le \frac{C\gamma^{2j}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt,
\]
for some positive constant $C=C(\Lambda,N,s)$. Moreover,
\[
(4.22)\quad
I_4 = \frac{C}{\epsilon^2}\int_{\Gamma_j}\int_{B_j}|\nabla\psi_j|^2 w(x,t)^{\alpha}\psi_j(x)^{\alpha}\eta_j(t)\,dx\,dt
\le \frac{C\gamma^{2j}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt
\]
and
\[
(4.23)\quad
I_5 = \frac{C}{\epsilon^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\psi_j(x)^2\,|\partial_t\eta_j(t)|\,dx\,dt
\le \frac{C\gamma^{2j}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt,
\]
for some constant $C=C(N,s,\Lambda)>0$. Plugging the estimates (4.19), (4.20), (4.21), (4.22) and (4.23) into (4.18), since $\gamma<2$, we obtain
\[
(4.24)\quad
I,\,J \le \frac{C\,2^{j(N+4)}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt,
\]
for some positive constant $C=C(N,s,\Lambda)$. Using the properties of $\psi_j$, we obtain
\[
(4.25)\quad
K \le \frac{C\,2^{2j}}{\epsilon^2 r_j^{2}(\sigma-\sigma')^2}\int_{\Gamma_j}\int_{B_j} w(x,t)^{\alpha}\,dx\,dt,
\]
for some positive constant $C=C(N,s,\Lambda)$. As in the proof of Lemma 4.1, employing the estimates (4.24) and (4.25) in (4.17), for $\gamma=2-\frac2\kappa$, we have
\[
(4.26)\quad
\fint_{Q_{j+1}} w^{\gamma\alpha}\,dx\,dt \le \left(\frac{C\,2^{j(N+4)}}{(\sigma-\sigma')^2}\,\fint_{Q_j} w^{\alpha}\,dx\,dt\right)^{\gamma},
\]
for some positive constant $C=C(N,s,\Lambda)$. Note that $C$ is independent of $\epsilon$ as long as $\alpha$ stays away from $1$. We use the Moser iteration technique to conclude the result. Fix $q$ and $\bar q$ such that $\bar q>q$, and choose $k$ such that $q\gamma^{k-1}\le\bar q\le q\gamma^{k}$. Let $t_0$ be such that $t_0\le q$ and $\bar q=\gamma^{k}t_0$. Let $t_j=\gamma^{j}t_0$, $j=0,1,\dots,k$. By iterating (4.26) and Hölder's inequality, we arrive at
\[
(4.27)\quad
\left(\fint_{Q_k} w^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}}
\le \left(\frac{C_*}{(\sigma-\sigma')^{\beta}}\right)^{\frac1{t_0}}\left(\fint_{Q_0} w^{t_0}\,dx\,dt\right)^{\frac1{t_0}}
\le \left(\frac{C_*}{(\sigma-\sigma')^{\beta}}\right)^{\frac1{t_0}}\left(\fint_{Q_0} w^{q}\,dx\,dt\right)^{\frac1{q}},
\]
where $C_* = 2^{(N+4)(\gamma^{-1}+2\gamma^{-2}+\cdots+(k-1)\gamma^{1-k})}\,C^{\gamma^{-1}+\gamma^{-2}+\cdots+\gamma^{-k}}$ and
\[
\beta = 2\bigl(1+\gamma^{-1}+\gamma^{-2}+\cdots+\gamma^{1-k}\bigr) = \frac{2\gamma}{\gamma-1}\bigl(1-\gamma^{-k}\bigr).
\]
Note that, due to the singularity at $\epsilon=0$ in the estimates (4.24) and (4.25), the constant $C$ in (4.27) depends on $q$. It is easy to observe that $C_*$ and $\beta$ are uniformly bounded in $k$. Now, since $q\gamma^{k-1}\le\bar q=t_0\gamma^{k}$, we have $t_0\ge\frac q\gamma$. Hence, the result follows from (4.27), with $\theta=\frac{2\gamma^2}{\gamma-1}$. Next, we prove the following logarithmic estimate for weak supersolutions of (1.1).
Lemma 4.3. Assume that $u$ is a weak supersolution of the problem (1.1) such that $u\ge0$ in $B_R(x_0)\times(t_0-r^2,t_0+r^2)\subset\Omega\times(0,T)$. Let $0<r\le1$ be such that $r<\frac R2$, $d>0$ and $v=u+l$, with
\[
l = \Bigl(\frac rR\Bigr)^{2}\operatorname{Tail}_\infty\bigl(u^-;x_0,R,t_0-r^2,t_0+r^2\bigr)+d,
\]
where Tail ∞ is defined by (2.4). Then there exists a constant C = C(N, s, Λ) > 0 such that
\[
(4.28)\quad \bigl|U^+(r)\cap\{\log v<-\lambda-b\}\bigr| \le \frac{Cr^{N+2}}{\lambda},
\]
where $U^+(r)=B_r(x_0)\times(t_0,t_0+r^2)$. Moreover, there exists a constant $C=C(N,s,\Lambda)>0$ such that
\[
(4.29)\quad \bigl|U^-(r)\cap\{\log v>\lambda-b\}\bigr| \le \frac{Cr^{N+2}}{\lambda},
\]
where $U^-(r)=B_r(x_0)\times(t_0-r^2,t_0)$. Here
\[
(4.30)\quad b = b\bigl(v(\cdot,t_0)\bigr) = -\frac{\int_{B_{3r/2}(x_0)}\log v(x,t_0)\,\psi(x)^2\,dx}{\int_{B_{3r/2}(x_0)}\psi(x)^2\,dx},
\]
where $\psi\in C_c^\infty\bigl(B_{3r/2}(x_0)\bigr)$ is a nonnegative, radially decreasing function such that $0\le\psi\le1$ in $B_{3r/2}(x_0)$, $\psi\equiv1$ in $B_r(x_0)$ and $|\nabla\psi|\le\frac Cr$ in $B_{3r/2}(x_0)$, for some constant $C>0$ independent of $r$.
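Since $|U^+(r)|=|B_r(x_0)|\,r^2=|B_1|\,r^{N+2}$, the estimate (4.28) can equivalently be read as a bound on the relative measure of the sublevel set:
\[
\frac{\bigl|U^+(r)\cap\{\log v<-\lambda-b\}\bigr|}{|U^+(r)|}\le\frac{C}{|B_1|\,\lambda},
\]
which decays as $\lambda\to\infty$; the same remark applies to (4.29).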
Proof. We only prove the estimate (4.28), since the proof of (4.29) follows similarly. Without loss of generality, we assume that $x_0=0$ and write $B_r=B_r(0)$, $B_{3r/2}=B_{3r/2}(0)$. Since $v$ is a weak supersolution of (1.1), choosing $\varphi(x,t)=\psi(x)^2 v(x,t)^{-1}$ as a test function in (2.2) (which can again be justified as in the proof of Lemma 3.1), we get
\[
(4.31)\quad I_1+I_2+2I_3+I_4\ge0,
\]
where, for any $t_0\le t_1<t_2\le t_0+r^2$, we have
\[
(4.32)\quad I_1 = \Bigl[\int_{B_{3r/2}}\log v(x,t)\,\psi(x)^2\,dx\Bigr]_{t=t_1}^{t_2},
\]
which gives the estimate
\[
(4.37)\quad
\int_{t_1}^{t_2}\int_{B_{3r/2}}|\nabla\log v|^2\,\psi^2\,dx\,dt
-\Bigl[\int_{B_{3r/2}}\log v(x,t)\,\psi(x)^2\,dx\Bigr]_{t=t_1}^{t_2}
\le C(t_2-t_1)\,r^{N-2}.
\]
Let $w(x,t)=-\log v(x,t)$ and
\[
W(t) = \frac{\int_{B_{3r/2}} w(x,t)\,\psi(x)^2\,dx}{\int_{B_{3r/2}}\psi(x)^2\,dx}.
\]
Since $0\le\psi\le1$ in $B_{3r/2}$ and $\psi\equiv1$ in $B_r$, we obtain $\int_{B_{3r/2}}\psi(x)^2\,dx\approx r^N$. Then, by Lemma 2.11, for some positive constant $C_1>0$ (independent of $r$), we obtain
\[
(4.38)\quad \frac1{r^2}\fint_{B_{3r/2}}|w-W(t)|^2\,\psi^2\,dx \le C_1\,\frac{\int_{B_{3r/2}}|\nabla w|^2\,\psi^2\,dx}{\int_{B_{3r/2}}\psi^2\,dx}.
\]
Dividing both sides of (4.37) by $\int_{B_{3r/2}}\psi^2\,dx$ and using (4.38), together with the fact that $\psi\equiv1$ in $B_r$, we get
\[
W(t_2)-W(t_1)+\frac1{C_1 r^{N+2}}\int_{t_1}^{t_2}\int_{B_r}|w(x,t)-W(t)|^2\,dx\,dt \le C(t_2-t_1)\,r^{-2}.
\]
Let $A_1=C_1$, $A_2=C$, $\tilde w(x,t)=w(x,t)-A_2r^{-2}(t-t_1)$ and $\tilde W(t)=W(t)-A_2r^{-2}(t-t_1)$. Then $\tilde w(x,t)-\tilde W(t)=w(x,t)-W(t)$, and hence
\[
(4.39)\quad \tilde W(t_2)-\tilde W(t_1)+\frac1{A_1 r^{N+2}}\int_{t_1}^{t_2}\int_{B_r}|\tilde w(x,t)-\tilde W(t)|^2\,dx\,dt \le 0.
\]
This shows that $\tilde W(t)$ is monotone decreasing in $(t_1,t_2)$; in particular, $\tilde W$ is differentiable almost everywhere with respect to $t$. Hence, from (4.39), for almost every $t$ with $t_1<t<t_2$, we obtain
\[
(4.40)\quad \tilde W'(t)+\frac1{A_1 r^{N+2}}\int_{B_r}\bigl(\tilde w(x,t)-\tilde W(t)\bigr)^2\,dx \le 0.
\]
Setting $t_1=t_0$ and $t_2=t_0+r^2$, we have $\tilde W(t_0)=W(t_0)$, and we denote $b=b\bigl(v(\cdot,t_0)\bigr)=W(t_0)$. Let $\Omega_t^+(\lambda)=\bigl\{x\in B_r:\ \tilde w(x,t)>b+\lambda\bigr\}$. For every $t\ge t_0$, we have $\tilde W(t)\le\tilde W(t_0)=b$. Thus $x\in\Omega_t^+(\lambda)$ gives
\[
\tilde w(x,t)-\tilde W(t) \ge b+\lambda-\tilde W(t) \ge b+\lambda-\tilde W(t_0) = \lambda>0.
\]
Hence from (4.40), we have
\[
\tilde W'(t)+\frac{|\Omega_t^+(\lambda)|}{A_1 r^{N+2}}\bigl(b+\lambda-\tilde W(t)\bigr)^2 \le 0.
\]
Therefore, we have
\[
|\Omega_t^+(\lambda)| \le -A_1 r^{N+2}\,\partial_t\Bigl[\bigl(b+\lambda-\tilde W(t)\bigr)^{-1}\Bigr].
\]
Integrating from $t_0$ to $t_0+r^2$, we obtain
\[
\bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ \tilde w(x,t)>b+\lambda\}\bigr|
\le -A_1 r^{N+2}\int_{t_0}^{t_0+r^2}\partial_t\Bigl[\bigl(b+\lambda-\tilde W(t)\bigr)^{-1}\Bigr]\,dt,
\]
which gives
\[
(4.41)\quad
\bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ \log v(x,t)+A_2r^{-2}(t-t_0)<-\lambda-b\}\bigr| \le \frac{A_1 r^{N+2}}{\lambda}.
\]
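The last step rests on the following elementary evaluation (recorded here in the notation above, using $\tilde W(t_0)=b$ and the monotonicity of $\tilde W$, so that $b+\lambda-\tilde W(t)\ge\lambda>0$):
\[
-\int_{t_0}^{t_0+r^2}\partial_t\Bigl[\bigl(b+\lambda-\tilde W(t)\bigr)^{-1}\Bigr]dt
= \frac{1}{b+\lambda-\tilde W(t_0)}-\frac{1}{b+\lambda-\tilde W(t_0+r^2)}
\le \frac1\lambda,
\]
since the first term equals $\frac1\lambda$ and the second is nonnegative.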
Finally, we obtain
\[
(4.42)\quad \bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ \log v(x,t)<-\lambda-b\}\bigr| \le A+B,
\]
where, using (4.41), for some positive constant $C=C(N,s,\Lambda)$, we obtain
\[
A = \bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ \log v(x,t)+A_2r^{-2}(t-t_0)<-\tfrac\lambda2-b\}\bigr| \le \frac{Cr^{N+2}}{\lambda},
\]
and
\[
B = \bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ A_2r^{-2}(t-t_0)>\tfrac\lambda2\}\bigr| \le \Bigl(1-\frac{\lambda}{2A_2}\Bigr)r^{N+2}.
\]
If $\frac{\lambda}{2A_2}<1$, then
\[
B \le \Bigl(1-\frac{\lambda}{2A_2}\Bigr)r^{N+2} < r^{N+2} < \frac{2A_2}{\lambda}\,r^{N+2}.
\]
If $\frac{\lambda}{2A_2}\ge1$, then the set defining $B$ is empty, so $B=0$. Hence, in any case, we have
\[
B \le \frac{Cr^{N+2}}{\lambda},
\]
for some positive constant C = C(N, s, Λ). Inserting the above estimates on A and B into (4.42), we obtain
\[
\bigl|\{(x,t)\in B_r\times(t_0,t_0+r^2):\ \log v(x,t)<-\lambda-b\}\bigr| \le \frac{Cr^{N+2}}{\lambda},
\]
for some positive constant C = C(N, s, Λ), which proves the estimate (4.28).
5. Proof of the main result
Proof of Theorem 2.8: For $\frac12\le\eta\le1$, denote
\[
V^+(\eta r)=B_{\eta r}(x_0)\times\bigl(t_0+r^2-(\eta r)^2,\ t_0+r^2\bigr)
\quad\text{and}\quad
V^-(\eta r)=B_{\eta r}(x_0)\times\bigl(t_0-r^2,\ t_0-r^2+(\eta r)^2\bigr).
\]
Then $V^+(r)=B_r(x_0)\times(t_0,t_0+r^2)=U^+(r)$ and $V^-(r)=B_r(x_0)\times(t_0-r^2,t_0)=U^-(r)$. Observe that $V^-$ and $V^+$ are nondecreasing families of cylinders over the interval $[\frac12,1]$. Let $u$ be as in the hypothesis and, for any $d>0$, set $v=u+l$, where
\[
l = \Bigl(\frac rR\Bigr)^{2}\operatorname{Tail}_\infty\bigl(u^-;x_0,R,t_0-r^2,t_0+r^2\bigr)+d,
\]
where $\operatorname{Tail}_\infty$ is defined by (2.4). We denote $w_1=e^{-b}v^{-1}$ and $w_2=e^{b}v$, where $b$ is given by (4.30). By applying (4.28) in Lemma 4.3, for any $\lambda>0$, with some constant $C=C(N,s,\Lambda)>0$ we have
\[
(5.1)\quad \bigl|V^+(r)\cap\{\log w_1>\lambda\}\bigr| \le \frac{C\,\bigl|V^+\bigl(\tfrac r2\bigr)\bigr|}{\lambda}.
\]
Moreover, using Lemma 4.1, for any $0<\beta<1$, there exists some constant $C=C(N,s,\Lambda)>0$ such that
\[
(5.2)\quad \operatorname*{ess\,sup}_{V^+(\sigma'r)} w_1 \le \left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac1\beta}\left(\fint_{V^+(\sigma r)} w_1^{\beta}\,dx\,dt\right)^{\frac1\beta},
\]
for $\frac12\le\sigma'<\sigma\le1$. Therefore, using (5.1) and (5.2) in Lemma 2.13, we have
\[
(5.3)\quad \operatorname*{ess\,sup}_{V^+(\frac r2)} w_1 \le C_1,
\]
for some constant $C_1=C_1(\theta,N,s,\Lambda)>0$. By (4.29) in Lemma 4.3, there exists a constant $C=C(N,s,\Lambda)>0$ such that
\[
(5.4)\quad \bigl|V^-(r)\cap\{\log w_2>\lambda\}\bigr| \le \frac{C\,\bigl|V^-\bigl(\tfrac r2\bigr)\bigr|}{\lambda}
\]
for every $\lambda>0$. By Lemma 4.2, there exists a constant $C=C(N,s,\Lambda,q)>0$ such that
\[
(5.5)\quad \left(\fint_{V^-(\sigma'r)} w_2^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}} \le \left(\frac{C}{(\sigma-\sigma')^{\theta}}\right)^{\frac1q}\left(\fint_{V^-(\sigma r)} w_2^{q}\,dx\,dt\right)^{\frac1q},
\]
for $\frac12\le\sigma'<\sigma\le1$ and $0<q<\bar q<2-\frac2\kappa$, where $\kappa$ is given by (2.10). Therefore, using (5.4) and (5.5) in Lemma 2.13, we have
\[
(5.6)\quad \left(\fint_{V^-(\frac r2)} w_2^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}} \le C_2,
\]
for some constant $C_2=C_2(\theta,N,s,\Lambda,q)>0$. Multiplying (5.3) and (5.6), for any $0<\bar q<2-\frac2\kappa$, we have
\[
(5.7)\quad \left(\fint_{V^-(\frac r2)} v^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}} \le C_1C_2\operatorname*{ess\,inf}_{V^+(\frac r2)} v \le C_1C_2\Bigl(\operatorname*{ess\,inf}_{V^+(\frac r2)} u + l\Bigr).
\]
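To see how the normalization $b$ cancels when (5.3) and (5.6) are multiplied (a short verification in the notation above), note that (5.3) gives $v\ge e^{-b}/C_1$ almost everywhere in $V^+(\frac r2)$, while (5.6) gives
\[
\left(\fint_{V^-(\frac r2)} v^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}}
= e^{-b}\left(\fint_{V^-(\frac r2)} \bigl(e^{b}v\bigr)^{\bar q}\,dx\,dt\right)^{\frac1{\bar q}}
\le C_2\,e^{-b}
\le C_1C_2\operatorname*{ess\,inf}_{V^+(\frac r2)} v,
\]
which is precisely the first inequality in (5.7).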
Since d > 0 is arbitrary, the estimate (2.5) follows from (5.7).
References

[1] Agnid Banerjee, Prashanta Garain, and Juha Kinnunen. Some local properties of subsolutions and supersolutions for a doubly nonlinear nonlocal parabolic p-Laplace equation. arXiv:2010.05727, 2020.
[2] Martin T. Barlow, Richard F. Bass, Zhen-Qing Chen, and Moritz Kassmann. Non-local Dirichlet forms and symmetric jump processes. Trans. Amer. Math. Soc., 361(4):1963-1999, 2009.
[3] Stefano Biagi, Serena Dipierro, Enrico Valdinoci, and Eugenio Vecchi. Mixed local and nonlocal elliptic operators: regularity and maximum principles. arXiv:2005.06907, 2020.
[4] Stefano Biagi, Serena Dipierro, Enrico Valdinoci, and Eugenio Vecchi. Semilinear elliptic equations involving mixed local and nonlocal operators. arXiv:2006.05830, 2020.
[5] Verena Bögelein, Frank Duzaar, and Naian Liao. On the Hölder regularity of signed solutions to a doubly nonlinear equation. arXiv:2003.04158, 2020.
[6] E. Bombieri and E. Giusti. Harnack's inequality for elliptic differential equations on minimal surfaces. Invent. Math., 15:24-46, 1972.
[7] Matteo Bonforte, Yannick Sire, and Juan Luis Vázquez. Optimal existence and uniqueness theory for the fractional heat equation. Nonlinear Anal., 153:142-168, 2017.
[8] Lorenzo Brasco and Erik Lindgren. Higher Sobolev regularity for the fractional p-Laplace equation in the superquadratic case. Adv. Math., 304:300-354, 2017.
[9] Lorenzo Brasco, Erik Lindgren, and Armin Schikorra. Higher Hölder regularity for the fractional p-Laplacian in the superquadratic case. Adv. Math., 338:782-846, 2018.
[10] Lorenzo Brasco, Erik Lindgren, and Martin Strömqvist. Continuity of solutions to a nonlinear fractional diffusion equation. arXiv:1907.00910, 2019.
[11] S. Buccheri, J. V. da Silva, and L. H. de Miranda. A system of local/nonlocal p-Laplacians: the eigenvalue problem and its asymptotic limit as p → ∞. arXiv:2001.05985, 2020.
[12] Luis Caffarelli, Chi Hin Chan, and Alexis Vasseur. Regularity theory for parabolic nonlinear integral operators. J. Amer. Math. Soc., 24(3):849-869, 2011.
[13] Jamil Chaker and Moritz Kassmann. Nonlocal operators with singular anisotropic kernels. Comm. Partial Differential Equations, 45(1):1-31, 2020.
[14] Zhen-Qing Chen, Panki Kim, Renming Song, and Zoran Vondraček. Boundary Harnack principle for Δ + Δ^{α/2}. Trans. Amer. Math. Soc., 364(8):4169-4205, 2012.
[15] Zhen-Qing Chen and Takashi Kumagai. A priori Hölder estimate, parabolic Harnack principle and heat kernel estimates for diffusions with jumps. Rev. Mat. Iberoam., 26(2):551-589, 2010.
[16] Agnese Di Castro, Tuomo Kuusi, and Giampiero Palatucci. Nonlocal Harnack inequalities. J. Funct. Anal., 267(6):1807-1836, 2014.
[17] Agnese Di Castro, Tuomo Kuusi, and Giampiero Palatucci. Local behavior of fractional p-minimizers. Ann. Inst. H. Poincaré Anal. Non Linéaire, 33(5):1279-1299, 2016.
[18] Eleonora Di Nezza, Giampiero Palatucci, and Enrico Valdinoci. Hitchhiker's guide to the fractional Sobolev spaces. Bull. Sci. Math., 136(5):521-573, 2012.
[19] Mengyao Ding, Chao Zhang, and Shulin Zhou. Local boundedness and Hölder continuity for the parabolic fractional p-Laplace equations. Calc. Var. Partial Differential Equations, 60(1), Paper No. 38, 2021.
[20] Serena Dipierro, Edoardo Proietti Lippi, and Enrico Valdinoci. Linear theory for a mixed operator with Neumann conditions. arXiv:2006.03850, 2020.
[21] Serena Dipierro, Edoardo Proietti Lippi, and Enrico Valdinoci. (Non)local logistic equations with Neumann conditions. arXiv:2101.02315, 2021.
[22] Serena Dipierro, Xavier Ros-Oton, Joaquim Serra, and Enrico Valdinoci. Non-symmetric stable operators: regularity theory and integration by parts. arXiv:2012.04833, 2020.
[23] Serena Dipierro and Enrico Valdinoci. Description of an ecological niche for a mixed local/nonlocal dispersal: an evolution equation and a new Neumann condition arising from the superposition of Brownian and Lévy processes. arXiv:2104.11398, 2021.
[24] Bartłomiej Dyda and Moritz Kassmann. On weighted Poincaré inequalities. Ann. Acad. Sci. Fenn. Math., 38(2):721-726, 2013.
[25] Eugene B. Fabes and Nicola Garofalo. Parabolic B.M.O. and Harnack's inequality. Proc. Amer. Math. Soc., 95(1):63-69, 1985.
[26] Matthieu Felsinger and Moritz Kassmann. Local regularity for parabolic nonlocal operators. Comm. Partial Differential Equations, 38(9):1539-1573, 2013.
[27] Mohammud Foondun. Heat kernel estimates and Harnack inequalities for some Dirichlet forms with non-local part. Electron. J. Probab., 14, no. 11, 314-340, 2009.
[28] Prashanta Garain and Juha Kinnunen. On the regularity theory for mixed local and nonlocal quasilinear elliptic equations. arXiv:2102.13365, 2021.
[29] Mariano Giaquinta. Introduction to regularity theory for nonlinear elliptic systems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 1993.
[30] Moritz Kassmann. A new formulation of Harnack's inequality for nonlocal operators. C. R. Math. Acad. Sci. Paris, 349(11-12):637-640, 2011.
[31] Moritz Kassmann and Russell W. Schwab. Regularity results for nonlocal parabolic equations. Riv. Math. Univ. Parma (N.S.), 5(1):183-212, 2014.
[32] Yong-Cheol Kim. Nonlocal Harnack inequalities for nonlocal heat equations. J. Differential Equations, 267(11):6691-6757, 2019.
[33] Juha Kinnunen and Tuomo Kuusi. Local behaviour of solutions to doubly nonlinear parabolic equations. Math. Ann., 337(3):705-728, 2007.
[34] Juha Kinnunen and Peter Lindqvist. Pointwise behaviour of semicontinuous supersolutions to a quasilinear parabolic equation. Ann. Mat. Pura Appl. (4), 185(3):411-435, 2006.
[35] Jan Malý and William P. Ziemer. Fine regularity of solutions of elliptic partial differential equations, volume 51 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1997.
[36] J. Moser. On a pointwise estimate for parabolic differential equations. Comm. Pure Appl. Math., 24:727-740, 1971.
[37] Jürgen Moser. A Harnack inequality for parabolic differential equations. Comm. Pure Appl. Math., 17:101-134, 1964.
[38] Laurent Saloff-Coste. Aspects of Sobolev-type inequalities, volume 289 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2002.
[39] Martin Strömqvist. Harnack's inequality for parabolic nonlocal equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 36(6):1709-1745, 2019.